Posts Tagged ‘methodology’

Digital.Humanities@Oxford Summer School 2011

After visiting Dave de Roure for the afternoon yesterday, I found myself with the chance to attend the final talk of the day at the Digital Humanities Summer School. Ray and Lynne Siemens were speaking on “The Uneasy Pursuit of the Future of the Book” and “Building and Maintaining a Team Approach in a Rapidly-Advancing Area of Research and Development”.

I found Ray’s talk a nice alternative perspective on how we (do not) understand the properties of books as traditional artefacts, let alone electronic books or iPads and the like. I googled the speakers to see whether they’ve taken any work to the Hypertext conference yet; they have not: perhaps this is some fresh digital humanities blood to recruit 🙂

Ray also asked some very ‘Web Science’ questions, including pondering how to measure the impact of the web on how we read and experience information. I asked how he’d document the features of textual forms: he spoke about the ‘architecture’ of the book, and the meaning of aspects such as indexes and page numbers… lots of interesting subjective things going on here.

Meanwhile, Lynne spoke about mechanisms for conducting multidisciplinary work. As with Dave, my main takeaway was “Wow, I’m really privileged that such work is a relatively normal affair to me” — it was a good reminder that such work is not necessarily everyday, and that approaches to it are not obvious to everyone.


WebSci’11 videos online

The good organisers of WebSci’11 recorded the talks, which are now available online. (videolectures.net is new to me: quite a slick site.)

My talk on Teasing Apart with Meta Analysis, an approach for understanding user experiences online, is here. Meanwhile, this past post describes the gist of it, and links to the paper and slides.

Bernie Hogan at the WebSci summer school

Bernie Hogan spoke this morning on ‘Facebook as a Research Environment’. He opened by describing Facebook as an excellent source of data, albeit (let’s face it, inevitably) one with legal, ethical and technological constraints. He described academic uses of it, including:

  • capturing user/network data and pushing that to a survey
  • comparing claims about products with people’s recommendations
  • studying relationship strength against trace data

Facebook, data and controversy

Hogan outlined a few studies that resulted in embarrassment in one way or another:

  • Lewis et al, Taste, Ties and Time: it was possible to figure out who was who in the dataset that was made available (as reported online yesterday)
  • Warden, American Cultures: Pete Warden captured friend links from all publicly available Facebook profiles (about 40% of the Facebook population). He planned to release the data to the public, but got squashed by Facebook’s lawyers…
  • The Facebook 100: Porter et al released personal data from the first 100 schools to join Facebook in 2005… accidentally including IDs, making it possible to identify people in that dataset.

Yes, we’re back to the Problem for Web Science.

The FB100 dataset is still available in a pseudonymised format: I piped up to ask whether it’s still possible to figure out who is who, and apparently you can identify schools but not people. Really? I’d like to find out more about that.

Social capital

Hogan also remarked that Facebook provides exceptional access to social capital, which he defined as ‘the ability of individuals / groups to access resources from their social network’. He pointed to a few papers (including, of course, Granovetter’s The Strength of Weak Ties (pdf)), and asked questions such as:

  • Do community clusters affect perceptions of social capital? (I.e. does a diverse network make you feel you have access to broad support?)
  • Is the effect mediated by participation in the network or is it independent of network activity?

He touched on identity markers (e.g. in the context of reddit and wikipedia), and how they lead to differences in behaviour/topology. This reminded me of work by Michael Bernstein et al on Anonymity and Ephemerality on 4chan and /b/.

The complexity of relationships, and handling that online

This topic came up as well; I’d say it’s a whole other blog post…

$n-disciplinary: meaning(lessness) and competing terms

I stumbled across some blurb that included the word ‘transdisciplinary’ today. I’d already been thinking about the meaning of ‘interdisciplinary’ as opposed to ‘multidisciplinary’. In a typical “dear lazywebs” moment, I posed the question to Twitter: what is the difference?

Max Wilson was good enough to respond, remarking that to him, ‘inter’ is a point of union between disciplines, and ‘multi’ presumes many disciplines. So far, so good: interdisciplinary refers to the intertwingling of disciplines; multidisciplinary means drawing on techniques and views from multiple fields.

And transdisciplinary? I’m still not sure. Max pointed me to this PDF that provides definitions:

  • Multidisciplinary: relating to, or making use of several disciplines at once
  • Cross-disciplinary: coordinated effort involving two or more academic disciplines
  • Transdisciplinary: approaches that transcend boundaries of conventional disciplines
  • Interdisciplinary: combining two or more disciplines, fields of study or professions

I’m not convinced. I see multi and I see trans. But how is the above definition of cross-disciplinary meaningfully different from that of multidisciplinary? What does it mean to ‘transcend boundaries’ of conventional disciplines? How do people make these arbitrary decisions about which words to hyphenate, and have I repeated ‘disciplinary’ enough in this post that the word is beginning to lose meaning?

Teasing Apart with Meta Analysis

I’m presenting Teasing Apart with Meta Analysis (TAMA) at WebSci’11 today: this post is a very brief summary, plus links to further information.

In a nutshell, TAMA is a method for understanding user experiences. It grew from Teasing Apart, Piecing Together (TAPT), a method I built during my EngD. TAPT is about analysis and then redesign of experiences across contexts: for example, moving from traditional to mobile web, or from physical to digital spaces. The evidence showed that the first phase of TAPT — Teasing Apart — is a very strong way to elicit information on people’s subjective experiences, particularly emotional and social aspects. As well as yielding rich data, it also turned out to be rapid to apply.

This led to the question: can we use the Teasing Apart phase of TAPT for analysis in its own right, rather than analysis leading towards redesign?

Some meta analysis materials

The answer is yes. In my WebSci paper I report on the method and results of a case study conducted last year: we ran two focus groups with users of two geosocial networks (Gowalla and geocaching).

Quick summary: just as with TAPT, using Teasing Apart in TAMA yields rich and relevant data on user experiences, and is quick to apply (lending itself to repeated use — good for corroborating results). It’s a flexible tool: it has been used with focus groups, directly by researchers, and by individual anonymous participants. The meta analysis approach is likewise flexible.

Interested in reading more? Here are a few links:

This post was about the TAMA method and not the results of the case study, which examined experiences with geosocial networks. I’ll be presenting those results at this year’s MobileHCI conference, in the Please Enjoy workshop on playful interactions. I shall blog it at the time of that conference, but if you’re intrigued to know more beforehand, that paper is also online!

Methods and methodology in WebSci

Last week I promised a post about the talk I gave at the Montpellier Web Science meet-up. This is that post!

I considered using the slot to discuss the specific research I’ve been doing recently, but decided instead to talk in a more holistic way about methods and methodology. I think this fit well with one of the themes of the day — WebSci and education.

I took a mixed methods approach during my EngD: by ‘mixed methods’ I refer not just to using qualitative and quantitative methods, but also to conducting lab work and field work. Doing that was tough, but very rewarding: my results were much stronger as a result.

However, traditional disciplines often leave us unequipped to deal with mixed methods: for example, as a Computer Scientist, I had access to lots of information and support for conducting quantitative work, but far fewer resources in the qualitative arena. In addition to issues of education about specific methods, there’s also the question of how we understand overall methodology: it’s all very well to comprehend ten different research approaches, but if I don’t know how to combine them in a sensible way, I’m not as well equipped as I may think.

So: Web Scientists need a diverse palette of methods to choose from, and an understanding of what methods are appropriate when.

This is especially important when you consider that in the field of Web Science we’re trying to draw on tools and techniques from multiple disciplines. Communicating across disciplines is hard: consider the assumptions we make about how to do research (for example: engineers like to build things!), and the subtle and not-so-subtle differences in the language we use (try talking about ‘deconstruction’ to people from multiple disciplines: they will understand the word in very different ways!). The Q&A after my talk turned into a discussion of how to facilitate such cross-disciplinary communication.

And how do we do that? Well, it’s essential of course to go in with an open mind and some humility: if you think you know everything, you aren’t going to learn a lot. In addition to that, I suspect that actively conducting cross-disciplinary research is a sound approach: I’m talking about learning by doing. For example, how better can you learn about how a sociologist will tackle a problem than by working with one?

What about Web Science and education? I know that Southampton Web Science PhD students have two supervisors from two different disciplines, which seems like a very sensible way to expose students to different epistemologies. Meanwhile, I think it’s important that we as a community engage in a dialogue towards identifying some core methods and methodological approaches. Perhaps this is a topic for discussion at next month’s Web Science Curriculum workshop.