What are the characteristics of a crowdsourced investigation? A case study in crowdsourcing investigative journalism part 5

Continuing the serialisation of the research underpinning a new Help Me Investigate project, in this fifth part I explore the characteristics of crowdsourcing outlined in the literature. Previous parts are linked below:

What are the characteristics of a crowdsourced investigation?

Tapscott and Williams (2006, p269) explore a number of new models of collaboration, facilitated by online networks, across a range of industries. These include:

  • Peer producers creating “products made of bits – from operating systems to encyclopedias”
  • “Ideagoras … a global marketplace of ideas, innovations and uniquely qualified minds”
  • Prosumer – ‘professional consumer’ – communities which can produce value if given the right tools by companies
  • Collaborative science (“The New Alexandrians”)
  • Platforms for participation
  • “Global plant floors” – physical production lines split across countries
  • Wiki workplaces which cut across organisational hierarchies

Most of these innovations have not touched the news industry. Some – such as platforms for participation – are used in publishing, but rarely in news production itself (an exception can be made for a few magazine communities, such as Reed Business Information’s Farmers Weekly).

Examples of explicitly crowdsourced journalism can be broadly classified into two types. The first – closest to the ‘Global plant floors’ described above – can be described as the ‘Mechanical Turk’ model (after the Amazon-owned web service that allows you to offer piecemeal payment for repetitive work). This approach tends to involve large numbers of individuals performing small, similar tasks. Examples from journalism would include The Guardian’s experiment with inviting users to classify MPs’ expenses in order to find possible stories, or the pet food bloggers inviting users to add details of affected pets to their database.

The second type – closest to the ‘peer producers’ model – can be described as the ‘Wisdom of Crowds’ approach (after James Surowiecki’s 2005 book of the same name). This approach tends to involve smaller numbers of users performing discrete tasks that rely on particular expertise. It follows the creed of open source software development, often referred to as Linus’ Law, which states: “Given enough eyeballs, all bugs are shallow” (Raymond, 1999). The Florida News Press example given above fits into this category, relying as it did on users with specific knowledge (such as engineering or accounting) or access. Another example – based explicitly on examples in Surowiecki’s book – is an experiment by The Guardian’s Charles Arthur to predict the specifications of Apple’s rumoured tablet (Arthur, 2010). Over 10,000 users voted on 13 questions, correctly predicting its name, screen size, colour, network and other details – but getting others, such as its price, wrong.
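To illustrate the aggregation at the heart of the ‘Wisdom of Crowds’ approach, here is a minimal sketch – a hypothetical illustration rather than The Guardian’s actual method – which takes the most common answer to each question as the crowd’s prediction. The questions and votes are invented.

```python
# Hypothetical sketch of 'Wisdom of Crowds' aggregation: the crowd's
# prediction for each question is simply its most common answer.
# The questions and votes below are invented for illustration.
from collections import Counter

votes = {
    "name": ["iPad", "iSlate", "iPad", "iTablet", "iPad"],
    "screen size": ["10in", "10in", "7in", "10in", "9in"],
}

for question, answers in votes.items():
    prediction, count = Counter(answers).most_common(1)[0]
    print(f"{question}: {prediction} ({count} of {len(answers)} votes)")
```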

Help Me Investigate fits into the ‘Wisdom of Crowds’ category: rather than requiring users to complete identical tasks, the technology splits investigations into different ‘challenges’. Users are invited to tag themselves so that it is easier to locate users with particular expertise (tagged ‘FOI’ or ‘lawyer’ for example) or in a particular location, and many investigations include a challenge to ‘invite an expert’ from a particular area that is not represented in the group of users.
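The sketch below gives a rough sense of the kind of lookup that self-tagging enables; the field names and data are hypothetical, not Help Me Investigate’s actual schema.

```python
# Hypothetical sketch: finding members whose self-assigned tags or location
# match what an investigation's challenge needs. Not the site's real data model.
users = [
    {"name": "A", "tags": {"FOI", "journalist"}, "location": "Birmingham"},
    {"name": "B", "tags": {"lawyer"}, "location": "London"},
    {"name": "C", "tags": {"accounting"}, "location": "Birmingham"},
]

def find_users(tag=None, location=None):
    """Return members matching an expertise tag and/or a location."""
    return [
        u for u in users
        if (tag is None or tag in u["tags"])
        and (location is None or u["location"] == location)
    ]

print(find_users(tag="FOI"))              # members tagged with FOI expertise
print(find_users(location="Birmingham"))  # members in a particular location
```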

Some elements of Tapscott and Williams’s list can also be related to Help Me Investigate’s processes: for example, the site itself was a ‘platform for participation’ which allowed users from different professions to collaborate without any organisational hierarchy. There was an ‘ideagora’ for suggesting ways of investigating, and the resulting stories were examples of peer production.

One of the first things the research analysed was whether the investigation data matched up to patterns observed elsewhere in crowdsourcing and online activity. An analysis of the number of actions by each user, for example, showed a clear ‘power law’ distribution, where a minority of users accounted for the majority of activity.
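The sketch below shows one way such a concentration of activity can be checked; it is an illustration with invented numbers, not the study’s actual analysis.

```python
# Illustration only: given per-user action counts (invented here), show what
# share of all activity the most active users account for.
actions_per_user = [120, 45, 30, 12, 9, 5, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]

ranked = sorted(actions_per_user, reverse=True)
total = sum(ranked)

cumulative = 0
for rank, count in enumerate(ranked, start=1):
    cumulative += count
    print(f"top {rank / len(ranked):.0%} of users -> "
          f"{cumulative / total:.0%} of all actions")
```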

This power law, however, did not translate into a breakdown approaching the 90-9-1 ‘law of participation inequality’ observed by Jakob Nielsen (2006). Instead, the balance between those who made a couple of contributions (normally the 9% of the 90-9-1 split) and those who made none (the 90%) was roughly equal. This may have been because the site’s design made it impossible to ‘lurk’ without already being a member, or being invited and signing up. Adding in data on visitors to the investigation page who were not members may shed further light on this.
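That kind of breakdown can be computed directly from the same per-user action counts. The sketch below uses invented figures and an arbitrary threshold of ten actions for ‘heavy’ contributors, which is not a figure from the study.

```python
# Illustration only: a 90-9-1 style breakdown from per-user action counts.
# Data and the 10-action threshold for 'heavy' contributors are invented.
actions_per_user = [120, 45, 30, 12, 9, 5, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]
members = len(actions_per_user)

inactive = sum(1 for n in actions_per_user if n == 0)
light = sum(1 for n in actions_per_user if 0 < n < 10)
heavy = sum(1 for n in actions_per_user if n >= 10)

print(f"inactive members:    {inactive / members:.0%}")
print(f"light contributors:  {light / members:.0%}")
print(f"heavy contributors:  {heavy / members:.0%}")
```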

In Jon Hickman’s ethnography of a different investigation (into the project to deliver a new website for Birmingham City Council) he found a similar pattern: of the 32 ‘investigators’, 13 did nothing more than join the investigation. Others provided “occasional or one-off contributions”, and a few were “prolific” (Hickman, 2010, p10). Rather than treating this as an indication of absence, however, Hickman points to literature on lurking which suggests it provides an opportunity for informal learning. He identifies support for this in his interviews with lurkers on the site:

“One lurker was a key technical member of the BCC DIY collective: the narrative within Help Me Investigate suggested a low level of engagement with the process and yet this investigator was actually quite prominent in terms of their activism; the lurker was producing pragmatic outcomes and responses to the investigation, although he produced no research for the project. On a similar note, several of the BCC DIY activists were neither active nor lurking within Help Me Investigate. For example, one activist’s account of BCC DIY shows awareness of, and engagement with, the connection between the activist activity and the investigation, even though he is not an active member of the investigation within Help Me Investigate.” (Hickman, 2010, p17)

In the next part I explore what qualities made for successful crowdsourcing in the specific instance of Help Me Investigate.


8 thoughts on “What are the characteristics of a crowdsourced investigation? A case study in crowdsourcing investigative journalism part 5”

  1. Toby

    Enjoying this series and finding it interesting – but it would be helpful to have a bibliography to go with the citations in-text. I have no idea where to find ‘Arthur 2010’, for example.

    1. Paul Bradshaw (post author)

      There’ll be a bibliography in the final section, published tomorrow (Thursday). Will try to go back and link references where appropriate too.

