Earlier this year Andy Brightwell and I conducted some research into one of the successful investigations on my crowdsourcing platform Help Me Investigate. I wanted to know what had made the investigation successful – and how (or whether) we might replicate those conditions for other investigations.
I presented the findings (presentation embedded above) at the Journalism’s Next Top Model conference in June. This post sums up those findings.
The investigation in question was ‘What do you know about The London Weekly?’ – an investigation into a free newspaper that was, its publishers claimed, about to launch in London (part of the investigation was to establish whether the whole thing was a hoax).
The people behind the paper had made a number of claims about planned circulation, staffing and investment that most of the media reported uncritically. Martin Stabe, James Ball and Judith Townend, however, wanted to dig deeper. So, after an exchange on Twitter, Judith logged onto Help Me Investigate and started an investigation.
A month later members of the investigation had unearthed a wealth of detail about the people behind The London Weekly and the facts behind their claims. Some of the information was reported in MediaWeek and The Media Guardian podcast Media Talk; some formed the basis for posts on James Ball’s blog, Journalism.co.uk and the Online Journalism Blog. Some has, for legal reasons, remained unpublished.
A note on methodology
Andy conducted a number of semi-structured interviews with contributors to the investigation. The sample was randomly selected but representative of the mix of contributors, who were categorised as ‘alpha’ contributors (more than six contributions), ‘active’ contributors (two to six contributions) or ‘lurkers’ (whose only contribution was to join the investigation). These interviews formed the qualitative basis for the research.
Complementing this data was quantitative information about users of the site as a whole. This was taken from two user surveys – one when the site was three months old and another at 12 months – and from analysis of analytics taken from the investigation (such as numbers and types of actions, frequency, and so on).
What are the characteristics of a crowdsourced investigation?
One of the first things I wanted to analyse was whether the investigation data matched up to patterns observed elsewhere in crowdsourcing and online activity. An analysis of the number of actions by each user, for example, showed a clear ‘power law’ distribution, where a minority of users accounted for the majority of activity.
This power law, however, did not translate into a breakdown approaching the 90-9-1 ‘law of participation inequality’ observed by Jakob Nielsen. Instead, the balance between those who made a couple of contributions (normally the 9% of the 90-9-1 split) and those who made none (the 90%) was roughly equal. This may have been because the design of the site meant it was not possible to ‘lurk’ without already being a member of the site, or being invited and signing up.
Adding in data on those looking at the investigation page who were not members may have shed further light on this.
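The contribution bands used in the study can be sketched in a few lines of Python. The contribution counts below are invented for illustration – the real analytics data is not reproduced here – but the banding logic matches the categories described above:

```python
from collections import Counter

# Hypothetical contribution counts per member (illustrative only;
# the real dataset is not published here).
contributions = [0, 0, 0, 0, 0, 0, 1, 0, 2, 3, 0, 7, 12, 0, 1, 4, 0, 25, 0, 2]

def categorise(n):
    """Apply the study's bands: 'alpha' (more than six contributions),
    'active' (two to six), 'lurker' (little or nothing beyond joining)."""
    if n > 6:
        return "alpha"
    if n >= 2:
        return "active"
    return "lurker"

counts = Counter(categorise(n) for n in contributions)
total = len(contributions)
for band in ("lurker", "active", "alpha"):
    print(f"{band}: {counts[band] / total:.0%}")
```

With these invented numbers the split is roughly 65% lurkers, 20% active and 15% alpha – a long way from Nielsen’s 90-9-1, which is exactly the kind of deviation the analysis above identified.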
What made the crowdsourcing successful?
Clearly, it is worth making a distinction between what made the investigation successful as a series of outcomes, and what made crowdsourcing successful as a method.
What made the community gather, and continue to return? One hypothesis was that the nature of the investigation provided a natural cue to interested parties – The London Weekly was published on Fridays and Saturdays and there was a build up of expectation to see if a new issue would indeed appear.
I was curious to see if the investigation had any ‘rhythm’. Would there be peaks of interest correlating to the expected publication?
The data threw up something else entirely. There was indeed a rhythm but it was Wednesdays that were the most popular day for people contributing to the investigation.
Why? Well, it turned out that one of the investigation’s ‘alpha’ contributors – James Ball – set himself a task to blog about the investigation every week. His blog posts appeared on a Wednesday.
That this turned out to be a significant factor in driving activity tells us one important lesson: talking publicly and regularly about the investigation’s progress is key.
This data was backed up by the interviews: one respondent mentioned the “weekly cue” explicitly.
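The day-of-week analysis that surfaced this rhythm can be sketched as follows. The timestamps are invented for the example (the real analytics are not shown here); the point is the grouping, not the data:

```python
from collections import Counter
from datetime import date

# Hypothetical action timestamps (illustrative only).
action_dates = [
    date(2010, 2, 3), date(2010, 2, 3), date(2010, 2, 5),
    date(2010, 2, 10), date(2010, 2, 10), date(2010, 2, 10),
    date(2010, 2, 12), date(2010, 2, 17),
]

weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
by_day = Counter(weekdays[d.weekday()] for d in action_dates)

# The busiest weekday reveals the investigation's 'rhythm'.
busiest = by_day.most_common(1)[0]
print(busiest)  # → ('Wed', 6)
```

Run against the real investigation data, this kind of tally is what showed Wednesdays – the day of James Ball’s weekly blog posts – as the peak, rather than the Friday/Saturday publication days.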
More broadly, it seems that the site helped keep track of a number of discussions taking place around the web. Having been born from a discussion on Twitter, further conversations on Twitter resulted in further people signing up, along with comments threads and other online discussion. This fitted the way the site was designed culturally: to be part of a network rather than asking people to do everything on-site.
But the planned technical connectivity of the site with the rest of the web (being able to pull related tweets or bookmarks, for example) had been dropped during development as we focused on core functionality. This was not a bad thing, I should emphasise, as it prevented us becoming distracted with ‘bells and whistles’ and allowed us to iterate in reaction to user activity rather than our own assumptions of what users would want. This research shows us what that user activity looks like, and informs future development accordingly.
The presence of ‘alpha’ users like James and Judith was crucial in driving activity on the site – a pattern observed in other successful investigations. They picked up the threads contributed by others and not only wove them together into a coherent narrative that allowed others to enter more easily, but also set the new challenges that provided ways for people to contribute. The fact that they brought with them a strong social network presence is probably also a factor – but one that needs further research.
The site has always been designed to emphasise the role of the user in driving investigations. The agenda is not owned by a central publisher, but by the person posing the question – and therefore the responsibility is theirs as well. In this sense it draws on Jenkins’ argument that “Consumers will be more powerful within convergence culture – but only if they recognise and use that power.” This cultural hurdle may be the biggest one that the site has to address.
Indeed, the site is also designed to offer “Failure for free”, allowing users to learn what works and what doesn’t, and begin to take on that responsibility where required.
The investigation also suited crowdsourcing well, as it could be broken down into separate parts and paths – most of which could be completed online: “Where does this claim come from?” “Can you find out about this person?” “What can you discover about this company?”. One person, for example, used Google Streetview to establish that the registered address of the company was a postbox.
Other investigations that are less easily broken down may be less suitable for crowdsourcing – or require more effort to ensure success.
A regular supply of updates provided the investigation with momentum. The accumulation of discoveries provided valuable feedback to users, who then returned for more. In his book on Wikipedia, Andrew Lih (2009, p82) notes a similar pattern – ‘stigmergy’ – that is observed in the natural world: “The situation in which the product of previous work, rather than direct communication [induces and directs] additional labour”. An investigation without these ‘small pieces, loosely joined’ might not suit crowdsourcing so well.
One problem, however, was that those paths led to a range of potential avenues of enquiry. In the end, although the core questions were answered (was the publication a hoax, and what were the bases for their claims?), the investigation raised many more questions.
These remained largely unanswered once the majority of users felt that their questions had been answered. Like any investigation, there came a point at which those involved had to make a judgement whether they wished to invest any more time in it.
Finally, the investigation benefited from a diverse group of contributors who contributed specialist knowledge or access. Some physically visited stations where the newspaper was claiming distribution to see how many copies were being handed out. Others used advanced search techniques to track down details on the people involved and the claims being made, or to make contact with people who had had previous experiences with those behind the newspaper.
The visibility of the investigation online led to more than one ‘whistleblower’ approach providing inside information.
What can be done to make it better?
Looking at the reasons that users of the site as a whole gave for not contributing to an investigation, the majority cited ‘not having enough time’. Although at least one interviewee, in contrast, highlighted the simplicity and ease of contributing, contributing needs to be as easy and simple as possible in order to lower the perception of the effort and time required.
Notably, the second biggest reason for not contributing was a ‘lack of personal connection with an investigation’, demonstrating the importance of the individual and social dimension of crowdsourcing. Likewise, a ‘personal interest in the issue’ was the single largest factor in someone contributing. A ‘Why should I contribute?’ feature on each investigation may be worth considering.
Others mentioned the social dimension of crowdsourcing – the “sense of being involved in something together” – what Jenkins (2006) would refer to as “consumption as a networked practice”.
This motivation is also identified by Yochai Benkler in his work on networks. Looking at non-financial reasons why people contribute their time to online projects, he refers to “socio-psychological reward”. He also identifies the importance of “hedonic personal gratification”. In other words, fun. (Interestingly, these match two of the three traditional reasons for consuming news: because it is socially valuable, and because it is entertaining. The third – because it is financially valuable – neatly matches the third reason for working).
While it is easy to talk about “failure for free”, more could be done to identify and support failing investigations. We are currently developing a monthly update feature that would remind users of recent activity and – more importantly – of the lack of it. In those cases the investigators in a group might be asked whether they wish to terminate the investigation, emphasising their role in its progress and helping ‘clean up’ the investigations listed on the first page of the site.
That said, there is also a danger in interfering too much to reduce failure. This is a natural instinct, and I have to continually remind myself that I started the project expecting 95-99% of investigations to ‘fail’ through a lack of motivation on the part of the instigator. That was part of the design: it was the 1-5% of questions that gained traction that would be the focus of the site. (This is how Meetup works, for example – most groups ‘fail’, but there is no way to predict which ones. As it happens, the ‘success’ rate of investigations has been much higher than expected.) One analogy is a news conference where members throw out ideas – only a few are chosen for an investment of time and energy; the rest ‘fail’.
In the end, it is the management of that tension between interfering to ensure everything succeeds – and so removing the incentive for users to be self-motivated – and not interfering at all – leaving users feeling unsupported and unmotivated – that is likely to be the key to a successful crowdsourcing project. More than a year into the project, this is still a skill that I am learning.