Conclusions

Looking at the reasons that users of the site as a whole gave for not contributing to an investigation, the majority attributed this to ‘not having enough time’. Although at least one interviewee, in contrast, highlighted the simplicity and ease of contributing, contributing needs to be, or at least appear to be, as easy and simple as possible in order to lower the perceived effort and time required.

Notably, the second biggest reason for not contributing was a ‘lack of personal connection with an investigation’, demonstrating the importance of the individual and social dimension of crowdsourcing. Likewise, a ‘personal interest in the issue’ was the single largest factor in someone contributing. A ‘Why should I contribute?’ feature on crowdsourcing projects may be worth considering.

Others mentioned the social dimension of crowdsourcing – the “sense of being involved in something together” – what Jenkins (2006, p244) would refer to as “consumption as a networked practice”, a motivation also identified by Yochai Benkler in his work on networks (2006). Examining the non-financial motivations behind people contributing their time to online projects, Benkler refers to “socio-psychological reward”. He also identifies the importance of “hedonic personal gratification”. In other words, fun.

Although positive feedback formed part of the design of the site, no consideration was given to negative feedback: making users aware of when they were not succeeding. This element also appears to be absent from the game mechanics of other crowdsourcing experiments, such as The Guardian’s MPs’ expenses app.

While it is easy to talk about “Failure for free”, more could be done to identify and support failing investigations. A monthly update feature that would remind users of recent activity and – more importantly – the lack of activity might help here. The investigators in a group might be asked whether they wish to terminate the investigation in those cases, emphasising their responsibility for its progress and helping ‘clean up’ the investigations listed on the first page of the site.

However, there is also a danger in interfering too much to reduce failure. The instinct to intervene is natural, and the establishment of a reasonable ‘success rate’ at the outset – based on the literature around crowdsourcing – helps to counter it. That was part of the design of Help Me Investigate: it was the 1-5% of questions that gained traction that would be the focus of the site. One analogy is the editorial news conference, where journalists pitch ideas: only a few are chosen for an investment of time and energy, while the rest ‘fail’.

It is the management of that tension between interfering to ensure everything succeeds (and so removing the incentive for users to be self-motivated) and not interfering at all (leaving users feeling unsupported and unmotivated) that is likely to be the key to a successful crowdsourcing project. More than a year into the project, this tension was still being negotiated.

In summing up the research into Help Me Investigate, it is possible to identify five qualities that successful investigations shared: ‘alpha users’ (highly active contributors who drove investigations forward); modularity (the ability to break down a large investigation into smaller discrete elements); public-ness (the ability for others to find out about an investigation); feedback (game mechanics and the pleasure of using the site); and diversity of users.

Relating these findings to other research into crowdsourcing more generally, it is possible to make broader generalisations regarding how future projects might best be organised. Leadbeater (2008, p68), for example, identifies five key principles of successful collaborative projects, summed up as ‘Core’ (directly comparable to the need for alpha users identified in this research); ‘Contribute’ (large numbers, comparable to public-ness); ‘Connect’ (diversity); ‘Collaborate’ (self-governance – relating indirectly to modularity); and ‘Create’ (creative pleasure – relating indirectly to feedback). Similar qualities are also identified by US investigative reporter and Knight fellow Wendy Norris in her experiments with crowdsourcing (Lavrusik, 2010).

The most notable connections here are the indirect ones. While the technology of Help Me Investigate allowed for modularity, for example, the community structure was rather flat. Leadbeater’s research (2008), together with that of Lih (2009) into the development of Wikipedia and Tsui (2010) into Global Voices, indicates that ‘modularity’ may be part of a wider need for ‘structure’. Conversely, ‘feedback’ provides a specific, practical way for crowdsourcing projects to address users’ need for creative pleasure.

As Help Me Investigate reached its 18th month a number of changes were made to test these ideas: the code was released as open source, effectively crowdsourcing the technology itself, and a strategy was adopted to recruit niche community managers who could build expertise in particular fields, along with an advisory board that was similarly diverse. The Help Me Investigate design was replicated in a plugin which would allow anyone running a self-hosted WordPress blog to manage their own version of the site.

This separation of technology from community was a key learning outcome of the project. While the site had solved some of the technical challenges of crowdsourcing and identified the qualities of successful crowdsourced investigation, it was clear that the biggest challenge lay in connecting the increasingly networked communities that wanted to investigate public interest issues – and in a way that was both sustainable and scalable beyond the level of individual investigations.