Tag: crowdsourcing

Where is crowdsourcing at in 2010? How is crowdsourcing different from open source journalism, and which is appropriate for what types of stories? This is a listing of links to illustrate the differences and similarities between crowdsourcing and open source journalism. How you structure a project with many participants has a significant impact on the end results.

The Jane’s incident takes Slashdot’s evolution one major step forward. Slashdot readers are now actively shaping media coverage of the topics near and dear to their geeky little hearts. They are helping journalists get the story right, which is a far cry from exerting censorship. Just as open source programmers would critique a beta release of software filled with bugs, the Slashdot readers panned the first release of Jane’s journalistic offering — and the upgrade, apparently, will be quick to follow.

The original article.

Why the open source way trumps the crowdsourcing way
In essence, open source projects have many contributors and many beneficiaries while crowdsourcing projects have many contributors and few beneficiaries. Open source is advantageous because “everyone who contributes also benefits.” When crowdsourcing is a competition, there are limited beneficiaries and the effort of everyone else can be wasted.

What I Learned from Assignment Zero
Jay Rosen debriefs on Assignment Zero, a distributed trend project in partnership with Wired.com, with the goal of tracking “the spread of peer production and wisdom-of-the-crowd efforts across the social landscape, including the practice of crowdsourcing.” They learned they needed to: understand and articulate the different styles of labor, grok contributors’ motivations, and plan for unexpected levels of participation. Also see Derek Powazek’s review.

I talked a few steps ago about the retention of knowledge and the speed of spreading knowledge but what I’m really talking about here is the creation of knowledge. And I’d like to be able to come up with a better term than crowdsourcing which, as I mentioned, uses less than 1% of the population working on these problems. Since there will be problems in the future that we haven’t even thought of, in the face of that, what we want to do is maximize our problem-solving machinery. What I think we want to do as we democratize education is move from crowdsourcing really to something like ‘societysourcing’ where we’re getting 10%, 50% of the population involved with solving problems. It goes without saying that vast numbers of people on the planet will not take the opportunity to get an Ivy League education but, for the first time in history, it’s widely available. […] We need to get everyone involved with solving problems.

Across every industry and system of human society, this is where there is opportunity.

I’m curious to see if there is a reputation system built into it. As they say, this works based on the participation of experts and non-experts. How do you gauge the expertise of a sweeper? And I don’t mean to imply as a journalist that I think that journalists are ‘experts’ by default. For instance, I know a lot about US politics but consider myself a novice when it comes to British politics.

To take a step back, Swift River is a project to “crowdsource the filter” for real-time crisis reporting. Ushahidi provides a platform for aggregating the information around a crisis but, when a crisis situation explodes metaphorically or literally, the information coming in can quickly overwhelm the people trying to make sense of it. Swift River will enable an observer to create a new instance for a given situation, add RSS feeds from various sources including news publications and Twitter, and then additional users will be able to come in as “sweepers” to curate those incoming bits of information and float the most important to the top.

In the comments, Jon mentions that the three “most critical aspects are the trust algorithm (veracity), predictive tagging and filtering out redundancies and inaccuracies.” The first, in my opinion, will be the most challenging, and hopefully most rewarding, piece of the riddle. They’ll be able to scale their ability to float accurate information if they focus on identifying the trustworthy people instead of the trustworthy information.
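To make that idea concrete, here is a minimal sketch of a trust-weighted filter along the lines described above: items are ranked by the accumulated reputation of the sweepers who endorse them, and verification outcomes feed back into each sweeper's trust score. All of the names and weights here are illustrative assumptions, not Swift River's actual algorithm.

```python
from collections import defaultdict

class TrustFilter:
    """Hypothetical sketch: rank items by endorser reputation,
    then learn which sweepers to trust from verified outcomes."""

    def __init__(self):
        self.trust = defaultdict(lambda: 1.0)   # sweeper -> trust weight
        self.endorsements = defaultdict(set)    # item -> sweepers who flagged it

    def endorse(self, item, sweeper):
        self.endorsements[item].add(sweeper)

    def score(self, item):
        # An item's score is the sum of its endorsers' trust weights,
        # so one proven sweeper can outweigh many unknown ones.
        return sum(self.trust[s] for s in self.endorsements[item])

    def ranked(self):
        # Float the highest-scoring items to the top of the river.
        return sorted(self.endorsements, key=self.score, reverse=True)

    def verify(self, item, accurate):
        # Once an item's accuracy is known, reward or penalize everyone
        # who endorsed it; trust then carries over to the next crisis.
        factor = 1.25 if accurate else 0.5
        for s in self.endorsements[item]:
            self.trust[s] *= factor
```

The design choice this sketch encodes is the one argued for above: the system scores people, not individual reports, so its ability to surface accurate information compounds as sweepers build track records.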

A couple of weeks ago on Twitter, I observed that the crowd is the least important part of crowdsourcing. More often than not, you couldn't care less about the opinion of the crowd as a whole. What you really want is an authoritative answer, or field report, from the most knowledgeable person in that crowd.

There’s talk around town about adding a journalism session to BarCamp Portland. This should be a time to brainstorm and collaborate on the future of news in the Portland area, instead of just being a space for journalists and bloggers to come together and try to resolve their issues. Let’s have an idea-generating session on what the journalism needs of Portland are, how we’ll be able to fill those needs from the grassroots if/when The Oregonian implodes because of its terrible CMS, and then, in turn, how we’ll be able to monetize that. This is something where perspectives from both camps, the journalists and the bloggers, would offer value to the conversation.

To provide fodder for this discussion, listen to the most recent installment of Dave Winer and Jay Rosen’s Rebooting the News. One of the ideas that I think will “save journalism” is the digital assignment desk Jay starts talking about near the end. His part of the idea is this: a tool to map out all of the particulars that might need to be reported on in the coverage of any given issue. Once the editorial team has this laid out, they can then decide what resources they want to apply and where.
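The assignment-desk idea lends itself to a very small data model: an issue broken into the particulars that need coverage, with editors assigning resources against the gaps. This is a hypothetical sketch of that shape, not an implementation of any tool Jay or Dave described; every name in it is an assumption for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Particular:
    """One reportable question within a larger issue."""
    question: str
    assigned_to: Optional[str] = None  # staff reporter, freelancer, or volunteer

@dataclass
class AssignmentDesk:
    """Hypothetical digital assignment desk: map out an issue's
    particulars, then see at a glance what is claimed and what is open."""
    issue: str
    particulars: List[Particular] = field(default_factory=list)

    def add(self, question):
        self.particulars.append(Particular(question))

    def assign(self, question, reporter):
        for p in self.particulars:
            if p.question == question:
                p.assigned_to = reporter

    def unclaimed(self):
        # The list of gaps is what the editorial team allocates against.
        return [p.question for p in self.particulars if p.assigned_to is None]
```

The point of laying the issue out this way is the one made above: once the particulars are explicit, deciding what resources to apply and where becomes a visible editorial choice rather than an ad hoc one.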