Investigative Data Science: The Rise of Computer-Assisted Reporting

Here's a dirty little secret about the news business: If you walk into any newsroom today and flag down a passing journalist, the odds that they will know the difference between a median and a mode, know how to multiply two fractions, or be able to calculate a percentage change are probably worse than 70/30. Innumeracy is something many journalists wear like a badge of honor. There's even a canned response many reporters will give you, which they no doubt first heard in journalism school: something along the lines of "I became a journalist because I suck at math."

We're not all like this, fortunately. In many newsrooms, there's a small but growing number of journalists who have embraced math, computer programming, data visualization and the other tools of data science in order to uncover trends; investigate waste, fraud and abuse; and reveal complex patterns to our audience. We're known as data journalists. You might have seen our work in the data visualizations of The New York Times, the interactive applications of the Guardian and ProPublica, and, for decades, among the winners and finalists for the Pulitzer Prize.

But when it comes to hard data science skills like machine learning and big data analysis, even the best of us are more like street fighters than professional boxers. We've picked up a few things along the way, but rarely do we have much formal, in-depth training. And that's where our Follow the Money Prospect challenge comes in: We want to see how the real professionals would approach the problems we run into on a day-to-day basis.

The contest is brought to you by two organizations. First, the Center for Investigative Reporting: the nation's largest non-profit investigative reporting organization, conveniently based in the Bay Area. A Pulitzer Prize finalist this year for a series of stories about earthquake safety that made heavy use of data analysis, CIR typically syndicates its investigations through established news outlets, including NPR, Newsweek and The New York Times.

The second sponsor, Investigative Reporters and Editors, Inc., is the leading trade group for investigative and data journalists. Among many other things, they provide training in data journalism skills like web scraping, data cleaning and simple statistical analysis. Last year they coordinated a workshop on natural language processing. And every year they host the leading conference for data journalists, known as the NICAR Conference, to which the winner of our competition will receive a free trip.

Still, most of the techniques we've employed barely scratch the surface of what the Kaggle community can do. The data we've submitted for our Prospect challenge may be the single most used dataset in the history of data journalism: federal campaign contributions. Every contribution greater than $50 made by an individual, corporation or political action committee directly to a political candidate or cause during the 2012 election campaign is recorded in the dataset. And with spending this election expected to exceed $5.8 billion, there's plenty of contribution data to go around.
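To give a sense of the shape of this kind of analysis, here is a minimal sketch of the most basic question a reporter asks of contribution data: who raised how much? The column names (`contributor_name`, `recipient`, `amount`) and sample rows are hypothetical stand-ins, not the contest files' actual layout.

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the general shape of itemized contribution data.
# The real contest dataset will have its own field names and many more columns.
SAMPLE = '''contributor_name,recipient,amount
"SMITH, JOHN",Committee A,250
"DOE, JANE",Committee B,1000
"SMITH, JOHN",Committee A,500
'''

def totals_by_recipient(csv_text):
    """Sum contribution amounts for each recipient committee."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["recipient"]] += float(row["amount"])
    return dict(totals)

print(totals_by_recipient(SAMPLE))  # {'Committee A': 750.0, 'Committee B': 1000.0}
```

Simple aggregations like this are the bread and butter of campaign finance reporting; the challenge is to go well beyond them.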

We want to know what ideas emerge when this data is examined with fresh and expert eyes. How can unsupervised learning help us find interesting patterns, or point us to interesting contributions? How can social network analysis be used to show relationships and coordination between donors? How can sophisticated data cleaning techniques help solve migraine-inducing tasks like standardizing names and addresses? We don't know. You tell us what's possible.
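To illustrate why name standardization is migraine-inducing, here is one crude, hedged sketch of fuzzy grouping using only the standard library: normalize each name, then greedily cluster names whose similarity clears a threshold. The donor names, the threshold, and the greedy single-pass strategy are all illustrative assumptions; real record linkage is far more involved.

```python
import re
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, strip punctuation, and sort tokens so that
    'Smith, John Q.' and 'John Q. Smith' compare as equal."""
    tokens = re.findall(r"[a-z0-9]+", name.lower())
    return " ".join(sorted(tokens))

def similarity(a, b):
    """Similarity ratio in [0, 1] between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def cluster_names(names, threshold=0.9):
    """Greedy single-pass clustering: each name joins the first
    cluster whose first member is similar enough, otherwise it
    starts a new cluster. A toy stand-in for real record linkage."""
    clusters = []
    for name in names:
        for cluster in clusters:
            if similarity(name, cluster[0]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical donor strings of the kind that plague contribution data.
donors = ["John Q. Smith", "Smith, John Q.", "JOHN Q SMITH", "Jane Doe"]
print(cluster_names(donors))
# [['John Q. Smith', 'Smith, John Q.', 'JOHN Q SMITH'], ['Jane Doe']]
```

Even this toy version shows the core tension: set the threshold too high and the same donor splits into several records; too low and distinct donors merge. That trade-off is exactly the kind of problem we hope the Kaggle community can attack more rigorously.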

We also owe a great debt of thanks to Kaggle for helping us arrange and launch this contest. They've taken a genuine interest in the public service potential of this project, not to mention the craft of data journalism in general. Our community has a lot to learn from the world of data science, and we look forward to seeing where this challenge takes us. Good prospecting!