
The Democratization of Big Data

Already a major technology trend, 2012 promises to be a watershed for "big data." A shorthand term for the proliferation of large datasets, big data also refers to the expansion of analytic techniques for teasing meaning from the vast archives of information produced by the digital world. The New York Times' Steve Lohr declared we have entered the "age of big data" in a recent article that compared it with another revolutionary research tool -- the microscope.

As I observed last year, big data is beginning to filter into the urban planning world. Here are a few examples of the intersection of cities and big data (two from this PlaceMatters blog post):

Finally, the collection and analysis of data lie at the core of "smarter cities" initiatives by IBM, Cisco, and Siemens.

What do all these exciting examples of big data have in common? If you have modest technical skills and work for a local government or community-based organization, you probably do not have access to the data and skills necessary to replicate the projects.

Inequalities of data access are not new in planning. Sixteen years ago David Sawicki and William Craig argued in a Journal of the American Planning Association article titled "The Democratization of Data" that the most important ingredient in expanding access to the first generation of data wasn't advances in computing power or analysis skills, but the rise of data intermediaries that worked with community groups in low-income communities to ensure they had access to quality data and skills. Whether nonprofits, local governments, or university-led projects, these intermediaries helped equalize access to data in the public sphere.

However, as the size of datasets has increased, so have the skills necessary to manage and analyze the data. No longer is mastery of a few desktop applications sufficient for analysis, since wrangling today's large datasets requires database servers and analysts skilled at statistical and algorithmic data mining techniques. Although government datasets may have been the original big data, many of the new datasets are provided by corporations, introducing a morass of ethical and practical challenges. Because these data are frequently collected at the individual level, negotiating access requires navigating privacy and security concerns. Even when companies provide public access, extracting and using their data requires programming skills to tap application programming interfaces (APIs) or manipulate unusual data formats.
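To make the skill barrier concrete, here is a minimal Python sketch of the work needed just to turn an API's JSON response into a spreadsheet-friendly CSV. The payload, field names, and record structure are all hypothetical; a real API would add authentication, pagination, and rate limits on top of this.

```python
import csv
import io
import json

# A hypothetical JSON payload of the kind a civic-data API might return.
# In practice you would fetch this over HTTP with urllib.request or similar.
api_response = '''
{"records": [
    {"tract": "101", "trips": 4821, "mode": "transit"},
    {"tract": "101", "trips": 9310, "mode": "car"},
    {"tract": "102", "trips": 2104, "mode": "transit"}
]}
'''

def json_to_csv(raw_json: str) -> str:
    """Flatten a JSON API response into CSV text that opens in Excel."""
    records = json.loads(raw_json)["records"]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["tract", "trips", "mode"])
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

print(json_to_csv(api_response))
```

Even this toy example presumes familiarity with JSON, character encodings, and CSV quoting rules, which is exactly the kind of expertise many community organizations lack.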

Finally, lurking beneath the big data hype are problematic unstated assumptions about the nature of truth. In the 1980s, the so-called quantitative-qualitative debate raged across several social science fields among scholars arguing the merits of various research methods. Some researchers stressed the need to collect empirical evidence and rely solely on quantitative analysis for research. Others argued social science required qualitative analysis such as interviews and observation to understand society. Although the debate is different today, important differences of opinion remain.

We should be cautious about claims that big data will necessarily answer important or relevant research or policy questions. Are cell phone traces sufficient to intuit travel behavior, or are surveys or interviews required to understand how people make choices? Can postings to social networking websites provide as much insight as a windshield survey, or an in-depth interview of community residents? The big data hype also runs counter to important developments in social science that stress the role of experiments and counterfactual reasoning, instead of relying on ever-more-complicated statistical models to explain the world.

What are some practical steps that big data providers could take to expand access by community-based organizations? A start might be to provide data in formats and sizes (perhaps through summary versions) that can be analyzed in common software packages, such as ArcMap, Excel, and Google Earth. Data providers should also document the source, variables, and assumptions used to collect and process the data. Existing data intermediaries should explore the new datasets, and strategically expand their expertise where it seems appropriate. Although the proliferation of broadband and Internet-connected smartphones has reduced the prominence of the "digital divide," we must take steps now to reduce the emergence of a new "data divide" between sophisticated analysts and communities seeking to plan for their futures.
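One way a provider could produce such a summary version is to aggregate individual-level records up to a coarser geography before release, so the published file is both small enough for a spreadsheet and less sensitive. A minimal sketch in Python, with hypothetical field names and categories:

```python
import csv
import io
from collections import Counter

# Hypothetical individual-level records, as a data provider might hold them.
raw_csv = """person_id,tract,mode
1,101,transit
2,101,car
3,101,car
4,102,transit
"""

def summarize_by_tract(raw: str) -> list[dict]:
    """Collapse individual records to tract-level counts suitable for release."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(raw)):
        counts[(row["tract"], row["mode"])] += 1
    return [{"tract": tract, "mode": mode, "count": count}
            for (tract, mode), count in sorted(counts.items())]

summary = summarize_by_tract(raw_csv)
# Each output row now describes a group rather than a person, which both
# shrinks the file and simplifies the privacy review described above.
```

A real release would also need disclosure-control rules (for example, suppressing small counts), but the basic move of trading individual detail for accessible summaries is straightforward.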

Robert Goodspeed is a PhD student at the MIT Department of Urban Studies and Planning.

Comments


Machine readability is a bigger issue than public access.

Government has a long tradition of putting large amounts of information into the public record—"speedy and public trials," repository libraries, etc. Machine readability is the value-added feature that allows private companies to charge money for access to "public records," for the purpose of digging up dirt on individuals, or information about the real estate market, or whatever else it is they're selling. I don't know what is in data.gov, but I doubt it's anything that will challenge the "public records" business model.

APIs are, I suppose, a step forward from lookup-oriented websites that dispense a single data point, but if companies' real interest were in distributing data, wouldn't they simply give you an address to telnet to for an SQL prompt, or even offer large data files in CSV or some other nonproprietary format for download? The APIs are a way to get people to work for them for free. The end users provide social content that "adds value" to their company, and developers provide "apps."

Someone needs to do for data what open source did for software. I call it "pubwan."