Government | People | Design | Technology

Author: Mark Headd

Mark Headd is the former Chief Data Officer for the City of Philadelphia, serving as one of the first municipal Chief Data Officers in the United States, and was also Director of Government Relations at Code for America. He currently works with civic technologists and open data advocates as a Developer Evangelist for Accela, Inc.
A coder and civic hacking veteran, he has worked as both a hands-on technologist and as a high-level policy advisor. Self-taught in programming, he holds a Master’s degree in Public Administration from the Maxwell School of Citizenship and Public Affairs at Syracuse University, and is a former adjunct instructor at the University of Delaware’s Institute for Public Administration, where he taught a course in electronic government.

“Anonymous access to the data must be allowed for public data, including access through anonymous proxies. Data should not be hidden behind ‘walled gardens.’”
– 8 Principles of Open Government Data

In the world of open data, there are few things that carry more weight than the original 8 principles of open data.

Drafted by a group of influential open data leaders who came together in Sebastopol, Calif., in 2007, this set of guidelines is the de facto standard for evaluating the quality of data released by governments, and is regularly used by activists to prod public organizations to become more open.

With this in mind, it was intriguing to hear a well-known champion of open data at the Sunlight Foundation’s recent TransparencyCamp in Washington, D.C. raise some interesting questions about one of these principles, typically considered sacrosanct in the open data community.

Andrew Nicklin (formerly at the helm of open data efforts for both the City and State of New York, and now Open Data Director for the Center for Government Excellence at Johns Hopkins University) asked TransparencyCamp attendees to consider some of the implications of the sixth principle on open data – which calls for non-discriminatory access to data. This principle is generally taken to mean that users of open data should be able to access it anonymously and that governments should not require users to identify who they are or what they plan to do with the data as a condition of accessing it.

While there is obvious merit to this principle, Andrew observed that when governments know who is using their data and how they are using it, there are enormous opportunities to enhance the data and make it more useful for data consumers. If governments don’t understand what users want, providing useful data that meets their needs is difficult – strictly enforcing anonymous access to data may end up being an impediment to understanding what data users actually need.

Without being directly critical of the principle or the original intentions behind it, Andrew made a thoughtful suggestion for open data advocates at TransparencyCamp to consider. To me, these comments highlight an important issue facing the civic technology community and governments themselves – one that almost no one is talking about.

When it comes to building the infrastructure of open data – putting in place the pieces of technology that users will leverage to find and use government open data – very little thought seems to be given to what users – data consumers – want or need.

The idea of “build with, not for” has become a central tenet of how civic technology solutions are designed and implemented. Yet this idea seldom applies to the platforms that governments use to make open data available, which form the foundation of many civic technology solutions.

A recent collaborative effort between the University of Southern California’s Annenberg Center on Communication Leadership & Policy and the USC Price School of Public Policy produced a hugely valuable report on the current state of open data in the 88 incorporated cities comprising Los Angeles County.

Based on surveys and interviews with city officials on their open data efforts, this report provides unique insights into the ways that government leaders view open data. Among the findings – government officials surveyed for the report consider funding to be the most significant barrier to expanding work on open data. This isn’t a surprise, and this sentiment is likely not unique to the Los Angeles County area.

But when taken together with other findings, it can seem counterintuitive. Along with citing funding as a constraint, government officials expressed a preference for commercial open data catalogs over open source (or free) alternatives. These commercial solutions – some of which impose non-trivial costs on local governments – appear to meet a perceived need on the part of government officials in that they are viewed as making it “easier to publish [data] and put it in the hands of the citizens.”

Commercial software generally tends to fare better in the government procurement process than open source software, so this outcome isn’t all that shocking. But it’s worth noting the contradiction in the USC report’s findings between the cost constraints limiting further progress on open data and the reported preference for (sometimes pricey) commercial open data catalogs.

Cost aside, there are a few reasons why upfront investment in a commercial open data catalog may not be the best way to start a new open data effort.

Architecting participation

The web … took the idea of participation to a new level, because it opened participation not just to software developers but to all users of the system.
– Tim O’Reilly, The Architecture of Participation

First, and somewhat ironically, public information on the cost of commercial open data portals can be hard to come by. Another report on municipal open data efforts in southern California found a wide disparity in what different governments – some just a few miles apart, and almost identical in population – pay for commercial open data catalogs. This can make it difficult for governments to know if they are getting good value for the price being paid.

In addition, commercial open data catalogs often come with visualization, mapping and charting tools out of the box. This can make it easier for governments to augment open data offerings by showing what can be done with the data. Though these tools may come at an additional price, some may view them as a way to make the case for open data to internal skeptics – a picture (or a graph, or a chart) is worth a thousand words, as the saying goes.

From a user needs perspective, this approach feels very unidirectional – this is government telling the data community what it believes is important, not the other way around. There are a host of examples of sophisticated visualizations and applications being built with government data by outside data users. And while this approach requires outreach and engagement, there is an ever-increasing abundance of tools available for members of the data community to use to create maps, visualizations and new applications.

These two approaches – out of the box vs. community built – are not mutually exclusive. We can see a number of examples of governments using commercial open data catalogs to engage with external data users that produce useful, valuable visualizations and apps – New York City, the City of Los Angeles, Chicago and San Francisco are all great examples of this dual approach.

However, open data efforts in all of those cities have benefited from robust technology and startup communities and often visionary leadership. Almost all of these cities have a long tradition of civic hacking. For cities that don’t have these assets (or have them in smaller quantities), outreach and engagement to nurture and build a data community will be a crucial factor in the long-term success of an open data program. These cities – many of them smaller and with more limited resources – may also feel the cost constraints of implementing an open data effort more acutely than larger cities.

It’s fair to say that the next wave of cities that adopt open data programs may face a very different set of challenges than the cities that have come before them.

Open data in this country is still – almost exclusively – a big city phenomenon.

Efforts to address this imbalance are underway – the What Works Cities initiative (of which the Center for Government Excellence at Johns Hopkins is a key part) is now working to bring open data and data-driven decision making to 100 mid-sized cities. More and more, small and mid-sized cities are starting to look at open data as a key driver of government innovation.

We are now at a juncture where we can not only help a new cohort of cities adopt open data, but also help ensure that these efforts embrace the principle of “build with, not for” from the ground up. If we’re going to be successful, it’s important that we question long-held beliefs – like the original 8 principles of open data – to ensure our efforts are most efficiently aligned with the outcomes we desire.

It’s worth considering whether commercial open data catalogs provide the best option for the next wave of cities that are embracing open data to succeed and build a healthy data culture, both inside and outside of government.

But whatever foundation we choose to lay for the next phase of open data, we’ll need to make sure we’re putting users’ needs first.

(Note – the term “cult of catalogs” is not my own. I first heard it used by Friedrich Lindenberg, though others may have used it as well.)

The time of year-end reviews and top 10 lists is now upon us, so I’m compiling the details of a watershed year for open data and civic hacking in two cities where I’ve seen huge leaps made in 2011 – Philadelphia and Baltimore.

Several months ago, with the unveiling of the OpenDataPhilly website, the City of Philadelphia joined the growing fraternity of cities across the country and around the world releasing municipal data sets in open, developer-friendly formats.

The civic hackathon – a gathering (either virtual or physical) of technologists for a few days or weeks to build civic-themed software – remains one of the more durable manifestations of the open government movement.

A lot of my open gov energy of late has been focused on replicating a technique pioneered by Max Ogden (creator of PDXAPI) to convert geographic information in shapefile format into an easy to use format for developers.
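Max Ogden’s approach essentially re-publishes shapefile geometries as GeoJSON, a format web developers can consume directly. Here is a minimal sketch of the output side of that conversion, assuming the shapefile records have already been parsed into (geometry, attributes) pairs – the parsing step itself would typically use GDAL/ogr2ogr or a similar tool, and the sample record below is hypothetical:

```python
import json

def to_geojson_feature(geometry, properties):
    """Wrap one parsed shapefile record as a GeoJSON Feature."""
    return {"type": "Feature", "geometry": geometry, "properties": properties}

def to_feature_collection(records):
    """Bundle (geometry, properties) pairs into a GeoJSON FeatureCollection."""
    return {
        "type": "FeatureCollection",
        "features": [to_geojson_feature(g, p) for g, p in records],
    }

# Hypothetical record: a single point geometry with a couple of attributes.
records = [({"type": "Point", "coordinates": [-75.1652, 39.9526]},
            {"name": "City Hall", "zone": "center"})]
print(json.dumps(to_feature_collection(records), indent=2))
```

Once the data is in this shape, it can be served as-is to mapping libraries or loaded into a document store, which is what made the technique so attractive for developers.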

One of the more striking ironies of the Gov 2.0 movement is that despite the development of scores of new technologies, protocols, platforms and networks for enabling sophisticated interactions between citizens and their governments, a large number of people prefer to interact with their government the way they have for a long time – using the telephone.

There has been some pretty good discussion lately going around the Interwebs about what Gov 2.0 and open government looks like. I can’t say that I agree with everything that has been thrown out there with a Gov 2.0 label on it, but I can say without equivocation that this is the opposite of OpenGov and Gov 2.0.

An increasing number of people are starting to suggest that the concept of the “app contest” (where governments challenge developers to build civic applications) is getting a bit long in the tooth.

There have been lots of musings lately about the payoff for governments that hold such contests and the long-term viability of individual entries developed for them. Even Washington DC – the birthplace of the current government app contest craze – seems to be moving beyond the framework it has employed not once, but twice to engage local developers.

Earlier this year, I had an idea to build a Twitter application that would allow a citizen to start a 311 service request with their city.

At the time, there was no way to build such an application as no municipality had yet adopted a 311 API that would support it (although the District of Columbia did have a 311 API in place, it did not – at the time – support the type of application I envisioned).

That changed recently, when San Francisco announced the deployment of their Open311 API. I quickly requested an API key and began trying to turn my idea into reality.
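The Open311 GeoReport v2 specification defines how such a service request is submitted: a POST of form-encoded fields (including an API key, a service code, coordinates and a description) to the city’s requests endpoint. A minimal Python sketch follows; the base URL and service code are hypothetical, since the real values come from each city’s Open311 service discovery document and service list:

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real deployment publishes its own base URL.
BASE_URL = "https://open311.example.gov/v2"

def build_service_request(api_key, service_code, lat, lon, description):
    """Assemble the form fields for an Open311 GeoReport v2 POST /requests call."""
    return {
        "api_key": api_key,
        "service_code": service_code,
        "lat": str(lat),
        "long": str(lon),  # the spec names this field "long", not "lon"
        "description": description,
    }

payload = build_service_request("MY_KEY", "pothole", 37.7749, -122.4194,
                                "Pothole reported via Twitter")
body = urlencode(payload).encode()

# To actually submit (network call omitted in this sketch):
# from urllib.request import Request, urlopen
# with urlopen(Request(f"{BASE_URL}/requests.json", data=body)) as resp:
#     print(resp.read())
print(body.decode())
```

A Twitter-based reporting app would build a payload like this from the text and geotag of an incoming tweet, then POST it and reply to the user with the service request ID returned by the API.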

Social media enthusiasts (myself included) let out a big huzzah recently at the results of a study conducted by the Pew Internet and American Life Project entitled Government Online.

The report, like a similar one several years ago, looks at how citizens communicate and interact with their government. This study focused specifically on online contact with government, the use of social media to interact with government and citizen use of open government data.

311 is an abbreviated dialing designation set up for use by municipal governments in both the U.S. and Canada. Dialing 311 in communities where it is implemented will typically direct a caller to a call center where an operator will provide information in response to a question, or open a service ticket in response to a report of an issue. The difference between 311 and other abbreviated dialing designations (like 911) can be summed up by a promotional slogan for the service used in the City of Los Angeles: “Burning building? Call 911. Burning question? Call 311.”

So, as a prelude to a talk I’ll be giving at eComm next month, I wanted to write a post surveying the landscape of recent government API developments, and also to describe evolving efforts to construct standards for government APIs.