Huge and impenetrable government databases – technically public but inaccessible in practice – have long hidden critical health and safety information. Consumers and patients need emerging knowledge about product defects, drug side effects and service flaws to choose safe cars, cribs, doctors, medicines and much else. Government has long collected this kind of information from us: manufacturers, retailers, medical experts and consumers send in millions of stories every year about unexpected problems that cause deaths or injuries. But, despite efforts toward more open government, shoppers and patients often can’t get access to this developing knowledge to make smart choices.

Now software developers are filling the gap – making important clues about health and safety risks hidden in government-gathered information easily available to consumers – and hoping to make a profit, of course. Two examples to watch: AdverseEvents Inc. and Clarimed LLC are firms that translate dense data about unexpected drug side effects and medical device malfunctions into usable information for patients and doctors. Can such transparency save lives and improve product safety? The CEO of Clarimed told Melinda Beck of the Wall Street Journal: “The best way to drive quality improvements is to make things as crystal clear and transparent as possible.”

A much talked-about innovation in public policy has been the push to achieve greater transparency and accountability through open government strategies, where the public has access to government information and can participate in co-producing public services. At the Transparency Policy Project we have been investigating the dynamics behind one of the most successful implementations of open government: the disclosure of data by public transit agencies in the United States. In just a few years, a rich community has developed around this data, with visionary champions for disclosure inside transit agencies collaborating with eager software developers to deliver multiple ways for riders to access real-time information about transit.

Transit agencies have long used intelligent systems for scheduling and monitoring the location of their vehicles. However, this real-time information had previously been available only to engineers inside agencies, leaving riders with printed timetables and maps that, at best, represent the stated intentions of a complex system that can be disrupted by traffic, weather, personnel issues and even riders themselves.

Recognizing the need to be able to access this information on-the-go and in digital format, Bibiana McHugh of Portland’s TriMet agency worked with Google in 2006 to integrate timetable data into Google Maps, eventually becoming Google Transit. McHugh went further, publicly releasing TriMet’s operations data: first the static timetables, and eventually real-time, dynamic data feeds of vehicle locations and arrival predictions. Local programmers have responded with great ingenuity, building 44 different consumer-facing applications for the TriMet system, at no cost to the agency.
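The static timetables TriMet released follow the General Transit Feed Specification (GTFS), a set of plain CSV files that any programmer can parse. As a minimal sketch of what the local developers described above work with, the snippet below reads a hypothetical `stop_times.txt` fragment (the sample rows are invented for illustration; a real feed ships as a zip archive published by the agency) and lists scheduled departures at one stop:

```python
import csv
import io

# Invented sample rows in the real GTFS stop_times.txt column layout.
# A real feed from an agency like TriMet contains thousands of such rows.
SAMPLE_STOP_TIMES = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
10,08:00:00,08:00:30,A,1
10,08:07:00,08:07:30,B,2
11,08:15:00,08:15:30,A,1
"""

def departures_at(stop_id, stop_times_csv):
    """Return (departure_time, trip_id) pairs for one stop, sorted by time."""
    rows = csv.DictReader(io.StringIO(stop_times_csv))
    found = [(r["departure_time"], r["trip_id"])
             for r in rows if r["stop_id"] == stop_id]
    return sorted(found)

print(departures_at("A", SAMPLE_STOP_TIMES))
# [('08:00:30', '10'), ('08:15:30', '11')]
```

Real-time arrival predictions work the same way in spirit: the agency publishes a machine-readable feed, and developers layer interfaces on top of it, which is what made the 44 TriMet apps possible at no cost to the agency.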

Transit Apps and Ridership by City

Other transit agencies have adopted this open data approach with varying outcomes. The most successful agencies work closely with local programmers to understand which data are in demand, and to troubleshoot and improve the quality of the data feeds. Programmers also link end users to transit agencies by passing along comments from app users. This iterative feedback loop relies on a champion within the agency to build strong relationships with the local developer community. Of the five transit agencies we studied, Portland’s TriMet and Boston’s MBTA exemplify this approach and have generated the highest ratio of apps per transit rider (see table). Meanwhile, the agency most reluctant to adopt open data, Washington DC’s WMATA, has only eleven applications serving its customers.

The number of apps built by independent developers matters because it reflects the variety of options riders have: they can choose whichever interface (mobile, desktop, map-based, text, audio) and platform best fits how they access transit information. As we have learned from our research on what makes transparency effective, simply providing information is not enough. Format and content matter, and should address the needs of a targeted audience. What we have seen in our study of transit transparency is that local programmers have been the critical intermediaries, taking raw data and generating a variety of information tools that transit agencies could not have imagined on their own. For other open government initiatives to spark this level of innovation and public benefit, they must identify their audiences of information intermediaries and foster those relationships.

Pedro Daire is in charge of technology for Chile’s Fundación Ciudadano Inteligente (“Smart Citizen”), a leader in employing technology for transparency in Latin America. Notable projects include Vota Inteligente to monitor the Chilean parliament, the Freedom of Information portal Acceso Inteligente, and most recently, a conflict of interest tracker that shines a light on parliamentarians, the legislation they support, and their personal investments. I spoke with Pedro via Skype on December 16, 2011 to capture his reflections on the Bridging Transparency and Technology (TABridge) workshop (which TPP participated in as well) held in Glen Cove, New York in early December. Pedro’s answers to my questions were so insightful on the matter of bridging tech and transparency that I decided to share a condensed version of our interview through this blog post.

Francisca: What was most valuable about participating in the TABridge event to the work you are doing and want to do in the future?

Pedro: The most useful insight from the event was learning from the experiences of Kert Davies of Greenpeace and Heather White from the Environmental Working Group (EWG). They have been working with information to advance their advocacy work since before technology became such a powerful tool, and as a result, they really know how to use information in strategic ways. They use technology as an amplifier, as a tool, and not an end in itself. In contrast, for younger people like me, technology is content, something in and of itself.

That insight, combined with the imperative to have an explicit theory of change behind our projects, an idea which Archon Fung guided us through at the workshop, is very useful for us. We are planning to analyze every project we do through the theory of change framework to guide our strategy and planning.

Did the event give you new ideas as to the role of technology in transparency efforts?

We have a very creative and innovative team at Ciudadano Inteligente, which means that we often have too many creative ideas! Our challenge is implementing those ideas and thinking about the impact chain of each project. We are very adept at developing tools for transparency, but we don’t think enough about how each tool will be used to advance our end goals. We are too often focused on the tool and assume people will react to the information in impactful ways. But that’s not true. What I learned at the event is to attach a cause to each tool.

We try to actively engage people with the information we provide, but we haven’t made a point to teach them what the next step is for them in terms of mobilizing around the information. With the conflict of interest tool for example, we should have considered incorporating an element to mobilize people to hold their legislators accountable from the beginning, but instead, we improvised a last-minute online petition when we realized this could be useful. I’ve come to see that facilitating these interactions is very important because otherwise, we have this citizen, but she’s only a spectator of the information. Only the most motivated people will react to the information. We need to say “hey, you’ve seen the info, now you should share this with five people,” and that’s much more than what we’ve done so far.

What should the TABridge program focus on as we work to support this network of practice?

The group that gathered at Glen Cove is like a ‘dream team’ of NGOs! Everyone who was there is very capable, enthusiastic and smart. The shared sense of purpose that we felt there – knowing that we are all concerned about the same issues and looking for the best ways to impact change – is very powerful. But there is a risk that we’ll forget that we’ve been together thinking about how to use tech for transparency in strategic ways.

We need the organizers to keep reminding us that we have common issues, challenges and problems. I don’t think they should be helping me to implement projects, however. I didn’t expect to return to Chile from the workshop with solutions. Rather, I expected to come home with doubts. We have already been successful with our work here in Chile, but it’s risky for us to keep holding those minor successes as triumphs. The main value of TABridge is to help us reflect on our work.

Unfortunately the series was based on data dinosaurs — so the map’s results are two to four years out of date. That’s not the reporters’ fault. It’s the latest government information available. Since 1988 when Congress first required annual reporting of toxic pollution, factory by factory, it has taken 18 months or so for the Environmental Protection Agency to process industry reporting and make public the results. And EPA’s overlay that models risk to human health is based on factories’ toxic releases in 2007 — and it cannot be used to determine real risks anyway, according to EPA’s website.

What families and businesses need and technology could now deliver is real time information – especially when there are spikes in dangerous toxic pollutants. But look for more data dinosaurs soon. The government has cancelled its most important annual compilation of data — The Statistical Abstract — and austerity is producing many other data casualties. But does it really make sense to take big chunks out of the factual foundation for public and private choices that took so long to build?