Let Us See Under the Hood

Matt Stempeck

Research Assistant

Matt's a Research Assistant at the Center. He has spent his career at the intersection of technology and social change. He graduated with high honors from the University of Maryland College Park, where he wrote a thesis on the disruptive role of political blogs in journalism. He went on to join the strategy team at EchoDitto, a boutique consulting firm building cool technology for nonprofits, startups, and socially responsible businesses.

Then Matt attempted to save democracy by directing new media at Americans for Campaign Reform, a bi-partisan grassroots effort to enact voluntary public financing of federal campaigns. Right before Citizens United v. FEC hit, he joined the New Organizing Institute, where he helped to train the next generation of organizers. For most of this time, he also ran one of the most popular NetSquared groups in the world.

Matt's interested in pretty much everything, particularly the everything taking place at the Media Lab.

Our machines can do amazing things. Our mapping and travel tools can span numerous transit agencies and modes of transport to navigate us conveniently across the land. They still mess up, which is acceptable. But when they fail, we often don't even know that they have failed, or how, and that is less OK.

On an intermediary leg of a marathon journey from Washington, DC to Nairobi that included a DC Metrobus, a Zipcar, a BoltBus, a commuter train, an AirTrain, and two 6+ hour flights, I simply needed to get from Penn Station to JFK Airport. I already knew that the Long Island Rail Road was the best combination of price and speed for my needs, and HopStop's website confirmed it. Unfortunately, my BoltBus ran an hour late, and I found myself recalculating the trip from my phone using HopStop's mobile app. For whatever reason, whether an errant filter or another limitation of the mobile app, HopStop no longer showed me any LIRR options. In this case, I knew I wasn't seeing the results I needed; I just couldn't do anything about it.

Eli Pariser talks about the societal implications of opaque social algorithms in The Filter Bubble: we don't know what we don't know, and couldn't see it if we did. But the ability to understand what we aren't seeing is also a simple usability affordance. A few apps break the general trend in this department:

Hipmunk intelligently sorts the best flights available by eliminating the obviously bad choices (70% of possible results, according to cofounder Steve Huffman in this Forbes piece extolling Hipmunk's many virtues). But the site also wisely allows the user to re-expose similar flights and dive into the larger world of possibilities when price or time is severely constrained.
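The pattern is simple enough to sketch. The snippet below is a hypothetical illustration (the `Flight` type, fields, and dominance rule are my assumptions, not Hipmunk's actual logic): it hides flights that are worse on every axis than some alternative, but keeps the hidden results around so the interface can let the user re-expose them rather than silently discarding them.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    carrier: str
    price: float      # dollars
    duration: float   # hours

def split_results(flights):
    """Partition flights into 'shown' and 'hidden' lists.

    A flight is hidden if some other flight is at least as cheap AND
    at least as fast, and strictly better on one axis (Pareto-dominated).
    Crucially, the hidden list is returned too, so the UI can offer a
    "show similar flights" control instead of discarding results.
    """
    shown, hidden = [], []
    for f in flights:
        dominated = any(
            g.price <= f.price and g.duration <= f.duration
            and (g.price < f.price or g.duration < f.duration)
            for g in flights
        )
        (hidden if dominated else shown).append(f)
    return shown, hidden

flights = [
    Flight("A", 250, 6.0),
    Flight("B", 250, 9.0),   # same price, slower: dominated by A
    Flight("C", 400, 5.0),   # pricier but faster: a real trade-off, kept
]
shown, hidden = split_results(flights)
```

The design choice that matters isn't the dominance test; it's that `split_results` returns both lists, so "hidden" is a presentation decision the user can reverse, not a deletion.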

Gmail's Priority Inbox attempts to order your email based on your rules and habits. In my experience, it's not quite there yet, but by hovering over the Priority icons, you can at least see why the feature sorted your email as it did, and correct it for future cases.
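That hover-for-an-explanation affordance amounts to returning a reason alongside every decision. A minimal sketch of the idea (the rules, field names, and `prioritize` function here are invented for illustration; Gmail's actual model is not public):

```python
def prioritize(email, important_senders, keywords=("urgent", "deadline")):
    """Classify an email and return (is_priority, reason).

    The reason string is the whole point: instead of a bare yes/no,
    the user can see which rule fired and correct it when it's wrong.
    """
    sender = email["from"]
    if sender in important_senders:
        return True, f"you often reply to {sender}"
    for kw in keywords:
        if kw in email["subject"].lower():
            return True, f"subject mentions '{kw}'"
    return False, "no rule matched"

msg = {"from": "boss@example.com", "subject": "Q3 deadline moved up"}
flag, why = prioritize(msg, important_senders={"boss@example.com"})
```

Surfacing `why` in the interface costs one string per decision, and it turns an opaque sort into something the user can audit and retrain.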

If you're not going to share the secret sauce of how decisions are made, you should at least let users route around those decisions when they're poorly made. Admittedly, only a small group of users care about this sort of thing. And maybe the apps we build will get smarter and smarter and smarter, and exposing the results the machine guesses are wrong will come to be seen as an in-between technology, made obsolete as the machine's guesses approach perfection. But I think it's more likely that we'll still want a grayer, more complicated version of what the machine tells us is possible, even as the machine's computational abilities exceed our continuously evolving definition of magic. Let us see.