We’re at a unique time in history in terms of technologists having so much direct power. There’s just something about the picture of an engineer in Silicon Valley pushing a feature live at the end of a week, and then heading out for some beer, while people halfway around the world wake up and start using the feature and trusting their lives to it. It gives you pause.

So true. I’ve been thinking about this issue a lot recently, especially as technologists in the Valley enjoy exceptionally good financial and career health, while the rest of the country, and sometimes even the other half of our own cities, suffers through a long and deep recession.

Here’s one story that blew my mind a few months ago. Facebook (and I don’t mean to pick on Facebook, they just happen to have a lot of data) introduced a feature that shows you photos from your past you haven’t seen in a while. Except that turned out to include a lot of photos of ex-boyfriends and ex-girlfriends, and people complained. But here’s the thing: Facebook photos often contain tags of the people present in the photo. And you’ve told Facebook about your relationships over time (and even if you didn’t, they can likely guess from your joint activity on the network). So what did Facebook do? They computed the graph of ex-relationships and ensured that you are no longer proactively shown photos of your exes. They did this in a matter of days. Think about that one again: in a matter of days, they figured out all the romantic relationships that ever occurred among their 600M+ users. The power of that knowledge is staggering, and if what I hear about Facebook is correct, that power is in just about every Facebook engineer’s hands.
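To make the scale of that concrete, here is a minimal sketch of the kind of computation involved. All names and data structures are hypothetical illustrations of the idea, not Facebook’s actual implementation:

```python
# Hypothetical sketch: build an "ex" graph from relationship history,
# then filter resurfaced photos so none of them tags an ex.

def build_ex_graph(relationship_history):
    """Map each user to the set of people they were once in a
    relationship with but no longer are.

    relationship_history: iterable of (user_a, user_b, still_together).
    """
    exes = {}
    for a, b, still_together in relationship_history:
        if not still_together:
            exes.setdefault(a, set()).add(b)
            exes.setdefault(b, set()).add(a)
    return exes

def photos_to_resurface(user, candidate_photos, exes):
    """Keep only photos that tag none of the user's exes."""
    blocked = exes.get(user, set())
    return [p for p in candidate_photos
            if not (set(p["tags"]) & blocked)]

# Toy data: alice once dated bob, is currently with carol.
history = [("alice", "bob", False), ("alice", "carol", True)]
photos = [
    {"id": 1, "tags": ["bob", "dave"]},   # tags an ex -> filtered out
    {"id": 2, "tags": ["carol"]},         # current partner -> kept
]
exes = build_ex_graph(history)
print([p["id"] for p in photos_to_resurface("alice", photos, exes)])  # [2]
```

The point is not the code, which is trivial, but the input: run something like this over 600M+ users’ relationship histories and you hold the output.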

Here’s another story. I used to lecture MIT undergraduates about web security. My approach was basically: (a) hack a few of the student project web sites, then (b) hack a few public web sites to make the students understand how widespread the problems are. In late 2003, I showed students how to buy movie tickets for free (the price of the ticket was held in a hidden variable in a web form… duh). I ended my lecture with “but just because you can do this, doesn’t mean you should. Please don’t do this.” Over the years, I’ve received a few emails from former students to the tune of “hey Ben, you gave an awesome lecture, I still remember how a bunch of us went out to see Matrix 3 for free that weekend!”
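For readers who haven’t seen this class of bug, here is a toy reconstruction of it. The names and prices are made up, and this is not the actual ticketing site’s code; it just shows why trusting a client-supplied price is fatal:

```python
# Toy illustration of the hidden-price-field bug: the server trusts a
# price submitted by the client instead of looking it up itself.

def checkout_vulnerable(form):
    # BUG: the price comes straight from client-controlled form data,
    # so anyone who edits the hidden field sets their own price.
    return float(form["price"])

def checkout_fixed(form, price_table):
    # FIX: the client only names the ticket; the server sets the price.
    return price_table[form["ticket_id"]]

PRICES = {"matrix-3": 9.50}

honest = {"ticket_id": "matrix-3", "price": "9.50"}
tampered = {"ticket_id": "matrix-3", "price": "0.00"}  # edited hidden field

print(checkout_vulnerable(tampered))     # 0.0 -- a free movie
print(checkout_fixed(tampered, PRICES))  # 9.5 -- tampering ignored
```

The fix is a one-liner, which is exactly what makes the lecture work: the gap between “technically trivial” and “obviously wrong to exploit” is the whole ethical question.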

I shudder to think about what happens when you put those two stories together. While the earliest hackers may have had a particularly well-developed ethical sense, I suspect our profession’s average ethics don’t nearly measure up to the incredible power we have gained so precipitously over the last 15 years.

And then there’s the additional point Arvind makes, which I’ve observed directly too:

I often hear a willful disdain for moral issues. Anything that’s technically feasible is seen as fair game and those who raise objections are seen as incompetent outsiders trying to rain on the parade of techno-utopia.

Yes! There’s this continued and surprisingly widespread delusion that technology is somehow neutral, that moral decisions are for other people to make. But that’s just not true. Lessig taught me (and a generation of other technologists) that Code is Law, or as I prefer to think about it, that Code defines the Laws of Physics on the Internet. Laws of Physics are only free of moral value if they are truly natural. When they are artificial, they become deeply intertwined with morals, because the technologists choose which artificial worlds to create, which defaults to set, which way gravity pulls you. Too often, artificial gravity tends to pull users in the direction that makes the providing company the most money.

A parting thought. In 2008, the world turned against bankers, because many profited by exploiting their expertise in a rapidly accelerating field (financial instruments) over others’ ignorance of even basic concepts (adjustable-rate mortgages). How long before we software engineers find our profession in a similar position? How long will we shield ourselves from the responsibility we have, as experts in the field much like experts in any other field, to guide others to make the best decision for them?


7 thoughts on “with great power…”

We are living in interesting times, where ethics in this new information-driven world are coming into play again. I guess it’s the same thing the biotech industry is facing (not that I know much about that particular sector), only in a more “evident” way. Great post.

Great post. Haven’t seen your blog before, but I will be following. The comparison of technological instruments with financial ones really made me think that I need to know more about the technology I use every day…

Your Facebook example is great, and I wish it were more widely known. It is also a great illustration of a related point that I’ve been thinking about: Facebook and Google have several-fold more behavioral data than the entirety of what is available in the public domain to social science researchers. While these companies occasionally publish some papers based on this data [1], it is a tiny, tiny fraction of what could potentially be done with it. (We’re talking about publishing analyses and aggregate statistics, so there are no real privacy issues here.) I think there needs to be some sort of external pressure on them to do more. If they don’t want to hire in-house scientists to do it, fine, collaborate with academic researchers.

As to your other point, the possibility of public opinion turning against tech as a whole, the way it did with banking, is indeed scary to contemplate. Facebook’s “summer of discontent” last year was an indication that this is not outside the realm of possibility.

I particularly enjoy what you said about the laws of physics with respect to morality and neutrality. It’s next to impossible to be neutral or objective. Even if someone is sitting on Walden Pond, their level of objectivity is suspect.

In my own thinking about data privacy and the future of sharing, my realization is that technologists are successful in spite of themselves. That is, they build something “cool,” push it out there, and let the market decide if it is worth continuing.

What they don’t do is think long and hard about how to communicate to non-technologists what the software is, what it does, and how people should think about using it.

Imagine if Facebook (again, picking on the biggest name around) thought as hard about communicating the facial-recognition feature it released in the EU last week as it did about building it?

The future of data, privacy, sharing, security etc… CAN be incredible.

BUT… the industry will need to figure out how to sell and communicate the benefits if it is to keep growing.

It is what traditional CPG companies (P&G, for example) do extremely well. They don’t just create a new toothpaste or household cleaner and “put it out there.” Because of the economics of their category (they manufacture things and have a supply chain to worry about), they have to think harder about the communication around products than about the R&D behind them.

Technologists (particularly software developers) don’t have to “think” beyond the code and pushing it to the customer. If it doesn’t work, so what, we don’t have 100,000 copies sitting in the warehouse waiting to be disposed of.