Arab springs and AI winters

Remember the Arab Spring? "Revolution 2.0"? Remember how we imagined, full of triumphal optimism, that social media would become the web that knit the oppressed masses together, would empower them to join forces and overthrow their oppressors and stride shoulder-to-shoulder together into a better world?

Yeah, those were the days. But now -- "disillusioned" hardly begins to describe it. I write to you from Tunisia, the Arab Spring's poster child, now a secular democracy; but even here, in this lovely country full of hospitable people, whose downtown hipsters and students thronging the Carthage Film Festival could be teleported to Brooklyn or the Mission and not look one whit out of place, today's headlines inform me that the nationwide state of emergency has been extended yet again.

Elsewhere, of course, the revolutions ended utterly disastrously. Egypt ultimately just replaced an iron-fisted dictatorship with iron-fisted military rule. Syria and Libya descended into blood-drenched civil war. Since then, Brexit and recent Western elections have made a caustic mockery of the notion that social media might bring nations together, rather than split them apart. Where is our optimism now?

...Machine learning, is the cautious answer. It lets us address whole new categories of problems that were previously inaccessible. It will drive cars for us, design and construct buildings for us, identify incipient food shortages in the developing world before they take hold, pick meaningful patterns out of clouds of data and find solutions in them.

Yes, there's concern that machine learning will be biased when (not if) it learns from biased data. But, contrary to the increasingly popular image of the tech industry as callow blinkered hotheads, I doubt there's a single serious AI researcher on the planet who isn't already very aware of this problem, and many, if not most, are already pondering ways to make their algorithms equitable, transparent, and accountable.

What concerns me more is that machine learning algorithms will serve business models first and civil society second. (Well, maybe civil society more like fifth, or seventh, or ninth.) The best minds of our generation will no longer be working on ways to make users click on ads; instead, they'll be evolving AI algorithms which optimize for how many users click on ads.

Does sowing discord lead to higher engagement metrics? Then they'll sow away, without worrying about the reaping. That's not what they're built for, after all. And, even more so than noticing and adjusting for bias, it's extremely hard for researchers to even identify, much less measure, the long-term emergent social implications of whatever technical systems they build.

Regulation won't be the answer, unfortunately, because in the same way that generals are always preparing to fight the last war, regulators are always aiming their guns at last year's problem, which will be generations obsolete by the time their prescriptions finally come into force.

Maybe instead of an ever-crescendoing parabolic rise in machine learning we should hope for a kind of punctuated equilibrium, periods of hypergrowth alternating with AI winters, giving us time to reflect and adjust. It seems odd to be musing about the merits of an AI winter when we're just at the beginnings of an AI spring; but when I consider the so-called Arab spring, I cannot help but wonder.