Speaking of algorithms

Artificial intelligence raises thorny questions that will be keeping human brains very busy.

The subject is surely as important as global warming, but few people are talking about it. The reason may be the fear of sounding paranoid, or of looking like nutcases who’ve seen too many Hollywood movies. Yet the risk of a world-scale cataclysm caused by algorithms is real. Since the beginning of the decade the intelligence of machines has been accelerating, and human institutions are becoming ever more dependent on them.

One might worry, for example, about the devastating consequences of an attack on the programs that orchestrate world finance, or on those that control air traffic. But a fear of a completely different scale has now joined the debate: what about a total takeover of humanity by machines? The progress of artificial intelligence has revived old science-fiction scenarios in which machines decide to attack humans in order to better accomplish the missions they’ve been assigned.

Scientists like Stephen Hawking and entrepreneurs like Elon Musk and Jaan Tallinn, co-founder of Skype, have taken this fear seriously enough to publish an open letter and join the Future of Life Institute, which is trying to promote public discussion. Yet away from the media glare, researchers who work on artificial intelligence are more sanguine. They consider an extreme scenario unlikely because it implies the total autonomy of machines, which runs contrary to the logic behind algorithms.

Researchers know that their goal is not merely to make artificial intelligence more powerful but to make it useful to humans, with a strong ethical dimension built in. Seen in this light, the prospects for AI are the stuff of dreams, not least in medicine, where it promises spectacular progress in diagnosis and treatment. It’s obviously important to ask questions about long-term risks, even existential ones, but the initial challenges are much more practical.

Before threatening humans, if it ever does, artificial intelligence will have destroyed lots of jobs. It will have allowed a radical makeover of industry, transport, medicine and countless aspects of daily life. The most urgent questions are these: How can we use this additional intelligence to derive all its benefits? How can we prevent abuses and undesirable effects? Legal scholars and ethicists, but also teachers, designers and politicians must explore these questions without delay. Responses must circulate in our biological brains as quickly and efficiently as the data in digital circuits.

All democracies are concerned, because artificial intelligence confers exceptional power on those who master its tools. The vote on Brexit and the election of Donald Trump have triggered a debate on the role of social media, and specifically on the algorithms that select information according to the user’s tastes. Did this create an unintended amplification of certain views? One cannot rule out that the way the algorithms were designed steered the choices of some voters. The simple fact that this possibility exists should be enough to get the attention of democracies. Let’s talk about algorithms.