Why you should worry about intelligent machines

Artificial intelligence itself isn't a problem – the threat lies in what humans might do with it

THEY started off by wounding our pride. Will AI end up taking our jobs – or even our lives?

Twenty years ago, IBM’s Deep Blue beat Garry Kasparov at chess – then seen as the gold standard of human intellect. Now a new wave of AI seems poised to take over a wide range of human tasks, potentially putting huge numbers of people out of work. And an unlikely alliance of philosophers, technologists and movie-makers has stoked fears that the next generation of AI might snuff out humanity.

A reality check is needed. We are nowhere near the creation of a machine that replicates the full suite of a human’s intellectual capabilities. And the threat of extinction by superintelligences, if and when they arrive, is only one of a number of esoteric possibilities (see “Forget killer robots: This is the future of supersmart machines”).

So where does paranoia about AI come from? In part, it’s the challenge smart machines present to long-standing ideas about human exceptionalism, which survived the Copernican and Darwinian revolutions but may be fatally undermined by intelligent – or even conscious – machines. It’s also a type of techno-pessimism: we can foresee the potential downsides, but the upsides aren’t yet clear.

That doesn’t mean the boom in AI gives us nothing to worry about. As ever, it isn’t the technology itself that should concern us, but how humans design and use tools based on it.

“AI should be used to upskill workers rather than paring their jobs back to tedious piecework”

On the issue of human exceptionalism, there’s not a lot that can be done. Even cherished qualities like creativity and invention may well be outsourced to AI in the coming years. But we shouldn’t feel threatened by this: we should feel exhilarated at the new things we can do with their help, just as the digital tools we use today have enhanced and diversified the ways in which we communicate and create.

When it comes to jobs, the AI threat is probably overstated. On closer scrutiny, many seemingly straightforward jobs include cognitively taxing elements that AI cannot master – yet (see “Find your meaning at work: 6 things a salary can’t buy”). The “gig economy”, pioneered by firms such as Uber, adds flexibility to the labour force and convenience to customers via algorithmic management – but at a cost to workers’ rights and conditions. AI could accelerate that trend. That matters: our work is integral to our identities, and preserving the dignity of labour should be central to our society (see “Don’t give up the day job: Why going to work is good for you”). We should strive to ensure that AI is used to upskill workers rather than paring their jobs down to tedious piecework: dehumanising workers is a poor use of the technology.

Tackling that is a social and political issue, rather than a technological one. AI may force changes on our economic system – witness the discussions over the introduction of a universal basic income for all (see “What happens if we pay everyone just to live?”). But change should be people-centric, not led by AI-driven efficiency that enriches a few to the detriment of the many.

As for super-smart AI wiping us out, relax. For the moment we should worry more about semi-smart machines given too much power. Autonomous weapons may be in legal limbo, but that hasn’t stopped their development. Drone warfare provides a taste of what’s to come: how can machines that can’t tell civilians from combatants be trusted to do the “right” thing without human control?

In this, as elsewhere, the answers lie with us. AI can’t strip us of our jobs, our dignity or our human rights. Only other humans can do that.

This article appeared in print under the headline “Who’s afraid of AI?”