Time to face the facts: Technological unemployment is a complete myth

Economic history suggests that when depressions, recessions, or periods of weak growth emerge during times of profound technological change, people always point to technology as the underlying cause of unemployment.

But those fears then recede with eventual economic recovery.

This point of view harks back to the technological unemployment fears expressed by David Ricardo after the post-Napoleonic depression in the early nineteenth century, and to the popularisation of the term “technological unemployment” by John Maynard Keynes in the 1930s.

The question is whether the same pattern can be observed now, as the world economy moves towards normalisation.

Economic history teaches that the long-term effect of technology on employment is positive.

But in recent years, the conventional wisdom has been challenged by those arguing that “this time it will be different” – that we really are about to face mass technological unemployment.

If true, the consequences of such a shift could be dramatic, challenging the existing economic and social order. This is what might be termed the Neo-Luddite narrative, after Ned Ludd, an Englishman who, it is popularly claimed, encouraged workers to smash textile machines during the industrial revolution.

Over recent years, clear dividing lines have opened up between those who foresee mass unemployment as a consequence of technological change, and those who see the opposite – a world of limited technological change, meaning there is no reason to be wary of technological unemployment.

However, from an historical perspective, this is an artificial divide.

Economic history teaches the possibility of an optimistic outcome in both technology and employment.

All too often, optimism about the future of technology and optimism about the future of employment are treated as mutually exclusive, despite the fact that economic history suggests otherwise.

If we are to break with the lessons from economic history, we need very good reasons to do so.

Economists have railed against the so-called “lump of labour fallacy”, which is the idea that there is a limited amount of work to be done, and that the more that is done by machines, the less that can be done by people.

Commenting on the lessons from economic history, Bank of England chief economist Andrew Haldane writes: “perhaps unusually, the historical data tell a remarkably consistent story.” Haldane cites the stability of the UK employment rate since 1750, despite wave after wave of new technology.

A study by Deloitte in the UK, going back to 1871, concluded that technology has been a “great jobs creating machine”.

And one recent study of the impact of technology on employment in Silicon Valley concluded that for each job that disappeared due to “displacement”, four “replacement” jobs were created.

This should not surprise us. New technology is not created ex nihilo – from nothing. It has to be imagined, designed, built, and marketed, and all of this creates jobs – high-value-added ones to boot.

The replacement effect doesn’t stop there either. If the new technology user becomes more successful as a result of investing in new kit, then employment could increase in that company too.

New entrants into the market may also appear, and positive employment effects will be amplified through the supply chain.

In the US – where one would expect any negative employment effects to be most apparent – there is no evidence of a shift in the rate of job destruction relative to job creation over recent decades.

Where manufacturing job losses have occurred, the primary explanation is outsourcing overseas, not automation.

But despite all this, the nagging doubts remain that this time it might be different, and that the robots may be coming for our jobs.