Site news

Sophisticated machines are fast outpacing jobs. What does this mean for the future of work? And if there are no jobs, what will we do with our time?

There’s no question that technology is drastically changing the way we work, but what will the job market look like by 2050? Will 40% of roles have been lost to automation – as predicted by Oxford University economists Dr Carl Frey and Dr Michael Osborne – or will there still be jobs, even if the nature of work is radically different from today? To address these questions, the Guardian hosted a roundtable discussion, in association with professional services firm Deloitte, which brought together academics, authors and IT business experts.

The workforce is likely to shift towards part-time, freelance work

Julia Lindsay, iOpener Institute

The future of work will soon become “the survival of the most adaptable”, says Paul Mason, emerging technologies director for Innovate UK. As new technologies fundamentally change the way we work, the jobs that remain will be multifaceted and changeable.

“Workers of the future will need to be highly adaptable and juggle three or more different roles at a time,” says Anand Chopra-McGowan, head of enterprise new markets for General Assembly. So ongoing education will play a key role in helping people develop new skills.

It may be the case that people need to consistently retrain to keep up-to-date with the latest technological advances, as jobs are increasingly automated and made redundant. The idea of a “job for life” will be well and truly passé. “There will be constant new areas of work people will need to stay on top of. In 2050 people will continually need to update their skills for jobs of the moment, but I have an optimistic view that there will continue to be employment if these skills are honed,” adds Chopra-McGowan.

However, Mark Spelman, co-head of the future of the internet initiative and member of the executive committee at the World Economic Forum, says there will be winners and losers in this new world. “The idea of continuous training is optimistic – I imagine there will be one-day training blitzes where people learn new skills quickly, and then are employed for a month while they’re needed.”

This means the workforce is more likely to shift towards more part-time, freelance-based work, says Julia Lindsay, chief executive of iOpener Institute. “Employers won’t think in terms of employees – they’ll think in terms of specialisms. Who do I need? And for how long? Future work may also be focused around making complex decisions – using creativity, leadership and high degrees of self management.”

For businesses, this means keeping on top of the latest technological advances. “It comes back to how we use technology to inform young people about jobs. Data plays an important role – how can we engage children at school in technology, and give them more support early on in their career? It’s important that there is a collective drive to foster a better digital environment,” says Mervin Chew, digital attraction manager for Deloitte.

We’re essentially heading towards a two-tier society

Dave Coplin, Microsoft UK

The problem with needing highly specialised roles is that it will isolate parts of the population who are unable to continuously adapt and retrain. “We can’t all be knowledge workers,” says Dan Collier, chief executive of Elevate. “So there will be a lot of unemployment – and perhaps no impetus to help these people. There will end up being a division between the few jobs that need humans, and those that can be automated.”

“We’re essentially heading towards a two-tier society,” agrees Dave Coplin, chief envisioning officer for Microsoft UK. This feeling was echoed by all of our panel, who saw a potential divide between high-level leadership roles and the less-specialised jobs that can be automated.

“This is either going to be very good or very bad – and either way there’s not going to be much in the way of work,” says Richard Newton, author of The End of Nice: How to be human in a world run by robots. The defining factor in whether we end up with a two-tier society of mass unemployment, or a society of leisure, will be what society places value on. “The social contract of work has been ripped up, and people will be left with nothing for as long as businesses and corporations value productivity,” adds Newton.

The cheapest and most productive thing to do will be to automate the workforce, so if productivity is what shareholders place value on, there will be mass unemployment. “But if you use technology to reduce accidents, produce food for people and save time – that provides a great societal value,” says Spelman. It doesn’t fit with today’s idea of maximising profits, but these are important things we will need from society. “So in future we need to put societal and shareholder value together,” adds Spelman.

The idea of productivity was forged in the industrial revolution, so it’s no surprise that this may soon become an outdated way of viewing work. “There’s no shortage of work in society – there’s loads of jobs like caring, looking after children and volunteer work, for which we do not assign a value,” says Magdalena Bak-Maier, founder and managing director of Make Time Count.

However, we need to move away from this idea of working for a pay packet. “There also needs to be a shift away from the stereotype of men working and women staying at home,” adds Clare Ludlow, director of Timewise Foundation.

Coplin agrees that even if we can automate all the services we need (and thus eliminate most jobs), we will continue to have huge societal problems that need attention. “We are on a burning platform – a key issue of the future will be: how will we feed everyone?” So there is an idea that, as we continue to evolve and find new boundaries, work will centre on the next human step. “First we need to tackle food and healthcare and transport issues, then we need to make the way we treat the earth more sustainable – and finally we will even look at reaching other planets,” Coplin says.

It may seem that some of these conversations are premature, as we are decades away from creating a working artificial intelligence. “There’s a huge potential for robotics, but you must remember that making a robot is hard,” says Dr Sabine Hauert, lecturer in robotics at the University of Bristol. “For example, if you wanted to create a robot and ask it to fetch you some water, that is amazingly complex. First, the robot needs to understand the home environment, then see the glass, and then locate you. These challenges are extremely hard to solve one by one, and at the moment they’re almost impossible to solve all together.”

However, Hauert warns that we will see robots and algorithms programmed to do highly specific tasks. “Robots can be programmed to do specific tasks, rather than doing everything.”

One thing we need to remember is that the defining factor for what computers will be designed and created to do, is what humans want. “The change will come from what we want to happen. People make the planet work, so new advances will respond to how people want technology to change,” explains Mason.

But we have to be wary of creating things superior to us, warns Mark Eltringham, workplace expert and consultant for Insight Publishing. “The descent of man under machines is something to be wary and fearful of – it has the potential to be damaging in ways we haven’t thought of before.”

In the past we have used technology to replicate old ways of working – simply making old practices quicker and cheaper – but now we are about to enter a third computational wave, in which machines can learn and adapt. “This will have a huge economic impact – businesses will think: should I take the saving that automating the workforce will make, and run? Or should I take the saving and then work with it to create new jobs?” says Coplin.

“I used to think that creative skills would provide a ‘safe space’ as a refuge – but as technology continues to develop, I’m not so sure,” adds Newton. Indeed there is evidence that computers will eventually be able to replicate creative tasks, and even learn to create music, art and write novels.

But Newton is optimistic that this won’t devalue human accomplishments. “I think increasingly we will start to value the journey a human has been on, their personal struggle for achieving something great, even if a robot can do it better. For example, with a musician, we will value how long it took him to learn to produce such amazing music. It’s that human journey and struggle which will become important.”

Though the future of work is unclear, the panel agreed that one thing is for certain: “The nature of work is going to change – the jobs of tomorrow won’t be the same as jobs of today.”

Random glossary entry

Operating System

An operating system (OS) is a program that manages a computer’s resources, especially the allocation of those resources among other programs. Typical resources include the central processing unit (CPU), computer memory, file storage, input/output (I/O) devices, and network connections. Management tasks include scheduling resource use to avoid conflicts and interference between programs. Unlike most programs, which complete a task and terminate, an operating system runs indefinitely and terminates only when the computer is turned off.

Modern multiprocessing operating systems allow many processes to be active, where each process is a “thread” of computation being used to execute a program. One form of multiprocessing is called time-sharing, which lets many users share computer access by rapidly switching between them. Time-sharing must guard against interference between users’ programs, and most systems use virtual memory, in which the memory, or “address space,” used by a program may reside in secondary memory (such as on a magnetic hard disk drive) when not in immediate use, to be swapped back to occupy the faster main computer memory on demand. This virtual memory both increases the address space available to a program and helps to prevent programs from interfering with each other, but it requires careful control by the operating system and a set of allocation tables to keep track of memory use.

Perhaps the most delicate and critical task for a modern operating system is allocation of the CPU; each process is allowed to use the CPU for a limited time, which may be a fraction of a second, and then must give up control and become suspended until its next turn. Switching between processes must itself use the CPU while protecting all data of the processes.
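The CPU allocation scheme described above – each process runs for a bounded time slice, then is suspended and re-queued – can be sketched as a toy round-robin simulation. This is an illustrative model only (the process names, burst times and quantum are invented); a real scheduler is driven by hardware timer interrupts, not a Python loop.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling: each process runs for at most
    `quantum` ticks, then is suspended and re-queued until it finishes.
    `bursts` maps a process name to its total CPU demand in ticks."""
    ready = deque(bursts.items())      # FIFO ready queue
    timeline = []                      # order in which time slices are granted
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        timeline.append((name, slice_))
        if remaining > slice_:
            # Not finished: give up the CPU and rejoin the back of the queue
            ready.append((name, remaining - slice_))
    return timeline

# Three processes with different CPU demands, sharing the CPU in 2-tick slices
print(round_robin({"A": 5, "B": 2, "C": 3}, 2))
# → [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 1), ('A', 1)]
```

Notice that no process monopolises the CPU: the long job "A" is interleaved with the shorter ones, which is exactly the fairness property time-sharing needs.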

The first digital computers had no operating systems. They ran one program at a time, which had command of all system resources, and a human operator would provide any special resources needed. The first operating systems were developed in the mid-1950s. These were small “supervisor programs” that provided basic I/O operations (such as controlling punch card readers and printers) and kept accounts of CPU usage for billing. Supervisor programs also provided multiprogramming capabilities to enable several programs to run at once. This was particularly important so that these early multimillion-dollar machines would not be idle during slow I/O operations.

Computers acquired more powerful operating systems in the 1960s with the emergence of time-sharing, which required a system to manage multiple users sharing CPU time and terminals. Two early time-sharing systems were CTSS (Compatible Time Sharing System), developed at the Massachusetts Institute of Technology, and the Dartmouth College Basic System, developed at Dartmouth College. Other multiprogrammed systems included Atlas, at the University of Manchester, England, and IBM’s OS/360, probably the most complex software package of the 1960s. After 1972 the Multics system for General Electric Co.’s GE 645 computer (and later for Honeywell Inc.’s computers) became the most sophisticated system, with most of the multiprogramming and time-sharing capabilities that later became standard.

The minicomputers of the 1970s had limited memory and required smaller operating systems. The most important operating system of that period was UNIX, developed by AT&T for large minicomputers as a simpler alternative to Multics. It became widely used in the 1980s, in part because it was free to universities and in part because it was designed with a set of tools that were powerful in the hands of skilled programmers. More recently, Linux, an open-source version of UNIX developed in part by a group led by Finnish computer science student Linus Torvalds and in part by a group led by American computer programmer Richard Stallman, has become popular on personal computers as well as on larger “mainframe” computers.

In addition to such general-purpose systems, special-purpose operating systems run on small computers that control assembly lines, aircraft, and even home appliances. They are real-time systems, designed to provide rapid response to sensors and to use their inputs to control machinery.
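The sensor-in, actuator-out cycle of such real-time systems can be sketched as a single control step – here a bang-bang thermostat decision, with invented temperatures and setpoint. In a real-time OS this step would run on every timer tick, within a hard deadline.

```python
def thermostat_step(temperature, setpoint, hysteresis=0.5):
    """One control cycle: decide whether the heater should be on.
    Returns True (heat), False (stop heating), or None (inside the
    dead band: keep the current state)."""
    if temperature < setpoint - hysteresis:
        return True           # too cold: switch the heater on
    if temperature > setpoint + hysteresis:
        return False          # too warm: switch it off
    return None               # within tolerance: no change

# Simulated sensor readings processed one cycle at a time
readings = [18.0, 19.7, 20.2, 20.8]
print([thermostat_step(t, setpoint=20.0) for t in readings])
# → [True, None, None, False]
```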

From the standpoint of a user or an application program, an operating system provides services. Some of these are simple user commands like “dir”—show the files on a disk—while others are low-level “system calls” that a graphics program might use to display an image. In either case the operating system provides appropriate access to its objects, the tables of disk locations in one case and the routines to transfer data to the screen in the other. Some of its routines, those that manage the CPU and memory, are generally accessible only to other portions of the operating system.
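The layering described above – a user-facing command like “dir” sitting on top of low-level system calls – can be illustrated in Python, whose os module wraps the operating system’s calls (on UNIX, os.scandir and stat correspond roughly to the readdir and stat system calls). The directory and file names here are created purely for the demonstration.

```python
import os
import tempfile

def dir_command(path="."):
    """A minimal 'dir': list (name, size) for the files in a directory.
    os.scandir and entry.stat() are thin wrappers over the OS's
    directory-read and file-metadata system calls."""
    return sorted((e.name, e.stat().st_size)
                  for e in os.scandir(path) if e.is_file())

# Build a throwaway directory with two small files, then list it
with tempfile.TemporaryDirectory() as d:
    for name, data in [("a.txt", b"hello"), ("b.txt", b"hi")]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(data)
    print(dir_command(d))
# → [('a.txt', 5), ('b.txt', 2)]
```

The point of the example is the division of labour: the function formats and sorts for the user, while the operating system owns the tables of disk locations and grants access to them through the system-call interface.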

Contemporary operating systems for personal computers commonly provide a graphical user interface (GUI). The GUI may be an intrinsic part of the system, as in Apple Inc.’s older Mac OS and Microsoft Corporation’s Windows OS; in others it is a set of programs that depend on an underlying system, as in the X Window System for UNIX and Apple’s Mac OS X.

Operating systems also provide network services and file-sharing capabilities—even the ability to share resources between systems of different types, such as Windows and UNIX. Such sharing has become feasible through the introduction of network protocols (communication rules) such as the Internet’s TCP/IP.
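The cross-system communication that TCP/IP makes possible can be sketched with a minimal loopback echo exchange. Both endpoints run in one process here purely for illustration; the same socket calls work unchanged between a Windows client and a UNIX server, which is exactly the protocol-level sharing described above.

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo its first message back."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def echo_roundtrip(message):
    """Send `message` over a loopback TCP connection to a one-shot
    echo server and return the reply."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # loopback, OS-assigned port
    server.listen(1)
    t = threading.Thread(target=echo_once, args=(server,))
    t.start()
    with socket.create_connection(server.getsockname()) as client:
        client.sendall(message)
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply

print(echo_roundtrip(b"hello over TCP/IP"))
# → b'hello over TCP/IP'
```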