Artificial intelligence will have a profound effect on the way people work, and it will almost certainly also affect the availability of jobs and the distribution of income. But a number of leading technologists and economists speaking earlier this month at a conference on AI and the Future of Work, presented by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and its Initiative on the Digital Economy, suggested that the changes may not be as rapid or as unprecedented as is popularly claimed, a very different message from much of what I hear at typical technology conferences.

MIT President Rafael Reif, who opened the conference, said that while it is clear a big change is occurring, how to respond to it remains unclear to most people. Reif said he's heard from CEOs who are laying off hundreds of people whose jobs have been made obsolete by automation, yet who insist at the same time that they have hundreds of jobs they can't fill because they can't find people with the right skill sets. If we want technological advances to benefit everyone, Reif said, we must thoughtfully reinvent the future of work.

The AI Revolution: Why Now? What It Means and How to Realize the Potential

In a panel on why these changes are happening now and what they might mean looking ahead, Erik Brynjolfsson, Director of MIT's Initiative on the Digital Economy, talked about "the second machine age" enabling us to augment not just our muscles but our brains, and said this is a milestone in human history.

Brynjolfsson added that such progress has been accompanied by "the great decoupling": labor productivity is at record levels, yet median income hasn't increased since the 1990s. This, he said, is not a function of technology itself, but of how we use technology.

Sinovation Ventures CEO Kai-Fu Lee, one of the leading investors in AI in China, was perhaps the most pessimistic about job destruction. He talked about four waves of technology, each of which has produced a different kind of company: internet data and giants like Google and Facebook; commercial data and applications such as medical image recognition and fraud detection; the "digitized real world" and devices like the Amazon Echo and the cameras in shopping centers and airports; and full automation, by which he means robotics and autonomous vehicles.

Lee said the first wave didn't have much of an impact on employment, but that the second and third may replace lots of white-collar workers, while the fourth will largely hit blue-collar workers. Thus, he said, he expects more disruption for white-collar workers first. As examples, he cited a number of Chinese companies, including Megvii's "Face++" facial recognition software, which he said could replace 911 if broadly deployed; Yibot, a chatbot that could replace customer service workers; and Yongqianbao, a smart-loan finance application that could replace loan officers. The AI revolution, however, generally destroys jobs without creating replacements, he said, so we must prepare for AI-induced job losses.

The solutions he suggested were eradicating poverty; re-inventing education to focus on "sustainable jobs," namely creative and social service jobs which are not replaceable by AI; creating more social and care-oriented jobs; and retiring our "industrial-age work ethic."

McKinsey Global Institute Chairman James Manyika said AI and automation offer huge benefits to business, the economy, and society, but said that their impact on work is more uncertain.

Relating findings from McKinsey's recent study on automation (which I covered here), he noted that, based on the tasks involved, only 5 percent of jobs are close to 100 percent automatable, but about 60 percent of occupations are roughly 30 percent automatable. As a result, some jobs will be lost, but many more will undergo major change. The questions, he said, are whether there will be enough jobs, and how the jobs that remain will change.

Thomson Reuters Labs CTO Mona Vernon talked about giving "superpowers" to lawyers and journalists by building software on top of massive knowledge graphs. She said that AI is changing "the architecture of the firm" by making it possible to answer questions that couldn't have been answered ten years ago. But, she noted, there is a big leap from "art of the possible" AI demonstrations to production-grade implementations.

Moderator John Markoff, a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford known also for his many years of reporting at The New York Times, wondered why, if the technology is so good, there are still so many jobs now. Brynjolfsson said that in the last forty years we've seen lots of jobs created, but not good jobs, and that median incomes haven't risen, so we "shouldn't be at all complacent." He said he doesn't believe in technological determinism, but instead thinks we need to make the right policy choices in areas such as education and entrepreneurship.

Augmentation vs Automation

Another panel focused on whether AI will replace jobs or augment them. MIT Economics Professor John Van Reenen acknowledged that people fear automation, and that this fear is rooted in the economic experience they've had over the past thirty or forty years.

Van Reenen said the history of the last 200-300 years is a positive one, in that the economy has been able to create new jobs. But, he said, "the question is the quality of jobs, rather than the quantity."

IBM Research Chief Operating Officer Sophie Vandebroek was a big believer in the augmentation argument. She talked about systems such as AI assisting security professionals by checking databases against known threats; said that AI helps financial services professionals by checking against regulations; and talked about how Xerox (where she used to work) developed a system for using machine learning to automate the scoring of tests. All of these things help people to better perform in the workplace, in her view.

Similarly, MIT Professor of Material Science and Engineering Krystyn Van Vliet said that the technology that lets computers look for tumors doesn't lead to fewer radiologists, but rather gives doctors more time to consult with each other and with patients. Still, she said, "people don't like to be told they need to be re-skilled."

Markoff asked if these kinds of developments will lead to the "de-skilling" of humans, and Ernst & Young Partner Dimitris Papageorgiou noted that airplanes still have two pilots even though most of a flight is conducted on autopilot. But, Papageorgiou said, AI is deepening the divide between lower-skilled and higher-skilled employees, and he pointed out that Estonia and Costa Rica have changed school curricula based on where they expect jobs to be in the future. Van Reenen noted that to date, technology has been biased in favor of skilled workers, which is reflected in the huge wage premium a college degree commands, even as the supply of college-educated workers has increased. But AI is different, he said, because it will also affect highly skilled jobs, such as radiology.

Strategies to Navigate the First Phase

Several presenters offered strategies to make AI work better, as well as thoughts on educating workers for the new era.

Allen Blue, Co-Founder and Vice President of Product Management at LinkedIn, talked about building a responsive system so that people can have access to lifelong learning. He cautioned that some jobs are ephemeral: right now, he said, the largest number of job openings is for medical coders, yet that job is highly likely to eventually be automated out of existence. Blue wondered how people will find the time and money to obtain education, and said employers and the government must get more involved.

Blue said there is a "need to rethink education all the way down to the kindergarten level," with a focus on areas like collaboration.

Sam Madden, a Professor at MIT CSAIL, and Faculty Co-Director of SystemsThatLearn, said he's worried about how teenagers spend their time, including how much more time they spend using computers and devices rather than interacting with their peers, and said he believes this may be having a "weird impact on social skills."

Jennifer Chayes, Technical Fellow & Managing Director, Microsoft Research New England, talked about how AI can improve health care, and as an example, pointed to applications for mobile devices that use reinforcement learning to motivate diabetics to exercise more. She is concerned about fairness in AI, and said that most systems, rather than optimizing for fairness, instead take biases in human-related data and magnify them. "We want to make sure AI is doing better than humans, not worse," she said.

Alex "Sandy" Pentland, Founding Director of the MIT Connection Science Research Initiative, said he isn't worried about jobs so much as about methods of producing value. He said we are moving away from routine tasks toward tasks requiring social skills and non-routine analysis, and talked about "The Human Strategy," the idea that networks in a company or in society are much like connections in deep learning. It would be interesting, he said, to bring reinforcement learning to the social domain and to networks of production, creating "kaizen all the way up" through the management levels as well as on the shop floor.

In a discussion, Pentland said there needs to be a lot more data sharing and data transparency. Currently, he said, there is an incredible concentration of data in a few hands, and he hopes to see some way of opening up access while still respecting privacy laws. AI is only as good as the data used to train it, Pentland added, so if you're concerned about fairness, you have to understand what data went into the system.

Is it Really AI, or Just Computational Statistics?

Another panel was slated to discuss "opportunities and challenges," but really ended up talking more about the limitations of today's AI systems.

Josh Tenenbaum, Professor, MIT CSAIL, said that while we have AI technologies, we don't have real AI. Instead, we have systems that do just one thing, based on pattern recognition. Real intelligence, he said, would model the world; explain and understand what it sees; imagine; learn; and build new models of the world. He said we're decades away from an AI that could accomplish this, and remarked that even three-month-old babies have more commonsense understanding of the world than any AI system.

Patrick Winston, a Professor at MIT CSAIL, quipped that "'Professor of AI' will be the last job standing," but was generally much more optimistic about the future of the workforce. Things really haven't changed much since 1985, he said, when the last AI revolution turned out not to replace people. Machine learning is just another term for "computational statistics," he said; when people claim that whoever owns AI will own the world, simply substitute "computational statistics" for "AI" and the claim sounds much less believable.

In a conversation that followed, Markoff referenced John McCarthy's project to build a thinking machine, and Winston was very skeptical. "We've always said that human-level technology is 20 years off…[and] eventually we'll be right," but probably not this time around, he said. Though what we have today is tremendously useful, it represents only a small part of human intelligence, he emphasized.

Vision: Industry 2020-2050

Similar perspectives were echoed in a discussion of what panelists anticipate for the years 2020-2050.

Rod Brooks, Founder and CTO of Rethink Robotics, noted that learning isn't general: learning how to navigate isn't the same as learning how to use chopsticks, which in turn isn't the same as learning languages. He noted that today's computers can identify pictures of people carrying umbrellas in the rain, but can't answer basic questions like "Can raccoons carry umbrellas?"

Tom Kochan, Co-Director and Professor, Work and Employment Research at MIT's Sloan School of Management, said there are four major elements of an "Integrated Technology and Work Strategy" to ensure technology works for society as a whole.

The first element, Kochan said, is to define the challenge, and determine the problem (or problems) we are trying to solve. Second, he thinks that instead of considering the technology first, and then the workforce, we should integrate the technology and work design process. As an example, he talked about how GM spent $50 billion on automation, but didn't listen to its workforce, and thus didn't get the results it had hoped for.

The third element, Kochan said, is training, and we should train before technology is deployed, as well as "make lifelong learning a reality for all." In the case of GM, autoworkers needed to understand the technology in order for it to be deployed properly, and instead faced the stress of learning how to use the technology when it was installed. Finally, Kochan said we need to compensate those who are most adversely affected. He said that although new jobs will be created, that doesn't matter to the individuals who lose their jobs, and we must deal fairly with those who are negatively impacted.

If we are mindful of these elements, Kochan said, we will create a more shared prosperity, but "if we leave it to technologists alone, we'll replicate winners and losers."

Andrew McAfee, Co-Director of the MIT Initiative on the Digital Economy, and Principal Research Scientist, MIT Sloan School of Management, tried to give answers to what he sees as the three most common questions about the economy.

First, he said, is the question "has our economy been hijacked?" McAfee pointed to the growing gap between the rich and the poor, as well as the rise of large, powerful companies and financiers. But he said what's going on is for the most part a structural change brought about by technology and globalization, rather than by companies playing unfairly.

Second, McAfee hears a lot of concern about "permanent tech monopolies," and though it is impossible to dismiss this concern with complete certainty, he said such permanent monopolies are "almost certainly not" something to worry about. He recalled concerns 20 years ago that IBM, Microsoft, and later AOL could become permanent tech monopolies, and similar comments 10 years ago about Nokia and RIM. In general, he said, "something unseats them."

Finally, McAfee asked, "Are there going to be jobs?" He answered in the affirmative, but said there is no guarantee there will be as many jobs in the future as there are today. Although many people say we always benefit from a combination of people and machines, that's not a rule: we have far fewer longshoremen today than we once had, and manufacturing employment peaked in 1979, so we really don't know what will happen over the next three decades.

In a panel discussion that followed, Markoff asked about the impact of Hollywood, and depictions of AI in cinema. Brooks noted that as a 13-year-old he saw 2001 and "fell in love with HAL." But, he said, Hollywood tends to portray the world as it is, and then add technology, whereas in the real world, society adapts along with technology.

McAfee said he is more worried about fear-mongering regarding AI, quoting Andrew Ng who said that "worrying about killer robots is like worrying about overpopulation on Mars." He said we are "spending way too much time on this sophomore dorm room BS topic."

Kochan said he is more interested in figuring out how we bring more people into the conversation on technology, as many technologies take way too long to diffuse. Instead, he said, we should bring users in early on. But Brooks countered, asking "how many people have to take a course on how to use a smartphone?"

Markoff asked about technology's role in the job debate, as well as inequality. McAfee said that Mark Zuckerberg's net worth is the "wrong thing to focus on." Instead, he said, we should be worried about the stagnation of the middle class. Kochan agreed that stagnation is a problem, and argued that the big thing driving inequality and stagnation is "the decline of institutions" like unions and the minimum wage.

In a separate talk, MIT CSAIL Director Daniela Rus said we should think of machines as tools, and said she believes that robots and AI can create more jobs, and better ones. But she pointed out that crunching large data sets does not translate into knowledge, and that making complex calculations does not produce autonomy. Rus also noted that action is harder than perception, that perception is harder than data crunching, and that getting to 99.99 percent accuracy is exponentially more difficult than reaching 90 percent.

Still, Rus was optimistic for the most part, and talked about how technology can give factory workers more control over what they produce, and how things such as wearables will help blind people better navigate the world. She closed her talk by quoting John F. Kennedy, who said in 1962 that "we believe that if men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work."

There was much more on the economics of AI and jobs on the second day (which I'll cover in another post).
