Technology Briefing is brought to you in association with Audio-Tech, publishers of critically acclaimed programs including Trends Magazine.

Subscribe to their monthly reports and learn about big ideas, new products, new management techniques, breakthrough concepts, and trailblazing technologies.

Transcript

As we discussed a few issues ago in our analysis of the trend Harnessing AI-Driven Growth, artificial intelligence will transform one industry after another over the coming decade, spurring economic growth in the United States and throughout the world. According to a study by the Accenture Institute for High Performance, AI technologies will increase labor productivity by as much as 40 percent, while doubling annual economic growth rates for most countries by 2035.

How is this possible? To understand AI's potential, we only have to look at how another technology (computing) increased productivity and unleashed economic growth. As Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb of the University of Toronto's Rotman School of Management explained recently in MIT Sloan Management Review, the semiconductor revolution reduced the cost of arithmetic.

Half a century ago, it would take hundreds of high-salaried engineers using slide rules and calculators several months to solve problems that a computer can solve in an instant today. In fact, everything that computers enable us to do today, from streaming music to designing spacecraft, is based on the ability of machines to process information and make calculations at incredible speed and minimal cost.

Similarly, artificial intelligence is turning another capability that has until now been expensive into one that will be cheap, and therefore abundant. That capability is prediction. Agrawal and his colleagues define prediction as "the ability to take information you have and generate information you previously didn't have."

The ability of machines to predict what will happen is the driving force behind autonomous vehicles, image recognition systems, and language translation services, as well as countless applications that haven't been imagined yet.

The aspect of AI that makes prediction possible is known as "machine learning." Machines are said to "learn" when programmers feed them millions of inputs so they can establish rules and recognize patterns. For example, after processing a multitude of images of basketballs, a computer will learn that basketballs are orange and round.

But to distinguish a basketball from other round orange objects, such as an orange or the sun, the machine learns to use context in order to improve its accuracy. For instance, it learns that a round orange object pictured near a basketball hoop is likely to be a basketball, while a round orange object shown in a bowl next to a banana is not.

In this way, the computer "predicts" whether a round orange object is a basketball, which is useful for image recognition systems. Similarly, security systems that use facial recognition technology can look at an image of a person's face at an airport and predict whether it is the face of a wanted criminal or a known terrorist, based on the images in the databases it has scanned.
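The idea that context improves prediction can be illustrated with a toy sketch. Real image recognition works on pixels with deep neural networks; here, purely for illustration, objects are described by invented hand-built features (color, shape, surrounding context), and a crude nearest-neighbor vote predicts the label:

```python
from collections import Counter

# Hypothetical labeled examples: (color, shape, context) -> label.
# The features and labels are invented for illustration only.
training = [
    (("orange", "round", "near_hoop"), "basketball"),
    (("orange", "round", "near_hoop"), "basketball"),
    (("orange", "round", "in_fruit_bowl"), "orange_fruit"),
    (("orange", "round", "in_sky"), "sun"),
]

def predict(features):
    """Vote among training examples that share the most features with the
    query -- a crude nearest-neighbor classifier."""
    def overlap(example):
        return sum(a == b for a, b in zip(example, features))
    best = max(overlap(f) for f, _ in training)
    votes = Counter(label for f, label in training if overlap(f) == best)
    return votes.most_common(1)[0][0]

# Color and shape alone are ambiguous; context breaks the tie.
print(predict(("orange", "round", "near_hoop")))      # basketball
print(predict(("orange", "round", "in_fruit_bowl")))  # orange_fruit
```

With identical color and shape, only the context feature distinguishes the two queries, which is exactly the point the example above makes.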

To see how AI lowers the cost of prediction, it is useful to think of a task that needs to be completed as a series of steps. Data leads to prediction. Prediction combined with judgment leads to action. Action leads to an outcome. An outcome generates feedback, which can then be used to improve prediction.

For example, a driver's experience helps him predict whether he should stop or accelerate at a yellow light. If he stops and then sees that a car that accelerated in the adjacent lane caused an accident, that outcome will shape his future decisions in the same situation.
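The chain of steps described above can be sketched in code. Everything here is a hypothetical stand-in: the functions, thresholds, and outcomes are invented to show the shape of the loop, not how a real driving system works.

```python
def predict(data):
    """Prediction: estimate the chance the light turns red before we clear
    the intersection (hypothetical rule for illustration)."""
    return 0.7 if data["light"] == "yellow" and data["distance_m"] > 20 else 0.1

def judge(p_red):
    """Judgment: weigh the predicted risk and choose an action."""
    return "stop" if p_red > 0.5 else "accelerate"

def act(action):
    """Action produces an outcome (simulated here)."""
    return "safe" if action == "stop" else "risky"

# Data -> prediction -> judgment -> action -> outcome. Feedback would then
# adjust the predictor's parameters; that step is omitted for brevity.
data = {"light": "yellow", "distance_m": 30}
outcome = act(judge(predict(data)))
print(outcome)  # safe
```

The point of separating `predict` from `judge` is the one the article makes: AI is rapidly driving down the cost of the first function, while the second remains, for now, largely human.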

It's not just people who learn in this way; increasingly, machines are being programmed to follow the same approach. To take the same example, driverless cars are being taught to respond to traffic situations as an experienced human driver would. In other words, in small ways, machines are learning to mimic human judgment, which is the ability to weigh the consequences of different actions, based on the predictions of their outcomes.

For most applications, judgment is still beyond AI's capabilities. But its ability to reduce the cost of prediction offers important benefits to businesses.

As Sam Ransbotham, associate professor of information systems at the Carroll School of Management at Boston College, noted in an MIT Sloan Management Review blog, "With AI, a thousand radiologists cost the same as one. With AI, each customer can have his or her own customer service representative. With AI, processes can work the same every time. Each of the AI radiologists performs exactly the same as the others, reaching the same diagnosis. Each AI customer service representative recommends the same resolution - and with this uniformity in place, organizations can incrementally refine and improve, ever increasing quality."

Ransbotham believes AI could be used to cut the time it currently takes for a mortgage approval - usually 30 to 45 days - to a few hours. Legal cases that take years to resolve could also be settled in a matter of days if AI systems absorbed all of the legal precedents, all of the evidence, and all of the documents filed by both parties.

Mortgage lending and law are just two examples; tax preparation is another. According to a report on Marketplace.org, H&R Block is using IBM's Watson computer system to try to win market share back from online rivals like TurboTax. The system has mastered the thousands of pages of the U.S. tax code and is constantly learning how to optimize returns. Human tax preparers still meet with clients, but as they input data into the system, Watson prompts them to focus on areas they might have overlooked.

According to Brian Uzzi of the Kellogg School of Management, "Watson would be like the second brain that would allow [tax preparers] to become familiar with and ask questions about tax code that they otherwise can't commit to memory, but [with it they] may be able to offer the same services at a lower cost and on a larger scale."

Based on this trend, we foresee the following developments:

First, by 2020, artificial intelligence will replace humans at making predictions in many applications; however, until at least 2025, humans will still be needed to make judgments based on those predictions.

One simple example cited by Agrawal is Google's Inbox by Gmail, which learns how people respond to emails by reading vast numbers of messages and replies. As a result, Inbox suggests several short responses to an incoming email, but the human user must rely on his or her own judgment to select the best option. The user saves time by choosing from a menu of options rather than typing a unique message, which means emails can be handled faster, leaving more time for productive work.
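The division of labor in suggested replies can be sketched with a toy model. This is not how Inbox actually works (it uses learned language models); the sketch below simply ranks past replies by how often they followed messages containing the same keyword, using invented data, and leaves the final choice to the user:

```python
from collections import Counter

# Hypothetical history of (keyword in message, reply that followed).
past = [
    ("meeting", "Sounds good."),
    ("meeting", "Sounds good."),
    ("meeting", "Can we reschedule?"),
    ("invoice", "Payment sent."),
]

def suggest(keyword, k=3):
    """Prediction: propose the k most frequent past replies for this topic.
    Judgment -- picking (or rejecting) a suggestion -- stays with the user."""
    counts = Counter(reply for kw, reply in past if kw == keyword)
    return [reply for reply, _ in counts.most_common(k)]

print(suggest("meeting"))  # ['Sounds good.', 'Can we reschedule?']
```

The machine narrows the choice set cheaply; the human supplies the judgment of which, if any, suggestion fits.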

Second, AI will revolutionize healthcare by providing more accurate diagnoses in less time.

As we reported in our July 2016 issue, IBM estimates that by 2020, the amount of medical data will double every 73 days, and Wired.com found that a physician would have to spend 160 hours a week just to keep up with newly published medical research. IBM's Watson supercomputer needs only 15 seconds to read 40 million documents, and it can search for patterns in the medical records of 1.5 million patients. That allows Watson to correctly diagnose lung cancer in 90 percent of cases, compared to 50 percent for trained physicians. But while AI will dramatically reduce the cost of prediction in healthcare, human doctors will still need to exercise judgment about how to inform patients and their families of a diagnosis and how to choose the right treatment plan for each individual patient.

Third, as the cost of prediction plummets, employers will focus on finding managers and employees who exercise good judgment.

AI will make prediction cheap and abundant, so the workplace skills in greatest demand will involve making ethical decisions, using creativity to spot outside-the-box opportunities, and displaying empathy to connect with customers, employees, or patients. SingularityHub.com writer and former Microsoft manager Michael Dix compares the complementary strengths of AI in prediction and humans in judgment to those of the lead characters in the television show Sherlock.

Sherlock Holmes approaches crimes with a mathematician's mind; he processes the data and applies deductive reasoning. But because he doesn't understand social and emotional cues, he relies on his friend John Watson to help him avoid reaching the wrong conclusion. Similarly, AI will allow every organization to leverage the brainpower of thousands of Sherlocks in addressing every task; but to keep the data from steering AI away from the right decision, companies will need people to fill the role of John Watson.

Fourth, sometime after 2025, AI will advance to the point where it will be able to make informed judgments in many contexts.

By learning which option a human selects in each particular set of circumstances, AI will eventually be able to make the same judgments independently. An early application will be the translation of speech and documents from one language to another in business settings, such as negotiations between companies in different countries.
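One simple way a machine could "learn judgment" from observed human choices is to record which option the human picks in each recurring context and replay the majority choice once enough evidence accumulates, deferring to the human until then. The class below is a hypothetical sketch of that idea; the context labels and threshold are invented for illustration:

```python
from collections import Counter, defaultdict

class JudgmentMimic:
    """Record human choices per context; act independently only after
    seeing enough consistent evidence (a crude behavioral-cloning sketch)."""

    def __init__(self, min_examples=3):
        self.history = defaultdict(Counter)
        self.min_examples = min_examples

    def record(self, context, human_choice):
        self.history[context][human_choice] += 1

    def decide(self, context):
        votes = self.history[context]
        if sum(votes.values()) >= self.min_examples:
            return votes.most_common(1)[0][0]  # enough evidence: decide alone
        return None  # too little evidence: defer to the human

mimic = JudgmentMimic()
for _ in range(3):
    mimic.record("formal_negotiation", "literal_translation")
print(mimic.decide("formal_negotiation"))  # literal_translation
print(mimic.decide("casual_chat"))         # None
```

The deferral rule captures the article's timeline in miniature: the system hands back contexts it has not yet learned, and takes over only where human choices have been observed repeatedly.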

Fifth, while many workers who now make their living in "prediction" occupations will need to be retrained, artificial intelligence will provide a net benefit to employers and employees throughout the economy.

Loan officers and actuaries are two professions that will be endangered as AI gradually takes over the job of assessing risk. Like the H&R Block tax preparers teamed with Watson, they will need to learn to delegate most of the analytical part of their jobs - such as determining the likelihood that a potential customer will remain solvent - to AI systems, while focusing on providing empathy and applying judgment to the AI's predictions. The time saved will let them serve more customers, generating more revenue for their employers and higher incomes for themselves.

Sixth, the ultimate challenge that AI will bring to executives and investors is not a "world without work" but a world with rapidly changing work.

Erik Brynjolfsson, the Director of the MIT Initiative on the Digital Economy, reminds us that, "It is important to remember that there's no shortage of important work that can be done only by humans. The response, then, isn't simply replacing income for workers being displaced by technology, but preparing them to do new jobs that are desperately needed in education, healthcare, infrastructure, environmental cleanup, entrepreneurship, innovation, scientific discovery, and many other areas. Instead of thinking of AI as a zero-sum game, or a way to automate existing jobs and services, forward-thinking executives recognize that technology adds value by expanding jobs and boosting productivity. When technology complements human workers, makes them more productive, and also cuts costs, businesses and employees are better off."
