CHAPTER II

. . . the self-driving car [is] “an apt metaphor for what we’re dealing with — technology is moving into the driver’s seat as a primary determinant of humanity’s destiny.”

Wendell Wallach

“Who is designing and training the machine, and are those things in accordance with our values?”

Fr. Eric Salobir

“At the moment, it looks like there will be an AI oligopoly that will be much more powerful than any oil company ever was.”

Wendell Wallach

The Great Disruptor

In an opening presentation, Wendell Wallach, an author and scholar at Yale University’s Interdisciplinary Center for Bioethics and senior advisor to The Hastings Center, spoke about emerging AI technologies as “a fourth industrial revolution.” Because of the broad scope of AI applications and their capacity to “amplify everything else,” Wallach believes that we may be “at a major inflection point in history.” He noted that AI has enormous capacities to shape and mold human behavior, and perhaps every segment of life.

This potential is exciting but also fraught with great risks, he said, because “the rapid pace of technological innovation and scientific discovery associated with AI is increasing the pressure on us to respond, often with little or no capacity for reflection.” Wallach regards the self-driving car as “an apt metaphor for what we’re dealing with — technology is moving into the driver’s seat as a primary determinant of humanity’s destiny. We are being challenged as to whether we can shape the trajectory of that future to some degree, with relatively weak tools.”

Wallach said that the rise of AI in its current forms raises serious questions about some fundamental principles of the Enlightenment, such as the sovereignty of human rationality and the role of individualism as the foundation of governance. He noted, for example, that the fields of behavioral economics and evolutionary psychology “are revealing that humans are not rational agents, and that we are prone to systematic errors and biases. Furthermore, our behavior can be highly determined and easily manipulated, which suggests that we as individuals have very weak will.” As AI facilitates new forms of “weaponized narratives” and propaganda via social media, said Wallach, “we are seeing major assaults on Enlightenment traditions.”

These general developments pose three major challenges, said Wallach: to evaluate AI innovations in terms of existing ethical criteria; to determine whether those criteria still apply; and to “nudge the trajectory of AI and indeed all emerging technologies toward a future with meaning and dignity for more humans.” Society will soon be asked to consider what tradeoffs it is willing to make for the benefits of AI, and whether and how to mitigate the risks and negative social consequences.

What is Driving AI Innovation Today?
Naveen Rao, Corporate Vice President and General Manager of the Artificial Intelligence Products Group at Intel Corporation, cited three primary, interrelated drivers of AI today: dataset sizes, Moore’s Law, and demand. Datasets have vastly grown in size over the past twenty years, said Rao, as Moore’s Law has enabled computers to process and store data more efficiently. He noted that computer hard drives in the late 1990s may have had 80 megabytes; now an inexpensive flash drive contains 32 or 64 gigabytes.

As for Moore’s Law, Rao shared a chart showing the relentless climb in computing efficiency as “computational substrates” have shifted from mechanical systems and relay switches to the vacuum tube, transistor and integrated circuit. The result has been dramatic improvements in the number of computations per second as measured in constant dollars.

Thanks to these trends, AI technologies can increasingly outperform human beings at tasks that were previously thought to be beyond the reach of machines. For example, in 2010, computers attempting to place the correct label among their top five guesses for images drawn from ImageNet, a database of some 1.2 million images, failed 30% of the time, as opposed to a roughly 5% failure rate for humans. But by 2012, computer error rates were down to 16%, a phenomenal improvement, thanks to “deep learning” techniques that enable a machine to learn from its errors. By 2015, computers were exceeding humans in tests to correctly identify images. “The time it takes for a neural network machine-learning algorithm to train on a dataset has fallen precipitously,” said Rao. This in turn is reducing the amount of time and expertise needed to use such systems. Twenty years ago, it took a major company like Yahoo! to serve 100 million users; today a startup with only a handful of people, such as Instagram, can do that.

AI is so explosive, said Rao, because “it really does apply everywhere. We’re going to see it used across the board in the next five to ten years. To me, this moment actually feels very similar to the Internet twenty years ago.” Based on Intel forecasts, Rao cited new examples of AI applications in nine industry verticals.

As AI technologies improve, they will increase the efficiencies of scale for companies and reduce the costs of services. More people will be able to use a technology and fewer people will be needed for a given task, he said. For example, said Rao, AI will help automate healthcare processes that are currently costly, such as interpretations of an MRI scan. “That skill could be codified into an algorithm,” he said. “Once that is done, the price will drop precipitously.” Similarly, some of the “thought drudgery” associated with reviewing legal briefs and cases could be automated, said Rao, freeing up people from expensive routine tasks.

The exponential leaps in computing capacities are posing new challenges of their own, however, such as how to make sense of huge pools of data. “Our biggest computational problem today is actually data overload,” said Rao. “We have too much data in the world that we actually don’t know what to do with. If we froze the world today and gave 100 megabytes of data to every man, woman, and child on the planet, it would take us thirty years to get through all that data. This problem is going to get 75 to 100 times worse in the next ten years as data-gathering capabilities get cheaper and better.” This is why AI is so important today, said Rao: It addresses “the biggest computational problem that we face today, which is finding useful structure in data. AI is becoming the lens through which we view all data.”

The most significant upshot of AI innovations is how they are changing interactions between people and data, and in turn, our larger society. There is little question that AI and humanity will need to co-evolve in the future, but how this should be negotiated and managed is an open question. There are also likely to be unintended consequences. We may thrill to the idea of AI systems helping us to filter information to suit personalized wants and needs, but belatedly discover that the same technologies can produce fake news, closed echo chambers of public opinion, and the erosion of a shared public reality. When human bodies are blended with biocompatible implants containing AI capabilities — neuroprosthetics — and potentially even gene modifications, difficult new complications arise.

Is an AI Juggernaut Inevitable?
In response to Rao’s presentation, participants debated whether artificial intelligence would necessarily proceed in these general directions. For Reed Hundt, CEO of the Coalition for Green Capital, former Chairman of the Federal Communications Commission, and Intel board member, the computing trends outlined by Rao are “inevitable.” “As computing architectures fundamentally change, the era of the general-purpose computer is over,” said Hundt. “Specialized-purpose architectures that imitate the brain will start to populate the environment.”

Hundt noted that “while the aggregate volume of data right now is huge, the gathering of data in each of the industry verticals mentioned by Rao is only partially complete. I think it will be 100% complete in a really short period of time.” The amassing of huge datasets subjected to AI analyses will be “fundamentally disruptive,” he added. No matter what any individuals may want, all economies around the world are committed to improving productivity and creating wealth. AI will only intensify this trend, said Hundt. Rao, who also regards rapid AI growth as inevitable, sees it as “symbiotic” with humans, in the sense of “supporting positive, exponential human growth.”

Several conference participants took issue with this vision of AI development, however. “The current lens on the technology and what it can do is pretty narrow,” said Jean-François Gagné, Co-Founder and CEO of Element AI, the world’s largest AI applied research lab. “It’s fragile. It’s limited. And there is a lot of danger that comes with that. We need to make sure that we have a smoother transition to more sophisticated systems. There are many gaps right now on all fronts. This is a moving horizon.”

A confusing complication is that AI consists of many different technologies and functions, Gagné pointed out; it is not one, single phenomenon. When people talk about AI, are they referring to “augmented intelligence” to support humans in doing discrete tasks? Or automation that replaces human functions and jobs? Or an entirely new integrated layer of AI, an immersive reality for a workplace or social life?

Broad “inevitability narratives” about AI are not helpful in illuminating the challenges ahead, said Kate Crawford, Distinguished Research Professor at New York University and Principal Researcher at Microsoft Research. She warned that such perspectives ignore “the much richer, more complex history of AI” over the past two generations. “AI has gone through several distinct ‘AI winters’ where funding dried up. The field was disparaged for not producing the results that it had claimed it could produce. By telling these linear stories of inevitability rather than cyclical stories, we’re losing a lot of important historical learning,” Crawford noted. “We’re in a big hype cycle, guys! It’s lovely to talk about co-evolution of humans and machines, and brain implants and exponential human growth, but we’re still a long way from this. Meanwhile, there are many things in the here and now that urgently need our attention.” AI brain implants, to take one example, would be tremendously costly and end up creating a different type of class system in society, Crawford said. “What sort of work are we doing in advance to actually address these concerns?”

Other participants questioned the narratives of technological determinism, saying that they ignore any role for democratic or individual agency. “Where does consent come in?” asked Joy Buolamwini, Aspen Institute Guest Scholar and founder of the Algorithmic Justice League at MIT Media Lab. “Who is making the decisions? When does [AI] enhancement become entrapment?” In the same vein, John C. Havens, Executive Director at the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, questioned any talk about human/machine “symbiosis” “when people don’t have access to their data.” He explained, “It’s not symbiotic co-evolution when, as a person, I can’t go to a data owner and say, ‘Please give me copies of my data so I can figure out what is there about me.’”
