Artificial Intelligence Poised to Ride a New Wave
Flush with recent successes, and pushed by even newer technology, AI systems could get much smarter.
Society | DOI: 10.1145/3088342 | Gary Anthes

ARTIFICIAL INTELLIGENCE (AI), once described as a technology with permanent potential, has come of age in the past
decade. Propelled by massively parallel computer systems, huge datasets, and better algorithms, AI has
brought a number of important
applications, such as image- and
speech-recognition and autonomous
vehicle navigation, to near-human
levels of performance.

Now, AI experts say, a wave of even
newer technology may enable systems
to understand and react to the world
in ways that traditionally have been
seen as the sole province of human
beings. These technologies include algorithms that model human intuition
and make predictions in the face of
incomplete knowledge, systems that
learn without being pre-trained with
labeled data, systems that transfer
knowledge gained in one domain to
another, hybrid systems that combine
two or more approaches, and more
powerful and energy-efficient hardware specialized for AI.
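To make one of these ideas concrete, consider "learning without being pre-trained with labeled data." The short sketch below, in plain Python, groups unlabeled one-dimensional readings using k-means clustering; it is an illustrative toy, not code from any of the systems discussed here, and the dataset, the distance measure, and the choice of two clusters are assumptions made only for the demonstration.

    # A minimal sketch of unsupervised learning: k-means clustering on
    # unlabeled 1-D data. Illustrative only; the data and k=2 are assumed.
    import random

    def kmeans(points, k, iterations=20):
        # Start from k randomly chosen points as the initial centers.
        centers = random.sample(points, k)
        for _ in range(iterations):
            # Assignment step: attach each point to its nearest center.
            clusters = [[] for _ in range(k)]
            for x in points:
                nearest = min(range(k), key=lambda i: abs(x - centers[i]))
                clusters[nearest].append(x)
            # Update step: move each center to the mean of its cluster,
            # keeping the old center if a cluster ends up empty.
            centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return centers

    # Six unlabeled readings; no labels are ever supplied, yet the
    # algorithm discovers the two natural groups on its own.
    data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.4]
    print(sorted(kmeans(data, k=2)))  # roughly [1.0, 10.07]

A supervised system would be shown the correct group for every reading in advance; here the structure emerges from the data alone.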

The term “artificial intelligence”
was coined by John McCarthy, a math
professor at Dartmouth, in 1955 when
he—along with Marvin Minsky of the
Massachusetts Institute of Technology (MIT), Claude Shannon of Bell Laboratories, and Nathaniel Rochester of
IBM—said they would study “the conjecture that every aspect of learning or
any other feature of intelligence can
in principle be so precisely described
that a machine can be made to simulate it.” McCarthy wanted to “find out
how to make machines use language,
form abstractions and concepts, solve
kinds of problems now reserved for
humans, and improve themselves.”
That is not a bad description of the
goals of AI today.

The path of AI over the ensuing 62
years has been anything but smooth.
There were early successes in areas
such as mathematical problem solving,
natural language, and robotics. Some of
the ideas that are central to modern AI,
such as those behind neural networks,
made conceptual advances early on. Yet
funding for AI research, mostly from
the U.S. government, ebbed and flowed,
and when it ebbed the private sector did
not take up the slack. Enthusiasm for AI
waned when the grandiose promises
made by researchers went unmet.

AI Comes of Age

A turning point for AI, and for the public’s perception of the field, occurred
in 1997 when IBM’s Deep Blue supercomputer beat world champion Garry
Kasparov at chess. Deep Blue could
evaluate 200 million chess positions
per second, an astonishing display of
computer power at the time. Indeed,
advances in AI over the past 10 years
owe more to Moore’s Law than to any other factor. Artificial neural networks, which are patterned after the arrangement of neurons in the brain and the connections between them, are at the heart of much of modern AI, and to do a good job on hard problems they require teraflops of processing power and terabytes of training data.

Michael Witbrock, a manager of Cognitive Systems at IBM Research, says about two-thirds of the advances in AI over the past 10 years have come from increases in computer processing power, much of that from the use of graphics processing units (GPUs). About 20% of the gain came from bigger datasets, and 10% from better algorithms, he estimates. That’s changing, he says: “Advances in the fundamental algorithms for learning are now the main driver of progress.”

Witbrock points, for example, to a technique called reinforcement learning, “systems which can build models that hypothesize about what the world might look like.” In re-


Chinese professional Go player Ke Jie preparing to make a move during the second game of a
match against Google’s AlphaGo in May 2017.