Devlin's Angle

January 2001

Mathematics, Limited

Well, it's here at last: the year 2001, a date
that for most of us who were coming of age in
the 1960s can mean only one thing: Stanley Kubrick
and Arthur C. Clarke's celebrated futuristic science
fiction movie 2001: A Space Odyssey. Just
how accurate has Clarke and Kubrick's vision of
the future turned out to be?

In the movie, a team of new millennium space
explorers set off on a long journey of
discovery to Jupiter. To conserve energy, the
team members spend most of the time in a state
of hibernation, their life-support systems
being monitored and maintained by the on-board
computer HAL. Though HAL controls the entire
spaceship, it is supposed to be under the
ultimate control of the ship's commander, Dave,
with whom it communicates in a soothingly soft,
but emotionless male voice (actually that of
actor Douglas Rain). But once the vessel is
well away from Earth, HAL shows that it has
developed what can only be called a "mind of
its own." Having figured out that the best way
to achieve the mission for which it has been
programmed is to dispose of its human baggage
(expensive to maintain and sometimes irrational
in their actions), HAL kills off the
hibernating crew members, and then sets about
trying to eliminate its two conscious
passengers. It manages to maneuver one crew
member outside the spacecraft and sends him
spinning into outer space with no chance of
return. Commander Dave is able to save himself
only by entering the heart of the computer and
manually removing its memory cells. Man
triumphs over machine -- but only just.

It's a good story. (There's a lot more to it
than I just described.) But how realistic is the
behavior of HAL? We don't yet have computers
capable of genuinely independent thought, nor
do we have computers we can converse with using
ordinary language (except in some fairly
narrowly constrained domains). True, there have
been admirable advances in systems that can
perform useful control functions requiring
decision making, and there are working systems
that recognize and produce speech. But they are
all highly restricted in their scope. Despite the
oft-repeated claims that "the real thing" is just
around the corner, the plain fact is that we are
not even close to building computers that can
reproduce human capabilities in thinking and
using language.

But back in the 1960s, when 2001 was
being made, there was no shortage of expert opinion
claiming that the days of HAL were indeed just a
few years off. The first such prediction had been made
by the mathematician and computer pioneer Alan Turing.
In his celebrated article Computing Machinery
and Intelligence, written in 1950, Turing
claimed, "I believe that at the end of the
century the use of words and general educated
opinion will have altered so much that one will
be able to speak of machines thinking without
expecting to be contradicted."

Though the last part of Turing's claim seems to
have come true, that is more a popular response to
years of hype than a reflection of the
far less glamorous reality. There is now plenty
of evidence, from psychology, sociology, and
linguistics, to indicate that the original,
ambitious goals of machine intelligence are not
achievable, at least when those machines are
electronic computers, no matter how big or fast
they get. (I present some of that evidence in my
1997 book
Goodbye Descartes: The End of Logic and the
Search for a New Cosmology of the Mind.)
So how did the belief in intelligent machines
ever arise?

From the time the first modern computers were
built in the late 1940s, it was obvious that
they could do some things that had previously
required an "intelligent mind." For example, by
1956, a group at Los Alamos National Laboratory
had programmed a computer to play a poor but
legal game of chess. That same year, Allen
Newell, Clifford Shaw, and Herbert Simon of the
RAND Corporation produced a computer program
called The Logic Theorist, which could --
and did -- prove some simple theorems in
mathematics.

The success of The Logic Theorist immediately
attracted a number of other mathematicians and
computer scientists to the possibility of
machine intelligence. Mathematician John
McCarthy organized what he called a "two-month,
ten-man study of artificial intelligence" at
Dartmouth College in New Hampshire, thereby
coining the phrase "artificial intelligence",
or AI for short. Among the participants at the
Dartmouth program were Newell and Simon, Marvin
Minsky, and McCarthy himself. The following
year, Newell and Simon produced the General
Problem Solver, a computer program that could
solve the kinds of logic puzzles you find in
newspaper puzzle columns and in the puzzle
magazines sold at airports and railway
stations. The AI bandwagon was on the road and
gathering speed.

As is often the case, the mathematics on which
the new developments were based had been
developed many years earlier. Attempts to write
down mathematical rules of human thought go
back to the ancient Greeks, notably Aristotle
and Zeno of Citium. But the really big
breakthrough came in 1854, when an English
mathematician called George Boole published a
book called An Investigation of the Laws of
Thought. In this book, Boole showed how to
apply ordinary algebra to human thought
processes, writing down algebraic equations in
which the unknowns denoted not numbers but
human thoughts. For Boole, solving an equation
was equivalent to deducing a conclusion from a
number of given premises. With some minor
modifications, Boole's nineteenth century
algebra of thought lies beneath the electronic
computer and is the driving force behind AI.
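
To give a flavor of what Boole's algebra looks like (a minimal
illustration of the general idea, not an example taken from Boole's
book or from this column): let x stand for the class of all X's, y
for the class of all Y's, and so on, with xy denoting the things
belonging to both classes and 1 - x the things outside x. The
classical syllogism "all X are Y; all Y are Z; therefore all X are Z"
then becomes a small piece of algebra:

    \[
      \text{all $X$ are $Y$:}\quad x(1-y) = 0, \qquad
      \text{all $Y$ are $Z$:}\quad y(1-z) = 0,
    \]
    \[
      \text{hence}\quad x(1-z) \;=\; xy(1-z) \;=\; x\,[\,y(1-z)\,] \;=\; x \cdot 0 \;=\; 0.
    \]

Here the first step uses x = xy, which is just the first premise
rewritten, and the conclusion x(1 - z) = 0 says that all X are Z.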

Another direct descendant of Boole's work was
the dramatic revolution in linguistics set in
motion by MIT linguist Noam Chomsky in the
early 1950s. Chomsky showed how to use
techniques of mathematics to describe and
analyze the grammatical structure of ordinary
languages such as English, virtually overnight
transforming linguistics from a branch of
anthropology into a mathematical science. At
the same time that researchers were starting to
seriously entertain the possibility of machines
that think, Chomsky opened up (it seemed) the
possibility of machines that could understand
and speak our everyday language.
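
To give a concrete, and deliberately toy, sense of what a
mathematical description of grammar looks like, here is a short
Python sketch of a context-free grammar of the general kind Chomsky
introduced. The particular rules and vocabulary are illustrative
assumptions of mine, not a grammar anyone has proposed for English:

    import random

    # A toy context-free grammar in the spirit of Chomsky's formal approach.
    # The symbols and rules are illustrative assumptions: S (sentence),
    # NP (noun phrase), VP (verb phrase), N (noun), V (verb).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["computer"], ["astronaut"], ["spaceship"]],
        "V":  [["monitors"], ["controls"]],
    }

    def generate(symbol):
        """Expand a symbol by repeatedly applying the grammar's rules."""
        if symbol not in GRAMMAR:      # a terminal word: emit it as-is
            return [symbol]
        expansion = random.choice(GRAMMAR[symbol])
        words = []
        for part in expansion:
            words.extend(generate(part))
        return words

    if __name__ == "__main__":
        # Prints a randomly generated sentence, e.g.
        # "the computer monitors the astronaut"
        print(" ".join(generate("S")))

Each run of the sketch generates a grammatical sentence by
mechanically applying the rules; parsing, with which Chomsky's theory
is equally concerned, amounts to recovering the rule structure behind
a given sentence.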

The race was on to turn the theories into
practice. Unfortunately (some would say
fortunately), after some initial successes,
progress slowed to a crawl. The result was
hardly a failure in scientific terms. For one
thing, we do have some useful systems, and they
are getting better all the time. The most
significant outcome, however, has been an
increased understanding of the human mind: how
unlike a machine it is and how unmechanical
human language use is.

One reason why computers cannot act
intelligently is that logic alone does not
produce intelligent behavior. As neuroscientist
Antonio Damasio pointed out in his 1994 book
Descartes' Error, you need emotions as well.
While Damasio acknowledged that allowing the emotions
to interfere with our reasoning can lead to
irrational behavior, he presented evidence to
show that a complete absence of emotion can
likewise lead to irrational behavior. His
evidence came from case studies of patients
whose brain damage -- whether from physical
accident, stroke, or disease -- had impaired their
emotions but left intact their ability to
perform 'logical reasoning', as verified using
standard tests of logical reasoning skill.
Take away the emotions and the result is a
person who, while able to conduct an
intelligent conversation and score highly on
standard IQ tests, is not at all rational in
his or her behavior. Such people often act in
ways highly detrimental to their own well-being.
So much for western science's idea of a
"coolly rational person" who reasons in a
manner unaffected by emotions. As Damasio's
evidence indicated, truly emotionless thought
leads to behavior that by anyone else's
standards is quite clearly irrational.

And as linguist Steven Pinker explained in his
1994 book
The Language Instinct, language too
is perhaps best explained in biological terms.
Our facility for language, said Pinker, should
be thought of as an organ, along with the
heart, the pancreas, the liver, and so forth.
Some organs process blood, others process food.
The language organ processes language. According to
Pinker, we should think of language use as an
instinctive, organic process, not a learned,
computational one.

So, while no one would deny that work in AI and
computational linguistics has led to some very
useful computer systems, the really fundamental
lessons that were learned were not about
computers but about ourselves. The research was
most successful in terms not of engineering but
of understanding what it is to be human. Though
Kubrick got it dead wrong in terms of what
computers would be able to do by 2001, he was
right on the mark in terms of what we would
ultimately discover as a result of our science.
2001 showed the entire evolution of mankind,
starting from the very beginnings of our
ancestor Homo erectus and taking us through
the Age of Enlightenment into the present era
of science, technology, and space exploration,
and on into the then-anticipated future of
routine interplanetary travel. Looking ahead
forty years to the start of the new millennium,
Kubrick had no doubt where it was all leading.
In the much discussed -- and much misunderstood
-- surrealistic ending to the movie,
Kubrick's sole surviving interplanetary
traveler reached the end of mankind's quest for
scientific knowledge, only to be confronted
with the greatest mystery of all: Himself. In
acquiring knowledge and understanding, in
developing our technology, and in setting out
on our exploration of our world and the
universe, said Kubrick, scientists were simply
preparing the way for a far more challenging
journey into a second unknown: the exploration
of ourselves.

The dawn of the new millennium (which most of humanity
celebrated a year ago, twelve months before the
calendar event itself) sees mankind about to pursue
that new journey of discovery. Far from taking away
our humanity, as many feared, attempts to get computers
to think and to handle language have instead led to a
greater understanding of who and what we are.

Devlin's Angle is updated at the beginning of each month.
If much of this article seems familiar, there's a good
reason. I adapted it from a column I wrote four years
ago. I don't normally do that. But with the arrival of
the prophetic year 2001 coinciding with the inauguration
of a new US President who seems committed to putting the
defense of the nation -- and with it all life on earth --
in the hands of a computer-controlled missile defense
system, it seemed worth saying the same thing again, and
reminding ourselves that mathematics and the technologies
based on mathematics have their limitations.