Clarifications and Additions for AI: A Modern Approach

This page contains clarifying remarks for parts of the book that some
people may have had trouble with. It also includes material that has
appeared (or come to our attention) since AIMA was published,
particularly material available only on the web.
In addition, don't miss the list of other sites
on the Web with AI content, or our list of programming references.

Chapter 1

As an interesting data point in the philosophical argument over
whether AI is more about Boolean logic or about continuous values, note
that John McCarthy has taken to signing his email messages with the quote
"He who refuses to do arithmetic is doomed to talk nonsense."

Page 7: Definition of agent. There are so many people using
the word agent in so many ways that it is hard to pin down a meaning
(just as it is hard to pin down a meaning for intelligence).
We give our definition on page 7, but there are certainly other definitions.
One dictionary says:

agent (ay-jent) n. 1. a person who does something or instigates some activity.
2. one who acts on behalf of another.
3. something that produces an effect or change.

Mike Wooldridge and Nick Jennings
give the following list [which we have annotated
with relevant Part numbers] in an article titled "Intelligent Agents: Theory
and Practice" from Knowledge Engineering Review, Vol. 10, No. 2,
1995:

autonomy: agents operate without the direct intervention of
humans or others, and have some kind of control over
their internal state; [Part I]

reactivity: agents perceive their environment and respond in a timely
fashion to changes that occur in it; [Parts I, V]

pro-activeness: agents do not simply act in response to their
environment; they are able to exhibit goal-directed behaviour by
taking the initiative. [Part IV]
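As a rough illustration of how these three properties might show up in practice, here is a minimal agent sketch of our own devising (the class, goal name, and method names are all hypothetical, not taken from Wooldridge and Jennings):

```python
# A toy agent illustrating autonomy (its own internal state),
# reactivity (incorporating percepts), and pro-activeness
# (pursuing a goal without waiting for a stimulus).

class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal       # autonomy: state the agent itself controls
        self.beliefs = {}

    def perceive(self, percept):
        # reactivity: respond to changes in the environment
        self.beliefs.update(percept)

    def act(self):
        # pro-activeness: take the initiative while the goal is unmet
        if not self.beliefs.get(self.goal, False):
            return "work-toward:" + self.goal
        return "idle"

agent = SimpleAgent(goal="room-clean")
agent.perceive({"room-clean": False})
print(agent.act())  # work-toward:room-clean
agent.perceive({"room-clean": True})
print(agent.act())  # idle
```

Real agent architectures are of course far richer; the point is only that all three properties can coexist in one simple control loop.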

Page 10: "Bertrand Russell introduced logical
positivism." We meant to give only a brief overview of the
history of philosophy, and were well aware that this meant cutting
corners. Our naive coverage of the origins of logical positivism drew
some response and debate from those who know a lot about such matters:

William J. Rapaport says:

Bertrand Russell was not responsible for logical positivism.
Rather, he was responsible for logical atomism, which is
unrelated. He was also, with G.E. Moore, responsible for analytic (as
opposed to "synthetic" or idealistic) philosophy, which ultimately led
to logical positivism. Logical positivism, however, originated with
the Vienna Circle (Carnap, et al.), inspired by Wittgenstein; its
most well-known "popularizer" was A.J. Ayer.

Doug Edwards says:

The term "logical positivism" is vague, and has varying senses with
varying levels of vagueness. In its narrowest and most precise sense,
it's true that it originated in the Vienna Circle, and if it could be
said to have any one founder, that person would be Rudolf Carnap--not
Moore, Russell, or Wittgenstein. I do believe it's going a bit far to
claim that Russell's logical atomism is "unrelated" to logical
positivism; I think Russell has almost as much claim as Wittgenstein
to have "inspired" logical positivism. The most important point is
that in this most precise sense of "logical positivism", Wittgenstein
was not a logical positivist himself, however much he may have
"inspired" logical positivism. The positivists condemned
"metaphysical" statements as misleading, in that such statements
appeared to have meaning but were in reality utterly devoid of it.
Wittgenstein's Tractatus, by contrast, held that such
statements could have a "higher" meaning not expressible by
propositions.

[The following excerpts are from an interview with Donald Knuth by Computer Literacy Bookshops (CLB).]

CLB: To what extent have you ever followed developments in
artificial intelligence? The third program you ever wrote was a
tic-tac-toe program that learned from its errors, and Stanford has
been one of the leading institutions for AI research...

Knuth: Well, AI interacts a lot with Volume IV; AI researchers
use the combinatorial techniques that I'm studying, so there is a lot
of literature there that is quite relevant. My job is to compare the
AI literature with what came out of the electrical engineering
community, and other disciplines; each community has had a slightly
different way of approaching the problems. I'm trying to read these
things and take out the jargon and unify the ideas. The hardest
applications and most challenging problems, throughout many years of
computer history, have been in artificial intelligence-- AI has been
the most fruitful source of techniques in computer science. It led to
many important advances, like data structures and list
processing... artificial intelligence has been a great stimulation.
Many of the best paradigms for debugging and for getting software
going, all of the symbolic algebra systems that were built, early
studies of computer graphics and computer vision, etc., all had very
strong roots in artificial intelligence.

CLB: So you're not one of those who deprecates what was done in that area...

Knuth: No, no. What happened is that a lot of people believed
that AI was going to be the panacea. It's like some company makes
only a 15% profit, when the analysts were predicting 18%, and the
stock drops. It was just the clash of expectations, to have inflated
ideas that one paradigm would solve everything. It's probably true
with all of the things that are flashy now; people will realize that
they aren't the total answer. A lot of problems are so hard that
we're never going to find a real great solution to them. People are
disappointed when they don't find the Fountain of Youth...

CLB: If you were a soon-to-graduate college senior or Ph.D. and
you didn't have any "baggage", what kind of research would you want to
do? Or would you even choose research again?

Knuth: I think the most exciting computer research now is
partly in robotics, and partly in applications to biochemistry.
Robotics, for example, that's terrific. Making devices that actually
move around and communicate with each other. Stanford has a big
robotics lab now, and our plan is for a new building that will have a
hundred robots walking the corridors, to stimulate the students.
It'll be two or three years until we move in to the building. Just
seeing robots there, you'll think of neat projects. These projects
also suggest a lot of good mathematical and theoretical questions.
And high level graphical tools, there's a tremendous amount of great
stuff in that area too. Yeah, I'd love to do that... only one life,
you know, but...

Chapter 2

Page 36, paragraph -2: Oren Etzioni, coiner of the term
softbot, points out that "the 747 flight simulator example may
mislead the reader into thinking that softbots are agents that
interact with simulations of physical worlds." In Etzioni et
al. (1992), he defines softbots as "programs that interact
with software environments by issuing commands and interpreting the
environments' response." The point is that softbots exist in an
environment that just happens to be a software environment, but is
still real in an interesting way.

Chapter 10

Don Loveland writes:
"I write to correct your reference to a paper of mine. Linear resolution
was indeed introduced by me, but in the paper

This is a point of confusion in the literature, and few sources have
it correct. Since I think your text will be around for a long time,
it is a good place to get this entered correctly. The paper you
quote, Mechanical theorem proving by Model Elimination, introduces
a procedure that is linear, but the concept (and that mode of
presentation) was not introduced yet. It is not a resolution
procedure, technically. It actually is a significant paper,
as it is the form used in SL-resolution (Kowalski and Kuehner)
that led to Prolog. Thus it would be correct to reference it in
the Prolog chapter as a key paper in the theoretical background
of Prolog. Actually, it is receiving much attention now as an
extension of Prolog that can use the same (WAM) architecture
yet is complete for all first-order logic. (A twist on history
as it was part of the prehistory of Prolog.) In modern language,
the elegance and power of the WAM architecture is possible because
Prolog is an adaptation of linear input resolution. Model Elimination
also has the linear input format, but is complete for all of
first-order logic. You have captured the ideas of input and linear
resolution well on page 285; they are important concepts because
of Prolog yet often are omitted or presented incorrectly in
basic AI texts, even those dealing with Prolog."

Chapter 17

Page 501, lines -4 to -5: The equation for utility of action A
could be read as stating
that there is a "utility of doing action A in (2,1)" independent of
the current belief state. This is not the case; utilities of actions
depend on the current belief state as well as the actual state,
as the subsequent text explains.
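In symbols (our notation here, not an equation from the book): for a belief state b assigning probability b(s) to each actual state s, the relevant quantity is the expected utility

```latex
U(A \mid b) = \sum_{s} b(s)\, U(A, s)
```

so a phrase like "the utility of doing action A in (2,1)" is meaningful only as one term of this sum, weighted by the probability b assigns to actually being in (2,1).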

Chapter 18

Chapter 20

The March 1995 issue of Communications of the ACM has an
article on Temporal Difference Learning and TD-Gammon by Gerald
Tesauro, which describes (in more detail than our brief discussion on
page 617) the successful application of the temporal-difference
reinforcement learning technique to the game of backgammon. Tesauro
quotes Kit Woolsey, one of the top ten backgammon players and a
respected analyst, as saying:

TD-Gammon has definitely come into its own. There is no
question in my mind that its positional judgement is far better than
mine ... In the more complex positions, TD has a definite edge. In
particular, its judgement on bold vs. safe play decisions, which is
what backgammon is really about, is nothing short of phenomenal.

I believe this is true evidence that you have accomplished what you
intended to achieve with the neural network approach. Instead of a
dumb machine which can calculate things much faster than humans such
as the chess playing computers, you have built a smart machine which
learns from its experiences pretty much the same way humans do.
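The core idea behind temporal-difference learning can be sketched in a few lines. This is our own minimal tabular TD(0) illustration, not Tesauro's program (TD-Gammon actually used TD(lambda) with a neural-network evaluation function); the state names are made up:

```python
# One TD(0) step: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# After each move, the value estimate of the previous state is nudged
# toward the reward received plus the value of the successor state.

def td_update(values, state, next_state, reward, alpha=0.1, gamma=1.0):
    td_error = reward + gamma * values.get(next_state, 0.0) - values.get(state, 0.0)
    values[state] = values.get(state, 0.0) + alpha * td_error
    return values

# Toy episode: two positions, then a win worth reward 1.
V = {}
V = td_update(V, "mid_game", "near_win", reward=0.0)
V = td_update(V, "near_win", "win", reward=1.0)
print(V)  # {'mid_game': 0.0, 'near_win': 0.1}
```

With repeated play, the value backed up into "near_win" would propagate further back to "mid_game" on later episodes, which is how the program learns an evaluation of earlier positions from final outcomes alone.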

Chapter 22

On the origin of the Buffalo sentence of exercise 22.8, William
Rapaport writes:

In 1972, I took a grad
course in philosophy of language from John Tienson at Indiana
University. In that course, he presented the sentence:

Dogs dogs dog dog dogs.

which is grammatical and meaningful, if not acceptable, with no punctuation
changes, having, of course, the same syntactic structure as:

Mice cats chase eat cheese.

Finding the "-s" morpheme unaesthetic, several of us grad students
sought something better.

Fish fish fish fish fish

doesn't quite hack it, since "fish" requires an indirect object: one
fishes *for* something. At that point, I came up with the Buffalo
sentence.
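The shared structure of these sentences can be checked mechanically. Below is a toy recognizer of our own (not from the book) for the grammar implicit in "Mice cats chase eat cheese" -- S -> NP V NP, NP -> N | N NP V (a noun modified by a reduced relative clause) -- ignoring subtleties like the indirect-object requirement of "fish":

```python
# Recognize sentences of the form S -> NP V NP, where
# NP -> N | N NP V, given sets of noun forms and verb forms.
# Memoization keeps the recursive NP check efficient.

from functools import lru_cache

def accepts(words, noun_forms, verb_forms):
    n = len(words)

    @lru_cache(maxsize=None)
    def is_np(i, j):
        if j - i == 1:                       # NP -> N
            return words[i] in noun_forms
        if j - i >= 3 and words[i] in noun_forms and words[j - 1] in verb_forms:
            return is_np(i + 1, j - 1)       # NP -> N NP V
        return False

    def is_s(i, j):
        # S -> NP V NP: try every position for the main verb
        for k in range(i + 1, j - 1):
            if words[k] in verb_forms and is_np(i, k) and is_np(k + 1, j):
                return True
        return False

    return is_s(0, n)

print(accepts("dogs dogs dog dog dogs".split(), {"dogs"}, {"dog"}))  # True
print(accepts("mice cats chase eat cheese".split(),
              {"mice", "cats", "cheese"}, {"chase", "eat"}))         # True
print(accepts(["buffalo"] * 5, {"buffalo"}, {"buffalo"}))            # True
```

Because "buffalo" serves as both noun and verb form here, the same recognizer accepts the five-word Buffalo sentence with the identical parse.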

Bob Berwick gives his recollections:

Well, hard to tell about these urban legends, you know.
I just recall reading about it when I was 10 or something years
old--before 1972, to be sure. Then Ed Barton and I sat around
discussing it in 1982, and we just thought it was part of
common parlance (or urban legend) by then also. Even in the
Police police police form.

For a very hilarious take on all this, you might want to
look at one of Carl de Marcken's
famous "Friday afternoon GSB" abstracts--in his abstract,
he works out the exact algebraic formula for any number of buffaloes,
as a joke, etc.

And Andrew Philpot adds an anecdote:

I was recently explaining this exercise (22.8) in your book to a
friend. He was amused, but didn't seem to think that there was much
point in a language which could only talk about buffalo.
Au contraire! Later on in the weekend, we went to Buffalo Bill's (a
microbrewery in Hayward), decorated with pictures and skulls of (you
guessed it) buffalo. As we watched the Univ. of Colorado Buffaloes
football game on T.V., and contemplated heading up to Bison brewery in
Berkeley, I could legitimately turn to him, gesturing forcefully, and
make meaningful sentences in Buffalo^n.
Too bad a bewildered Bills fan didn't walk in. That would have been
priceless.
BTW, a great book.

Chapter 25

It would perhaps be clearer if the configuration space diagrams in
Figures 25.13 and 25.14 (pages 798-799) were for the same space as the
visibility graph (page 800, Figure 25.15) and the Voronoi diagram
(page 801, Figure 25.16, and page 802, Figure 25.18).

Bibliography

We left out the definitive reference on the travelling
salesperson problem: