MINDS, BRAINS AND SILLINESS
A Conversation With Philosophy Giant John Searle

John R. Searle is the Slusser Professor of
Philosophy at UC Berkeley and the author of over a dozen books on the
philosophy of mind, language and society. He's been the recipient of the
Jean Nicod Prize, the National Humanities Medal, and now, the extremely
prestigious BEAST Trophy of Awesomeness.

BEAST:
Hi, Professor Searle?

John Searle: Yeah.

BEAST: This is Ian Murphy.

JS: Yeah. OK. I'm going to put you on the
speaker phone if that's OK.

BEAST: Sure.

JS: I have a broken wrist, so it's easier to do it this
way.

BEAST: Oh, I'm sorry. How did you do that?

JS: Oh, I had a fall. It's the dumbest thing. You know,
it's not even an interesting injury. It's just boring. The bone was broken
in four places, so that was—

BEAST: Sorry to hear that.

JS: It required considerable surgery, but I seem to be
mending. Anyway, let's talk.

BEAST: All right, great. Let’s start
off with some political stuff. You wrote in Freedom and Neurobiology that
“...it is an epistemically objective fact that George W. Bush is
now President.” Concerning the voting irregularities in Florida
in 2000 and Ohio in 2004, has any disgruntled Democrat ever approached
you about printing a correction?

JS: [Laughs.] You see, it's very interesting the point
you make, because the thesis of the book, in a sense, is institutional
facts are what we recognize as institutional facts. And a lot of people
don't recognize Bush as—don't recognize that he was legitimately
president. But the numbers that did recognize him were so overwhelming,
and the Supreme Court authenticated it—I think on very poor grounds—so
it sticks as an institutional fact. Say if you were asked on an examination,
“Who was president during that period?” There isn't any question
it was George W. Bush. I'm sorry to say it, but that's how it goes.

What I'm trying to make is actually a deep philosophical
point here. And that is, where institutional reality is concerned, if
you can get enough people to believe it then it exists.

BEAST: So does god exist?

JS: Well, the problem with god is, if he's a social construct,
then—then he's not god. A guy read my book and came to me and said
that he thinks god is a social construct, and he unfortunately had been
a priest in the Anglican faith and they kicked him out. They excommunicated
him. And I said, well there's a reason for that, you understand. There's
a name for people who think god is a social construct. They're called
atheists, and that's why you got kicked out of your job as a priest.

BEAST: OK. In the same book, you wrote: “The
reason that government can sustain itself as the ultimate system of status
functions is that it maintains the constant threat of physical force.”
How then is it that Canada exists?

JS: Oh, Canada! [Not sung to anthem.] They have policemen
in Canada too. [Laughs.] I'm sitting next to a Canadian who works for
me and she's a wonderfully nonviolent person. But in Canada they have
the armed forces and they have the Royal Mounted Police and other people
who have a monopoly on violence.

BEAST: In an op-ed written shortly after 9/11, you
wrote: “We need to give up on the illusion that there is
some policy change on our part that will change the attitude of the terrorists.” Given the fact that the US actually fomented Jihad against the Russians
in Afghanistan, funded the Taliban and trained Osama bin Laden, how can
you deny that there are consequences—or blowback—to our foreign
policy?

JS: Oh, I don't deny there are blowbacks to our foreign
policy. I mean, the problem, as you've just pointed to, is that you often
make alliances for an immediate situation and then those alliances come
to work against you later on. In the Second World War, we allied ourselves
with Stalinist Russia and gave them a tremendous amount of military
aid, and then, of course, we had hostile relations with them in the
Cold War. And again, as a means of fighting the Russians we allied ourselves
with the—with what we then called the insurgents, and now we call
the terrorists. But I don't think that’s anything new in history.
You can't be sure how your alliances are going to turn out or what the
consequences are going to be, but you often have to make alliances. You
have to treat somebody who shares with you a common enemy—you have
to treat them as a friend, at least temporarily.

BEAST: I'd like to move on to the economy. As a philosopher,
what do you think is the nature of money, and do you have any I can borrow?

JS: [Laughs, dismissively.] Well, the interest rates
would be pretty tough and the security I would require would be very fierce.
Money is a good example of a social construct, because it's really only
money because we think it's money. And in fact, it doesn't even have to
have a material base. I sometimes talk as if money has to be paper or
coins or something like that, but all you—strictly speaking—need
for money is some means of recording how much money you have and then
changing it in such a way, so you can take money out of your account and
put into somebody else's account. But when you do that, when you shift
money from one bank account to another, there's no actual physical transaction
that takes place. All that happens is that the computers now have different
figures—different figures for the account you took money out of
and different figures for the one it's going in to. But all you need for
money is some system of representation. Money is, sort of, the biggest
fantasy of all, but as long as it works, it works fine. Unfortunately,
it hasn't been working too well lately.
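
[Ed.: Searle's picture of money as pure representation—a bank transfer changes no physical object, only the recorded figures—can be sketched in a few lines of code. The account names and balances here are invented for illustration:]

```python
# A toy ledger: money as pure representation. Transferring funds moves
# no physical object -- it only rewrites two recorded figures.
ledger = {"alice": 100, "bob": 50}  # hypothetical accounts

def transfer(ledger, src, dst, amount):
    """Move `amount` from src to dst by updating the recorded figures."""
    if ledger[src] < amount:
        raise ValueError("insufficient funds")
    ledger[src] -= amount
    ledger[dst] += amount

transfer(ledger, "alice", "bob", 30)
print(ledger)  # {'alice': 70, 'bob': 80}
```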

BEAST: Do you think things like economic markets,
corporations or even language have an evolutionary life that's somewhat
independent of humans?

JS: That's a good question about language. We know that
language evolved, and we know it evolved as a form of human activity,
but how much of it—how much of the evolution was actually conscious?
I think probably very little. So I think you get these things that have
an evolutionary history—forms of human life that have an evolutionary
history, but there's not much conscious choice in the evolution. And the
whole question of the evolution of human language is absolutely fascinating.
We know very little about it, and we've had to kind of guess on the basis
of looking at animal signaling systems and how primates communicate with
each other. We can sort of make some guesses as to how language
might have evolved, but we don't really know.

BEAST: OK. Moving on. Do you think former Senator
Phil Gramm has a soul?

JS: [Laughs.] Well, I have no special opinions about
him. Nobody has a soul. That was one of the great illusions of all time,
that in addition to this poor body I've got—now busted up—I've
got this mysterious entity lodged somewhere in me—the soul. And
the soul is going to float free of my body and live in heaven or hell
forever. That is one of the greatest lies ever perpetrated on the human
race. So, Phil Gramm doesn't have a soul, but neither does anyone else.
I'm an egalitarian in denying souls. I take souls away from everybody.

BEAST: The Canadian rock band Rush sings this about
free will:

You can choose a ready guide
In some celestial voice
If you choose not to decide
You still have made a choice

You can choose from phantom fears
And kindness that can kill
I will choose a path that's clear
I will choose free will

Again, I have a two part question: Is free will an
illusion, and do you think anyone rocks harder than Neil Peart?

JS: I don't know anything about the guy. I've never heard
of the guy you're asking about, so I can't answer that half of the question.
But the other half of the question is very interesting about free will,
and I point this out in an article. Even if you become convinced that
free will is an illusion, even if you decide that you're a determinist,
you can't live on the basis of that decision. See, if you decide that
colors are an illusion, you can organize your life around the assumption
that colors are an illusion. But with free will, every time you're in
a decision making situation—you go into a restaurant and are given
a choice on the menu, you can’t say, 'Look, I'm a determinist; I'll
just wait and see what happens.' And this is the deep reason: Saying that
is only intelligible to you if it's something you take as an exercise
of your own free will. In other words, the decision to deny free will
presupposes free will, and in that respect, it's unlike other illusions.
Now, I don't know whether or not we have free will. But it's an interesting
case. The fact that we can’t avoid presupposing that we have free
will doesn't mean we have it. It could be the greatest hoax that evolution
played in the whole history of the human race. We've got this enormous
apparatus that's based on the assumption of free will, of free choice,
and decision making and so on, and the whole thing may be an illusion.
I don't know whether we have free will or not.

BEAST: You're probably best known for the Chinese
Room argument against strong artificial intelligence. For those who are
unaware, could you succinctly summarize that?

JS: Yeah. Sure. It's very simple. The thesis that I'm
attacking is that having a certain cognitive capacity, say understanding
a language, consists entirely in carrying out the steps in a computer program.
I call that Strong Artificial Intelligence—that the program is sufficient
for a mind—and I refute that by imagining, well, what it would be
like to carry out the steps in a program for a cognitive capacity that
I don't have. I imagine, as indeed is the case, that I don't know any
Chinese—can't speak a word of Chinese, can't even recognize Chinese
writing from Japanese writing. And we imagine that I'm locked in a room
and people give me bits of Chinese writing. Unknown to me these are questions
and I look up in a rule book, in a program, what I'm supposed to do with
these questions. I shuffle a lot of symbols, and after a while, I give
back other Chinese symbols as answers. And we'll suppose that they get
so good at writing the program, and I get so good at shuffling the symbols,
that my answers are perfectly good answers. They're as good as any native
Chinese speaker's. All the same, I don't speak a word of Chinese, and
I couldn't in this situation, because I'm just a computer. And the bottom
line of the argument is this: If I don't understand Chinese on the basis
of carrying out the computer program for understanding Chinese, then neither
does any other computer on that basis, because the computer hasn't got
anything that I don't have.
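
[Ed.: The Chinese Room's symbol-shuffling can be caricatured as a lookup table that pairs question-symbols with answer-symbols. The "rule book" entries below are invented for illustration; the point is that the program matches strings purely by shape:]

```python
# A caricature of the Chinese Room: the "rule book" is a lookup table
# pairing question-symbols with answer-symbols. The program manipulates
# strings purely by their shape (syntax); it attaches no meaning to them.
RULE_BOOK = {                      # hypothetical rules, for illustration
    "你好吗？": "我很好。",
    "你会说中文吗？": "会，当然。",
}

def room(question: str) -> str:
    # Match the incoming symbol string; hand back the paired symbols.
    return RULE_BOOK.get(question, "对不起。")  # default symbols

# The answers may look competent, but nothing here "understands" them.
print(room("你好吗？"))
```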

BEAST: OK. Well, couldn't it be said that the room
itself knows Chinese?

JS: [Laughs.] I've heard people say that. I kind of admire
the courage in saying, it's the room that understands Chinese. I think it's ridiculous, but it's easy to refute it, and that is to
ask yourself, why don't I understand Chinese if I'm passing the test for
understanding Chinese? And the answer is, I have no way to figure out
what any of these words mean. I just have the syntax. I just have the
symbols, but I don't have the semantics. I don't have any meaning. But
if I can't learn the meaning in this situation, neither can the room.
The room doesn't have any access to meaning that I don't have. And the
way to see this is, internalize the room. That is, let me memorize all
the steps in the program and I'll work out of doors in an open field so
there isn't any room. All the same, the bottom line of the whole discussion
is there's no way to get from the syntax of the program, the symbols of
the program, to the meanings. And this is not a weakness of the computer.
The computer is a syntactical engine. It operates with symbols. We
attach meaning to the symbols, but the computer knows nothing about that
and doesn't need to.

BEAST: OK, but what if you draw a parallel between
the room and a person? What if it's a Chinese Brain? The input would come,
the brain would translate the data, according to the rules of brain syntax,
and it communicates with the outside world. But no part of your brain
understands Chinese; it's only the total—

JS: Well, we don't know how much of the brain you need
to have language comprehension. You don't have to have an entire brain,
because people can understand language with part of the brain destroyed.
But that's not the point. The point here is, what's the difference between
a Chinese brain and the computer? And the answer is, the brain is actually
a machine. It actually is a physical organ with actual causal properties
and energy transfers. You see, we have to think of the brain as an actual
human organ like the liver or the stomach, and you can't do digestion
just by doing a computer simulation of digestion, or you can't—I'm
now looking at a rain storm in California. They do computer simulations
of rainstorms, but they won't make you wet. Now, the computer simulation
of digestion stands to real digestion the way that computer simulation
of cognition stands to real cognition. Real cognition has to have
a set of causal relations and a set of contents, and you don't get that
in computer simulations. The computer simulation gives you a model, or
set of symbols that are isomorphic with the domain you're trying to model.
I hope the idea is clear. The Chinese brain is an actual physical organ
with causal relations, whereas the computer, the only causal relations
it has is to go to the next step of the program when the machine is running.
Another way to put this point—the way I like to put it—is that the problem
is not that the computer's too much of a machine to have thought processes,
it's not enough of a machine. Because though the computer you buy
in the store is a machine, computation doesn't name a machine process,
it names an abstract mathematical process that we found ways to implement
on machines, whereas, the actual brain is a machine. Its operations are
defined by energy transfers, and it's those energy transfers, those actual
causal relations, that are responsible for human cognition. Sorry to be
long-winded, but anyway, that's the point.

BEAST: No, that's great. OK. When I interviewed Dan
Dennett, I referenced your well-known opposition to Strong AI and
this is what he said:

“I have a new name for people that have that
view. I call them mind creationists, because they think that the mind
is—in my terms—a sky hook. It is something—you can't
get there from here. You can't get to consciousness and strong artificial
intelligence from a whole lot of computation, from a whole lot of little
robotics. Yes you can. It's just more complicated than people thought.
The creationist says, you can't get to us from amoebas. Yes you can. It's
just more complicated than people thought.”

Two part question: Can you respond to that and, your
broken wrist notwithstanding, between you and Dan Dennett, who would win
in a fist fight?

JS: [Laughs, cautiously] Um, I'm a nonviolent person,
so I'm not going to respond to that part of the question. I think that
this is fairly low level of rhetoric on his part. I'm obviously not a
creationist of any kind, but I do want to point out something that's absolutely
crucial: Brains do it! He says that I think you can't get to consciousness
from some kind of mechanism. And I say, oh yes you can. We do it every
day. Brains do it, but they do it by specific causal mechanisms. And as
I said before, the problem with a computer is not that it's too much of
a machine. It's got the wrong kind of machinery, because it just manipulates
syntax, and the brain does something more than that. The brain actually
causes conscious thoughts and feelings.

BEAST: OK. So, uh, can you make a mind out of anything?

JS: Well, we don't know. We don't know how the brain
does it. And I think—I think we ought to take the question, 'Can
you make an artificial brain that would do what our brains do out of some
other material?' the same way you take the question, 'Can you make an
artificial heart out of some other material?' Now, we know how to do it
with hearts, because we know how real hearts do it—they're pumps.
But we don't know how the brain “pumps” consciousness and
cognition. We know a lot more than we knew twenty years ago, but we've
still got a long way to go. So, if we figured out how the brain did it, then
the chances of making an artificial brain would—then we'd at least
have a reasonable way of assessing the chances. But until we know how
the brain does it, we're not going to be able to make an artificial brain.

BEAST: OK. What do you have against zombies—aren't
they people like the rest of us?

JS: Well, the way I define a zombie, it has no feelings
at all. It behaves as if it were conscious, but—this is the philosopher
sense of zombie—the zombie has no feelings whatsoever. It just behaves
as if it had thoughts and feelings. And a perfect zombie, I guess, might
do a perfect imitation of having thoughts and feelings, but it wouldn't
have any. So, it would be a waste of your time to pity a zombie with a
broken arm, because a zombie doesn't feel anything. You might want to
patch it up as you'd patch up your car, but you don't have empathy for
your car—well, I make an exception for the Porsche, but that's a
special thing. I don't have sympathy or empathy; I just think it ought
to be repaired, and that's how it is with zombies. They lack all feeling.

BEAST: Well, they desire brains.

JS: They what?

BEAST: Desire braaaaains!

JS: Well, they say words like, 'I desire to have a brain.'
But they don't actually have any desires, and that's—we're postulating
that—that is the definition of a zombie. A zombie, as I'm defining
it, is a system that behaves like a human being, but has no inner thoughts
or feelings at all. It's totally unconscious.

BEAST: Will there come a time when robots reproduce
sexually—or do you think they'll use protection?

JS: [Laughs.] Well, robots will do what we make them
to do. I mean, they don't have—the robots we've got so far have
no autonomy at all. They just do what they're programmed to do. I mean,
we could maybe program them to reproduce sexually, though, it does not
sound like a very efficient method of reproducing. See, it's an amazing
fact about humans. Biparental reproduction is enormously inefficient,
costly, and it's responsible for all kinds of hassles. I can't tell you
how many difficulties it produces. But, it does have a tremendous evolutionary
advantage, and that is, it mixes the genes. If you just did cloning, you
don't get a mixture of the genes, as you do in biparental reproduction.
So. I'm all for biparental reproduction, but it seems to me if you're
talking about robots, it's not necessary to mix the genes in order to
get this kind of variation. We can artificially put in any variation we
want.
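
[Ed.: Searle's point—that designed systems can get variation directly, without mixing genes—is the difference between crossover and plain mutation in a genetic algorithm. A minimal sketch, with an invented bit-string "genome":]

```python
import random

# Variation without biparental reproduction: instead of mixing two
# parent genomes (crossover), inject random changes into one directly.
def mutate(genome: list, rate: float = 0.1) -> list:
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a: list, b: list) -> list:
    """Biparental mixing: a prefix of one parent, a suffix of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

random.seed(0)
parent = [0, 1, 0, 1, 1, 0, 1, 0]
child = mutate(parent, rate=0.25)  # variation from a single "parent"
print(child)
```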

BEAST: What about the idea of 'technological singularity'—the
idea that at some point computers or robots will get to the point where
they'll start to redesign themselves and evolve on their own?

JS: Well, I always hear this. You know, I've been at
meetings where people talked about how the computer might get up and revolt
against us, and there might be a great computer March on Washington, or
something like that. But the difficulty with this is computers—as
they're presently defined, as we presently understand them—have
no autonomy at all. They just do what they're programmed to do. Now, you
can program them to program themselves. You can program the computer to
reprogram itself, but that still doesn't give us autonomy in the sense
that a conscious agent has autonomy. So until you can build—I'm not
saying you couldn't build a machine with genuine autonomy; that's
the question we were discussing earlier about building a conscious machine—but
I don't see how you can do it with the kind of technology that we're using
today, where all you do is manipulate symbols in silicon chips.
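
[Ed.: "You can program the computer to reprogram itself" without getting autonomy—the meta-rule doing the rewriting is itself fixed by the programmer. A minimal sketch; the rule table and meta-rule are invented:]

```python
# A program that "reprograms itself": it rewrites its own rule table,
# but only according to a fixed meta-rule the programmer supplied.
# Nothing here chooses its own goals -- there is no autonomy.
rules = {"greeting": "hello"}

def meta_rule(rules: dict) -> dict:
    """The fixed, programmer-written policy for rewriting the rules."""
    # E.g., upper-case every response -- a deterministic rewrite.
    return {k: v.upper() for k, v in rules.items()}

rules = meta_rule(rules)  # the system modified its own program...
print(rules)              # ...exactly as it was programmed to.
```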

BEAST: OK. So, you don't think there's very good odds
that my Roomba® will gain sentience and rise up against me?

JS: No. I don't think there's any real danger from any
existing technology, for the reason I mentioned earlier. There's no autonomy
whatsoever. The robots we've got today have no will of their own. They're
not conscious.

BEAST: Will the symbiotic relationship between technology
and humans result in the evolution of cyborgs, and if so, is resistance
futile?

JS: Well, I'm not quite sure what a cyborg is. What's
the definition?

BEAST: Uh....hmm....you know, part—part—part
human—

JS: Part human, part machine.

BEAST: Yes.

JS: But it's very hard to know how you're going to do
this, organically speaking. I have been reminded of the specificity of
human physiology. You can't just do it with anything. Now, they—in
fact, they put a metal plate in my arm and they screwed the pieces of
the bone to the metal plate [Cyborg!], but only very specific sorts
of artifacts can be used. You can't just use anything. So it's hard for
me to imagine how you get a sort of combination of a human organism with,
let's say, a bunch of silicon. I mean, we do have cochlear implants and
things like that, where you get extensions of our natural capacity, but
it's hard to see how you get a full fifty-fifty. However, you know, I'm
all for experimenting. If people want to experiment, more power to them.

BEAST: OK. Um, how far away are we from having a usable
theory of consciousness, and once we do, how will the military use it
to shatter people's minds?

JS: Yeah. It's hard for me to predict, um, what the military
would do. I'm impressed by how much the military in the United States
is under civilian control. My son is a Lieutenant Colonel in the Special
Forces, currently in Afghanistan, and I get a different take on these
military matters than you'd get, say, from a standard—than most
professors in a university environment get. My son has a Ph.D. in history,
so he's an intellectual, who happens also to be active militarily. But
the most important part of your question is about consciousness, and there,
we just don't know how the brain does it. And my guess is that the people
who crack the problem, who figure out how the brain creates consciousness,
will probably not be the existing generation of senior scientists in neuroscience.
I think it's the kind of thing that will be done by young people, who
have fresh ideas and new ways of thinking, because the assumptions we've
been making have not given us the result we want. The basic assumption
we make is that the neuron is the functional unit and what we've got to
do is study neurons, and sometimes people study neuronal maps or neuronal
clusters. But basically, it's the neuron doctrine which has dominated
brain science, and maybe that's a mistake. Maybe we ought to think at
a different level altogether.

BEAST: OK. Do you think aunts are conscious beings?

JS: Well, it's not really a question you can solve philosophically.
You have to know more about ant neurophysiology. And the question is,
how do brains do it in general, and do ant brains have enough of the machinery?
See, I don't know about ants, but termites have about a hundred thousand
neurons, and I don't know if a hundred thousand neurons is enough to get
you in the game. I mean, I probably lose that on a big weekend—a
hundred thousand neurons. But, um, it's not an interesting question to
debate now, because we don't know enough about human consciousness to
even speculate intelligently about ants and termites and lower forms.
I mean, we know the amoeba doesn't have enough machinery to be conscious—a
one-celled animal. But how big a nervous system do you have to have? We
don't know.

JS: Oh, no, no. OK. I didn't get the pun. I don't refer
to a-n-t as aunt. [Laughs, charitably.]

BEAST: Do you think colorless green ideas sleep furiously?

JS: Well, it depends on how sleepy they are. The ones
I know, they don't do much sleeping at all, because, you know, they're
too angry.

BEAST: [Laughs.] When you boil it down, aren't philosophy
and witchcraft basically the same thing?

JS: Well, I don't think so. You see, the problem
is, with witchcraft I think I could probably make a living.

BEAST: [Laughs.] What made you want to become a philosopher,
and do you regret not doing anything useful with your life?

JS: Yeah, I can't imagine—the question, 'What made
you want to become a philosopher?' would be like the question, 'What made
you enjoy sex or skiing, or good food?' I just can't imagine a life without
it. It's more fun than, well, let's see—it's the third most fun
thing.

BEAST: As an undergrad at the University of Wisconsin,
you were involved in a group called “Students against McCarthy—”

JS: Yeah, I was yeah. You did a lot of research—

BEAST: Are you now, or have you ever been, a communist?

JS: No. [Laughs.] But I—I have a lot of enemies who
are communists.

BEAST: Really?

JS: Yeah. I mean, I have some friends who are communists,
too. But most of the communists I know have stopped being communists—there
are hardly any left, I mean, poor things. I miss them. They've become
extinct. But for most of my life there were communists around. Not very
many, in the United States at least, but there were some.

BEAST: Well, Berkeley—yeah.

JS: Yeah. Well, Bettina Aptheker, in the Free Speech Movement,
publicly declared herself to be a member of the Communist Party—not
surprising. Her dad, I think, was head of the Communist Party of the United
States. But Bettina—it's interesting how she broke with the Communist
Party, because they were so sexist. They were discriminating against women.
They did not believe in equality for women.

BEAST: Right. Um, are you scared Barack Obama might
be a communist?

JS: I have a lot of worries, but that's not one of them.

BEAST: What are some of your worries?

JS: That he will be inadequate to handle the economic situation
that he's now confronted with. I think he's trying hard, and I wish him
luck, but I don't—I don't think anyone has an intellectual grip
on the present situation, and I just don't know that their policies are
going to work.