Searle is a kind of
Horatius, holding the bridge against the computationalist advance. He
deserves a large share of the credit for halting, or at least
checking, the Artificial Intelligence bandwagon which, until
his paper 'Minds, Brains and Programs' of 1980, seemed to be
sweeping ahead without resistance. Of course, the project of "strong
AI" (a label Searle invented), which aims to achieve real consciousness in
a machine, was never going to succeed, but there has always been
(and still is) a danger that some half-way convincing imitation would
be lashed together and then hailed as conscious. The AI fraternity
has a habit of redefining difficult words in order to make things easier.
Terms which, properly understood, imply understanding - and which
computers therefore can't handle - are redefined as
simpler things which computers can cope with. At the time Searle wrote his
paper, it looked as if "understanding" might quickly go the same way,
with claims that computers running certain script-based programs
could properly be said to exhibit at least a limited understanding of the
things and events described in their pre-programmed scenarios. If
this creeping debasement of the language had been allowed to proceed
unchallenged, it would not have been long before 'conscious', 'person' and
all of the related moral vocabulary were similarly subverted, with
dreadful consequences.

After all, if
machines can be people, people can be regarded as merely machines, with
all that implies for our attitude to using them and switching them on or
off.

Are you actually going to
tell us anything about Searle's views, or is this just a general
sermon?

Searle's main counter-stroke
against the trend was the 'Chinese
Room'. This has become the most
famous argument in contemporary philosophy; about the only one which
people who aren't interested in philosophy might have heard of. A man
is locked up, given a lot of data in Chinese characters, and
runs by hand a program which answers questions in Chinese. He
can do that easily enough (given time), but since he doesn't
understand Chinese, he doesn't understand the questions or the
answers he's generating. Since he's doing exactly what a computer
would do, the computer can't understand either.
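The bare mechanics of the scenario can be caricatured in code (a purely illustrative sketch - the rule table and phrases here are invented, not taken from Searle's paper). The point it makes concrete is that executing the rules requires no grasp of what the symbols mean:

```python
# The "program" is just a table pairing input symbol strings with
# output symbol strings. Whoever (or whatever) executes it matches
# shapes; no knowledge of Chinese is involved at any step.
RULES = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个人",    # "Who are you?" -> "I am a person"
}

def room(question: str) -> str:
    """Return the scripted answer by purely formal lookup."""
    # Default reply for unrecognised input: "don't know"
    return RULES.get(question, "不知道")

print(room("你好吗"))
```

To an outside Chinese speaker the replies may look competent, yet nothing in the lookup involves understanding - which is exactly the intuition the thought experiment trades on.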

The trouble with the so-called Chinese Room argument is that
it isn't an argument at all. It's perfectly open to us to say that
the man in the machine understands the Chinese inputs if we want
to. There is a perfectly good sense in which a man with a code book
understands messages in code.

However, that isn't the line I take myself. It's clear to me
that the 'systems' response, which Searle himself quotes, is the correct diagnosis. The
man alone may not understand, but the man plus the
program forms a system which does. Now elsewhere, Searle
stresses the importance of the first person point of view, but if
we apply that here we find he's hoist with his own petard. What's the
first-person view of whatever entity is answering the questions put to the
room? Suppose instead of just asking about the story, we could ask
the room about itself: who are you, what can you see? Do you think
the answer would be 'I'm this man trapped in a room manipulating
meaningless symbols'? Of course not. To answer questions about the
man's point of view, the program would need to elicit his views in a form
he understood, and if it did that it would no longer be plausible that the
man didn't know what was going on. The answers are clearly coming from the
system, or in any case from some other entity, not
from the man. So it isn't the man's understanding
which is the issue. Of course the man, without the program, doesn't
understand. In just the same way, nobody claims an unprogrammed
computer can understand anything.

But even as a purely persuasive story, I don't
think it works. Searle doesn't specify how the instructions used by the
man in the room work: we just know they do work. But this is
important. If the program is simple or random, we probably
wouldn't think any understanding was involved. But if the
instructions have a high degree of complexity and appear to be governed by
some sophisticated overall principle, we might have a different
view. With the details Searle gives, I actually think it's hard to
have any strong intuitions one way or the other.

Actually, Searle
never claimed it was a logical argument, only a
gedankenexperiment. As for the details of how the
instructions work, it's pretty clear in the original version that Searle
means the kind of program developed by Roger Schank: but it doesn't matter
much, because it's equally clear that Searle draws the conclusion for
any possible computer program.

Whatever you think about
the story's persuasiveness, it has in practice been hugely influential.
Whether they like it or not (and some of them certainly don't), all the
people in the field of Artificial Intelligence have had to confront it and
provide some kind of answer. This in itself represented a radical
change; up to that point they had not even had to talk about the sceptical
case. The angriness of some of the exchanges on this subject is remarkable
(it's fair to say that Searle's tone in the first place was not exactly
emollient) and Searle and Dennett have become the Holmes and Moriarty of
the field - which is which depends on your own opinion. At the same
time, it's fair to say that those of a sceptical turn of mind often speak
warmly of Searle, even if they don't precisely agree with him - Edelman, for example, and Colin McGinn. But if the Chinese Room
specifically doesn't work for you, it doesn't matter that much. In
the end, Searle's point comes down to the contention - surely
unarguable - that you can't get syntax from semantics. Just shuffling
symbols around according to formal instructions can never result in any
kind of understanding.

But that is what the whole
argument is about! By merely asserting that, you beg the question. If
the brain is a machine, it seems obvious to me that mechanical operations
must be capable of yielding whatever the brain can yield.

Well, let's try a different tack. The Chinese Room
is so famous, it tends to overshadow Searle's other views, but as you
mentioned, he puts great emphasis on the first-person perspective, and
regards the problem of qualia as fundamental. In fact, in arguing with
Dennett, he has said that it is the problem of consciousness. This is perhaps
surprising at first glance, because the Chinese Room and its associated
arguments about semantics are clearly to do with meaning, not qualia. But
Searle thinks the two are linked. Searle has detailed theories
about meaning and intentionality which are arguably far
more interesting (and if true, important) than the Chinese
Room. It's difficult to do them justice briefly, but if I
understand correctly, he analyses meaning in terms
of intentionality (which in philosophy means aboutness), and intentionality is grounded in
consciousness. How the consciousness gets added to the picture remains an
acknowledged mystery, and actually it's one of Searle's virtues that he is
quite clear about that. His hunch is that it has something to do with
particular biological qualities of the brain, and he sees more scientific
research as the way forward.

One of Searle's main interests is the way
certain real and important entities (money, football) exist because
someone formally declared that they did, or because we share a common
agreement that they do. He thinks meaning is partly like that. The
difference between uttering a string of noises and meaning
something by them is that in the latter case we perform a kind of implicit
declaration in respect of them. In Searle's terminology, each formula has
conditions of satisfaction, the conditions which make it true or false:
when we mean it, we add conditions of satisfaction to the conditions
of satisfaction. This may sound a bit obscure, but for our purposes
Searle's own terminology is dispensable: the point is that meaning comes
from intentions. This is intuitively clear - all it comes down to
is that when we mean what we say, we intend to say
it.

So where does intentionality,
and intentions in particular, come from? The mystery of
intentionality - how anything comes to be about anything - is one
of the fundamental puzzles of philosophy. Searle stresses the distinction
between original and derived intentionality. Derived intentionality is the
aboutness of words or pictures - they are about something just because
someone meant them to be about something, or interpreted them as being
about something: they get their intentionality from what we think about
them. Our thoughts themselves, however, don't depend on any
convention, they just are inherently about things. According to
Searle, this original intentionality develops out of
things like hunger. The basic biochemical processes of the brain somehow
give rise to a feeling of hunger, and a feeling of hunger is inherently
about food.

Thus, in Searle's
theory, the two basic problems of qualia and meaning are linked. The
reason computers can't do semantics is that semantics is about meaning;
meaning derives from original intentionality, and original intentionality
derives from feelings - qualia - and computers don't have any qualia. You
may not agree, but this is surely a most comprehensive and
plausible theory.

Except that both qualia and intrinsic intentionality are
incoherent myths! How can anything just be inherently
about anything? Searle's account falls apart at several stages. He
acknowledges he has no idea how the biomechanical processes of the brain
give rise to 'real feelings' of hunger, and he also has no account of how
these real feelings then prompt action. In fact, of course, the
biomechanical story of hunger does not suddenly stop at some point:
it flows on smoothly into the biomechanical processes of action, of
seeking food and of eating. Nothing in that process is fundamentally
mysterious, and if we want to say that a real feeling of hunger is
involved in causing us to eat, we must say that it is part of that
fully-mechanical, computable, non-mysterious process - otherwise we will
be driven into epiphenomenalism.

When you come right down to it, I just do not
understand what motivates Searle's refusal to accept common sense. He
agrees that the brain is a machine, he agrees that the answer is
ultimately to be found in normal biological processes, and he has a
well-developed theory of how social processes can give rise to real and
important entities. Why doesn't he accept that the mind is a product of
just those physical and social processes? Why do we need to postulate
inherent meaningfulness that doesn't do any work, and qualia that have no
explanation? Why not accept the facts - it's the system that does the
answering in the Chinese Room, and it's a system that does the answering
in our heads!

It is not easy for me
to imagine how someone who was not in the grip of an ideology would find
that idea at all plausible!

Read:

"Minds, Brains and Programs"The original
Chinese Room text. One you really have to read, but it's no
hardship; Searle has a trenchant, commonsensical style which at its
best, as here, has tremendous persuasive force. Among the
most famous philosophical papers of the last fifty years, it has
been repeatedly summarised and re-hashed and can be found in
its original form in a number of places, but is included in
'The Philosophy of Artificial Intelligence' (ed. Margaret A Boden),
a very useful collection with other classic papers (including
Turing's seminal one from 1950), which is especially
recommended.

"The Mystery of Consciousness"An interesting set of essays, originally written
for the New York Review of Books, commenting
on the theories of various other people. A typically
entertaining (and thought-provoking) Searlian bludgeoning for
Dennett and Chalmers, with similarly combative responses from
the victims; but others, especially Edelman, get friendlier
treatment.

"The Rediscovery of the
Mind"Neither materialism nor dualism will do, argues
Searle: it's not that they're wrong so much as that they define the
issues in mistaken terms from the very beginning. A proper
understanding of consciousness is the key and should be restored to
the centre of the discussion.

Short version:
"Mind, Language and
Society" -
Searle's own comprehensive summary of "how it all hangs
together". In some ways, the ideas emerge more clearly in this
relatively condensed form. The prose is as vigorous and direct as
ever, and Searle explains how he aspires to make a "modest
contribution to the Enlightenment vision".