ABSTRACT:
When certain formal symbol systems (e.g., computer
programs) are implemented as dynamic physical symbol systems (e.g.,
when they are run on a computer) their activity can be interpreted
at higher levels (e.g., binary code can be interpreted as LISP,
LISP code can be interpreted as English, and English can be
interpreted as a meaningful conversation). These higher levels of
interpretability are called "virtual" systems. If such a virtual
system is interpretable as if it had a mind, is such a "virtual
mind" real? This is the question addressed in this "virtual"
symposium, originally conducted electronically among four cognitive
scientists: Donald Perlis, a computer scientist, argues that
according to the computationalist thesis, virtual minds are real
and hence Searle's Chinese Room Argument fails, because if Searle
memorized and executed a program that could pass the Turing Test in
Chinese he would have a second, virtual, Chinese-understanding mind
of which he was unaware (as in multiple personality). Stevan
Harnad, a psychologist, argues that Searle's Argument is valid,
virtual minds are just hermeneutic overinterpretations, and symbols
must be grounded in the real world of objects, not just the virtual
world of interpretations. Computer scientist Patrick Hayes argues
that Searle's Argument fails, but because Searle does not really
implement the program: A real implementation must not be homuncular
but mindless and mechanical, like a computer. Only then can it give
rise to a mind at the virtual level. Philosopher Ned Block
suggests that there is no reason a mindful implementation would not
be a real one.

HARNAD:
This would all be fine if there weren't just one thing that keeps
getting forgotten or overlooked in approaches like the one you are
advocating: There's no reason to believe, and plenty of reasons (e.g.
the symbol grounding problem [Harnad 1990a] and Searle's [1980] Chinese
Room Argument) not to believe, that computation -- i.e., the syntactic
manipulation of arbitrary objects (symbol-tokens) on the basis of their
shapes in a way that is systematically interpretable as meaning
something -- amounts to some, most or all of the "legwork" the brain
actually does in order to generate a mind. After all, there are plenty
of other processes under the sun besides symbol-manipulation (e.g.,
transduction, analog transformations, even protein synthesis). So
unless there are strong independent reasons for believing that the
brain is just implementing a computer program, there's no justification
whatsoever for claiming that either a computer or Searle's simulation
of it is "instantiating" either brain processes or any kind of mind.
Until further notice, "virtual mind" simply means a symbol system that
can be systematically interpreted "as if" it had a mind. So what else
is new...?

PERLIS:
I am forgetting nothing. Searle's argument is intended to be a proof of
the impossibility of the computational thesis (CT). I am simply
pointing out that it is not a proof. Yes, of course there are other
possibilities. But computationalism is one of them. I know no one who
has offered a proof of computationalism. But some (e.g., Searle) claim
to have a disproof, and this is a mistake. (Actually, Maudlin [1989]
claims to have a disproof of computationalism too, of a very different
sort.) You are confusing a claim that a purported disproof of CT is
incorrect with a claim of a proof of CT. I claim the former, not the
latter.

Searle's argument is based on showing that it is impossible that there
is a Chinese-understanding mind in the room. He has not succeeded in
showing this, since he has no argument "ruling out" the possibility of
a virtual mind, which is in fact the "very point" of CT. So he is
begging the question. What the Computationalist Thesis (CT) posits is
precisely that a mind arises as a virtual level on the basis of a lower
level of processing: that it is a functional level of activity brought
about by mundane nonmental actions (neurons, circuits, whatever). All
Searle has is his opinion that no other mind is there but his own. And
this is no proof, though you may feel moved by it. It is as if by
putting his mind in at the low level, he thinks he thereby makes it
impossible for there to be another mind at a higher functional level,
which is a very strange thesis to defend, although he does not directly
do so in any case. He seems not to understand virtual levels in
computational systems at all.
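The notion of a virtual level that Perlis appeals to can be made concrete with a toy sketch (entirely illustrative; the machine and program below are my own hypothetical example, not anything proposed in the discussion). At the low level the machine only shuffles integers; at the virtual level its run is systematically interpretable as computing a factorial:

```python
# A toy register machine: at the "hardware" level it only sets,
# multiplies and decrements integers; nothing in it mentions "factorial".
def run(program, regs):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # regs[a] = n
            regs[args[0]] = args[1]
        elif op == "mul":          # regs[a] *= regs[b]
            regs[args[0]] *= regs[args[1]]
        elif op == "dec":          # regs[a] -= 1
            regs[args[0]] -= 1
        elif op == "jnz" and regs[args[0]] != 0:
            pc = args[1]           # jump if register nonzero
            continue
        pc += 1
    return regs

# At the virtual level this program is systematically interpretable
# as "compute n!", though the step-by-step activity is just mindless
# integer shuffling.
prog = [
    ("set", "acc", 1),
    ("mul", "acc", "n"),   # loop: acc *= n
    ("dec", "n"),          #       n -= 1
    ("jnz", "n", 1),       #       repeat while n != 0
]
print(run(prog, {"n": 5})["acc"])  # -> 120
```

Nothing at the level of `set`, `mul` and `dec` "knows about" factorials; the factorial exists only at the level of interpretation, which is the sense of "virtual" at issue throughout.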

Of course CT is implausible from many perspectives. That is not the
point. It may be implausible and still true. Searle purports to have
shown that it is not true.

IMPLAUSIBILITY, IMPOSSIBILITY and CORRIGIBILITY

HARNAD:
I don't think you have the logic of Searle's argument quite
right: Since it is a thought experiment appealing to intuitions,
it certainly can't be a proof: It's an argument. Here is the structure
of that argument.

(1) Suppose computationalism is true (i.e., that a mental state,
e.g., understanding Chinese, arises from every implementation of
the right computer program -- i.e., from doing all the right symbol
manipulations).

(2) Suppose Searle himself implements the right computer program
(i.e., does all the right symbol manipulations).

(3) Would Searle be understanding Chinese? No. Would there be anyone
or anything else in Searle understanding Chinese? No.

(4) Therefore the computational thesis should be rejected.

The "No's" are clearly intuitive judgments about plausibility, not
deductive proofs. It is logically possible that Searle would be understanding Chinese under those conditions (after all, it's just a
thought experiment -- in real life, memorizing and manipulating the
symbols could conceivably give rise to an emergent conscious
understanding of Chinese in Searle, as some systems-repliers have
suggested). It is also logically possible that memorizing the symbols
and rules and doing the manipulations would generate a second conscious
mind inside Searle, to which Searle would have no access, like a
multiple personality, which would in turn be understanding Chinese.
This too has been proposed. Finally, it has been suggested that
"understanding would be going on" inside Searle under these conditions,
but not in anyone's or anything's mind, just as an unconscious process,
like many other unconscious processes that normally take place
in Searle and ourselves.

All of these logical possibilities exist, and have been acknowledged as
such by both Searle and me, so there is clearly no question of a
"proof." But then (because of the impenetrable "other-minds" barrier --
Harnad 1991) neither is there a question of a proof that a stone
does not understand Chinese or indeed that anyone
(other than myself, if I do) does. Yet one can draw quite reasonable
(and indeed almost certainly correct) conclusions on the basis of
plausibility alone, and on that basis neither a stone nor Searle
understands Chinese (and, until further notice, only beings capable of conscious understanding are capable of unconscious "understanding," and the latter must in turn be at least potentially conscious -- which it would not be in Searle if he merely memorized the meaningless symbols and manipulations).

So the logic of Searle's simple argument is no more nor less than
this: An apparently plausible thesis is shown to lead to implausible
conclusions, which are then taken (quite rightly in my view) as
evidence against the thesis. Countering that there is no "proof" that
the implausible consequences are impossible
amounts (again, in my opinion) to theory-saving on the basis of sci-fi
fantasies (driven, again in my opinion, solely by the illusion I've
dubbed [in Harnad 1990b] "getting lost in the hermeneutic hall of
mirrors" that is created by projecting our mentalistic interpretations
onto systematically interpretable symbol systems and then forgetting
that they are just our projections!).

It is at this point that my own symbol grounding argument (Harnad
1990a) comes in, trying to show why
computation is not enough (the meanings of the symbols are ungrounded,
parasitic on the interpretations we project on them) and what in its
stead might indeed be enough (symbol systems causally connected to and
grounded bottom-up in their robotic capacity to categorize and
manipulate, on the basis of their sensory projections, the objects,
events and states of affairs that their symbols refer to [Harnad
1992]).

And, as a bonus, grounded symbol systems turn out to be immune to
Searle's argument and its implausible consequences (Harnad 1989). This
immunity, however, is purchased at the price of abandoning completely
the thesis that mental states are just the implementations of the right
computer program.

There is no way to prove that implausible or even
counterfactual sci-fi fantasies are impossible (unless they contain a
logical contradiction); and perhaps there is no way to wean someone
from their hermeneutic power. But if for one moment the believers in the CT stopped to think that their belief could be based entirely on overinterpreting "virtual systems" -- attributing mental states where there is nobody home, and that perhaps the only thing that's really there is a symbol system that has the computational power to bear the
systematic weight of their interpretation -- then perhaps they might
consider turning to alternative candidates, grounded ones, that can
stand on their own, without the need of mentalistic projections.

HARNAD:
I don't think you have the logic of the argument quite right:
Since Searle's is a thought experiment appealing to intuitions it
certainly can't be a proof: It's an argument.

PERLIS:
Of course; you are picking on a red herring. I mean that there
is a glaring gap in his argument, so big as to be question-begging. I
was addressing your contention that I had not made an argument
for CT. But my aim was to attack Searle's argument
against CT, not to argue for CT. (There is a difference!)

HARNAD:
(1) Suppose computationalism is true (i.e., that a mental state, e.g.,
understanding Chinese, arises from every implementation of the right
computer program -- i.e., doing the right symbol manipulations).

PERLIS:
And arises at a virtual level!

HARNAD:
(2) Suppose Searle himself implements the right computer
program (i.e., does the right symbol manipulations).

(3) Would Searle be understanding Chinese? No. Would there be anyone
or anything else in Searle understanding Chinese? No.

PERLIS:
Yes! This is where the question-begging comes in! What is his argument to get us to believe that nothing else is there that constitutes understanding Chinese? According to the CT supposition there is something else: the appropriate functional (virtual) level! Searle offers nothing to argue against this, except his view that nothing there is understanding Chinese! He might as well have said at the outset that he believes mere symbol manipulations do not constitute understanding Chinese, at any level.
The room scenario adds nothing. His presence in the room is a
distraction. Why not use a computer? His same argument goes through
exactly as badly, word for word.

Precisely at the point where a massive intuition pump is needed, Searle
runs away.

HARNAD:
An apparently plausible thesis is shown to lead to implausible
conclusions, which are then taken (quite rightly in my view) as
evidence against the thesis.

PERLIS:
No! The "implausible conclusion" is simply CT itself. You
cannot make an argument against CT by simply denying CT! This is
question-begging. It is Searle's (and your) view that this is
implausible. But these views amount to no more than plain disbelief in CT from the start. The Chinese Room in no way adds to the
implausibility.

HARNAD:
The argument involves no question begging at all, as long as one gets
the logic straight. Maybe this will make it clearer: Before Searle
proposed his clever little intuition pump we were free to suppose that
anything (other than ourselves) did or did not understand Chinese, as
it suited us. In particular, according to the standard Turing Test (TT)
we could suppose that a computer -- in virtue of the fact that all of
its interactions were systematically interpretable as (indeed,
indistinguishable from) coherent Chinese discourse (with a pen-pal,
say) -- really understood Chinese. There was no one and nothing there
to gainsay our diagnosis. Then came Searle's little "periscope" across
the otherwise impenetrable other-minds barrier: He reminded us that he
could become an implementation of the "very same symbol system" that the computer was an implementation of, but that he could then go on to testify to us perfectly honestly that he was not understanding Chinese at all, just mindlessly following rules for manipulating meaningless
squiggles and squoggles!

PERLIS:
[understanding still] arises at a virtual level.

HARNAD:
"Virtual" you say? But the "virtual understanding" that we
credulously projected onto the computer previously was based on nothing
more than the important but surely less-than-mental supposition
(likewise just hypothetical, by the way, since no TT-scale symbol system yet exists) that the formal symbol system the computer was implementing could bear the full systematic weight of the interpretation of its symbols as coherent correspondence with a pen-pal (a systematic weight that a stone, say, or a thermostat, could not bear); in other words, it could successfully pass the TT. That was our only real premise, coupled with the fact that, because of the
other-minds barrier, no one could be the wiser. But the
notion that being able to bear that systematic weight was tantamount to
the possession of a "virtual" (as opposed to an imaginary)
understanding was actually an extra gratuitous step, and one that
Searle's thought experiment should certainly have made us much more
wary about taking. For to disbelieve Searle and to insist that there's
another "virtual" understanding in there no matter what he says is
simply to insist against the most direct and commonsense kind of
evidence that there is no distinction at all between something that
really is X and something that is merely systematically interpretable
as X. But if that distinction is to be scrapped, then we'd better scrap
the distinction between real and simulated ("virtual"?)
fires too.

HAYES:
Right on, Don! I am currently working on a Response to Searle
which has this very theme: how Searle's failure to understand the
concept of levels of interpretation (among others, notably that of
the causal story to be told about software) has misled him. I think we
should acknowledge, however, that we don't fully understand all this
stuff ourselves. I feel like someone who has glimpsed a new kind of
creature and is being told by a blind man that such a thing couldn't exist, so I must have made a mistake. Just a glimpse is enough to make
one mistrust arguments based on lack of imagination.

PS. I think Maudlin (1989) is wrong, by the way: the man in the room isn't
carrying out neuron-like processes any more than he is running a
program. He can always say, the hell with this, I'll have a nap before
going on with it. Neither the brain nor my Mac-2 (nor my Common-LISP
system) can do such a thing, being mere machines. But that's okay; let
Searle have his room: The point is that it isn't really a computer
running a program, only a simulation of one.

HARNAD:
Pat, could you explain this a bit more?
What is the distinction you are making? Is he or is he not implementing
the same program (i.e., is he or is he not one of the many possible
implementations of the same program -- differences among which,
because of the implementation-independence of the computational
level, are irrelevant)? And if he is not (whereas, say, an abacus
implementation of the same program would be), then why not?

Of course, if there is something essential about the Vax implementation, that's another story, but then it's no longer the symbolic functionalism everyone was talking about, but a (so far unjustified) mentalistic claim about some implementations of the same program over others.

HAYES:
You have heard me make this distinction, Stevan (in the Symposium on
Searle's Chinese Room Argument at the 16th Annual Meeting of the
Society for Philosophy and Psychology in College Park, Maryland, June
1990). I now think that the answer is, No, Searle isn't a (possible)
implementation of that algorithm. Let me start with the abacus, which
is clearly not an implementation of anything. There is a mistake here
(which is also made by Putnam (1975, p. 293) when he insists that a
computer might be realized by human clerks; the same mistake is made by
Searle (1990), more recently, when he claims that the wall behind his
desk is a computer): Abacuses are passive.
They can't actually run a program unless you somehow give them a motor
and bead feelers, etc.; in other words, unless you make them into a
computer! The idea of the implementation-independence of the
computational level does not allow there to be no implementation; it only suggests that how the program is
implemented is not important for understanding what it does.

What is an implementation, then? Well, at a minimum it is a mechanism
which, when the program is appropriately inserted into it, is put
into a "slave" state in which it is caused to "obey" the "instructions" (or
"follow" the "rules," or whatever). I put scare quotes here because
this metaphor, so widely used, is in fact rather misleading in one way,
and it is just here that it has misled Searle. Computers don't actually read and obey or follow rules. Rather, when a "rule" is moved into their processors, they are transformed into a state in which changes which can be appropriately interpreted as the consequence of the rule are caused to happen. But, as we all know, there is no little man
inside them actually reading and following anything, and this whole way
of talking is only a metaphor which anyone with a modicum of
computational sophistication learns rapidly to use appropriately.
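Hayes's point that a "rule" is just data which, once loaded, causes state changes can be illustrated with a minimal sketch (a hypothetical toy of my own, not Hayes's formulation). The "rule" is a transition table; nothing reads or obeys it, it simply determines the caused successor state:

```python
# The "rule" here is nothing a homunculus reads: it is a transition
# table that, once loaded into the machine, causes the next state.
def step(table, state, symbol):
    # No reading, obeying or following: the pair (state, symbol)
    # simply indexes the table, and the result is the caused successor.
    return table[(state, symbol)]

# A table whose caused transitions are appropriately interpretable as
# "track whether an even number of 1s has been seen".
table = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}

state = "even"
for sym in "1101":
    state = step(table, state, sym)
print(state)  # -> odd  (three 1s in "1101")
```

The talk of the machine "following" the even/odd rule is, as Hayes says, a metaphor: all that is literally present is caused transitions that bear the interpretation.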

The point of this is that if Searle, in his room or with his prodigious
memory, is just running through these rules consciously, saying to
himself that this is a tedious business and wondering what it is all
for, and generally being a human being, then I would claim that he is not, in fact, implementing the program, the program is not running,
and hence the lack of understanding, if there is such a lack, is of no
relevance to our computationalist position.

Now, maybe Searle memorises the symbols and manipulation rules so well,
or something strange happens inside him, so that they become, as it
were (more metaphors, but we have no alternative in this
science-fictiony land) "compiled" somehow into internal Searlean "code."
Ah, now, I have to admit, he really is running the program. But now,
when someone talks to him in Chinese, Searle goes into a kind of coma:
All his will-power is taken from him and something takes over his
consciousness and most of his bodily functions. After all, it has to
speak, so it needs at least his jaw and vocal tract, and it has to
hear, so it needs his perceptual cortex and his left temporal lobe, and
it needs his attention, so it will probably take over his lower brain
stem's output. Hence there's not much of his nervous system that it is
going to leave unmodified; and now something like the split-personality
story becomes clearly the most intuitively plausible one. I always
wonder what happens if you ask this thing, in Chinese, what it has in
its left trouser pocket. Does it deny all knowledge of trousers? Or
does Searle's hand sneak down and investigate, without his knowing
why?

Here's a test: Suppose you ask this thing something in Chinese, and
while you are waiting for an answer, you quickly ask it/him in English,
"How's it going?." Now, does Searle say something like "Shut up, I'm
trying to keep track of all these damn rules," or does the thing act
like a Chinese speaker who doesn't understand English? If you say to
it/him, "Let's go get a coffee and the hell with all this
rule-following," is it possible that Searle might agree and give up on
the most recent Chinese input? Or will it not understand you, and maybe
ask you, in Chinese, what those strange noises are that you are making
(or maybe it completely ignores you, since it is thinking full time
about the answer to your previous question)?

HARNAD:
Of course, if there is something essential about the Vax
implementation, that's another story...

HAYES:
Only that it is an implementation, as opposed to something else.

HARNAD:
...but then it's no longer the symbolic functionalism everyone was
talking about, but a (so far unjustified) mentalistic claim about some
implementations of the same program over others.

HAYES:
No: it's a claim about implementations of the program as opposed
to mere simulations of an implementation, of the kind that one might do
with a pencil and paper while trying to debug it: "Let's see, x becomes
17, so call Move-Hand-Right with these parameters, Hmm,..." That's not an
implementation of it, and that's what Searle-in-the-Room is doing. If the
question of his understanding anything is even germane to the
discussion, then he is not a CPU (central processing unit).

HAYES:
I think Maudlin is wrong, by the way: the man in the room
isn't carrying out neuron-like processes any more than he is running a
program. He can always say, the hell with this, I'll have a nap before
going on with it. Neither the brain nor my Mac-2 (nor my Common-LISP
system) can do such a thing, being mere machines. But that's OK, let
Searle have his room: the point is that it isn't a computer running a
program, only a simulation of one.

PERLIS:
Your Mac-2 has an operating system, perhaps even multitasking.
So it can indeed decide to take a nap or get lunch. That is, it can
"decide" to work on something else for a while. Even without
multitasking, the OS (operating system) can slow down or speed up,
depending on factors irrelevant to the program.

And we can interrupt a process by hand, e.g., by typing ctrl-Z, in
mid-execution, yet we do not then say it was not really a
program-in-execution beforehand, nor after we let it run on again later.
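Perlis's ctrl-Z point, that suspending and resuming a process does not stop it being a program-in-execution, has a familiar small-scale analogue (my illustration, not his): a generator's execution can be paused mid-run with its state intact and resumed later exactly where it left off:

```python
# A sketch of suspension and resumption: the generator's execution is
# paused at each yield (like a process stopped with ctrl-Z), with its
# local state preserved, and resumes exactly where it left off.
def counting():
    total = 0
    while True:
        total += 1
        yield total  # execution suspends here, state intact

proc = counting()
print(next(proc))  # -> 1
print(next(proc))  # -> 2
# ...an arbitrarily long pause: the "process" sits suspended...
print(next(proc))  # -> 3  (resumes where it left off)
```

During the pause we would not say that `counting` had ceased to be a program-in-execution; its suspended state is still there, waiting to be resumed.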

And you say the "brain" cannot do this? Then how can Searle do
it? Surely Searle naps and lunches at times; and he is surely a (higher
organizational level of a) brain.

As for neuron-like processes: That may be how the program is
designed; to "simulate" the informational activity of a brain that is
carrying out part of a Chinese conversation. Recall, Searle's argument
aims to show that no program execution can have mental states, not just some (Turing-approved) program execution. That is, the criterion Searle sets out to undo is not the Turing Test, but rather any program at all.
This is much more ambitious, and much more relevant, since hardly
anyone believes the TT is a good measure of mind. Certainly
functionalists do not.

I agree with Ned Block (below) that no coma is needed when Searle
internalizes the Room. If Searle were to go into a coma, he could not
keep carrying out the rule-following effort, and then the virtual mind
would collapse.
And, yes, this is rule-following: When we take in information and use it to guide our behavior, we are not so different from a computer executing a program. To be sure, using the very same Broca's area for both John Searle and Virtual Jah-Sur (VJS) is quite a trick. But even that could be
imagined by having Searle painstakingly move his mouth to produce (to
him) meaningless sounds: he would know full well what he was doing:
making odd noises in accordance with the instructions. Nor would he
know that these were Mandarin words; yet VJS would know this (so
the CT hypothesizes). VJS would not know of any painstaking effort
to produce each sound. Indeed, it is unclear whether the notion of
effort would be available to VJS at all, since that depends
on how fancy the program is. But there is no need for any such notion
to overlap with the same physical processes that account for such
notions in Searle's awareness. VJS need not have its phenomenal states
produced by the same hardware that produces Searle's.
There can be a virtual Broca's area for VJS, created by part of the
activity of Searle's thinking through the program execution.

As for Harnad: Higher organizational levels are, by definition, not
"seen" at lower levels. By choosing to place himself at the lower
level, Searle has, according to the version of the computational thesis
(CT) I am using, put himself precisely where he will not participate in
the mental life he implements. An analogy: Someone can follow rules
that in fact amount to playing chess, yet have no idea whatsoever that
he is playing chess or indeed playing a game at all. The rules might be
at a level of organization far below the usual conceptual level of a
game. Instead of moves, pieces, opponents, captures, there might be
bit-maps of pixels and associated low-level pattern-recognition
software and stepper-motor commands. Yet the chess-playing might be
brilliant, if the rules are good enough. No one would deny that chess
is being played, but "he" is not playing it, in any interesting sense.
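Perlis's chess analogy can be given a runnable toy analogue (hypothetical, and using the simpler game of Nim rather than chess for brevity): the rule-follower applies bare arithmetic to a tuple of integers, with no game-level concepts in the rules at all, yet at a higher level those steps are interpretable as perfect play:

```python
# Toy illustration: the rule-follower does bare arithmetic on a tuple
# of integers; "game", "move" and "win" appear nowhere in the rules.
def follow_rules(heaps):
    # Rule, stated purely arithmetically: let x be the XOR of all the
    # numbers; if some number h has h XOR x smaller than h, replace it.
    x = 0
    for h in heaps:
        x ^= h
    for i, h in enumerate(heaps):
        if h ^ x < h:
            return heaps[:i] + (h ^ x,) + heaps[i + 1:]
    return heaps  # nothing to do under the rule

# At a higher level of description, these steps are interpretable as
# optimal moves in the game of Nim (the XOR strategy).
print(follow_rules((3, 4, 5)))  # -> (1, 4, 5)
```

Someone carrying out these arithmetic rules by hand could have no idea that a game is being played, let alone played well; the game exists only at the higher level of description, which is Perlis's point.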

So, Searle's thought-experiment has the very outcome that CT predicts!
And it is not a surprising outcome that should make us rethink CT, as
Harnad argues. It is CT upfront. To take the internalized Chinese Room
as evidence against (the levels version of) CT is to have
understood neither the CT nor virtual levels in computers.

BLOCK:
Pat, deciding to literally obey a set of rules is one way of becoming a mechanism that is in a state in which it is caused to
"obey" the rules. No coma is needed. If you ask a question in English
and Searle says "Shut up," the implementation is revealed as
defective. The mistake doesn't show it isn't an implementation at all.

HAYES:
I disagree. Obeying a set of rules does not make one a mechanism
which is running a program. That's exactly the distinction I am trying
to draw. When I sit down to debug some code and pretend to be the
computer, running through the instructions carefully, the code is not causing me to do what it says, in the way that a computer is caused to
behave in the way specified. Your scare quotes around "obey" are
appropriate here.

If Searle says "shut up," it is revealed that there is no
implementation, not that there is a faulty one. If it were the latter,
when did the implementation become faulty? When he responded in
English? But this might be an entirely appropriate piece of behavior to
protect his Chinese long-term memory, not a fault in obeying a program,
just as an electrical computer might flash its screen when you touch
its keyboard while it is busy doing something (a piece of behavior
which is not included in the code it is currently obeying, by the way,
but lower in the system).

I admit that the notion of "implementation" I am trying to explicate
here is not exact yet: but it is more like what computer people mean by
running a program, I am sure.

ON WHAT COUNTS AS AN IMPLEMENTATION

HARNAD:
Hayes raises some interesting points about what
counts as an implementation of a program. He suggests that a program is
only being implemented (as opposed to pseudo-implemented) under some
conditions and not others, even though both "real" implementations and
pseudo-implementations go through the same steps, operate according to
the same rules, on the same inputs, generating the same outputs.

This really seems to turn Turing Equivalence and the Turing Test on
their heads. If Hayes's distinction were tenable (I'm obviously going to
try to show that it is not), it would mean that (1) the Turing Test was
invalid, because the Turing-Indistinguishable I/O performance could be
implemented the "wrong" way, and even that (2) Turing Equivalence was
not equivalent enough, despite the state-for-state, symbol-for-symbol,
rule-for-rule, one-to-one mapping between the "real" implementation and
the pseudo-implementation, because some of these mappings were somehow
"wrong."

In particular, Hayes suggests that Searle's conscious volition in
implementing the symbol system, though it certainly results in a causal
realization of all the states, governed by all the rules, is somehow
the wrong kind of causality!

I have the suspicion that it would be very hard to find a principled
way of ruling out Turing-Equivalent and Turing-Indistinguishable
implementations of this sort without invoking mysterious special
properties of mentally generated performance at both ends of the enterprise: They were already implicit (rightly, in my
opinion) in Turing's intuition that it would be arbitrary to deny
(merely because it turned out to be a machine) that a system had mental
states as long as its performance was Turing-Indistinguishable from
that of a system with mental states. But now even whether two systems
are Turing-Equivalent enough to count as implementations of the same
program turns out to depend on their not including as causal components subsystems that themselves have mental
states...

HAYES:
Abacuses are passive: They can't actually run a program unless you
somehow give them a motor and bead feelers, etc: in other words, unless
you make them into a computer. The idea of implementation-independence
of the computational level does not allow there to be no
implementation, it only suggests that however the program is
implemented is not important for understanding what it does.

HARNAD:
I agree that it makes as little sense to call an abacus an
implementation as it does to call a computer that's not plugged in or
running a program an implementation. But why should what manipulates
the abacus matter, as long as there is a physical system there,
stepping causally through the right states? In other words, I'll bet
Putnam (1975) was thinking of an abacus-manipulating system as a whole
(whether with a human or a machine doing the rule-governed movements as
part of the system) -- or if not, he should have been.

HAYES:
Computers don't actually read and obey or
follow rules. Rather, when a
"rule" is moved into their processors, they are transformed into a
state in which changes which can be appropriately interpreted as the
consequence of the rule are caused to happen. But, as we all know,
there is no little man inside them actually reading and following
anything, and this whole way of talking is only a metaphor which anyone
with a modicum of computational sophistication learns rapidly to use
appropriately.

HARNAD:
I agree. So let's drop homuncular talk about following rules and
simply talk about a system that steps causally through the right states
under the right conditions, governed causally by the right rules. I
don't care how the system feels about the rules one way or the other.
It should just be acting in accordance with them because of ordinary
physical causality (does that not include volition?). I mean, let's
start with a simple case. Suppose there is a super-simple abacus that
only has one bead, moved only once (and interpretable as signifying
"Life is like a bagel"). Does it make any difference whether that one
movement is made by a mechanical device or a person's finger? And if
there are instead enough beads to be moveable and interpretable as
arithmetic, again, does it make any difference whether they are moved
by a person or a machine? We are tempted to say the person can quit and
the machine can't, but besides the fact that that's not true either,
the only difference there is that the person quits voluntarily whereas
the machine breaks involuntarily. So what? While they're performing
(and it could be for years in both cases), they are both going through
the same states under the same conditions, and both are reliably and
systematically interpretable as doing addition. Unless we gratuitously
presuppose that volition is cheating, the two systems seem equivalent
in every nonarbitrary respect we could mention.

HAYES:
The point of this is that if Searle, in his room or with his
prodigious memory, is just running through these rules consciously,
saying to himself that this is a tedious business and wondering what it
is all for, and generally being a human being, then I would claim that
he is not, in fact, implementing the program, so the program is not
running, so the lack of comprehension, if there is such a lack, is of
no relevance to our computationalist position.

HARNAD:
I don't think you have given any nonarbitrary reason for
wanting to make this claim. But suppose even this arbitrary stipulation
is met; suppose that, as actually occurs with many learned skills,
the memorized symbol manipulation becomes so automatized with practice
that Searle can do it in the same way he can drive a car while
simultaneously carrying on an English conversation, unaware of his
driving: What difference would this make to whether or not he was
implementing the program that you would deny he was implementing when
he was doing it consciously? You answer this question quite clearly in
the passage requoted below, in which it is apparent that once you're
reassured that Searle has safely become a somnambulist, no longer able
to fend off the mentalistic attributions we are accustomed to
projecting onto our innocent computers (who are in no position to point
out that there's nobody home and that the attributions are hence
false), then the projections flow freely again. But the whole point of
Searle's thought experiment was to show that this is all just
misattribution!
What's the response? Disqualify the witness until he falls asleep in
the docket, then safely attribute to him what you wanted to attribute in the
first place!

HAYES:
Now, maybe Searle memorises the symbols and manipulation rules so well,
or something strange happens inside him, so that they become, as it
were (more metaphors, but we have no alternative in this
science-fictiony land) "compiled" somehow into internal Searlean "code."
Ah, now, I have to admit, he really is running the program. But now,
when someone talks to him in Chinese, Searle goes into a kind of coma:
All his will-power is taken from him and something takes over his
consciousness and most of his bodily functions. After all, it has to
speak, so it needs at least his jaw and vocal tract, and it has to
hear, so it needs his perceptual cortex and his left temporal lobe, and
it needs his attention, so it will probably take over his lower brain
stem's output. Hence there's not much of his nervous system that it is
going to leave unmodified, and now something like the split-personality
story becomes clearly the most intuitively plausible one. I always
wonder what happens if you ask this thing, in Chinese, what it has in
its left trouser pocket. Does it deny all knowledge of trousers? Or
does Searle's hand sneak down and investigate, without his knowing
why?

HARNAD:
This scenario, as I've suggested before, is pure sci fi: It's the
hermeneutic hall of mirrors that people get lost in after too much time
spent overinterpreting "virtual" systems that can't fight back (Harnad
1990b). But let me remind you that this mysterious power and this dual
personality are being imagined to arise purely as a consequence of memorizing
and then automatizing a bunch of symbols and rules for manipulating
them (rather than as a consequence of early sexual abuse or hysterical
personality, which are the normal causes of multiple personality
disorder)! The part about "compiling into an internal code" is just a
self-fulfilling fantasy, because the evidence, if it is not arbitrarily
ruled out of court, is that memorizing codes and rules and performing
rule-governed symbol manipulation is simply not enough to give you
these mysterious effects, though they are enough to give you a
perfectly valid implementation. (And the effects would have to be
mysterious indeed to handle the robotic challenge [deixis] of operating
on the world (as with the left trouser pocket question); the Turing
Test [TT] is just symbols-in/symbols-out, after all, whereas the
"deictic" ability you have now invoked would require symbol-grounding
[the Total Turing Test, TTT]. But that's another story altogether, one
calling for Turing-Indistinguishable symbolic
and
robotic capacities [Harnad 1989].)

HAYES:
Here's a test: Suppose you ask this thing something in Chinese, and
while you are waiting for an answer, you quickly ask it/him in English,
"How's it going?" Now, does Searle say something like "Shut up, I'm
trying to keep track of all these damn rules," or does the thing act
like a Chinese speaker who doesn't understand English? If you say to
it/him, "Let's go get a coffee and the hell with all this
rule-following," is it possible that Searle might agree and give up on
the most recent Chinese input? Or will it not understand you, and maybe
ask you, in Chinese, what those strange noises are that you are making
(or maybe it completely ignores you, since it is thinking full-time
about the answer to your previous question)?

HARNAD:
I hope I have shown that nothing rides on these questions
except whether we have left the "system" the power to gainsay the
fantasies we insist on projecting onto it.

HAYES:
[This is] a claim about implementations of the program as
opposed to mere simulations of an implementation, of the kind that one
might do with a pencil and paper while trying to debug it: let's see, x
becomes 17, so call Move-Hand-Right with these parameters, Hmm,...
That's not an implementation of it, and that's what Searle-in-the-room is
doing. If the question of his understanding anything is even germane to
the discussion, then he is not a CPU.

HARNAD:
As a matter of fact, I
would
say that a person doing, say, cookbook formal calculations by paper and
pencil could be yet another implementation (though a short-lived one)
of the same program as the one governing a calculator (if it uses the
same recipes). (If I do, by rote, math that I don't understand, having
memorized symbol manipulation rules, does a "virtual mind" in me
understand?)
An implementation merely needs to step through the right
symbolic states according to the right rules. It doesn't matter a whit
whether all or part of it is or is not thinking while doing so. What it
is to implement a program should be definable
without any reference whatsoever to the mental one way or the other.
And what causal form the implementation takes
is irrelevant, just as long as it does take causal form. The mental
comes in much later (if at all), not as a negative criterion for what
counts as an implementation, but as a positive criterion for what might
count as a mind.

MORE ON THE VIRTUES AND VICES OF THE VIRTUAL

PERLIS:
As for Harnad: higher organizational levels are, by definition,
not "seen" at lower levels. By choosing to place himself at the lower
level, Searle has, according to the version of the computational thesis
(CT) I am using, put himself precisely where he will not
participate in
the mental life he implements. An analogy: Someone can follow rules
that in fact amount to playing chess, but have no idea whatsoever that
he is playing chess or indeed playing a game at all. The rules might be
at a level of organization far below the usual conceptual level of a
game. Instead of moves, pieces, opponents, captures, there might be
bit-maps of pixels and associated low-level pattern-recognition
software and stepper-motor commands. Yet the chess-playing might be
brilliant, if the rules are good enough. No one would deny that chess
is being played, but "he" is not playing it, in any interesting sense.

HARNAD:
I
would
deny that chess is being played, because chess is
played by someone, and in a chess-playing program, be it ever so
"brilliant," there is nobody home! All you really have is symbols and
symbol manipulations that are systematically interpretable as chess --
virtual chess, in other words. Levels have nothing to do with it. It is
not only someone who has been given pixel-level rules who would not
understand he was playing chess (just as Searle would not understand
Chinese): Even someone given rules at the
right
level (as dictated by
CT), but with all the symbols encrypted into an unfamiliar code (say,
hexadecimal, morse code, or even Chinese) would have no idea he was
playing chess!
By contrast, the computer (which understands
nothing)
is oblivious to
which of countless computationally equivalent codes you give it.
[I hope everyone realizes that the ancillary question of whether
someone could eventually figure out that he was playing hexadecimal
chess is beside the point.]
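Harnad's observation that the computer is indifferent to which of countless computationally equivalent codes it is given can be made concrete with a small sketch (the "opening book," moves, and substitution cipher below are invented purely for illustration). The same formal manipulation runs identically on familiar chess notation and on an arbitrary re-encoding of it; only an external interpreter can see either as chess:

```python
# A toy "opening book" maps a position symbol to a reply symbol. The mapping
# is systematically interpretable as chess -- but the lookup itself is pure
# symbol manipulation, and survives any arbitrary re-encoding of its tokens.

book = {"e2e4": "e7e5", "d2d4": "d7d5"}        # interpretable as chess moves
cipher = str.maketrans("de24578", "qzXYabc")   # an arbitrary substitution code

def encrypt(sym):
    """Re-encode a symbol token; the formal structure is untouched."""
    return sym.translate(cipher)

# The "encrypted" system: same structure, unfamiliar squiggles.
book_enc = {encrypt(k): encrypt(v) for k, v in book.items()}

# Both systems perform the same formal operation, state for state:
assert encrypt(book["e2e4"]) == book_enc[encrypt("e2e4")]
print(encrypt("e2e4"))  # "zXzY" -- no longer recognizable as a chess move
```

A person handed only the encrypted table could manipulate it flawlessly without any idea that chess was "being played"; the computer, which understands neither version, runs both alike.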

PERLIS:
So, Searle's thought-experiment has the very outcome that CT
predicts! And it is not a surprising outcome that should make us
rethink CT, as Harnad argues. It is CT upfront. To take the
internalized Chinese Room as evidence against (the levels version of)
CT is not to have understood CT, nor virtual levels in computers.

HARNAD:
Hayes wants to disqualify Searle's testimony about not
understanding Chinese because Searle is not a proper implementation
(unless he obtunds and gets taken over by the Force as a byproduct of
the exercise). Perlis wants to disqualify Searle's testimony
because he is testifying for the wrong "level" (the Chinese testimony
is representing the right level). Both Hayes and Perlis have only one
objective, whether they realize it or not: To find grounds for
discounting any negative testimony so that they can safely return to their
mentalistic interpretation of the "virtual" system -- an interpretation
that the virtual system itself will obligingly never contest because a
symbol system must by definition be able to bear the full weight of a
systematic interpretation. Such is the grip of the hermeneutic circle.

My own recommendation would be to encrypt (into codes we don't know) any
virtual systems we might be tempted to interpret mentalistically:
reduce them to Searle's meaningless squiggles and squoggles in our own
minds, and then only be prepared to reinfuse them with meaning to the
degree that they exhibit coherent causal (robotic) connections with
the real world of objects and states of affairs that their symbols can be
systematically interpreted as referring to. Such a system (if it
passed the TTT) would be
really
grounded: Nothing virtual about it.

HAYES:
Ah, Stevan, I knew I should never have agreed to start arguing
with you in public! Let me see if I can respond to your message in
a convincing way. It will be tricky, because I confess at
the outset that it is not easy to state my position clearly; I am
trying to articulate intuitions which have their roots in rather
technical matters that I believe have not been carefully articulated
before (although, ironically, your TT/TTT distinction comes the
closest, so why are
we
arguing? Oh well, never mind...), and it is easy
for this fragile - but crucially important - intuition pump to be
overwhelmed by the huge link to "ordinary" intuition which Searle has
so thoroughly exploited, using "intuitive" descriptions of how a
computer works.

The distinction I have in mind is this. What it is that makes computers
into computers -- what makes their existence such a revolutionary
factor in our society, and what makes them worth spending many dollars
on -- is that they are machines whose behavior is influenced in
systematic ways by the meanings of the symbols that we input to them.
No earlier technology had such an extraordinary ability, and indeed
such a property would have been thought of as simply impossible, or as
arising from some miraculous new grasp of the physical world (or from
some supernatural qualities) until fairly recently. A machine that
could play chess, it was revealed, had a midget inside it. That
explained everything: the chess-playing capacity was in his head. But
there is nobody inside this Mac2 which I am typing on here, so its
abilities to do what I tell it are somehow embodied in the way it is
built. It really is a machine.

Now, if we look at how that
is
possible, then there turn out to be, as Perlis
correctly emphasises, layers of interpretation of code on virtual
machines of one kind or another (and this is not hermeneutical
confusion, by the way, but sound engineering talk). But at the bottom
there is something else, and this is rather important: There is
hardware, a physical machine that is being caused to open and
close electrical gates in particular ways by the pattern of bits that
arrive in its inputs from time to time. Notice, this is
not
"reading
and obeying instructions": It is not reading and obeying anything. It
is simply a collection of circuits which is operating according to
plain ordinary laws of physics just like any other part of the physical
world. It certainly does not understand its inputs in any reasonable
sense of the word "understand."
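Hayes's bottom-level picture can be sketched in miniature (the opcodes and program below are invented for illustration, with a dictionary lookup and a loop standing in for the physics of the gates). Nothing in the mechanism "reads and obeys" anything; each step is simply caused, and only at a higher level of interpretation is the resulting state sequence "arithmetic":

```python
# A minimal virtual-machine sketch: at the bottom, a "computer" is just a
# mechanism in which input patterns cause state changes. No homunculus reads
# or follows the rules; the dispatch below is mindless and mechanical.

def run(program):
    """Step mechanically through `program`, a list of (opcode, arg) pairs."""
    stack = []
    pc = 0                        # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":          # a pattern that causes a value to be stored
            stack.append(arg)
        elif op == "ADD":         # a pattern that causes two values to combine
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "HALT":
            break
        pc += 1                   # each transition is caused, not "obeyed"
    return stack

# Interpreted at a higher level, this caused sequence of states "is" 2 + 3:
print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("HALT", None)]))  # [5]
```

Layers of such interpretation (microcode under machine code under compiled code) are, as Hayes says, sound engineering talk, but at the bottom there is only this kind of mechanism.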

That is what I mean by an implementation. It is a mechanism, a
physically implemented, mechanical, causal machine which runs the
program. It must not have a little man in it, or at any rate it had
better be a little man whose presence can in turn be explained away in
terms of a lower-level implementation: It can't be someone whose
physical presence and whose mental abilities are essential to the
success of the account being offered of how the thing works. And it is
obvious that the Chinese room's man is essential in both of these ways,
especially when we realise the extent to which the metaphor is simply a
distorted way of describing a computer which puts cognition at exactly
the place - the symbol/physics interface - where it is least
appropriate (a point made to me by Ken Ford of the University of
Florida).

By the way, I think I detect in Harnad's message a certain outrage (or
amusement) at my apparently wanting to regard implementations as
kosher only if they
don't
have any mental states. No doubt you find this
especially twisted. But it is part of the essential point of the
computationalist idea. We start with a mystery: how can a physical
machine perform meaningful symbolic operations (understand Chinese,
etc.)? Until maybe sixty years ago there were really no good ideas on
how this could possibly happen: The best for many centuries was that
there was a soul which was somehow attached to the body but actually
existed in a separate domain. (That this was, and sometimes is, the
best story going tells us something, by the way, but that's another
story.) The problem was that it didn't seem possible for a physical
mechanism to be affected by what a symbol could mean. Here we have, for
the first time, an idea of how this can possibly happen. But in order
for it to make sense, the nature of the connection to the physical
world that it proposes needs to be taken seriously.

Now let me turn to Reply mode and respond to some of Harnad's reactions:

HARNAD:
Hayes raises some interesting points about what
counts as an implementation of a program. He suggests that a program is
only being implemented (as opposed to pseudo-implemented) under some
conditions and not others, even though both "real" implementations and
pseudo-implementations go through the same steps, operate according to
the same rules, on the same inputs, generating the same outputs.

HAYES:
But some are caused to do that by virtue of the way they are
constructed, others are choosing to do it and are using their
cognitive talents to enable them to do it.

HARNAD:
This really seems to turn Turing Equivalence and the Turing
Test on their heads.

HAYES:
Turing Equivalence is a relationship between mathematical
functions. The Turing Test is not directly related to Turing
computability. For example, if one program runs 1000 times as fast as
another, it makes no difference whatever to their Turing Equivalence,
but it could make all the difference to one passing and the other
failing the Turing Test.
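Hayes's speed point can be illustrated with a standard pair of programs (a rough sketch, not anything from the discussion itself): two procedures that compute exactly the same function, and so are equivalent in the input/output sense, yet differ so drastically in running time that only one could answer in conversational time:

```python
# Two computations of the same function: identical answers, wildly different
# speed. I/O equivalence says nothing about whether a system could reply
# fast enough to pass a real-time Turing Test.
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: roughly 2^n calls -- hopeless for large n."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Memoized recursion: each value computed once."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Same function, same answers on every input we can afford to check...
assert all(fib_slow(n) == fib_fast(n) for n in range(20))

# ...but fib_fast(200) returns at once, while fib_slow(200) would outlast
# any conversation partner.
print(fib_fast(200))
```

The two are indistinguishable as mathematical functions, which is why equivalence of computed function and passing the Turing Test come apart.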

HARNAD:
If Hayes's distinction were tenable (I'm obviously going to try
to show that it is not), it would mean that (1) the Turing Test was
invalid, because the Turing-Indistinguishable I/O performance could be
implemented the "wrong" way,

HAYES:
Again, you seem to confuse Turing-Equivalence with the Turing
Test.

HARNAD:
and even that (2) Turing Equivalence was not equivalent enough,
despite the state-for-state, symbol-for-symbol, rule-for-rule,
one-to-one mapping between the "real" implementation and the
pseudo-implementation, because some of these mappings were
somehow "wrong."

HAYES:
Yes, I don't think that Turing Equivalence is enough: it might
be far too slow, for example. By the way, the equivalence you suggest
is far tighter than Turing Equivalence, matching as it does internal
states on a one-to-one basis. But I don't think that the state mapping I
have in mind really works. If the man running the program had states
which corresponded to the states of the (hardware) computer running
the (assembly-language) program, then he would be in some kind of
trance. In fact, he would hardly exist in the usual form that we mean
by a "person": His body would be under the total control of
some code which was running in his head, talking Chinese. It would have
taken him over in the way that a spirit takes over the body of a medium
in a trance, as we sometimes say, or perhaps the way one of a collection of
rival personalities has temporary control of the actions and persona of
a certain kind of patient.

HARNAD:
In particular, Hayes suggests that Searle's conscious volition in
implementing the symbol system, though it certainly results in a causal
realization of all the states, governed by all the rules, is somehow
the wrong kind of causality!

HAYES:
Well, now, I always wonder quite what the word "causality" is
supposed to mean. If I am consciously obeying some rules - say,
following a recipe - then it seems that the presence of (my
understanding of) these rules in my head does not cause me to perform
the cookery in the same way that the presence of (a physical encoding
of) an assembler-language program in a computer causes the machine to
behave in a certain way. That is the core of my argument: that there is
something fundamentally different here, in particular, in that the
story of the human's following the rules entails the presence of some
conscious ego in the human head, while the computer story exactly
denies this and supplants it with a purely mechanical account of an
implementation.

HARNAD:
I have the suspicion that it would be very hard to find a
principled way of ruling out Turing-Equivalent and
Turing-Indistinguishable implementations of this sort without invoking
mysterious special properties of mentally generated performance at
both
ends of the enterprise: They were already implicit (rightly, in my
opinion) in Turing's intuition that it would be arbitrary to deny
(merely because it turned out to be a machine) that a system had mental
states as long as its performance was Turing-Indistinguishable from
that of a system with mental states. But now even whether two systems
are Turing-Equivalent enough to count as implementations of the same
program turns out to depend on their not including as
causal components subsystems that themselves have mental states...

HAYES:
Well, that is one way to put it. But the point is not that the
components don't have mental states, but that a complete account is
available of how they are constructed as physical systems. And appeal
to mental states in the causal loop vitiates any such account. Imagine
that someone tries to sell a new word-processor which can correct faulty
grammar. If it works only when "implemented" on Searle, it clearly is
not an implementation in the usual sense of that word. A computer is
implemented as a physical machine. That's what is so remarkable: one
can build elaborate symbolic structures which are (arguably)
beginning to be plausible accounts of mentality, all resting on an
implementational foundation which is merely a mechanism. That's how it
(plausibly, hypothetically) bridges the gap between physical machinery
and the mental world of the mind.

HARNAD:
I agree that it makes as little sense to call an abacus an
implementation as it does to call a computer that's not plugged in or
running a program an implementation. But why should what manipulates
the abacus matter, as long as there is a physical system there,
stepping causally through the right states?

HAYES:
I agree, except that I don't think a human reading a script and
moving the beads according to instructions is stepping
causally
through
the right states. What is causing the person to obey the instructions?
If one were asked to give an account of what was happening, one would
have to talk of reading and comprehension and accepting goals and so
forth.

HARNAD:
...let's drop homuncular talk about following rules and
simply talk about a system that steps causally through the right states
under the right conditions, governed causally by the right rules.

HAYES:
Okay, but I think that this rules out Searle in his room right
there. Certainly it does if we adopt the usual intuitive understanding
of what a computer mechanism is.

HARNAD:
I don't care how the system feels about the rules one way or
the other. It should just be acting in accordance with them because of
ordinary physical causality (does that not include volition?).

HAYES:
No, it does not. If I read a novel and as a result am cast into
deep thinking about the nature of life and then decide to give it all
up and become a monk, did the novel cause me to make that decision? I
doubt it, according to any ordinary notion of cause. Certainly if my
family try to sue the publishers for damages they will not get very
far.
There is still some free will in there making those decisions.

HARNAD:
I mean, let's start with a simple case. Suppose there is a
super-simple abacus that only has one bead, moved only once (and
interpretable as signifying "Life is like a bagel"). Does it make any
difference whether that one movement is made by a mechanical device or
a person's finger?

HAYES:
Yes, of course it does! Consider a word processor: does it make
any difference whether that is running on a piece of electronics or is
just an interface to a human secretary? Not that this is an interesting
example, because it clearly is not a computer.

HARNAD:
And if there are instead enough beads to be moveable and
interpretable as arithmetic, again, does it make any difference whether
they are moved by a person or a machine? We are tempted to say the
person can quit and the machine can't, but besides the fact that that's
not true either, the only difference there is that the person quits
voluntarily whereas the machine breaks involuntarily.

HAYES:
No, the other difference is that the person uses nontrivial
cognitive abilities (including such things as motor control, etc.,) to
perform this motion, while the hypothesised machine does not.
Incidentally, this is quite a nontrivial piece of engineering being
hypothesised. Notice that in order to be a computer, the machine's
behavior has to be systematically influenced by the positions of the
beads.

HARNAD:
While [the person and the machine are] performing (and it could be for
years in both cases), they are both going through the same states under
the same conditions, and both are reliably and systematically
interpretable as doing addition. Unless we gratuitously presuppose that
volition is cheating...

HAYES:
But look, of course it is cheating. Since we are supposed to be
trying to give an account of volition, to assume it as given is to beg
rather an important question.

HARNAD:
...the two systems seem equivalent in every nonarbitrary respect
we could mention.

HAYES:
No, I claim that there is a fundamental difference between them.
And now, look at the intuition behind the Room: if we replace the man
by a machine running the program and try out Searle's intuition it
breaks down, because we can see immediately that this is really just
the argument that since the bare hardware doesn't seem to be
understanding Chinese, therefore the system as a whole is not. I take
it that this is clearly a mistake: if not, let's argue that point
later.

HARNAD:
...suppose that, with many learned skills, the memorized symbol
manipulation becomes so automatized with practice that Searle
can do it the way he can drive a car while simultaneously carrying on
an English conversation, unaware of his driving: What difference
would this make to whether or not he was implementing the program

HAYES:
Lots of difference. But notice your choice of terminology. I
don't think Searle can implement the kind of program that could run on
constructible hardware: The man in the room is simulating running a
program, not implementing it.

HARNAD:
...once you're reassured that Searle
has safely become a somnambulist, no longer able to fend off the
mentalistic attributions we are accustomed to projecting onto our
innocent computers (who are in no position to point out that there's
nobody home and that the attributions are hence false), then the
projections flow freely again. But the whole point of Searle's thought
experiment was to show that this is all just misattribution! What's the
response?
Disqualify the witness until he falls asleep in the docket, then
safely attribute to him what you wanted to attribute in the first place!

HAYES:
Ah, rhetoric. Now look, if the Chinese room is supposed to be
putting a witness where the hardware is, then it leads to very
different intuitive conclusions. The point of the passage below is to
point this out. Either the interpreter of the code is a conscious
person, in which case it isn't being run in the way it would be on a
computer, or else a very different intuitive account of the
man-memorised-code is the most natural. The idea of alternative
personalities struggling for control of a single body is not science
fiction; it is clinically recognised as rare but possible, and seems
quite the most convincing account of that version of the man with the
room in his head.

HARNAD:
This scenario, as I've suggested before, is pure sci fi: It's the
hermeneutic hall of mirrors that people get lost in after too much time
spent overinterpreting "virtual" systems that can't fight back (Harnad
1990b). But let me remind you that this mysterious power and this dual
personality are being imagined to arise as a consequence of memorizing
and then automatizing a bunch of symbols and rules for manipulating
them (rather than as a consequence of early sexual abuse or hysterical
personality, which are the normal causes of multiple personality
disorder)!

HAYES:
Bluster! The whole idea of being able to memorise (or even
follow for that matter) a piece of code sufficiently complicated to
simulate a Chinese speaker is complete fantasy in any case. We are
supposed to suspend disbelief in order to be convinced by the way our
intuitions grasp what is happening in a wildly unintuitive situation.
My point is only that an alternative intuitive account is more
plausible and does not have the consequences that Searle takes as
somehow established.

HARNAD:
The part about "compiling into an internal code" is just a
self-fulfilling fantasy, because the evidence, if it is not arbitrarily
ruled out of court, is that memorizing codes and rules and performing
rule-governed symbol manipulation are simply not enough to give you
these mysterious effects, though they are enough to give you a
perfectly valid implementation.

HAYES:
No, there is not a shred of "evidence" in this direction. Wildly
implausible imaginary scenarios are not evidence. Let us keep things
straight: A philosopher's opinions, no matter how wittily expressed, are
not evidence, and the popularity of an opinion is not a case made. And
what I am actually trying to argue here (albeit vaguely, I concede) is
that there is
not
a valid implementation in the Chinese room. That is my point.

HARNAD:
I hope I have shown that nothing rides on these questions
except whether we have left the "system" the power to gainsay the
fantasies we insist on projecting onto it.

HAYES:
No, I don't think you have shown this. You have asserted it with
your usual great style, but I don't see any actual argument so far. All
we have done is held our intuitions up against one another.

Let me ask you about "fantasies." Is it a fantasy to say that my Mac2
is currently running Word4? Or that I just now told Word4 to display
this document in printing format?

HARNAD:
As a matter of fact, I would say that a person doing, say,
cookbook formal calculations by paper and pencil could be yet another
implementation (though a short-lived one) of the same program as the
one governing a calculator (if it uses the same recipes). An
implementation merely needs to step through the right symbolic states
according to the right rules.

HAYES:
Ah, we really do disagree. It is now up to me to provide
an account of what an implementation is in a clearer form.
I will claim here that what I mean by
implementation is closer to what is the accepted idea in computer
science.

HARNAD:
It doesn't matter a whit whether all or part of it is or is not
thinking while doing so. What it is to implement a program should be
definable without any reference whatsoever to the mental.

HAYES:
That's not really the centerpiece of the distinction I want to
draw. The key is whether at some level a completely physical account
can be given of how the implementation can be realised as a machine
operating according to physical laws, whose behavior is nevertheless
systematically related to (and influenced by) the meaning of the
symbols encoded in its internal states. We can't do that for people,
and until we can, it begs questions to call a person following some
rules an implementation of a program.

HARNAD:
And what causal form the implementation takes is irrelevant,
just as long as it does take causal form.

HAYES:
As I have said, I don't consider an account of someone obeying
rules a causal account.

HARNAD:
The mental comes in much later (if at all), not as a negative
criterion for what counts as an implementation, but as a positive
criterion for what might count as a mind.

HAYES:
If we have to bring The Mental in later to explain the mind,
then it becomes another mystery like the soul, and we have no better
account of how mental and physical states connect to one another than
the Victorians did.

Searle, J. R. (1990) Is the Brain a Digital Computer?
Presidential Address. Proceedings of the American Philosophical
Association.

FOOTNOTES

1.
Searle's (1980) Chinese Room Argument goes as follows: According to the
"Strong AI" hypothesis, if a computer running a computer program could
successfully pass the Turing Test -- i.e., correspond with you as a
pen-pal for years without your ever suspecting it of not having a mind
or of not understanding what you're writing -- then every implementation
of that program really would have a mind and really would understand
what you were saying. Searle's argument is that if the Turing Test were
conducted in Chinese and he himself implemented the program by
memorizing and performing all of its symbol-manipulation rules then he
still would not be understanding Chinese. Hence neither would the
computer that was implementing the same program. Hence the Strong AI
hypothesis is false. -- S.H.