
Saturday, 13 June 2015

In his rewarding book How to Create a Mind, Ray Kurzweil covers John Searle's Chinese room argument.

It's a great book, though I find its philosophical sections somewhat unskilled. And that's despite the book's abundance of scientific and technical detail, as well as Kurzweil's imaginative capabilities. Having said that, there's no reason why a “world-renowned inventor, thinker and futurist” should also be an accomplished philosopher. (There's also a danger of philosophers criticising him for not being so.) Indeed I've detected a certain amount of snobbery (or elitism) directed at Ray Kurzweil from philosophers and scientists; which is, I think, partly down to him being neither an academic scientist nor an academic philosopher.

Partly as a result of that, Kurzweil says, for example, that all that critics tend to know about Watson (an “artificially intelligent computer system capable of answering questions posed in natural language”) is that it's a computer in a machine. He also says that some commentators don't “acknowledge or respond to arguments I actually make”. He adds:

“I cannot say that Allen and similar critics would necessarily have been convinced by the argument I made in that book [The Singularity is Near], but at least he and others could have responded to what I actually wrote.” (267)

The problem is that, in the case of Searle's Chinese room argument, you can say the same about Ray Kurzweil's case against Searle. Then again, you wouldn't expect a full-frontal and elongated response to Searle in a popular science book like How to Create a Mind. Having said that, there is a great deal of detail and some complexity in this book when it comes to other issues and subjects.

The Argument

Ray Kurzweil's argument against Searle is extremely simple. He makes a distinction between Searle's man in the Chinese room and that man's rulebook. The man on his own doesn't understand Chinese. The man and the rulebook, taken together, do understand Chinese.

However, instead of talking about a man and his rulebook, let's talk about the central processing unit (CPU) of a computer and its rulebook (or set of algorithms). After all, this is all about human and computer minds.
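To make that comparison concrete, here's a minimal and purely illustrative sketch (in Python, and mine rather than Kurzweil's or Searle's): the “CPU” is a trivial matching function and the “rulebook” is a lookup table. The question-and-answer pairs are invented placeholders.

# A purely illustrative sketch: the "rulebook" as a lookup table of
# symbol-to-symbol rules, and the "CPU" as a function that applies them.
# The entries below are invented placeholders, not a real rulebook.
RULEBOOK = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am fine."
    "你会说中文吗？": "当然会。",  # "Do you speak Chinese?" -> "Of course."
}

def cpu(question: str) -> str:
    # Match the input symbols against the rulebook and return the
    # paired output symbols. No step here consults meaning.
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please repeat."

print(cpu("你好吗？"))  # answers "correctly" by symbol manipulation alone

Whether this CPU-plus-rulebook system “understands” anything is, of course, precisely what's at issue between Kurzweil and Searle.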

Firstly, Kurzweil says that “the man in this thought experiment is comparable only to the central processing unit (CPU) of a computer” (275). He adds, however, that “the CPU is only part of the structure”. In Searle's Chinese room, “it is the man with his rulebook that constitutes the whole system”.

Again, the “system does have an understanding of Chinese”; though the man (or CPU?) on its own doesn't.

The immediate reaction to this is to ask how bringing two things (a man and a rulebook) together can, in and of itself, automatically bring about a system's understanding of Chinese. Why is a system of two or more parts in a better position than a system with one part? (That's if, on this reading, a system can have only one part.) How and why do multiplicity and (as it were) systemhood bring about understanding? It can be said that the problem of (genuine) understanding has simply been replicated. It may indeed be the case that the addition of a rulebook to either a man or a CPU brings about true understanding; it's just that Kurzweil doesn't really say why it does so.

Searle is of course aware of what may be called the whole-system argument. Nonetheless, he doesn't quite talk about the same system as Kurzweil. Instead of a man and a rulebook (or a CPU and a set of algorithms), Searle (in his 1980 paper 'Minds, Brains, and Programs') writes:

“Suppose we put a computer inside a robot.... the computer would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about.... The robot would, for example, have a television camera attached to it that enabled it to 'see'....”

As I said, this system isn't the same as Kurzweil's: it's more complex and has more parts. Thus, on Kurzweil's own reasoning, it should stand more of a chance of being a mind (or even a person) than a computer's CPU and its set of algorithms.

Searle then talks about putting himself in the robot instead of a computer. However, neither scenario works for Searle. It's still the case that all Searle-in-the-robot is doing is “manipulating formal symbols”. Indeed Searle is simply the robot's homunculus: he receives and manipulates symbols without knowing what, if anything, they stand for.

Ned Block, on the other hand, offers us a system which is very similar to Kurzweil's. He too is impressed with the systemhood argument. In his 'The Mind as Software in the Brain' he says that

“we cannot reason from 'Bill does not understand Chinese' to 'The system of which Bill is a part does not understand Chinese.'”

Block continues by saying that

“the whole system – man + programme + board + paper + input and output doors – does understand Chinese, even though the man who is acting as the CPU does not”.

Block adds one more point to the above. He writes:

“I argued above that the CPU is just one of many components. If the whole system understands Chinese, that should not lead us to expect the CPU to understand Chinese.”

Again, how does systemhood automatically generate understanding? Block hardly offers an argument other than complexity, or the claim that the addition of parts to a system may (or does) bring about understanding. Indeed he goes so far as to say that his own system could (or does) have what he calls “thoughts”. He writes that

“Searle uses the fact that you are not aware of the Chinese system's thoughts as an argument that it has no thoughts. But this is an invalid argument. Real cases of multiple personalities are often cases in which one personality is unaware of the others”.

Indeed a part of Block's argument does seem to be correct. It doesn't follow from Searle-in-the-system not knowing the thoughts of the entire system that the system has no thoughts. That much seems acceptable. Nonetheless, the question still remains as to how mere systemhood brings about understanding. It doesn't matter whether the CPU or Searle-in-the-system knows what the entire system is thinking (or understanding) if mere systemhood can't bring about thoughts (or understanding) in the first place. In other words, Searle-in-the-system (or the CPU) may in principle be unable to know whether the system as a whole has thoughts (or understands things); and yet, at the same time, it may still be the case that the system has no thoughts (and understands nothing).

Kurzweil's Computer-Behaviourism?

Kurzweil says something that begs the question in its very simplicity. He writes:

“That system [of a man and his rulebook] does have an understanding of Chinese; otherwise it would not be capable of convincingly answering questions in Chinese, which would violate Searle's assumption for this thought experiment.”

What Kurzweil seems to be arguing is that if the system answers the questions, then, almost (or literally) by definition, it must understand the questions. Full stop. This is a kind of behaviourist (or perhaps functionalist) answer to the problem. If the system behaves as if it understands (i.e., by answering the questions), then it understands. Indeed it's not even really a question of “behaving as if” it understands: if the system answers the questions, it does understand, because if it didn't understand, it couldn't answer the questions!

Searle is of course aware of this behaviourist (or at least quasi-behaviourist) position. Basically, the problem of computer minds replicates the problem of human “other minds”. As Searle himself puts it:

“'How do you know that other people understand Chinese or anything else? Only by their behaviour. Now the computer can pass the behavioural tests as well as they can (in principle), so if you are going to attribute cognition to other people, you must in principle also attribute it to computers.'”

As I said, this begs all Searle's questions about true understanding (or, in his terms, meaning, intentionality and reference). To put the Searlian point in very basic terms: the system could answer the questions without understanding the questions. But since Kurzweil is arguing that the very act of answering the questions quite literally constitutes understanding, then, by definition, Searle is wrong. Nothing, according to Kurzweil, is missing from the story.

So forget the man and his rulebook: let's talk about the CPU and its rulebook (or set of algorithms) instead. If the CPU and the rulebook answer the questions, they understand the questions. In other words, the computer understands the questions.

If it's definitionally the case that answering the questions means understanding the questions, then Kurzweil is (again, by definition) correct in his argument against Searle. Nonetheless, Searle knows that this is the argument, and he's argued against it for decades. There's something left out of, or wrong about, Kurzweil's position. So what's wrong with it?

Kurzweil himself is aware, if in a rudimentary form, of what Searle will say in response. Writing of Searle's position, Kurzweil says that

“he states that Watson is only manipulating symbols and does not understand the meaning of those symbols” (170).

It does indeed seem obviously the case that this is just a case of “manipulating symbols” and not one of true understanding. Having said that, that obviousness (or acquired, as it were, intuition) is probably largely a result of reading Searle and other philosophers on this subject. If I had read more scientists (or at least some of them) on this subject, it might instead seem that what Kurzweil argues is obviously the case. So let's forget intuition or obviousness.

Does a thermometer understand heat because it reacts to it in the same way each time? It's given a question (heat) and provides an answer (the rising or falling mercury). Sure, this understanding is non-propositional and doesn't involve words or even symbols. But if Kurzweil himself sees these things in terms of the brain, its physical nature and output (behaviour), and not in terms of meaning or sentences, then why should that matter to him? In other words, if we can judge a computer squarely in terms of its output or behaviour (answering questions), then we can judge the thermometer in the same way. According to Kurzweil, if the computer answers questions then, by definition, it understands. Thus if the thermometer responds in the right way to different levels of heat, then it too understands. Sentences and their meanings are no more important to computers than they are to thermometers. (Paul Churchland, tangentially, argues the same about minds/brains and what he calls “propositional attitudes”.)