Note: Commentaries
are in 12-point type-size and author’s
responses are in 14-point type-size. The relevant passages from
the commentaries are quoted in the responses. The full commentaries are also available at the
Consciousness Online site. Commentaries not addressed
to me are in grey.

Professor Harnad, I am
highly sympathetic to your emphasis on robots, and to your recognition
of the explanatory gap at the end of your paper. But I believe there
are a couple of points along the way that are not quite right, and that
interfere with a clear view of what Turing and Searle did and didn’t
do. The ﬁrst concerns the questions (on p. 2) of what is missing, that
makes Searle not understand Chinese, and how Searle knows he doesn’t
understand Chinese. Your answer is that what is missing is “what it
feels like to be able to speak and understand Chinese”. And Searle
knows (indeed, is the only one who can know) whether he has that
feeling. But there is another answer that is implicit in Searle’s paper
(though it is made along with some other points, and so does not stand
out clearly). This is that Searle cannot respond to Chinese symbols in
any way other than verbally. For example, the people outside the room
may pass in a paper that says (in Chinese) “If you’d like a hamburger
at any time, just press the green button to the right of the ‘out’
slot.” Assuming the right program, scripts, and so forth, Searle may
well pass out a slip that says (in perfect Chinese) “Thank you very
much, if I want one I’ll do that”. But meanwhile, even though he may be
getting desperately hungry, he will not take himself to have any reason
to press the green button. You do call attention to this fact later on
in the paper. The point of my mentioning it here is to point out that
this fact is by itself sufﬁcient to show that Searle did not understand
what was written on the input slip. His feelings about his
understanding are neither here nor there. People can deny their
blindness; it seems imaginable that a person who understood perfectly
well might be under the illusion of lack of understanding (and
conversely). On page 3, you give three reasons why Turing is to be
“forgiven” for offering the imitation game. Clarity on the point I’ve
just made should help us see that Turing needs no forgiveness. Turing’s
target was named in the title of his paper – it was intelligence (not
understanding, having indistinguishable cognitive capacities, having
feelings, or having a mind). Intelligence is indeed the kind of thing
that can be exhibited in conversation. The rationale for the imitation
game could be put this way: If machine M can do as well in conversation
as human H (shown by inability of judges to reliably tell which is
which) then they have the property that’s exhibited in conversation
(namely, intelligence). You are quite right that if we include, e.g.,
recognition, and ability to manipulate under “cognitive capacities”,
we’ll need a robot, not just a computer. And if we want understanding,
as Searle uses the term, we’re going to have to have grounding, i.e.,
something that counts as nonlinguistic behavior, i.e., at least some
sensors and motors; in short, a robot. You suggest what seems to me to
be the right thought experiment at the top of p. 8. In brief, we’re to
imagine a feelingless robot that’s a good conversationalist and suits
actions to words for its whole, lengthy existence. I’ll enrich the case
by imagining the robot to have done my shopping for me for years. It
knows all about bus schedules, product reliability, etc. It says “I’ve
hooked your new speakers up to your receiver”. It has done so, and is
reliable about such matters. Is it remotely plausible that it doesn’t
know what “speaker” or “hook up” means? I am not sure what you meant us
to think, but I think the answer is that of course it understands its
words. What a robot has not got (by hypothesis), and does not need to
have, is any feelings. The research program for intelligence and for
the grounding that provides meaning for words is distinct from any
research program for producing feelings. It’s the absence of feelings
that accounts for its seeming justiﬁable to dismantle a robot. If it’s
my shopping assistant that you’re recycling, I’ll be distressed, as I
would if you took apart my car. But while a robot might defend itself
against being dismantled (I’d design that into any robot I’d make) it
can’t be genuinely afraid of dismantlement if it’s just a robot and has
no feelings. It can’t be made miserable by threat of dismantlement.
What feelings matter to is not using words meaningfully, but being a
proper subject of genuine moral concern.

ROBINSON: “Searle cannot
respond to Chinese symbols in any way other than verbally.”

Correct. And that’s why the
right Turing Test is T3, the robotic TT. But that isn’t the reason
Searle can be sure he does not understand Chinese. Nor is it true that
passing T3 entails understanding, by deﬁnition. That’s no more true
than that passing T2 does.

It’s not Searle’s feelings
*about* his understanding that are at issue, it’s his feeling *of*
understanding. And you’re right that it’s neither here nor there,
because Searle is not understanding (Chinese) though he is passing T2
(in Chinese), and nor is anyone or anything else.

Correct, but having
intelligence is synonymous with having indistinguishable cognitive
capacities. (What else does it mean?) Whereas having feelings (which is
synonymous with having a mind) is the bone of contention.

Turing was right that
explaining the causal mechanism that generates our cognitive capacities
answers all the answerable questions we can ask about cognition
(intelligence, understanding, whatever you like). But that still leaves
one question unanswered: Does the TT-passer feel (and if so, how, and
why)? Otherwise it does not have a mind.

(And mindless “cognitive
capacity” or “intelligence” can only be taken to be the same thing as
our own if we accept that as a deﬁnition, which rather begs the
question, since the question is surely an empirical one.)

ROBINSON: “[W]e’re to
imagine a feelingless robot that’s a good conversationalist and suits
actions to words for its whole, lengthy existence.”

Like you (I think), I too
believe it (1) unlikely that anything could pass T3 unless it really
understood, really had intelligence, really had our cognitive capacity.
But I also believe — *for the exact same reason* — that it is (2)
unlikely that anything could pass T3 unless it really felt, because,
for me, understanding (etc.) *means* cognitive capacity + feeling. (In
other words, again, I do not *deﬁne* understanding, etc. as passing T3.)

But, unfortunately,
although I share your belief in the truth of (1), I also believe that
we will never be able to explain why and how (2) is true (nor ever able
to know for sure *whether* (2) is true — except if we happen to be the
T3 robot).

ROBINSON: “What feelings
matter to is not using words meaningfully, but being a proper subject
of genuine moral concern.”

I am afraid I must
disagree. I do happen to be a vegan, because I don’t want to eat
feeling creatures; but the “hard” problem of explaining how and why we
feel is not just a moral matter.

I may be wrong but I
took Professor Harnad’s use of “feeling” to be somewhat broader than
your criticism seems to imply. Indeed, I read it as an allusion to
awareness, as in having the sense of what is happening when an instance
of information is understood. The use you allude to is the emotional
aspect of the word “feeling” which is certainly a common way we
understand the word but it is different, and in an important sense,
more narrow than feeling as awareness. (We are aware of emotional
content as well as intellectual content when we have either.) It’s
feeling as awareness that I believe Professor Harnad seems to have in
mind when discussing Searle’s insistence that, for a computational (or
any) machine to be really intelligent, it would need to do more than
just respond in the right way(s) verbally. It would also need to get
the meaning of what it is “reading” and “saying.” Searle certainly
makes an important point about what a machine entity would have to be
in order for it to be called intelligent in the way we use the term for
ourselves much of the time (i.e., conscious), whatever the merits of
his Chinese Room argument (which, I think, is seriously ﬂawed). But
Professor Harnad, it seems to me at least, is right to note that there
is something we could call “feeling” involved here, though perhaps it’s
an unfortunate choice of term since it is so often tied up with
references to our emotional lives. (Both emotions AND instances of
intellection are part of our mental lives after all and, as such, are
both felt by us, even if some instances of intellection have a
perfectly ﬂat modality — indeed, ﬂatness, too, is felt on this usage,
even if only as the absence of strong feelings of the emotional sort.)
I do think your point about the machine’s capacity to act in certain
ways in response to the information is useful because that’s integral
to comprehending when the generation of action is part of the
information in the input. But it doesn’t mean that a robotic body is
essential. After all, many a paraplegic could likely understand the
same proposal to order a hamburger but still not be able to act on it —
just as the uncomprehending man in the Chinese Room is unable to,
albeit for different reasons. (Searle’s man really doesn’t get it while
the paraplegic just lacks the capacity to act.) More important, I would
suggest, is to distinguish just what it is that constitutes
understanding in us, what Searle calls getting the semantics of the
“squiggles” and “squoggles”. And on that score, I think what’s ﬁnally
needed is a satisfactory account of semantics that adequately captures
what happens in us when we have understanding. We have a mental life
within which understanding occurs and that is just what the Chinese
Room, as sketched out by Searle, lacks by hypothesis. (It’s just a rote
device for matching symbols.) The problem, ﬁnally, is what would it
take to undo that lack — and could it be done using computational
processes of the kind that run on today’s computers? I think it could
but the only way to really see it, I think, is to unpack the notion of
semantics.

MIRSKY: “[T]he machine’s
capacity to act in certain ways… [is] integral to comprehending when
the generation of action is part of the information in the input. But
it doesn’t mean that a robotic body is essential. After all, many a
paraplegic could likely understand…”

We need a mechanism that
can pass the full robotic T3 before we test what happens if we cut back
on its sensorimotor capabilities. (The TT for a terminally comatose
patient would be easy to pass, but no TT at all.) (Whether one could
even contemplate blinding a T3 robot, however, is indeed a moral
question of the kind raised by W.S. Robinson above.)

MIRSKY: “[W]hat’s ﬁnally
needed is a satisfactory account of semantics that adequately captures
what happens in us when we have understanding… — could it be done using
computational processes of the kind that run on today’s computers? I
think it could but the only way to really see it, I think, is to unpack
the notion of semantics.”

Until further notice, what
both Searle’s Chinese Room argument and the Symbol Grounding Problem
show is that cognition cannot be just computation (symbol manipulation)
— any computation, whether on today’s computers or tomorrow’s. What’s
missing is sensorimotor capacity, which is not computational but
dynamic. That will provide grounding. But for meaning you need to
capture feeling too, and that’s a rather harder problem.

“He said, essentially,
that cognition is as cognition does (or, more accurately, as cognition
is capable of doing): Explain the causal basis of cognitive capacity
and you’ve explained cognition.” Of course Turing never said any such
thing; what he said was that if you cannot tell, then you cannot tell.
That was good positivist talk when he said it in 1950, and who could
ever argue with it? Which would all be nitpicking, except we should let
it remind us not to make the same mistake so often (as here, mistakenly)
attributed to Turing. “… and you really cannot ask for anything more”.
Exactly, that is what Turing did say. The implication of (and I
concede, by) Turing was that, if the thing on the other end of the test
was not really intelligent, it might be the next best thing. But let us
follow the invalid claim after all – let us reject the imitation, too,
and demand the real thing. Now, even if Turing did not really claim the
above, I – with less wisdom but sixty years more historic experience
with the ﬁeld – will happily do so in Harnad’s words: explain the
causal basis of cognitive capacity and you’ve explained cognition.
Well, who would argue that, either? Although one can argue it is a
tautology – if it turns out you have NOT completely explained
cognition, then the odds that you somehow managed to explain the causal
basis of cognition anyway approach zero. – Searle’s Chinese Room? My
take is that Searle argues exactly correctly, but with exactly the
opposite direction that he claims. If the CR can converse in Chinese
without it anywhere “understanding” Chinese, then I take him at his (ex
hypothesi) word – that is exactly how a computer will do it, thank you
very much. I see nothing missing. The claim that something is missing
is only asserted, never argued. I ﬁnd nothing missing at all. If
someone passes in a note that says, “The Chinese Room is on ﬁre, run!”,
and it returns out a note, “Thank you for telling me, I will run away,
oh wait I can’t, please call the ﬁre department, arrrgghh!”, and yet
the operator sits still inside without even knowing his peril, well, we
have learned several things, none of which Searle ever discusses, nor
does most of the voluminous commentary upon it. If something is missing
in the CR hypothesis, it is the rest of what is likely to be needed to
fulﬁl the hypothesis, a narrative of self by the CR such that it would
not be just a manipulation of symbols, but a manipulation of symbols in
a context. I see no reason that cannot be just more symbol
manipulation, but it does have to be a particular kind of symbol
manipulation, and frankly, to make the argument interesting, we have to
do a little more than hand-waving at how it would work – which is,
sadly, the real conclusion one should reach about the CR in the ﬁrst
place. – Claims that some further “grounding” in physical terms is
needed, I believe need to be unpacked. When I sit and talk to Fred
about plans to explore Mars, neither of us has ever been there or is
likely to be. So, we are grounded in what we know, and reason by
analogy, right? Well, it turns out Fred only knows what he’s read in
some books, he’s never roved over dry sand in a crater himself – but
however far you’d like to stretch the analogy analogy, it’s beside the
point. The point, I suggest, is that when Fred reasons, whether it is
by analogy or by physical embedding, the reasoning in his head is
(presumably) some sort of symbolic manipulation “and nothing but”. So,
whatever the role of the real world in how Fred (and I) talk about
Mars, the reasoning and the talking (perhaps the “talking” is by email)
is all and only the original question which is, how are they doing that
stuff, that linguistic stuff, that cognitive stuff, that logical stuff?
– As a ﬁnal note, I would just toss in that symbol manipulation never
is just symbol manipulation, it is always a physical particular, and in
that it is in no way removed from anyone’s claims that a true cognitive
system must be physically grounded. Correspondingly, to wonder about
the role of causal systems in cognition is an empty argument; as has
often been pointed out, a lack of causality would defeat pretty much
any kind of argument one would ever make about cognition – one can
hardly imagine a workable deﬁnition of the term without a causal world
for it to live in (and for us to discuss it in). So whatever the
shortcomings of any particular causal system, it can hardly be said to
be the causality that is the problem with it. So there never really was
a “nothing but” in the symbol manipulation that Fred did in thinking
about Mars, even his thinking involves various neurons and chemical
reactions, or if Fred happens to be a computer, some circuits and bags
of electrons shifting around. – Whether any of this bears on the
problem of qualia is another matter. – The moral of the CR story (and
it’s a rollicking good story no matter what its faults) may indeed be
something about grounding, even without worrying about feelings and
qualia, and that is, a lack of grounding, or embedding, in the full
causal matrix of the world, including the social aspects thereof, is in
some way crucial to telling a believable story about cognition, much
less in realizing it. Does the room “know” that it has an operator
sitting trapped in its midst? If not, can it really be said to be
conversing with outsiders as if it were a full human agency? Well,
perhaps, as humans are never known for omniscience, nor even for
modest levels of awareness of their internal states – and hey, wasn’t
that a part of the original story?

Searle argues exactly
correctly that if he were doing the Chinese T2-passing computations he
would not be understanding Chinese. That’s what’s missing.

I then focussed on *how*
Searle would know that he was not understanding Chinese. And the answer
is that it *feels like something* to understand Chinese. And Searle
would know that he did not have that feeling, if he were just
implementing Chinese T2-passing computations.

JSTERN: “Claims that
some further “grounding” in physical terms is needed… need to be
unpacked… we are grounded in what we know, and reason by analogy,
right?…”

Right. And “grounded in
what we know” means grounded “in physical terms.” Otherwise it’s just
the Chinese-Chinese dictionary deﬁnition chase all over again — which
is just an inﬁnite regress, from meaningless symbol to meaningless
symbol. (And neither cognition, nor knowing, nor T2-passing are just
“reasoning.”)
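
(To make that regress concrete: the following toy Python sketch, a
hypothetical illustration of my own rather than anything from the target
essay, chases “definitions” through a dictionary in which every definiens
is itself just another entry. However deep the chase goes, it returns only
more uninterpreted symbols, never a connection to the things the symbols
are about.)

    # Toy "dictionary-go-round": every word is defined only by other words.
    dictionary = {
        "zebra": ["striped", "horse"],
        "striped": ["having", "stripes"],
        "horse": ["large", "hoofed", "animal"],
    }

    def chase(symbol, depth=0, max_depth=4):
        """Follow definitions symbol to symbol; no step ever reaches a referent."""
        if depth == max_depth or symbol not in dictionary:
            return [symbol]                  # still just an ungrounded symbol
        chain = [symbol]
        for word in dictionary[symbol]:
            chain.extend(chase(word, depth + 1, max_depth))
        return chain

    print(chase("zebra"))                    # symbols all the way down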

JSTERN: “The point, I
suggest, is that when [one] reasons, whether it is by analogy or by
physical embedding, the reasoning in his head is (presumably) some sort
of symbolic manipulation ‘and nothing but’…”

And the point of Searle’s
Chinese room argument, and of the symbol grounding problem, is that
without sensorimotor grounding (“physical embedding”), computation is
nothing but the ruleful manipulation of meaningless symbols. Therefore
cognition (not just “reasoning”: *cognition*) is not just computation.

But that point has already
been made many times before. The point of this particular little essay
was to focus on what even a grounded cognitive system that could pass
the Turing Test (whether T3 or T4) might still lack (namely, feeling),
and to point out that whether or not the successful T3- or T4-passer
really did feel, the full causal explanation (“reverse-engineering”) of
the underlying mechanism that generates its T3/T4 success will not
explain why or how (let alone *whether*) it feels.

JSTERN: “[S]ymbol
manipulation never is just symbol manipulation, it is always a physical
particular, and in that it is in no way removed from anyone’s claims
that a true cognitive system must be physically grounded.”

Of course computation has
to be physically implemented in order to be performed. That’s just as
true of the physical implementation of a lunar landing simulation or of
a computational proof of the 4-color problem. But the
physical implementation of any computation that is actually performed
is not the same as the sensorimotor grounding of a computation that
attempts to implement our cognitive capacity.

And that, again, is because
cognition is not just computation.

JSTERN: “[T]o wonder
about the role of causal systems in cognition is an empty argument…
lack of causality would defeat pretty much any kind of argument one
would ever make about cognition – one can hardly imagine a workable
deﬁnition of the term without a causal world for it to live in (and for
us to discuss it in).”

The only causality we need
for cognition is the causality needed to pass T3. The causality to pass
T2 is not enough (if it is just computational), because the connection
between T2’s internal symbols and the external world of objects to
which those symbols refer cannot be made by the T2 system itself: It
has to be mediated by external interpreters: cognizers. Yet it is the
cognizers’s cognition that the T2 mechanism is intended to explain.

Another inﬁnite regress.

Hence, again, cognition is
not just computation.

And it’s not just any old
causality (or implementation) that’s needed: It’s the right one. T3
ﬁxes that (insofar as grounding, hence doing, is concerned). But
neither T3 nor T4 touch feeling, yielding not even a clue of a clue of
how or why cognizers can feel.

JSTERN: “[T]here never
really was a “nothing but” in the symbol manipulation… thinking
involves various neurons and chemical reactions, or if… a computer,
some circuits and bags of electrons shifting around.”

Nope. I’m afraid physical
implementation ≠ grounding, whether the implementation is in wetware or
in hardware. Grounding is the sensorimotor capacity to recognize and
manipulate the external referents of internal symbols, whether the
implementation is in wetware or in hardware.

Ceterum Censeo: And
grounding still does not explain feeling (hence meaning).

JSTERN: “Whether any of
this bears on the problem of qualia is another matter.”

None of your comments
(which are only about the Chinese room and symbol grounding) bear on
the problem of feeling (“qualia”); but the target essay does.

JSTERN: “The moral of
the CR story… may indeed be [that] a lack of grounding, or embedding,
in the full causal matrix of the world, including the social aspects
thereof, is in some way crucial to telling a believable story about
cognition, much less in realizing it.”

T2 does not require
grounding, T3 does. The only moral is that if you want to tell “a
believable story about cognition,” reverse-engineer whatever it takes
to pass T3.

But the story the target
essay is telling is not just about Searle and CR; it is about Turing
and feeling.

“None of your comments
(which are only about the Chinese room and symbol grounding) bear on
the problem of feeling (“qualia”); but the target essay does.” Yes sir,
that is exactly right. But what this shows is that you have given up on
the symbol grounding problem, and are now completely engaged in a
qualia grounding problem. My point is that this is another matter. I
see no shortcoming to symbols that need grounding. I do not buy
Searle’s assertion that the “feel” of understanding Chinese is missing,
as illustrated by the CR. You apparently do buy his assertion, his
non-argument. I see no reason to ground symbols in qualia. In fact, I
argue – and you ignore – the idea that even if you could ground a
symbol in a feeling, the symbol manipulation would still be exactly
that which realizes cognition, it would simply do so with a better
explained foundation. Grounding a symbol is not eliminating the symbol,
and a grounded symbol is still nothing at all without a computational/
cognitive process. – Have I missed the detailed description of what
your T3 is supposed to be? I gather it is a fully functional robot, not
just a teletype T2 test, but I can’t quite see that in this published
text. The problem is, your putative T3 in no way answers the questions
you have about T2. Refer, if you will, to David Chalmers’s zombies – they
might speak Chinese all day long, have a recollection of history,
juggle bowling pins and complain about the heat, and still have no more
understanding than Eliza. And you can extend that to T4 or T-aleph2 and
still not touch the issue. If you don’t believe me, ask Donald Davidson
or Ruth Garrett Millikan about SwampMan. What that all misses is that
Turing was right to focus on what differentiates the entire set of
examples from non-cognitive systems, and that is whether an algorithmic
process can explain cognition, for it is absolutely certain that
nothing else can, save transcendental arguments. –

JSTERN: “[Y]ou have
given up on the symbol grounding problem, and are now completely
engaged in a qualia grounding problem.”

Actually, I have not given
up on the symbol grounding problem. It just wasn’t the primary focus of
my essay in this online symposium on consciousness.

And the essay was not about
qualia “grounding” (I don’t even know what that would mean!). It was
about the fact that neither the explanation of T3 nor the explanation
of T4 capacity explains the fact that we feel.

(And the only time I
mentioned the word “qualia” was in the video, as one of the countless
synonyms of consciousness that we should stop proliferating, and just
call a spade a spade: Feeling.)

JSTERN: “[E]ven if you
could ground a symbol in a feeling, the symbol manipulation would still
be exactly that which realizes cognition, it would simply do so with a
better explained foundation.”

If you grounded symbols in
sensorimotor capacity (T3) it is not at all clear that the resultant
hybrid system could still be described as computational at all.
(Symbols that are constrained by their dynamic links to their referents
are no longer the arbitrary squiggles and squoggles that formal
computation requires.)

And, to repeat, no one is
talking about “grounding” symbols in feeling: The problem is
explaining, causally, why and how some cognitive states (even hybrid
symbolic/dynamic ones) are *felt* states.

JSTERN: “Have I missed
the detailed description of what your T3 is supposed to be? I gather it
is a fully functional robot, not just a teletype T2 test, but I can’t
quite see that in this published text.”

As far as I can tell, you
haven’t missed anything:

A system that can pass T2
has the capacity to do anything we can do, verbally, and do it in a way
that is indistinguishable *from* the way any one of us does it, *to*
any one of us, for a lifetime.

A system that can pass T3
has the capacity to do anything we can do, both verbally and
robotically, and do it in a way that is indistinguishable *from* the
way any one of us does it, *to* any one of us, for a lifetime.

A system that can pass T4
has the capacity to do anything we can do, both verbally and
robotically, do it in a way that is indistinguishable *from* the way
any one of us does it, *to* any one of us, for a lifetime, both
behaviourally, and in all its measurable neurobiological function.

JSTERN: “David
Chalmers’s zombies… might speak Chinese all day long, have a
recollection of history, juggle bowling pins and complain about the
heat, and still have no more understanding than Eliza.”

Never met a zombie, so far
as I know. Nor do I know on what basis they are being supposed to
exist. They sound like T3 or T4 robots without feelings, and as such
they do not add or settle anything one way or the other.

The unsolvable (“hard”)
problem remains that of explaining how and why T3 or T4 robots feel, if
they do, and if they do not (i.e., if they are “zombies”), then
explaining how and why *we* are *not* zombies.

JSTERN: “And you can
extend that to T4 or T-aleph2 and still not touch the issue.”

You can say that again…

JSTERN: “Turing was
right to focus on what differentiates the entire set of examples from
non-cognitive systems, and that is whether an algorithmic process can
explain cognition, for it is absolutely certain that nothing else can,
save transcendental arguments.”

No, what distinguishes a
TT-passer from other systems is that it passes the TT totally
indistinguishably from any of us. The only TT that can be passed by
computation alone is T2, and that is not cognition because it is
ungrounded.

Only a dynamical system can
pass T3 or T4 (probably a hybrid dynamic/computational one). And a
purely computational system can *simulate* just about any dynamical
system, but it can’t *be* any dynamical system.

If T3 and T4 robots feel,
then they feel, and then what we are looking for is not a
“transcendental argument” but a down-to-earth explanation of how and
why they feel; but there does not seem to be the causal room for such
an explanation.

If T3 and T4 robots don’t
feel, then they don’t feel, and then what we are looking for is not a
“transcendental argument” but a down-to-earth explanation of how and why
they don’t, yet we do; but there does not seem to be the causal room for
such an explanation.

First, my apologies for
being misled by the “qualia” objection, in that I had failed to mention
it in my first reply. Only, I’m not sure why you raised it. Let me try
to amend my ﬁrst reply with this insertion, then reply to one further
point in this your second reply. Also, if my language here sounds curt
that is not my intention, I’m only after brevity in discussion, and no
doubt it suffers from my brevity in composition as well – or I would
have hoped not to have made the excursion on qualia! So, to repair,
please add this after where I _incorrectly_ said: “But what this shows
is that you have given up on the symbol grounding problem, and are now
completely engaged in a qualia grounding problem.” [Actually, your
essay does mention qualia, and then at the end, waves them away. Yet,
you accept Searle’s complaint that “something is missing” and at least
Searle asserts that it is a quale, if “feeling” is even speciﬁc enough
to be worth discussing as a quale (and of course the tradition is that
it is), and that "something is missing" is what your essay, and your
work on the grounding problem, attempts to solve. OK, ﬁrst, of course
there *is* a grounding problem for symbols, or else what is it that
makes the lexical string c-a-t refer to a real-world feline. But, this
is hardly unique to AI systems, the same question is asked why you
refer to a cat that way, or even why I do. And certainly the “answer”
is in the way that the agent, be it human or computer, relates to
actual cats. So then, what is it that Searle is going on about? Not
that at all. So, I take my lead from Searle’s confusions, rather than
the much more universal linguistic problem. And, I should mention that
Searle’s confusions are, at worst, a mereological problem. I’m sure
this has been raised regarding the CR, but – within Searle himself,
surely he does not expect his pineal gland, alone, to understand
Chinese, nor any one of his neurons. The classic “systems approach” is
that systems understand, not each component of the system; that false
assertion is an ancient philosophical error, a fallacy of composition.
So, I assert again, Searle’s description of the CR from the start is
absolutely correct, he as a component of the CR of course does not
understand Chinese, whether or not the proper understanding of Chinese
should include feelings, whether or not the grounded understanding of
Chinese requires physical actions. The actual direction your essay
takes is to demand further behaviorist/positivist conﬁrmation of
understanding. You take Turing’s original test and chase it down a
regress, then try to wave off Humean skepticism that any observational
system could ever bring certainty. I agree with that to a large extent,
but I’m afraid that in chasing the regress, you forget what it is we
were discussing, what is critical to the discussion, and that is the
symbol manipulation in the ﬁrst place. If you succeed in chasing the
regress and waving off the remnant (which I do not believe is quite the
proper structure, but let’s allow it), then you have only established
that the original system was right after all, you certainly have not
*reduced* symbol manipulation into swinging a baseball bat. And so, I
argue that starting the regress is an improper, unproductive move, and
offer further reasons why this regress, to qualia or embedding, is both
unnecessary and unpersuasive.] Finally, the additional point: “Only a
dynamical system can pass T3 or T4 (probably a hybrid
dynamic/computational one). And a purely computational system can
*simulate* just about any dynamical system, but it can’t *be* any
dynamical system.” But you have agreed that a computational system must
have a physical realization, and that physical realization is certainly
of the same dynamic systems quality as T3 or T4, so whatever difference
you are asserting in T3 or T4, it is not the property of being a
dynamic system. I’d suggest it is only in looking more humanoid, and I
think we all know that is not a sufﬁcient argument.

JSTERN: “Actually, your
essay does mention qualia, and then at the end, waves them away.”

I can’t grep a single token
of “qualia” in my essay, but if you mean my preferred synonym,
“feelings,” I would not say I wave them away! On the contrary, I say
they are real, relevant — and inexplicable (with reasons)…

JSTERN: “‘something is
missing’ is what your essay, and your work on the grounding problem,
attempts to solve”

Actually, the essay points
out that there are two things missing (grounding and feeling), and only
one of them can be provided. What is missing in the verbal Turing Test,
T2 (if passed via computation alone), is sensorimotor grounding. The
solution is to move up to T3 (the robotic TT).

What is missing in any
full, Turing-scale, reverse-engineering solution for generating and
explaining cognitive capacity — whether T3 or T4 — is an explanation of
why and how it feels (if it does); and the reason for this is that
there is no causal room for feeling as an independent causal component
in the explanation (except if we resort to psychokinesis, which is
contradicted by all evidence).

JSTERN: “systems
understand, not each component of the system… Searle… as a component of
the CR of course does not understand Chinese”

Searle’s Chinese Room
Argument has been generating fun as well as insights for over 30 years
now, but it is rather late in the day to resurrect the hoary old
“System Reply” without at least a new twist!

On the face of it, the
“System Reply” (which is that it is not Searle that would understand if
he executed the Chinese T2-passing computations, but “The System”) is
laid to rest by Searle’s own original reply to this, which ran
something like this (the words are not Searle’s but spicily improvised
here by me):

“Ok, if you really believe
that whereas I do not understand Chinese, the ‘system’ — consisting of
me plus the symbols, plus the algorithms I consult to manipulate them —
does understand Chinese, then please suppose that I memorize all the
algorithms and do all the symbol manipulations in my head. Then “le
système c’est moi”: There’s nothing and no one else to point to. So
unless you are prepared to believe (as I certainly am not!) that
memorizing and executing a bunch of symbol-manipulation rules is
sufﬁcient to generate multiple personality disorder — so that there are
now *two* of me in my brain, unaware of one another, one of whom
understands Chinese and the other does not — please do believe me that
I would not be understanding Chinese under those conditions either. And
there’s no one else there but me… Try the exercise out on something
simpler, such as training a 6-year-old innocent of math to factor
quadratic equations by a rote symbol-manipulation formula, in his head.
Given any quadratic equation as input, he can give the roots as output.
See if that generates an alter ego who understands what he’s doing…”
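
(For concreteness, here is a minimal Python sketch of the kind of rote
formula-following just described; the code and function name are a
hypothetical illustration, not anything from Searle or the essay. The rule
yields the correct roots whether or not anything anywhere understands what
a root, or an equation, is.)

    import math

    def roots_by_rote(a, b, c):
        """Apply the quadratic formula mechanically: x = (-b +/- sqrt(b^2 - 4ac)) / 2a."""
        disc = b * b - 4 * a * c             # assumes real roots, as in the drill
        return ((-b + math.sqrt(disc)) / (2 * a),
                (-b - math.sqrt(disc)) / (2 * a))

    print(roots_by_rote(1, -5, 6))           # (3.0, 2.0): the roots of x^2 - 5x + 6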

(But, can I please repeat
my hope that my little essay will not just become an occasion to
rehearse the arguments for and against Searle’s Chinese Room Argument?
With T3, we’ve already left the Chinese Room, and the problem at hand
is explaining how and why we feel, not how and why the Chinese Room
Argument is right or wrong…)

Reverse-engineering the
mechanism underlying a device’s performance capacities is hardly
behaviorism/positivism! Behaviorists did not provide internal causal
mechanisms at all, and positivists were concerned with basic science,
not engineering, whether forward or reverse.

JSTERN: “You take
Turing’s original test and chase it down a regress, then try to wave
off Humean skepticism that any observational system could ever bring
certainty.”

The regress is the one
involved in trying to chase down meaning by going from meaningless
squiggle to meaningless squoggle (as in looking up a deﬁnition in a
Chinese-Chinese dictionary when you don’t know a word of Chinese).

The Humean/Cartesian
uncertainty is the usual one, the one that attends any empirical
observation or generalization, but I do think the uncertainty is an
order of magnitude worse in the case of the uncertainty about other
(feeling) minds than it is about, say, the outside world, or physical
laws.

But, with Turing, I agree
that Turing-indistinguishability — which can be scaled all the way up
to empirical-indistinguishability — is the best one can hope for, so
we’re stuck with T3 or T4. The rest is the usual underdetermination of
theory by data (there may be more than one successful causal theory
that can explain all the data) plus the unique case of uncertainty
about whether the system that conforms to the theory really has a
(feeling) mind.

JSTERN: “in chasing the
regress, you… have only established that the original system was right
after all, you certainly have not *reduced* symbol manipulation into
swinging a baseball bat.”

Passing T3 is not reducing
symbol manipulation (computation) to bat swinging (dynamics). It is
augmenting the task of reverse-engineering, scaling it up to having to
explain not just verbal capacity, but the sensorimotor capacity in
which verbal capacity (and much else in human cognition) is grounded.
That means augmenting pure symbol manipulation to a hybrid
symbolic/dynamic system (yet to be designed!) that can successfully
pass T3.

But both T2 and pure symbol
manipulation are by now dead and buried, along with the “system reply.”

JSTERN: “this regress,
to qualia or embedding, is both unnecessary and unpersuasive.”

The T2 symbol-to-symbol
regress, which is grounded (halted) by dynamic T3 capacity, makes no
appeal to feeling (“qualia”). (The regression is just as present if you
simply point out that Searle could not point to a zebra even as he was
explaining what a Ban-Ma was, in Chinese.)

And the problem of
explaining feeling makes no particular use of the regress
(groundedness); it is assumed that that has already been taken care of
in T3 — but it doesn’t help!

JSTERN: “you have agreed
that a computational system must have a physical realization, and that
physical realization is certainly of the same dynamic systems quality
as T3 or T4, so whatever difference you are asserting in T3 or T4, it
is not the property of being a dynamic system. I’d suggest it is only
in looking more humanoid, and I think we all know that is not a
sufﬁcient argument.”

Hardly. What distinguishes
a T3 robot from a T2 computer is not a difference in what their
respective computer programs can do, but a difference in what their
dynamics can do. The ability to recognize a zebra is not just a
difference in appearance!

(Apologies in advance as
I’m not on my own computer and must necessarily type fast with less
time to double check what I’ve written. Hopefully I will have enough.)
Doesn’t the issue ﬁnally boil down to what it means to understand
something at all? As Professor Harnad notes, Searle seems to be looking
for something we can call the feel of understanding, what it feels like
to understand Chinese in his mythical room. The author raises two key
issues: First that understanding requires something more than the
isolationism provided by the basic Chinese Room scenario. He suggests
it requires a dynamical relation with the world (robotic capacity to
sense and operate in the world). What he apparently means is not that
the CR have the capacity to act on some things it understands but that
it has the ability to connect symbols to inputs in a causal chain.
Thus, we’re told that having meaning is to ground symbolic
representations in elements of the physical environment. Meaning is a
referring relation. Professor Harnad further differentiates between
meaning as connection between symbol and referent on the one hand, and
having feelings of getting it or understanding (in the sense already
described above) on the other. A CR equipped with the necessary
physical links might, he suggests, pass the requisite Turing test for
understanding but we would still be at a loss to ascertain if the
prescribed feeling is present or absent. A certain mystery is preserved
in this account. But perhaps the mystery could be dispelled if we go
back to the question of meaning as a matter of grounding. Perhaps
grounding is not an adequate account of meaning after all, even if
grounding plays a role in our mental lives (in which meaning is
realized). It seems undeniable that we are grounded in an important
sense and that we see meaning in things, at least to some degree,
through the referential relation between word (or symbol) and elements
in the world as is suggested. But does that imply that meaning is just
an expression of this kind of causal linkage? Is it enough to suppose
that without a causal chain to outside stimuli we are stuck with
circularity, i.e., one symbol ﬁnding its meaning in another equally
isolated symbol — and that this undermines any real possibility of
meaning? Would a CR that depended for all its inputs on what it is fed
about other symbols be unable to achieve understanding in any but an
imitative sense? Suppose we were totally isolated from our sensory
inputs about the world and had no inputs but a feed of digitized
signals which had no analogical relation to the underlying causal
phenomena which generated them because they were, say, generated in an
abstract way (Morse code perhaps)? Assuming that they were not merely
random but organized in patterns which were recognizable and so carried
some information through their repetitions, would we be denied any
possibility of achieving meaning at all? It would certainly be a much
more limited world than we are used to — and probably a very different
one, no matter how encyclopedic the information fed to us in symbols
would be. But would it follow that we would not be able to have any
understanding at all? (Didn’t Helen Keller learn about her world
somewhat like this?) After all, what are we but organic machines which
receive inputs about the world through physical systems? What we know
of the world begins with the signals our sensory equipment pick up and
pass up the line through our nervous to our neurological systems. What
happens to those signals involves various physical transformations
along the way, much as computers transform their signals into more
more complex arrays of information. But how is that signiﬁcantly
different (operationally rather than in terms of the physical platform)
from what computers do (though we are arguably much more complex)?
Perhaps a better account of semantics is to be found in a more complex
notion of the kind of system we are and which Searle’s CR manifestly
isn’t?

After all, the system he
describes is specked to do no more than rote responding, symbol
matching, though no one thinks that symbol matching is all we’re doing
when we read and match symbols through understanding their semantic
content. (Indeed, that’s the basic intuition that seems to make the CR
argument so compelling. If that’s all that a computer ﬁnally is, how
can it understand in the way we do?) Yet, the fact that computational
operations are largely mechanical in this way (and we need not deny it)
doesn’t imply anything about more complicated features that may arise
from increasingly complex operations formed by what are otherwise much
simpler operations. Why should a series of ordered electrical signals
yield a readable screen on my pc as I’m typing this? The letters I see
aren’t the series of signals that convert the lighted areas on my
screen into what are, for me, so many words. And yet they do it and
produce something in which I can ﬁnd meaning even if the digital
signals themselves carry no such meaning. Why then should we abandon
hope that feeling (qua awareness) could be causally explained? After
all, if the CR is too thinly specked to do anything but rote symbol
matching, it’s only natural to presume it’s too thinly specked to do
whatever it is that amounts to the feeling of knowing or understanding
something. If semantics involves the complexity of a system, why
wouldn’t that same complexity be enough to account for the sense of
understanding that accompanies instances of understanding in us in
cases like this?

SMIRSKY: “A CR equipped
with the necessary physical links might… pass the requisite Turing test
for understanding but we would still be at a loss to ascertain if the
prescribed feeling is present or absent”

Passing T3 is not “a CR
plus physical links” (if by “CR” you mean a computer). Passing T3
requires a robot — a dynamical system with the sensorimotor power to do
anything a person can do, verbally, and behaviorally, in the world.
That’s highly unlikely to be just a computer plus peripherals.

(The brain and body
certainly are not a computer plus peripherals; there’s not just
[implemented] computations going on in the brain, but a lot of analog
dynamics too.)

But even if it were possible
to pass T3 with just computation + peripherals, cognitive states would
not be just the computational states: They would have to include the
dynamical states that include the sensorimotor transduction plus the
computation. (Otherwise Searle could successfully re-run his CR
argument against the computational component alone.)

SMIRSKY: “Perhaps
grounding is not an adequate account of meaning after all, even if
grounding plays a role in our mental lives (in which meaning is
realized).”

Grounding is a necessary
but not a sufﬁcient condition for meaning.

SMIRSKY: “But does that
imply that meaning is just an expression of this kind of causal
linkage? Is it enough to suppose that without a causal chain to outside
stimuli we are stuck with circularity, i.e., one symbol ﬁnding its
meaning in another equally isolated symbol — and that this undermines
any real possibility of meaning?”

Please see the earlier
posting on “Cognition, Computation and Coma”: The T3 causal connection
to the world is needed to test whether the T3 robot really has T3-scale
capacity, indistinguishable from our own, for a lifetime. But apart
from the need to *test* it — in order to make sure that the robot has
full T3 power — all that’s needed in order to *have*
full T3 power is to have it. There’s no need for a prior real-time
causal history of sensorimotor contact with objects. Just as toasters
would be toasters, with all their causal powers, even if they had grown
on trees rather than being designed and built by engineers, so my
current T3 capacity would be what it is at this moment even if I had
dropped freshly off a tree, fully formed, 10 minutes ago, with no real
history (not even one of having actually written the target essay
several weeks ago, or of having seen and interacted with real objects
throughout a lifetime).

This is not to say that it
is likely to be possible to design a T3 robot with a viable yet
prefabricated “virtual” history, rather than a real history — any more
than it is likely that a T3 robot designed to pass the T3 for a
comatose person would be able to pass the T3 for a normal, awake,
ambulatory person.

But the most important
point is that it is a big (and circular) mistake to assume that what
goes on internally inside a T3 device (or inside any device, other than
a computer) consists only of computations, plus whatever hardware is
needed to implement them. Once we have left the domain of T2 — where it
could, in principle, be just symbols in, symbols out, and nothing but
symbol-manipulation (computation) in between — we have entered the
world of dynamics, and not just external or peripheral dynamics, but
also internal dynamics.

And that includes the
internal dynamics of blind, deaf or paralyzed people who, even if
they’ve lost the power to see, hear or move, retain some or all of the
internal dynamics of their sensorimotor systems — which are not, we
should keep reminding ourselves, simply the dynamics of implementing
computations.

SMIRSKY: “Would a CR
that depended for all its inputs on what it is fed about other symbols
be unable to achieve understanding in any but an imitative sense?”

If by “CR” you mean just a
computer, then indeed any computer is just as vulnerable to Searle’s
argument and to the symbol grounding problem, no matter what it is fed
by way of input.

If you mean a T3 robot that
grew on a tree and could pass T3 as soon as it fell to the ground, yes,
in principle it should be able to continue with just verbal (T2) input,
if it really has full T3 power. But nothing hangs on that, because it
is not just a computer, computing. It also has whatever internal
dynamic wherewithal is needed to pass T3.

SMIRSKY: “Suppose we
were totally isolated from our sensory inputs about the world and had
no inputs but a feed of digitized signals which had no analogical
relation to the underlying causal phenomena which generated them
because they were, say, generated in an abstract way (Morse code
perhaps)?”

Same answer as above!

SMIRSKY: “(Didn’t Helen
Keller learn about her world somewhat like this?)”

Helen Keller was not a
computer. She was a human with some sensory deﬁcits — but enough intact
sensorimotor capacity for normal human cognitive capacity.

SMIRSKY: “After all,
what are we but organic machines which receive inputs about the world
through physical systems? What we know of the world begins with the
signals our sensory equipment pick up and pass up the line through our
nervous to our neurological systems.”

All true. But whatever we
are, we are not just digital computers receiving and sending digital
I/O. We are dynamical systems — probably hybrid analog/computational —
with T3-scale capacity.

SMIRSKY: “how is that
signiﬁcantly different (operationally rather than in terms of the
physical platform) from what computers do”

We see, hear, touch and
manipulate the things in the world. Computers just manipulate symbols.
And tempting as it is to think of all of our “input” as being symbolic
input to a symbol-manipulating computer, it’s not. It’s sensorimotor
input to a hybrid analog/digital dynamical system; no one knows how
much of the structure and function of this T3 system is computational,
but we can be sure that it’s not all computational.

SMIRSKY: “the system
[Searle] describes is specked to do no more than rote responding,
symbol matching, though no one thinks that symbol matching is all we’re
doing when we read and match symbols through understanding their
semantic content”

T2 is “specked” to do
everything that any of us can do with words, in and out. Passing T2
with computation alone is specked to do whatever computation can do.
But for T3, which is necessarily hybrid (dynamic/computational), all
purely computational bets are off.
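
(A toy illustration, hypothetical and my own rather than anything from the
essay: the Eliza-style rule below is “computation alone” in exactly this
sense. However many such pattern-to-template rules are piled up, nothing in
the system connects the matched symbols to anything they are about, or
feels anything when it answers.)

    import re

    # Toy Eliza-style rules: purely formal pattern -> template substitution.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bI want (.+)", re.I), "What would getting {0} mean to you?"),
    ]

    def respond(utterance):
        """Return a canned reply by matching symbols, with no grounding at all."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1))
        return "Please go on."

    print(respond("I feel hungry"))          # "Why do you feel hungry?"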

SMIRSKY: “more
complicated features… may arise from increasingly complex operations
formed by what are otherwise much simpler operations.”

T2 can be made as “complex”
as verbal interaction can be made. So can the computations on the I/O.
But as long as they are just computations (symbol-manipulations), be they
ever so complex, the result is the same: Symbols alone are ungrounded.

And the transition from T2
to T3 is not a phase-transition in “degree of complexity.” It is the
transition from just implementation-independent computation (symbol
manipulation) to the dynamics of hybrid sensorimotor transduction (and
any other internal dynamics needed to pass T3).

SMIRSKY: “Why should a
series of ordered electrical signals yield a readable screen on my pc
as I’m typing this?”

A person is looking at the
shape on the screen. And the screen is not part of the computer (it is
a peripheral device). If you see a circle on the screen, that does not
mean the computer sees a circle. The objective of cognitive science is
to explain how you are able to detect, recognize, manipulate, name and
describe circles, not how a computer, properly wired to a peripheral
device, can generate something that looks like a circle to you.

SMIRSKY: “The letters I
see aren’t the series of signals that convert the lighted areas on my
screen into what are, for me, so many words. And yet they do it and
produce something in which I can ﬁnd meaning even if the digital
signals themselves carry no such meaning.”

The shape you see on a
screen is indeed generated by a computer. But neither what you can do
with that shape (detect, recognize, manipulate, name and describe it:
T3) — nor what it *feels like* to be able to see and do all that — are
being done by the computer. This is just as true for being able to read
and understand what words mean: T3 is miles apart from T2. And input to
a T3 robot is not input to a computer.

SMIRSKY: “Why then
should we abandon hope that feeling (qua awareness) could be causally
explained.”

None of what you have said
so far has had any bearing on whether (or how) feeling could be
causally explained. It has only been about whether all of our cognitive
know-how could be accomplished by computation alone. And the answer is:
No. At the very least, sensorimotor grounding is needed, and
sensorimotor transduction is not computation; and neither the input to
a sensory transducer nor the output of a motor effector is input and
output to or from a computer.

The problem of explaining
how and why whatever system does successfully pass T3 (or T4) feels is
a separate problem, and just as “hard” if the system is hybrid
dynamic/computational as if it had been just computational.

SMIRSKY: “If semantics
involves the complexity of a system, why wouldn’t that same complexity
be enough to account for the sense of understanding that accompanies
instances of understanding in us in cases like this?”

I came across this
conference by chance, and I’m highly appreciative of the willingness of
Professor Harnad and the other participants to make themselves
available to the public in this way. Unlike W. S. Robinson, I am
unsympathetic to Professor Harnad’s emphasis on robots, and I hope to
explain why. Professor Harnad doubts whether there is any way to give a
credible description of what anything feels like, without ever having
felt anything: and suggests that the way to overcome this is to augment
the computer with sensorimotor apparatus. The mistake here, I suggest,
is to imagine that a robot or a robot/computer combination would feel
anything more than the computer would on its own. Professor Harnad
suggests that our robots and computers are able to do a tiny fraction
of what we do. My submission is that they are not able to do even that
tiny fraction, that what they do do is not even related to what we can
do, or at least, not to the relevant part of what we do. The crucial
distinction between our capacities and those of computer/robots can be
brought into focus by considering the distinction made by Searle
between “observer-dependent” and “observer-independent” phenomena.
Examples of observer-dependent phenomena include money and computers.
Something is only money when identiﬁed as such by an observer.
Something is only a computer when identiﬁed as such by an observer.
Examples of observer-independent phenomena include the metals used to
make the coins we use as money, the metals and plastics etc we use to
make computers, the physical processes in our brains, and
consciousness. All these exist in nature independently of whether they
are identiﬁed as such by an observer or not. The robot’s feeling
something is observer-dependent. The robot’s existence as a discrete
entity is observer-dependent. The syntax in the associated computer is
observer-dependent, but also the semantics and even the computer’s very
status as a computer. This is not a matter of intuitions, it’s more in
the nature of a mathematical truth. The whole of what (we say) makes up
the computer/robot is available for our inspection, and we can see
that the “meaning” is something additional to that inventory. We know
where the “meaning” is, and we know it doesn’t go into the computer,
not in the necessary observer-independent way. And the same is true for
the robot, so that a robot/computer combination is no improvement on a
computer alone. The crucial, observer-independent features are still
missing. This is a conference about consciousness, but computers and
robots are products of consciousness, and not in any sense its source,
and as such they cannot be expected to tell us much about its source,
or its nature.

(Reply to Bernie Ranson —
with apologies for the delay: I missed your commentary!)

B.RANSON: “Professor
Harnad doubts whether there is any way to give a credible description
of what anything feels like, without ever having felt anything: and
suggests that the way to overcome this is to augment the computer with
sensorimotor apparatus.”

But note that that’s only
about T2, and hence it’s about verbal *descriptions* of what things feel
like. And the solution is T3, in which the robot can actually have
sensorimotor interactions with the things described. That grounds T2
descriptions of things, but it most deﬁnitely doesn’t explain how and
why the T3 robot feels anything at all (if it does): The robot’s
descriptions would be grounded by the data from its sensorimotor
interactions with things whether or not it really felt like something
to undergo those sensorimotor interactions; if the T3 robot did not
feel, then it would simply have the requisite sensorimotor means and
data to ground its verbal descriptions — means and data lacked by a T2
system, but possessed by a T3 robot.

B.RANSON: “The mistake
here, I suggest, is to imagine that a robot or a robot/computer
combination would feel anything more than the computer would on its
own.”

It might or might not be a
mistake to conclude that a T3 robot really feels. Turing’s point was
that if the robot’s performance was indistinguishable from that of a
real, feeling person, then it would be arbitrary to ask for more in the
case of the T3 robot. (We could ask for T4, even though we don’t ask
for it in real people when we’re doing everyday mind-reading, but even
T4 would not explain how and why the brain feels: it would only — like
T3 — explain how the brain can do what it can do.)

So whereas it may (or may
not) be true that a T3 robot would no more feel than a T2 computer
does, adding T4 would not explain how or why it feels either — and it
would not even improve on T3’s performance power.

B.RANSON: “Professor
Harnad suggests that our robots and computers are able to do a tiny
fraction of what we do. My submission is that they are not able to do
even that tiny fraction, that what they do do is not even related to
what we can do, or at least, not to the relevant part of what we do.”

Let’s not quibble over how
tiny is tiny. But I agree that generating fragments of our total
performance capacity has far more degrees of freedom than generating
all of our performance capacity. Toy fragments are more
“underdetermined,” and hence less likely to be generating performance
in the right way. (That was part of Turing’s reason for insisting on
*total* indistinguishability.) So today’s robots are just arbitrary
toys; but that does not mean that tomorrow’s T3 robots will be. (And
let’s not forget that we, too, are T4 robots, and feeling ones, but
engineered long ago by the “Blind Watchmaker” [i.e., evolution] rather
than reverse-engineered by future cognitive scientists.)

B.RANSON: “The crucial
distinction between our capacities and those of computer/robots can be
brought into focus by considering the distinction made by Searle
between “observer-dependent” and “observer-independent” phenomena…
[observer-dependent:… money and computers… observer-independent:…
metals and plastics… physical processes in our brains, and
consciousness]”

Interesting way to put the
artiﬁcial/natural kinds distinction (except for consciousness, i.e.,
feeling, which does not ﬁt, and is the bone of contention here); but
how does this help explain how to pass the Turing Test, let alone how
and why T3 or T4 feels?

B.RANSON: “The robot’s
feeling something is observer-dependent. The robot’s existence as a
discrete entity is observer-dependent.”

If the robot feels, that’s
no more “observer-dependent” than that I feel. And the robot is an
artifact, but if it had grown, identically, from a tree, it would not
be an artifact. Either way, it can do what it can do; and if it feels,
we have no idea how or why (just as we have no idea how or why we feel).

B.RANSON: “The syntax in
the associated computer is observer-dependent, but also the semantics
and even the computer’s very status as a computer.”

Syntax is syntax. The fact
that it was designed by a person is not particularly relevant. What is
deﬁnitely observer-dependent is the syntax of an ungrounded computer
program. To free it from dependence on an external observer, it has to
be grounded in the capacity for T3 sensorimotor robotic interactions
with the referents of its internal symbols. Then the connection between
its internal symbols and their external referents is no longer
observer-dependent.

But that still leaves
untouched the question of whether or not it feels; and it gives no hint
of how and why it feels, if it does. Whether or not something or
someone feels is certainly not (external) observer-dependent…

B.RANSON: “This is not a
matter of intuitions, it’s more in the nature of a mathematical truth.
The whole of what (we say) makes up the computer/robot is available for
our inspection, and we can see that the “meaning” is something
additional to that inventory. We know where the “meaning” is, and we
know it doesn’t go into the computer, not in the necessary
observer-independent way.”

You seem to think that if
we build it, and know how it works, that somehow rules out the
possibility that it really feels. There’s no earthly reason that needs to be true.

Yes, the meaning of
ungrounded symbols is in the head of an external observer/interpreter.
But cognitive science is about trying to reverse-engineer what is going
on in the heads of observers!

B.RANSON: “a
robot/computer combination is no improvement on a computer alone. The
crucial, observer-independent features are still missing.”

It’s certainly an
improvement in terms of what it can do (e.g., T2 vs. T3). And since,
because of the other-minds problem, whether it feels is not observable,
it’s certainly not observer-dependent.

B.RANSON: “This is a
conference about consciousness, but computers and robots are products
of consciousness, and not in any sense its source, and as such they
cannot be expected to tell us much about its source, or its nature.”

And if this were a
conference about life, rather than consciousness, it could well be
talking about how one synthesizes and explains life. There’s no earthly
reason the same should not be true about cognition — except that
feeling (which is surely as real and “observer-independent” a property as
living is) cannot be causally explained in the same way that everything
else — living and nonliving, artiﬁcial and natural,
observer-independent and observer-dependent — can be.

Professor Harnad writes
in his response to JStern: “Searle’s Chinese Room Argument has been
generating fun as well as insights for over 30 years now, but it is
rather late in the day to resurrect the hoary old ‘System Reply’
without at least a new twist!” I want to be cognizant of this concern
and keep the discussion focused on Professor Harnad’s paper in
accordance with his understandable desire for that. But I also think it
would be a mistake to let the above pass uncommented on. The so-called
System Reply fails if one takes the Chinese Room scenario to represent
ANY possible system consisting of the same constituents (computational
processes running on computers) as those that make up the CR. But there
is no reason to think that this failure can be generalized to all
possible systems of this type. Just as certain biological systems have
capacities which others lack (humans have understanding but it’s not
clear that any and all living organisms do, though there are certainly
debates about that), so the fact that the CR fails says nothing about
other qualitatively equivalent systems. The CR may be seen to fail
because it is inadequately specced, just as a microbe or a jellyfish or
an arthropod or a mouse or a horse lack the capacities to understand
because they lack the specs we humans have. Just because the rote
processes which the CR is capable of performing are 1) merely
mechanical operations (not conscious, and thus not capable of
understanding, in and of themselves) and 2) provide nothing like what
we would expect of a genuine understanding entity in the combination
that is the CR, it doesn’t follow that this is generalizable to all
systems of the same qualitative (computational) type. The issue, I
think, lies in what we think understanding is. Searle’s argument treats
it as a bottom line feature of minds, assuming that, if it is present
or even potentially present, it should be found there as a feature of
one or more of the constituent elements of the CR (e.g., that
understanding is a process level feature). But, in fact, there is no
reason to expect to ﬁnd it at THAT level when you think about it. If
understanding is a function of process complexity, as it well may be,
then it would not be surprising that an underspecced system (lacking
the scope and complexity needed) would lack it — or that a more fully
specced system would not. The real point of the System Reply is not
that the CR understands even if Searle, as a system component, doesn’t
(because Searle’s CR doesn’t understand any more than Searle, the
component, does). It’s that understanding may well be best understood
as a system level feature (the result of the interplay of many
processes, none of which are, themselves, instances of understanding)
rather than as a feature of any of the particular processes that make
up the system itself. This is why I have suggested that we need to look
more closely at the matter of meaning (understanding the semantics of
the symbols). And here we come back to the paper under discussion.
Professor Harnad strikes me as correct when he focuses on the issue of
feeling (meant here as a term for the kind of awareness we have when we
know or understand anything). But it seems wrong to me to think that
the other aspect of this, meaning, is a matter of a referential
relation between symbol and object in the world (ﬁnally achieved
through the addition of a dynamical physical relation between the CR
and the environment external to it). There is no reason to think that
all meaning consists of this relation, even if some meanings (in some
sense of the word “meaning”) happen to. Moreover, as we are always
imprisoned in our own organically based mechanical contraptions, there
can ﬁnally be no difference between the robot CR’s connections between
symbol and object and that of the non-robot CR. In both cases, the
referencing that’s going on involves representation to representation.
That the representations may in whole or part trace their genesis back
to external stimuli doesn’t alter the fact that the external stimuli we
can talk (and think) about are whatever it is we hold in our heads
at any given moment. Even the external world, with all its complexity
and detail, when understood as the knowledge we have of it, consists of
the layered mapping we develop over the course of the natural history
of our individual lives. Here is an example I have often used to show
this: Driving up the east coast on a road trip with my wife some time
back I saw a road sign in South Carolina (I believe that was the state)
which read “Burn lights with wipers”. I did a double take and was
momentarily confused. I had images in my head of a big bonﬁre with
people tossing old wiper blades and light bulbs onto it. Then, in an
instant, everything shifted and I got a different picture. I realized
that what the sign meant was that motorists should turn on their
headlights when running their windshield wipers in inclement weather,
that the imperative to “burn” meant turn on, not set on ﬁre. This
recognition was accompanied by a different set of mental images where I
saw myself leaning over to turn on the headlights on the dashboard as
the wipers rhythmically stroked the windshield in a heavy downpour. I
even had a brief ﬂash of myself failing to do that and suddenly
crashing into another vehicle for lack of visibility. Lots of images
passed through my head, replacing the one of the bonﬁre of wiper blades
and light bulbs. But my actions never changed. It wasn’t raining after
all and I had no need to act on the sign’s instruction. It’s just that
I suddenly had images that made sense of the words. No overt behavior
on my part followed the change in my mental imagery. I want to suggest
that the meaning I found in the words lay in the complex of images
which, I expect, are not shared in precise detail by any two people on
earth. But the fact that MY images (the dashboard or the interior of
the vehicle I visualized, or the exterior environment I “saw” or the
way in which I visualized myself leaning toward the dashboard, etc.,)
were unique to me didn’t prevent me from getting the sign’s meaning as
it was intended by those who had posted the sign. It seemed to me at
that moment that what constituted the meaning was not the references in
any particular, nor was it my behavior (as there was no change in mine)
nor was it something found in the sign outside my car with which I
abruptly connected. Rather it was a certain critical mass of imagery
which, however different from person to person, had enough in common
with the signmakers, because of shared associations, etc., to provide
enough common touchstones for an exchange of information on a
symbol/reference level.

The meaning of the words
did not lie in some physical grounding but in the web of mental images
I had accumulated in my head over a lifetime up to that point. Now it’s
certainly true that this accumulation was driven in large part by the
kind of referential relations implied by the grounding thesis offered
by Professor Harnad. But it was the images and their relations to one
another that constituted my understanding, not the grounding of the
words to anything in particular in the world at that instance of
observation. There seems to be no reason, in principle, that the images
could not have found their way into a system like me in some other way,
i.e., by being put there as part of a data dump. This may not be how we
get our mental images, of course, as there is every reason to believe
we build up our stored information piecemeal and cumulatively over the
course of our lives. But the way WE get them seems to be less important
here than their role in the occurrence of understanding in the system.
As to the matter of feeling, I want to suggest that insofar as meaning
may be a function of complex associative operations within a given
system utilizing retained and new inputs, why should we think that
being aware of what one is doing on a mental level (such
as my introspective observation of a moment’s confusion on the road
through South Carolina) is any less likely to be a function of a
sufficiently complex system that can and does collect, retain and
connect multiple representations on many levels over a lifetime? If so,
there is no real explanatory gap to worry about once we recognize that
the only point of explanation in this case is to explain what causes
what. Why should an explanation that describes semantics as a function
of a certain level of computational complexity not also sufﬁce to
account for the occurrences we think of as being aware of what is going
on, of feeling what’s happening (in Antonio Damasio’s interesting way
of putting this)?

SMIRSKY: “[From the fact
that passing T2 through computation alone] fails [to generate
understanding] it doesn’t follow that this is generalizable to all
systems of the same qualitative (computational) type.”

It generalizes to all
computational systems.

(I don’t know what you mean
by computational systems of “qualitatively different type.” I only know
one type of computation — the kind described by Turing, Church, Goedel
and Kleene, all equivalently and equipotently:
implementation-independent, rule-based symbol manipulation; the symbols
are systematically interpretable as meaningful, but their meanings are
not intrinsic to the system. The symbol-manipulation rules (algorithms)
are based on the symbols’ shapes (which are arbitrary), not their
meaning. Syntax, not semantics. See, for example, the Turing Machine.)

What Searle shows is that
cognition cannot be just computation. And that means any computation.

SMIRSKY: “If
understanding is a function of process complexity, as it well may be,
then it would not be surprising that an underspecced system (lacking
the scope and complexity needed) would lack it — or that a more fully
specced system would not.”

The reference to complexity
is exceedingly vague. I know of no qualitative difference between
“types” of computation based on the complexity of the computation.
Computation is computation. And cognition is not just computation, no
matter how complex the computation.

SMIRSKY: “understanding
may well be best understood as a system level feature (the result of
the interplay of many processes, none of which are, themselves,
instances of understanding)”

Perhaps, but this too is
exceedingly vague. I can construe it as being somehow true of whatever
it will prove to take in order to pass T3. But it certainly does not
rescue T2, nor the thesis that cognition is just computation.

SMIRSKY: “it seems
wrong… that… meaning, is a matter of a referential relation between
symbol and object in the world (ﬁnally achieved through the addition of
a dynamical physical relation between the CR and the environment
external to it).”

It seems wrong because it
*is* wrong! Connecting the internal symbols of a T3 system to their
external referents gives the symbols grounding, not meaning.

Meaning is T3 grounding + what-it-feels-like-to-mean.

SMIRSKY: “there can
ﬁnally be no difference between the [T3] robot’s… connections between
symbol and object and that of the [T2 system]. In both cases, the
referencing… involves representation to representation.”

“Representation” covers a
multitude of sins! If you mean computation, see above. If you mean
non-computational processes, all bets are off. Now the trick is to ﬁnd
out what those “representations” turn out to be, by reverse-engineering
them, T3-scale…

SMIRSKY: “the external
stimuli we can talk (and think) about are whatever it is we hold in
our heads at any given moment.”

Yes, and then the question
is: what has to be going on in our heads to give us the capacity to do
that? The way to ﬁnd out is to design a system that can pass T3. (We
already know it can’t be done by computation alone.)

SMIRSKY: “I… had images
that made sense of the words… the meaning was not the referen[ts]… nor…
my behavior…it was a certain critical mass of imagery … not… some
physical grounding but… the web of mental images… and their relations
to one another that constituted my understanding, not the grounding of
the words.”

No doubt, but the question
remains: what has to be going on in our heads to give us the capacity
to do that? The way to ﬁnd out is to design a system that can pass T3.
(We already know it can’t be done by computation alone.)

SMIRSKY: “There seems to
be no reason… the images could not have found their way into a system
like me in some other way… [e.g.] put there as part of a data dump.”

I agree that a real-time
prior history — although it is probably necessary in practice — is not
necessary in principle in order to have full T3 power. (E.g., to
recognize, manipulate, name, describe, think about and imagine apples,
you need not have had a real-time history of causal contact with apples
in order to have developed the requisite apple-detectors and
apple-know-how. They could have been built in by the engineer that
built you — even, per impossibile, the Blind Watchmaker. But it
is not very likely. And, either way, whether inborn or learned, that T3
power will not be just computational. Hence building it in in advance
would entail more than a data-dump.)

SMIRSKY: “[just as]
meaning may be a function of complex associative operations within a
given system utilizing retained and new inputs… [so] being aware of
what one is doing on a mental level… is… a function of a sufﬁciently
complex system that can and does collect, retain and connect multiple
representations on many levels over a lifetime?”

“Computational complexity”
is already insufﬁcient to constitute sensorimotor grounding. It is a
fortiori insufﬁcient to explain feeling (hence meaning).

But let me clarify one
thing: when I substitute “feeling” for all those other weasel words for
“conscious” and “consciousness” — “intentionality,” “subjectivity,”
“mental,” etc. — I am not talking particularly about the quality of the
feeling (what it feels like), just the fact that it is felt (i.e., the
fact that it feels like something).

As I think I noted in the
target essay, I may feel a toothache even though I don’t have a tooth,
and even though it is in reality referred pain from an eye infection.
So it is not my awareness that my tooth is injured that is at issue (it
may or may not be injured, and I may or may not have a tooth, or even a
body!). What is at issue is that there is something it feels like to
have a toothache. And I’m feeling something like that, whether or not
I’m right about my having an injured tooth, or a tooth at all.

By exactly the same token,
when I suggest that meaning = T3 capacity + what-it-feels-like to mean,
what I mean is that I may be able to use a word in a way that is T2-
and T3-indistinguishable from the way anyone else uses it, but, in
addition, I (and presumably everyone else who knows the meaning of the
word) also know what that word means, and that means we all have the
feeling that we know what it means. I may be wrong (just as I was about
my tooth); I (or maybe everyone) may be misusing the word, or may have
wrong beliefs about its referent (as conﬁrmed by T3). But if there is
nothing it feels like to be the T3 system using that word, then the
word merely has grounding, not meaning.

SMIRSKY: “there is no
real explanatory gap to worry about once we recognize that the only
point of explanation in this case is to explain what causes what. Why
should an explanation that describes semantics as a function of a
certain level of computational complexity not also sufﬁce to account
for the occurrences we think of as being aware of what is going on, of
feeling what’s happening”

On the Systems Reply
(response to Stevan Harnad): There seems to be a signiﬁcant
miscommunication here. In the classic Systems Reply, I have always
taken “the system” to include all of the sensorimotor components that
are needed for the system to work in the real world. However, what you
express as a paraphrase of Searle seems to quite deliberately exclude
that part of the system.

RICKERT: “In the classic
Systems Reply, I have always taken “the system” to include all of the
sensorimotor components that are needed for the system to work in the
real world. However, what you express as a paraphrase of Searle seems
to quite deliberately exclude that part of the system.”

Quite deliberately.

In T2, the peripherals by
which the Chinese symbol input is received from the pen-pal are
trivial. So are the peripherals by which Searle’s Chinese symbol output
is transmitted to the pen-pal. Symbols in, symbols out, and nothing
but symbol-manipulation in between. I think it would stretch credulity
to the point of absurdity to say that even if Searle memorized all the
algorithms and executed them in his head, he still wouldn’t be the
whole system, because the whole system consists of that plus the input
and output itself.

That’s like saying I don’t
understand English. It’s the system consisting of me and the words that
understands.

No, the only point at which
the peripherals become important is in the “Robot Reply,” which (in the
original Searle paper and accompanying commentaries in BBS, which I
umpired!) was a defective objection. The original robot reply ran along
the lines of saying that T2 needed peripherals in order to connect with
the world robotically. Searle responded, ﬁne, I’ll still do the
computational part, and I still won’t understand.

The right Robot Reply would
have been to demand not just T2 power plus robotic peripherals, but T3
power, because in that case Searle really would not be, and could not
be, the entire System, just its (implementation-independent)
computational component.

That would be a valid
“System Reply,” but it would be a reply about T3 power and not just T2
power, and it would purchase its immunity to Searle’s Chinese Room
Argument (that cognition is not just computation) at the price of
conceding that cognition is indeed not just computation! The “System”
that understands has to include the sensorimotor dynamics (at least).

Harnad: "In T2, the
peripherals by which the Chinese symbol input is received from the
pen-pal are trivial."

Agreed. However, I don’t
see that as contradicting the “systems reply”.

According to the
“systems reply”, if AI is successful, then the intentionality will be
in the system. As I see it, if the peripheral system is trivial, then
AI will not be successful. And that is consistent with the “systems
reply.” I expect that the kind of system that Searle was envisaging
would fail a rigorous Turing test. We wouldn’t need to go to a TTT to
see that.

When asked about the
source of intentionality, Searle says it is due to the causal powers of
the brain. That’s pretty much a restatement of what I take the “systems
reply” to say.

Harnad: "That’s like saying
I don’t understand English. It’s the system consisting of me and the
words that understands."

I am left wondering what
you mean by that “I”. Surely, it includes the system.

NRICKERT: “According to
the ‘systems reply’, if AI is successful, then the intentionality will
be in the system. As I see it, if the peripheral system is trivial,
then AI will not be successful. And that is consistent with the
‘systems reply.’”

“AI” is a loosely deﬁned
field. Who knows what is true or untrue according to a field?

In contrast,
“computationalism” is a thesis (somewhat close to what Searle called
“Strong AI”).

Response to Stuart W.
Mirsky on 2/18 at 16:36 Mirsky: “The use [of ‘feeling’] you allude to
is the emotional aspect of the word ‘feeling’ . . .” Robinson: No, I
was thinking of an alleged *cognitive* feeling – the alleged feeling of
understanding. (Besides what Prof. Harnad says, you can ﬁnd claims to
such feelings in, e.g., D. Pitt, G. Strawson, C. Siewert, etc.) But I
think we can cut through some of this by observing that *feeling* in
the target paper, in the other authors I just mentioned, in my own
work, and in commonsense, requires consciousness. Robots (I stipulate,
but I believe agreeably to others in this discussion) have no
consciousness. Therefore, they have no feelings. Mirsky: “. . .
intelligent in the way we use the term for ourselves much of the time
(i.e., conscious), . . . .” Robinson: I don’t think we use
“intelligent” to include consciousness by analytic implication. For, I
think most readers of this discussion think it’s a non-trivial question
whether a computer, or a robot, both of which they take to lack
consciousness, could be intelligent – which they couldn’t do if they
were including consciousness as partially constitutive of intelligence.
(It’s arguable, I realize, but I think Turing agreed; i.e., he did not
think that a machine’s doing well on the imitation game would show that
it was conscious.) At any rate, I am holding out for a tripartite
distinction: Intelligence (showable by success on T2), understanding
(showable by success on T3) and consciousness which requires more than
success on T3 (but it’s in dispute exactly what *would* show it). I’ve
no wish to dispute about words, and I realize that all of the terms in
this discussion are sometimes used in ways other than I’ve just
described. But we need *some* words to mark important distinctions.
I’ve used “intelligence” because that was Turing’s term, and
“understanding” because that was Searle’s. The reason that’s implicit
in Searle’s paper (though, as I said, he mixes it in with other points)
for why he doesn’t understand is that he can’t do anything non-verbal
with Chinese symbols. His program enables him to connect words with
words, but there’s nothing in the situation that lets him connect words
to non-words, e.g., to his hunger or to hamburgers. In Harnad’s terms,
T3 is not satisﬁed. So Searle in the CR doesn’t have what he called
“understanding”. One of my leading points was that failure on T3 is
sufﬁcient for lack of understanding. Absence of consciousness just
doesn’t come into Searle’s reasoning. Neither does lack of a cognitive
feeling (which he could have with or without actual understanding). And
since feeling doesn’t come into the matter, absence of feeling does not
show that a robot – a device that *does* make word-to-things connections
– does not have understanding (again, taken to be the property that
Searle is arguing about in “Minds, Brains and Programs”). Paraplegics
can’t do much, but they can issue requests that get others to act on
their behalf. Searle in the CR can’t do that. Even though he knows the
people outside can read Chinese, he has no reason to write any
particular collection of symbols in order to get himself a hamburger;
he can’t connect his hunger to the symbols. It’s all just words to
words, never words to things or things to words. Mirsky: “[We need an
account that] adequately captures what happens in us when we have
understanding.” Robinson: That’s too broad. Lots of things happen in us
when we have understanding, but that doesn’t show that all of them are
constitutive of what understanding is; it doesn’t show that they are
necessary to understanding rather than its normal accompaniments in us.
Searle’s “understanding” was denied to the man in the CR (and even
after the internalizing of the scripts and program in memory) because
of lack of word-world connectability – in Harnad’s terms, lack of
success at T3. Consciousness doesn’t come into it. Searle was *not*
arguing that formal symbol manipulation as such could not yield
understanding on the ground that it could not yield consciousness. It
was a bad argument, but it wasn’t *that* bad.

WROBINSON: “Robots (I
stipulate, but I believe agreeably to others in this discussion) have
no…feelings”

Not agreeable to me! For
me, robots are simply causal systems. *We* are feeling robots; and the
task of cognitive science is to reverse-engineer our cognitive
capacity, i.e., ﬁnd out what kind of robots we are, and how we work (by
designing a robot that can pass T3).

Trouble is that this will
explain our cognitive capacities, but it will not explain how and why
we feel.

Moreover, I think it’s
claiming far too much to say that robots are, by stipulation, or by
deﬁnition, or of necessity, unfeeling. That would rather prejudge what
is surely a factual question. (I would have said an empirical question,
but because of the other-minds problem, it is in fact an undecidable
empirical question.)

WROBINSON: “it’s a
non-trivial question whether a computer, or a robot, both of which [we]
take to lack [feelings], could be intelligent”

It’s trivial if we’re just
legislating what we choose to call “intelligent” or “understanding” or
whatever. It is nontrivial if we are concerned about whether and why
our reverse-engineering of cognitive capacity fails to explain the fact
that it feels like something to have and use our T3 capacity.

WROBINSON: “failure on
T3 is sufﬁcient for lack of understanding. Absence of consciousness
just doesn’t come into Searle’s reasoning.”

Searle only appealed to the
unexamined, a-theoretical notion of “understanding Chinese” in order to
conclude (correctly) that if he executed the T2-passing computations,
he would not be understanding Chinese. That’s all.

But although it might be
worthwhile querying Searle about this, I don’t think that the basis on
which he was concluding that he would not be understanding in the
Chinese room was that he would fail T3 in Chinese! (That’s the symbol
grounding problem, not the Chinese Room argument: let’s not mix them
up!)

Searle’s basis for
concluding that he didn’t understand Chinese would be the same as mine:
When I hear Chinese (or see it written) I have no idea what it means.
It feels like I’m listening to meaningless vocalizations, or looking at
meaningless squiggles and squoggles. That feels very different from
what it feels like to hear, speak or write English, which I do
understand.

And that would be true even
if a Chinese pen-pal told Searle, truly: no, you’ve been communicating
coherently with me in Chinese for 40 years. Searle would still reply
(truly, and simply by consulting what it feels like to understand and
not-understand) that, no, he had simply been doing meaningless symbol
manipulations according to memorized rules for 40 years, and, no, he
could not understand a word of Chinese.

Of course Searle knows that
not understanding Chinese means, among other things, having no idea
what Chinese symbols refer to in the world. But he doesn’t have to go
that deep to say he doesn’t understand Chinese. Besides, it wasn’t
robotic capacity in the world (T3) that was on trial in the Chinese
room, it was T2.

WROBINSON: “Searle was
*not* arguing that formal symbol manipulation as such could not yield
understanding on the ground that it could not yield consciousness.”

Nor am I. But he was
arguing that under such conditions he would not be understanding
Chinese, and that on that topic he was the sole authority, not the
Chinese pen-pals who insisted he did understand Chinese (because they
could not distinguish his letters from those of a real pen-pal), nor
the dedicated computationalists, who likewise insisted that he (or “The
System”) did understand Chinese, because he had passed T2. Searle, the
sole authority, made the privileged 1st-person judgment that only he
was in a position to make: “I know what it feels like to understand
Chinese, and I don’t understand Chinese.”

He could say that with as
much Cartesian authority and certainty as he could say that he had a
toothache (though not that he had a tooth injury, or even a tooth).

T3 incapacity had nothing
to do with it. (And that was my point!)

WROBINSON: “what about
visual object recognition? If that’s included, it doesn’t seem to be
what Turing had in mind”

This is again about T3 vs.
T2. And, yes, the punchline is that if Turing did just mean T2, and
passing via computation alone, then Turing was mistaken; he ought to
have included T3, and dynamical processes, if need be. (And I rather
think he did mean to include the latter.)

WROBINSON: “Here’s a
candidate for a deﬁnition of intelligence that seems to ﬁt with
Turing’s paper: Ability to respond appropriately to a wide range of
novel circumstances.”

That would be just about as
unrigorous and unacceptable as Turing’s own (inexplicably loose)
suggestion that “in about ﬁfty years’ time it will be possible, to
programme computers… [to] play the imitation game so well that an
average interrogator will not have more than 70 per cent chance of
making the right identiﬁcation after ﬁve minutes of questioning” — in
stark contrast with his earlier suggestion in the same paper that “a statistical survey such as a Gallup poll [would be] absurd…”

I prefer to attribute this
to the kind of loose speaking that great mathematicians often engage
in, where they leave out the details that are intuitively obvious to
them, but need to be rigorously proved by others in order to be
understood and believed.

Turing was spot-on with his
criterion of total indistinguishability in performance capacity (what I
dubbed “cognition is as cognition does”), but this does not entail a
commitment to verbal performance capacity alone, nor to computation
alone. On the contrary, the notion of Turing-indistinguishability
immediately generates the Turing hierarchy, from arbitrary toy
fragments of performance, to T2, T3 and T4, showing that the Turing
performance hierarchy is actually an empirical observability hierarchy.

WROBINSON: “What about
my shopping assistant robot, that gets around town under many variable
and unforeseen circumstances”

The shopping assistant
robot is a toy; it already fails T2 and T3, so you need inquire no
further about whether or not it understands… (And grounding, too, is a
T3-scale property, not a property of arbitrary fragments of robotic
capacity.)

Yes of course. If not, then
T3 (or T4) would be face-valid, and would constitute cognition by
definition. There is (and always was) more at stake in this,
no matter how coy or dismissive Turing affected to be on the question
of the mind!

Don’t get too hung up on
Searle’s use of “understanding” in his argument. He really just meant
the ordinary, everyday notion of understanding or not understanding a
language. And he doesn’t bring up the fact that he is performing a
simple introspection when he judges that he doesn’t understand Chinese
because it’s just too obvious.

WROBINSON: “I hold that
passing T3 could happen with cognitive capacity, period. You hold that
it could be passed only with cognitive capacity + feeling.”

Of course passing T3 could
happen with cognitive capacity alone. That’s true by deﬁnition, because
T3 is supposed to be the generation of our cognitive capacity. The
question is whether it would generate understanding (or seeing, or
hearing, or meaning) or any other of the felt states that normally
accompany the possession and exercise of our cognitive capacity.

And — if I could only wean
this discussion away from the oh-so-seductive allures of the Chinese
room — there’s still the problem of how and why cognitive states are
felt that needs to be addressed…

The pre-posting
review process in this discussion seems to have become backed up. As I
am going off-line and may not return for awhile, I thought I would take
the opportunity to offer some quick replies now to Professor Harnad’s
latest response to some of my comments (received initially via e-mail
but not yet appearing here). I fear this is a little unfair, as I
cannot repeat his full text here and must conﬁne myself to responding
selectively which runs the risk of losing context or the full thrust of
his remarks. I therefore hope this will not be taken the wrong way (and
that the moderator(s) here will adjust the queue of responses so they
follow the line of discussion if that’s needed). On the other hand,
maybe this will also just go into the queue and come out in a more
appropriate spot in the discussion. Professor Harnad writes: “. . .
the most important point is that it is a big (and circular) mistake to
assume that what goes on internally inside a T3 device (or inside any
device, other than a computer) consists only of computations, plus
whatever hardware is needed to implement them. Once we have left the
domain of T2 — where it could, in principle, be just symbols in,
symbols out, and nothing but symbol-manipulation (computation) in
between — we have entered the world of dynamics, and not just external
or peripheral dynamics, but also internal dynamics.” Responding: If the
issue is what does a brain do to produce understanding (and all the
other features we lump together under the term “consciousness” as
applied to ourselves), then the idea that hardware or peripherals adds
something which moves us out of the computationalist ballpark is
misleading. No actual computer is ever purely theoretical (an abstract,
uninstantiated model) and no AI researcher I know of thinks an AI
application can be implemented without hardware. In and of itself, the
hardware doesn’t matter, even if we always need whatever hardware is
sufﬁcient to run whatever programs are at issue. If that includes
devices to provide inputs, well and good. But there’s no reason to
think that inputs must enter the system in some particular way. As to
what brains actually do, it’s at least possible that they don’t only
compute (in whatever way brains might be said to do that) but what is
at issue here is whether the computing part of what they do is the
important part in producing understanding and other features of
consciousness. Professor Harnad continues: “If by ‘CR’ you mean just a
computer, then indeed any computer is just as vulnerable to Searle’s
argument and to the symbol grounding problem, no matter what it is fed
by way of input.” Responding: It isn’t at all clear to me that Searle’s
argument succeeds, though perhaps some still think it does, in which
case what vulnerability? (As I’ve written earlier, I think the Chinese
Room argument fails on a number of counts which we have chosen not to
discuss here because of the risk of distraction. But choosing not to
discuss it is not to agree that it’s unchallengeable or that it’s even
on the right track.) Professor Harnad: “Helen Keller was not a
computer. She was a human with some sensory deﬁcits — but enough intact
sensorimotor capacity for normal human cognitive capacity.” Responding:
Thus she had a source of inputs, albeit more limited than ours. But why
should we think that how those inputs were delivered (so long as they
were sufﬁcient to convey information to her) mattered beyond the bare
minimum of conveying information sufﬁcient for her brain to do its
cognitive work? If the mode of delivery doesn’t matter, of course, then
we are back to the same question of whether her brain relied on
computational type operations to perform the processing that made sense
of the inputted signals. Professor Harnad: “We see, hear, touch and
manipulate the things in the world. Computers just manipulate symbols.
And tempting as it is to think of all of our “input” as being symbolic
input to a symbol-manipulating computer, it’s not. It’s sensorimotor
input to a hybrid analog/ digital dynamical system; no one knows how
much of the structure and function of this T3 system is computational,
but we can be sure that it’s not all computational.” Responding: I
think it’s a fair point to note that we are more than just a digital
operating system. But I don’t think it’s a telling one because no one
denies it and denying it isn’t essential to a claim that the way brains
produce understanding is in a computational way. Professor Harnad: “T2
can be made as ‘complex’ as verbal interaction can be made. So can the
computations on the I/O. But as long as they are just computations
(symbol-manipulations), be they ever so complex, the result is the
same: Symbols alone are ungrounded.” Responding: The question is what
does it take to ground a symbol and thereby impute meaning to it? You
suggest that the dynamical relation with the external world is the
grounding and that this grounding establishes the meaning. But I’m
suggesting that grounding is more likely an outcome of the process
which establishes meaning, i.e., that grounding turns out to be one
kind of meaning we ﬁnd in our linguistic activity (as in the word and
object references so common to so many of our words). I’ll add here
that grounding seems to occur in at least one other, albeit somewhat
different, sense: It is a way of describing the mechanism whereby we
build up our representational mapping webs which picture the world. In
that sense it is pre-linguistic though it does form the basis for the
linguistic (and conceptual) capacities which follow it, among which are
included the word-object referential relation which last seems to be
what you mean by grounding-as-meaning — at least part of the time. But
neither use of “grounding” effectively describes, I think, the actual
process whereby meaning occurs. That, I suggest, happens within an
associative process that the brain performs (among its other “duties”)
as I’ve begun to try to elucidate nearby (see the anecdote of the road
sign). Professor Harnad: “A person is looking at the shape on the
screen. And the screen is not part of the computer (it is a peripheral
device). If you see a circle on the screen, that does not mean the
computer sees a circle. The objective of cognitive science is to
explain how you are able to detect, recognize, manipulate, name and
describe circles, not how a computer, properly wired to a peripheral
device, can generate something that looks like a circle to you. “The
shape you see on a screen is indeed generated by a computer. But
neither what you can do with that shape (detect, recognize, manipulate,
name and describe it: T3) — nor what it *feels like* to be able to see
and do all that — are being done by the computer. This is just as true
for being able to read and understand what words mean: T3 is miles
apart from T2. And input to a T3 robot is not input to a computer.”
Responding: My only point was to ask why we should think that the many
smaller operations making up a computational process, none
of them conscious themselves, should be incapable of working together
in a larger system to produce a feature we recognize as “consciousness”
(or its various components) at a “higher” level of operation? I was
aware that my example of the computer screen and its images could be
misread, but hoped my point would have come through nonetheless. It’s
my error for not going into more detail there. Anyway, my main point is
really twofold: 1) To question the assumption you apparently make that
meaning is found in a dynamical relation with the environment (I think
that certainly plays a part for us but that it is NOT the factor that
accounts for meaning per se — see my comments nearby about the road
sign again); and 2) To ask why, if complexity is the true issue for
meaning (as I have suggested), it should not also be so for what you
are calling the feeling of understanding that accompanies the kind of
understanding we have? I have seen your comment that complexity does
not explain understanding but that strikes me as mainly asserted, as of
now. It hinges on another claim that the dynamical connectivity, as
seen in your T3 example, explains at least an aspect of meaning, i.e.,
meaningful responses, without touching on the feeling part. My view
differs in that I don’t see how robot-like dynamics can make much of a
difference or account for the occurrence of meaning in symbols — nor do
I think we must give up the expectation of ﬁnding a way to causally
explain the occurrence of the feeling aspect of understanding as you
argue for. But I have enjoyed reading (and listening to) your
presentation and getting a better sense of where you’re coming from in
this debate. I expect, however, that we will not ﬁnd a lot of
agreement, going forward, given where we now are.

Response to Stevan
Harnad on 2/18 at 17:30 Harnad: “. . . having intelligence is
synonymous with having indistinguishable cognitive capacities. (What
else does it mean?)” Robinson: It depends on what one includes under
“cognitive capacities”. For example, what about visual object
recognition? If that’s included, it doesn’t seem to be what Turing had
in mind in his paper on Computing Machinery and Intelligence, because,
manifestly, there was nothing in the imitation game to test for *that*
capacity. Here’s a candidate for a deﬁnition of intelligence that seems
to ﬁt with Turing’s paper: Ability to respond appropriately to a wide
range of novel circumstances. (Appropriate response goes with inability
of interrogators to reliably distinguish. Wide range with absence of
restriction on content of their questions. Novel with the fact that
machine designers do not get the interrogators’ questions in advance.)
No one has legislative power over the key terms in this discussion and
not everyone has to mean my candidate by “intelligence”. But I think my
suggestion at least describes an interesting property, and that it’s
important to distinguish this property from some others (such as
Searle’s *understanding*, and your *feeling*). Once distinguished, we
can, of course, meaningfully ask whether there being an M with
intelligence requires or does not require that it have one or another
further property. Harnad: “. . . one question unanswered: Does the
TT-passer feel . . .?” Robinson: I agree that passing T2 does not
answer this question. I’d say: Passing T2 does not require feeling
(though, of course, something might be a T2 passer and have feelings
too!) I think Turing would give the same answer. What about my shopping
assistant robot, that gets around town under many variable and
unforeseen circumstances, describes what happens accurately, matches
behavior to statements of what it’s about to do, etc.? That also does
not imply that it feels anything at all. But its words are not just
connected with other words: they are appropriately related to the
things that are the causal inputs to its sensors and are manipulated by
its effectors. That’s another important property that we should
distinguish from others. Again, there is not and likely will not be a
standard usage here; but I suggest “understanding” for this one,
because it’s this property that Searle’s argument targeted when he
argued that no system that was just a formal symbol manipulator could
*understand* its words. Harnad: “I . . . believe . . . that it is . . .
unlikely that anything could pass T3 unless it really felt, because,
for me, understanding (etc.) *means* cognitive capacity + feeling.”
Robinson: I’m intrigued by the + sign. Because it seems to indicate
that you *do* make a distinction between cognitive capacity and
feeling. This distinction makes it possible to sharpen an issue, so
long as we take some verbal care – needed, because I do not use
“understanding” in such a way that it just *means* cognitive capacity +
feeling (because I’m trying to follow Searle’s usage, and his argument
doesn’t bring up feeling (i.e., feeling in your sense, i.e.,
consciousness)). So, I hold that passing T3 could happen with cognitive
capacity, period. You hold that it could be passed only with cognitive
capacity + feeling. The clariﬁed issue is: Why? Why can’t my
description of the feelingless shopping assistant robot coherently
apply to some possible device? Harnad: “the hard problem of explaining
how and why we feel is not just a moral matter.” Robinson: I couldn’t
agree more. My remark about feelings and morality was about the
importance of feelings, a certain difference they make. It was not
directed at the question of explaining how or why we feel at all; it
was in no way intended to be addressing the explanatory gap.

Quick Response to
William Robinson: Okay, I see you are making the distinction I thought
you were missing. Professor Harnad’s use of “feeling” in this context
only refers to the sense of being aware of what we know, when we know
it, the sense that accompanies our instances of understanding or
knowing (at least much of the time). You do, in your comment to me,
seem to be assuming that no robot could have such an awareness though,
just because it’s a robot, and I think that’s putting the cart before
the horse here. In fact that is precisely what’s at issue (insofar as
we presume the robot has a computational type processing platform in
its “head” serving as its brain). I agree that Searle addresses what he
terms “understanding” in his Chinese Room Argument (an argument he
later replaced, but did not fully recant, with the claim that
computational understanding is simply unintelligible — I’m thinking
here of The Mystery of Consciousness for starters). However, he has
often invoked his CRA in the context of discussing consciousness so it
is probably a mistake to make the kind of hard and fast distinctions
you want to make between intelligence, understanding, and consciousness. As
you note, these terms all admit of multiple and somewhat varied
applications. We can speak of intelligent toasters and dumb toasters
and never mean that the toaster could follow a conversation or do a
calculation or comment on Professor Harnad’s paper on a site like this.
We speak of humans as more intelligent than horses and yet we also
speak of intelligent and unintelligent humans where “intelligence”
means something other than having more cognitive capacity than a horse.
The same wide range of uses can be found with “understanding” and,
certainly, with a term like “consciousness”. I think that Marvin
Minsky’s point that “consciousness” is a “suitcase word” is correct
though it is not an especially deep insight as it should be readily
apparent when we look at how we use the term. What we have, whenever we
start applying mental words, is increasing slipperiness of application
and any discussion like this has to deal with it. That said, I think
it’s wrong to take Searle as speaking only about understanding in his
Chinese Room Argument. Aside from his raising his arguments in books
like his Mystery of Consciousness (which speciﬁcally references
consciousness!), he has often applied them to questions about
consciousness. There are, as I recall, a number of on-line videos where
he is to be seen discussing consciousness speciﬁcally in which he
invokes his CRA. If understanding is not a feature of consciousness
then what’s the point? We know that machines can be built to replicate
many kinds of human judgments. But it is the thing Searle calls “strong
AI”, the claim that computers can be programmed to have minds (meaning
our kind of consciousness) that matters to him. And it is that that
should ﬁnally be what matters here. The best way of seeing this is to
recognize that the role played by “understanding” in the CRA is that of
a proxy for consciousness and that what he there calls “understanding”
is presented as one of many features we take to be parts or aspects of
consciousness. Among others he has explicitly included things like
intentionality, awareness and feeling (sometimes called “qualia” though
he isn’t apparently happy with the term, thinking it redundant). As
Professor Harnad notes in his video lecture associated with this paper,
all these words tend to run together and overlap in their meanings.
It’s a function, as I’ve said, of words about our mental lives. We can
and should try to nail down what we mean precisely in discussions like
these but there is something about these words, and their realm of
application, that militates against that and constantly pushes us into
ambiguity. Try as I have for many years to hammer out a clear and
precise jargon for such discussions as this, it has never quite worked.
I always ﬁnd myself being misunderstood or being accused of having
misunderstood others. Most times, on examination, the problem seems to
boil down to the inherent will-o’-the-wisp quality of our mental words.

Hi everyone! Great
discussion going on in here! Several times Stevan says that there is
more to thinking than computation; there is also feeling. He compares
this to Dave’s Hard Problem and argues that “Turing Machines can
explain all that is explainable but do not explain everything there
is.” Yet, at the same time he claims that he is not a dualist, and not
even a “naturalistic” property dualist like Dave. He even rejects
zombies and conceivability arguments in general. So, I am curious as to
what Stevan’s response to the zombie arguments is. The view as put
forth so far sounds just like property dualism. In fact Dave does
speculate that perhaps all of reality is computational and that
information has dual aspects, one physical one qualitative. How could
feeling be non-computational and yet also physical? For my part, I
think that as science approaches the limit we will be able to make
deductions from facts speciﬁed solely in microphysical terms to facts
about feeling (given that we have actually felt the things in question.
That is, given that I have had a conscious experience of red I will be
able to deduce from some physical description that it is like seeing
red for the experiencer that has it) and this makes qualitative facts
physical. As Stevan says, we are feeling robots, but that we feel can
be explained physically. That we can’t yet see how this can be done is
not a very convincing argument.

RBROWN: “Several times
Stevan says that there is more to thinking than computation.”

Yea, more e’en than
computation + dynamics…!

RBROWN: “there is also
feeling. He compares this to Dave’s Hard Problem”

Indeed, it *is* Dave
Chalmers’s “Hard Problem,” except I am arguing that it is an
explanatory rather than an ontic problem, and that it is insoluble, for
reasons that have to do with the nature of both feeling and causal
explanation.

RBROWN: “and argues that
‘Turing Machines can explain all that is explainable but do not explain
everything there is.’”

That’s not a quote! Turing
machines (i.e., computation) can explain part of cognition, but not all
of our cognitive capacity.

Computation + dynamics can
explain all of our cognitive capacity — verbal (T2), sensorimotor (T3)
and even neurobehavioral (T4) — but they cannot explain how and why
cognitive states are felt states: They cannot explain how and why we
feel.

RBROWN: “Yet, at the
same time [Stevan] claims that he is not a dualist, and not even a
“naturalistic” property dualist like Dave. He even rejects zombies and
conceivability arguments in general. So, I am curious as to what
Stevan’s response to the zombie arguments is. The view as put forth so far sounds just like property dualism. In fact Dave does speculate that perhaps all of reality is computational and that information has dual aspects, one physical, one qualitative. How could
feeling be non-computational and yet also physical?”

Yes indeed. I am not a
dualist, not even a “property dualist.” My gap is truly just epistemic
(i.e., explanatory), not ontic. My question is: How and why do we feel?
It does not even represent an epsilon of explanatory inroad to reply to
me by saying: “Listen, my son, there are many properties under the sun:
bigness, littleness, redness, mass, energy, spin, parity, being a prime
number… but one out of them — namely, feeling — is different: it can’t
be explained in the way all the others are.”

I take that statement to be
true, just so, but empty, and just a confession that my question “How
and why do we feel?” is indeed different, hard, possibly unanswerable.

Well I already knew that,
and telling me there is a “duality” among properties informs me nary a
whit further! (Information is the reduction of uncertainty among
alternatives. “Property dualism” does not reduce uncertainty, it just
re-asserts it.)
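(A worked toy example, purely illustrative and not part of the discussion, of “information as the reduction of uncertainty among alternatives,” using Shannon’s measure: an answer is informative only to the extent that it narrows the alternatives; merely re-labelling them leaves the uncertainty untouched. All numbers here are invented for the illustration.)

    import math

    def entropy(probabilities):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # Before: four equally likely alternatives (2 bits of uncertainty).
    before = [0.25, 0.25, 0.25, 0.25]
    # An informative answer narrows the alternatives down to one.
    after_informative = [1.0, 0.0, 0.0, 0.0]
    # Merely re-labelling the alternatives leaves the distribution unchanged.
    after_relabelled = [0.25, 0.25, 0.25, 0.25]

    print(entropy(before) - entropy(after_informative))  # 2.0 bits of uncertainty reduced
    print(entropy(before) - entropy(after_relabelled))   # 0.0 bits reduced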

Ditto for “zombies”. The
ﬂip side of the “mind/body problem” (which I prefer to call the
“feeling/doing problem”) is the “other minds problem” (you cannot know
for sure whether others feel). Another way of putting this is that you
cannot know for sure that others are not “zombies.”

Now the other-minds problem
can be construed in three different ways. Two are the usual sceptical
ways: (1) soft scepticism and (2) hard scepticism. Soft scepticism is
just a Cartesian/Humean admission that there are some things that are true but that you cannot know for sure to be true: this includes
the truth of scientiﬁc laws, the future reliability of past
regularities, the existence of the outside world, and the existence of
other minds. According to soft scepticism, it’s not that any of those
things are untrue; it’s just that — unlike the necessary truths of
mathematics or the face-valid truth of the Cogito (namely, the fact
that feeling is being felt when feeling is being felt) — these other
truths can only be known with high probability rather than certainty.

In contrast, hard
scepticism (ontic rather than just epistemic) countenances the
possibility that these other truths are false. In particular, the
conjecture that there could really be zombies is an instance of such
hard scepticism about other minds. I don’t go there, for much the same
reason I don’t bother with dualism. I ﬁnd them uninformative and
question-begging, when the question is simple, and straightforward:
“how and why do we feel?” — not “What scope for sci-ﬁ does that
explanatory gap leave for us?” But the third way the other-minds
problem can be construed is (3) as a somewhat more profound uncertainty
than the one that attends the rest of the objects of scepticism. This
is already evident in the case of fellow creatures other than our own
species: Our degree of uncertainty about whether other people really
have minds is pretty much the same as our degree of uncertainty about
empirical regularity, the existence of the external world, etc.,
namely, tiny, and tractable. But our degree of uncertainty about other
minds grows as we move to species other than our own, and becomes
really quite sizeable when it comes down to simple invertebrates,
unicellular creatures, and plants (an especially vexed question for
vegans like me!).

That extra continuum of
uncertainty is special, and sets the other-minds problem apart,
because, unlike, say, uncertainty about our current best explanation in
physics, which can improve with time and further data and ideas, our
uncertainty about whether microorganisms or plants feel remains forever
unresolvable, and it’s an outsize uncertainty, compared to the others:
plants might or might not really be “zombies.” (I certainly hope they
are!)

That does not make the
possibility that other humans could be zombies one whit more worthy of
serious consideration, except in one perhaps not entirely verbal sense.
The question “How and why do we feel?” can be seen to be equivalent to
the question “How and why are we not ‘zombies’?” This does not give
zombies any more credence or substance; it simply points out an
“epistemic” equivalence of the two questions: they are simply ﬂip sides
of the same coin. We know of ourselves that we are not zombies. The
worry about whether everyone else might be a zombie is just ordinary
soft scepticism (also known as “empirical risk”). It does not make
zombies any more real. Let’s say bets are off with other species. But
the special status of the other-minds problem does reassert itself in
one other very real case, namely, Turing’s: man-made robots.
(Extra-terrestrials simply fall under “other species.”) For when it
comes to reverse-engineering the mind, it is quite natural to ask under
what conditions our uncertainty about whether robots feel becomes as
small and negligible as our uncertainty about whether other people
feel. And that necessarily raises questions about which level in the
Turing hierarchy (verbal indistinguishability [T2], verbal and robotic
indistinguishability [T3], or verbal and robotic and neurobehavioral
indistinguishability [T4]) that convergence occurs at. For a
feelingless T3 or T4 robot would indeed be a Zombie!

But, despite the high
esteem in which philosophers of science hold counterfactual
conditionals, I prefer future conditionals. Rather than committing
myself to epistemic contingency plans on what might prove to be the
equivalent of the possibility of squaring the circle, I’ll worry about
crossing the zombie road only if and when we ever get to it, noting
only that the explanatory gap remains the same: How and why do feeling
entities feel?

One last point on this long
digression into epistemic risk: The underdetermination of theories by
data (i.e., uncertainty about whether your complete causal explanation
is the *right* causal explanation) comes down, at the end of the day, to the number of utopian theories (complete causal explanations) that can successfully explain the totality of the data, if we come up with
more than one. The usual stance is to trust that the constraint of
having to scale up to accounting for all data will minimize this
underdetermination, and hence minimize the number of viable theories
left — and then to bet with Occam (on the theories with the fewest
parameters), hoping that they are in some sense notational variants of
one another.

Well, I hope it’s by now
obvious that there’s a strict counterpart to all this in scaling up the
Turing Hierarchy: The possibility of “zombies” is maximal at the “toy”
level of modelling arbitrary toy fragments of our cognitive capacity.
But as we scale up toward T3, the degrees of freedom shrink to
something closer to the size of ordinary scientiﬁc underdetermination
and empirical risk, and with T4, “zombies” are no longer worth giving
another thought…

Last point, on Dave’s
computationalism: I of course think it’s wrong. Cognition is not just
computational, it is hybrid: Computational and dynamical (and no one
yet knows the right blend, since we’re nowhere near T2, let alone T3 or
T4).

So the answer to Richard’s
version of Dave’s question “How could feeling be non-computational and
yet also physical?” is that T3 and T4 robots (including ourselves!) are
not just computational. But that does not help, because even though we
know the generators of T3 power will be hybrid — dynamical +
computational — the “hard” question remains unanswered: “How and why
are those dynamic/computational states felt states?”

Counting properties, or
“kinds” of properties (“property dualism”) is just numerology or
taxonomy. It does not answer that “hard” question. To talk about “dual
aspects” (physical + “qualitative,” i.e., felt) is likewise simply to
restate the very same question, without making the slightest inroad on
answering it…

RBROWN: “For my part, I
think that as science approaches the limit we will be able to make
deductions from facts speciﬁed solely in microphysical terms to facts
about feeling (given that we have actually felt the things in question.
That is, given that I have had a conscious experience of red I will be
able to deduce from some physical description that it is like seeing
red for the experiencer that has it) and this makes qualitative facts
physical.”

Successfully predicting
*that* we feel, or even successfully predicting *what* we feel from its
neurophysical correlates is a goal worthy of pursuing, but it is just
weather-forecasting. What we need is not just correlations and
prediction, but causal explanation.

And no matter how well you
can read my mind and describe and predict my feelings, you have not
explained how and why I feel — until/unless you have explained how and
why I feel.

RBROWN: “As Stevan says,
we are feeling robots, but that we feel can be explained physically.
That we can’t yet see how this can be done is not a very convincing
argument.”

That we are feeling robots
is indisputable. That how-and-why we feel can be explained physically
is highly disputable. In fact, I am disputing it.

And the argument is not
just that we haven’t done it yet, wait-and-see, but that there is
already a systematic and inescapable way to defeat any attempted causal
explanation, showing that whatever causal role is being attributed to
feeling in any Turing-scale mechanism, the mechanism and performance
will be unaltered if feeling is not attributed.

Feeling will remain a
take-it-or-leave-it property in any Turing explanation, and that means
the explanation will never be causal — *except* if psychokinesis were
to turn out to be true, and feeling (doing things because you feel like
it) were to turn out to be an extra, independent causal force in the
universe.

But, not being a dualist,
and being cognizant of the fact that all physical evidence
(conservation laws as well as an unending series of failed
“parapsychological” experiments) goes contrary to the psychokinetic
hypothesis, I give it as little credence as I give to the possibility
of zombies and other counterfactual fantasies…

Reply to Professor
Harnad I was about to go off-line and noted that you’ve already offered
a response to the latest I’d written above, so, being rather obsessive
about these things I’ve delayed signing off. I’ll try to deal with your
reply as brieﬂy as I can: SMIRSKY: “[From the fact that passing T2
through computation alone] fails [to generate understanding] it doesn’t
follow that this is generalizable to all systems of the same
qualitative (computational) type.” You wrote: “It generalizes to all
computational systems. “(I don’t know what you mean by computational
systems of “qualitatively different type.” I only know one type of
computation . . . .)” Response: Sorry, I did not intend to suggest a
reference to different kinds of computational systems. When I wrote
“all systems of the same qualitative (computational) type” the point
was to use “computational” to clarify what I had in mind by
“qualitative” since not all systems are computational in nature. You
wrote: “What Searle shows is that cognition cannot be just computation.
And that means any computation.” Response: I don’t believe Searle shows
that at all. What he shows is that the fundamental constituent elements
of any computational system are not capable of cognition (understanding
on his usage) in themselves and that, when combined in the system he
specs as the CR, they do not succeed in producing cognition either. He
then goes on to construct an argument that purports to generalize from
what the CR cannot do to what any other possible conﬁguration of the
same constituent elements can do. I think his argument fails to provide
support for that generalization. SMIRSKY: “If understanding is a
function of process complexity, as it well may be, then it would not be
surprising that an underspecked system (lacking the scope and
complexity needed) would lack it — or that a more fully specked system
would not.” You wrote: “The reference to complexity is exceedingly
vague. I know of no qualitative difference between “types” of
computation based on the complexity of the computation. Computation is
computation. And cognition is not just computation, no matter how
complex the computation.” Response: The mistake you’re making
apparently hinges on the supposition that I am differentiating between
types of computation. I am not. The reason the reference to complexity
is vague is because I did not want to go too far aﬁeld here and turn
this into a debate about Searle and his CRA (or my own views), in
deference to your interests in this discussion. I certainly can offer a
lot more in terms of detail but perhaps the quickest way to see where
I’m going is to refer to Daniel Dennett’s views since there is not much
daylight between my view and his — at least on this question. SMIRSKY:
“understanding may well be best understood as a system level feature
(the result of the interplay of many processes, none of which are,
themselves, instances of understanding)” You wrote: “Perhaps, but this
too is exceedingly vague. I can construe it as being somehow true of
whatever it will prove to take in order to pass T3. But it certainly
does not rescue T2, nor the thesis that cognition is just computation.”
Response: The issue really is what is consciousness (or cognition or
understanding or whatever aspect of this we want to settle on)? If we
expect to ﬁnd understanding somewhere in the CR’s particular
constituent processes we must end up disappointed. But if we are
expecting to ﬁnd it at a system level, in the way those constituent
processes interact in a particular set-up (say the infamous Chinese Gym
or, as I would prefer to put it, a Chinese city where each room in each
building on each street is a processor doing some particular thing but
interfacing with and affecting many others) then we may certainly get
luckier. Then we have a system that’s more brain-like in its
complexity. And why should we expect something less from a
computational platform than we get from a brain here? Of course, the
computational platform consists of so many computational processors,
each doing its part, rather like the neurons and neuronal clusters work
to do theirs in brains. And the way they are operating is in terms of
passing signals around according to their received design and
instructions — like computer chips and processors do. I can provide
even more detail as to the kind of complexity I have in mind but it’s
probably best addressed outside this discussion so as not to take us
too far off point. SMIRSKY: “it seems wrong.. that… meaning, is a
matter of a referential relation between symbol and object in the world
(ﬁnally achieved through the addition of a dynamical physical relation
between the CR and the environment external to it).” You wrote: “It
seems wrong because it *is* wrong! Connecting the internal symbols of a
T3 system to their external referents gives the symbols grounding, not
meaning. ‘Meaning is T3 grounding + what-it-feels-like-to-mean.’”
Response: Okay. As I now understand you, your position is that to get
meaning we need both grounding, as you have described it, and feeling
(being the sense of knowing what we know). Here your account reaches
the unexplainable part then, right? But then it doesn’t help much
except to reafﬁrm a mystery. But what if a different account of meaning
(as in it being the outcome of a complex layered and interactive system
that resolves into associative picturing and linking) can also tell us
how we get feelings? Why wouldn’t such an account, if it covers the
bases, be preferable to one that leaves something (feeling) out?
SMIRSKY: “there can ﬁnally be no difference between the [T3] robot’s…
connections between symbol and object and that of the [T2 system]. In
both cases, the referencing… involves representation to
representation.” You wrote: “’Representation’ covers a multitude of
sins! If you mean computation, see above. If you mean non-computational
processes, all bets are off. Now the trick is to ﬁnd out what those
“representations” turn out to be, by reverse-engineering them,
T3-scale…” Response: Yes, the term is intrinsically problematic. When
we have a mental image it’s a representation of sorts and when we use a
symbol, it, too, represents something (if it’s meaningful). When the
brain passes information along through its complex neuronal network, we
think of the signals, at least in some cases, as representations, too.
So it’s not entirely clear that “representations” is the best word. But
I haven’t yet come up with a better one for expressing this particular
point. My point above, though, was to note that when we ﬁnd meaning we
do so through an associative process which links different images in
our minds (again see my road sign anecdote). So the representations in
that case were images, but I admit that I don’t know how raw inputted
signals entering and traversing brains transform into mental images, or
why these seem different in important ways from actual sensory images
we have. But all of this is important in any effort to synthesize what
brains do, either computationally or in any other way, I expect.
SMIRSKY: “the external stimuli we can talk (and think) about are
whatever it is we hold in our heads at any given moment.” You wrote:
“Yes, and then the question is: what has to be going on in our heads to
give us the capacity to do that? The way to ﬁnd out is to design a
system that can pass T3. (We already know it can’t be done by
computation alone.)” Response: I agree with much of this but not your
last statement though perhaps we mean different things by that? We
certainly can’t just throw computational processes together and expect
to turn out a mind. They need to be combined in the right way, doing
the right things, i.e., if mind features are system level rather than
process level, then you need the right system, don’t you? If your
dismissal refers to your notion that T3 goes beyond mere computation,
I cannot agree entirely. It certainly does add an aspect to the model
which is not computation, as you note, and that aspect could be
essential for building up the contents of a synthetic mind without
relying on a dump process. But I see no reason, from your paper or in
these discussions, to think that it would be essential to the
brain-like processes that perform the operations which, in the
aggregate, would have the features we recognize as understanding and so
forth. SMIRSKY: “I… had images that made sense of the words… the
meaning was not the referen[ts]… nor… my behavior…it was a certain
critical mass of imagery … not… some physical grounding but… the web of
mental images… and their relations to one another that constituted my
understanding, not the grounding of the words.” You wrote: “No doubt,
but the question remains: what has to be going on in our heads to give
us the capacity to do that? The way to ﬁnd out is to design a system
that can pass T3. (We already know it can’t be done by computation
alone.)” Response: As I’ve already said, I don’t necessarily agree with
your last point. It depends on what you mean by “computation alone”. I
don’t think any AI researcher thinks that there aren’t ancillary and
undergirding elements to any computational system. There is the
platform, of course, and then the avenues for feeding in information
which could be sensory and motor devices (if we want to give the entity
a semblance of real world existence a la what we experience) or data
dumps and/or information fed in piecemeal over time. I don’t think
there is any reason to think that adding robotic capacity does anything
more than provide an alternative way in which such an entity can
obtain/develop necessary content through which it can make “sense” of
its inputs. SMIRSKY: “There seems to be no reason… the images could not
have found their way into a system like me in some other way… [e.g.]
put there as part of a data dump.” You wrote: “I agree that a real-time
prior history — although it is probably necessary in practice — is not
necessary in principle in order to have full T3 power. (E.g., to
recognize, manipulate, name, describe, think about and imagine apples,
you need not have had a real-time history of causal contact with apples
in order to have developed the requisite apple-detectors and
apple-know-how. They could have been built in by the engineer that
built you — even, per impossibile, the Blind Watchmaker. But it is not
very likely. And, either way, whether inborn or learned, that T3 power
will not be just computational. Hence building it in in advance would
entail more than a data-dump.)” Response: I question your insistence on
it not being “just computational”, see above. SMIRSKY: “[just as]
meaning may be a function of complex associative operations within a
given system utilizing retained and new inputs… [so] being aware of
what one is doing on a mental level… is… a function of a sufﬁciently
complex system that can and does collect, retain and connect multiple
representations on many levels over a lifetime?” You wrote:
“’Computational complexity’ is already insufﬁcient to constitute
sensorimotor grounding. It is a fortiori insufﬁcient to explain feeling
(hence meaning).” Response: Yes, you have said this before. But I
haven’t seen any reason in these discussions to take that as more than
an assertion of a position. And if sensorimotor grounding isn’t
essential to meaning, contra your view but as I maintain, then there is
no reason to expect or require that “computational complexity” be
essential to “sensorimotor grounding”. Here we seem to have a strong
divide between our views since I am unconvinced as to the essential
role you claim for this dynamical relation with the environment (and
the apparatuses that make that possible). But what COULD “explain
feeling and hence meaning”, as you put it? Until now I have shied away
from offering too many speciﬁcs here. But perhaps I can no longer do
that? On the view I have advanced, awareness, which is my take on your
term “feeling”, involves the interplay of subsystems, speciﬁcally
including an entity that has a sense of being what it is, a subsystem
(or subsystems) within the larger system dedicated to differentiating
between internal and external inputs and classifying/grouping certain
internal inputs in a way that forms a picture of the entity FOR the
entity. This subsystem, with its pictures, interacts on an ongoing
basis with the subsystems which picture the external environment in
various dimensions and aspects. Once you have the self subsystem (which
works rather like the others but just involves attending to different
inputs), you get awareness because the self subsystem, in interacting
with the other subsystems, manifests those interactions as what we
recognise as awareness. You wrote: “But let me clarify one thing: when
I substitute ‘feeling’ for all those other weasel words for ‘conscious’
and ‘consciousness’ — ‘intentionality,’ ‘subjectivity,’ ‘mental,’ etc.,
I am not talking particularly about the quality of the feeling (what it
feels like), just the fact that it is felt (i.e., the fact that it
feels like something).” Response: Understood and, if you recall my
earlier remarks, understood from the start. Recall my mention of
ﬂatness as the absence of strong emotional content. I don’t agree about
calling them “weasel words” though. Our language is not obviously
equipped to handle mental words so we constantly fall into ambiguity in
employing them. It just seems to be a hazard of the business. You
wrote: “As I think I noted in the target essay, I may feel a toothache
even though I don’t have a tooth, and even though it is in reality
referred pain from an eye infection. So it is not my awareness that my
tooth is injured that is at issue (it may or may not be injured, and I
may or may not have a tooth, or even a body!). What is at issue is that
there is something it feels like to have a toothache. And I’m feeling
something like that, whether or not I’m right about my having an
injured tooth, or a tooth at all.” Response: Yes, understood and
recalled from your essay. You wrote: “By exactly the same token, when I
suggest that meaning = T3 capacity + what-it-feels-like to mean, what I
mean is that I may be able to use a word in a way that is T2- and
T3-indistinguishable from the way anyone else uses it, but, in
addition, I (and presumably everyone else who knows the meaning of the
word) also know what that word means, and that means we all have the
feeling that we know what it means. I may be wrong (just as I was about
my tooth); I (or maybe everyone) may be misusing the word, or may have
wrong beliefs about its referent (as conﬁrmed by T3). But if there is
nothing it feels like to be the T3 system using that word, then the
word merely has grounding, not meaning.” Response: Again understood and
agreed. SMIRSKY: “there is no real explanatory gap to worry about once
we recognize that the only point of explanation in this case is to
explain what causes what. Why should an explanation that describes
semantics as a function of a certain level of computational complexity
not also sufﬁce to account for the occurrences we think of as being
aware of what is going on, of feeling what’s happening” You wrote:
“Because computational complexity can neither produce grounding nor
explain feeling.” Response: You have said this before and I have
answered it. Computational complexity need not produce grounding since
grounding is more likely, on my view anyway, to be an outcome of the
process that imputes meaning than it is to be the underpinning of that
imputation. As to explaining feeling, as noted above, I think it does,
indeed, provide a basis for explaining feeling in this sense, i.e.,
sufﬁcient complexity allows for a system that includes subsystems
capable of interacting in certain ways — one of which involves having a
sense of being a something and of being affected by those inputs which
are associated with other subsystems (those that capture and manifest
more external inputs). Well this has been rather long but, given the
points you have made, I thought that this time I ought to try to
address as much of what you have written as possible so there would be
little chance of misunderstanding. But the length will probably work
against that, this time.

SMIRSKY: “[Searle]
shows… that the fundamental constituent elements of any computational
system are not capable of cognition”

I agree: Computation alone
(any computation) is not enough to generate cognition. In other words,
cognition is not all computation.

SMIRSKY: “[Searle] then…
purports to generalize from what the CR cannot do to what any other
possible conﬁguration of the same constituent elements can do. I think
his argument fails to provide support for that generalization.”

I agree: Having shown that
cognition is not all computation, Searle thinks he has shown cognition
is not computation at all. He deﬁnitely has not shown that. (The
T3-passing system could be — and indeed probably is — partly
computational.)

SMIRSKY: “If we expect
to ﬁnd understanding somewhere in [T2 computation's] particular
constituent processes we must end up disappointed. But if we are
expecting to ﬁnd it at a system level, in the way those constituent
processes interact in a particular setup (say the infamous Chinese Gym
or, as I would prefer to put it, a Chinese city where each room in each
building on each street is a processor doing some particular thing but
interfacing with and affecting many others) then we may certainly get
luckier.”

I am afraid these analogies don’t help me at all in explaining (or in understanding any explanation of) how and why we feel!

[The "Chinese Gym" argument
(CGA), by the way, was a dreadful anticlimax to the Chinese Room
argument (CRA). The CRA, against pure computation, passing T2, is
perfectly valid, and Searle easily rebuts the "System Reply" (that not
Searle, but the system as a whole, is the one that is doing the
understanding) by memorizing the computer program and doing all the
computations in his head: He becomes the whole system, and can thereby
say, truly, without remainder, that he does not understand (and there's
nothing and no one else). But with the CGA, which is a variant of the
CRA, meant to show that even if the T2-passing system is a neural
network rather than computation, it still does not understand. A neural
net is a parallel, distributed network of interconnected units, passing
activation to one another. So Searle asks us to imagine a bunch of boys
in a gymnasium playing the role of the distributed units, passing
papers. Searle waves his hands and says there's obviously no
understanding going on there. But that's not at all obvious, because
here the "System Reply" would apply: The boys don't understand, but
"the system" does. And, unlike in the CRA, Searle cannot himself become
the system -- if the parallelness and distributedness of the units are
essential to the neural net's success in passing T2, because those are
not implementation-independent, purely computational properties. But
there is a simple solution to show that the parallelness and
distributedness are not essential to the neural net's success in
passing T2. Simply simulate the neural net computationally. That's a
symbol system that Searle can memorize, and thereby become the system.
And it produces exactly the same T2 I/O performance as the neural net.
And Searle -- now again become the whole system -- does not understand.
QED. But only with the proviso that the noncomputational aspects --
parallelness and distributedness -- are inessential to the T2 success,
and we stick to T2. As soon as we move up to T3, sensorimotor
transduction becomes essential, and neither the computational
component, nor Searle's simulation of it, can be the whole system.
Hence T3 and the hybrid system that can pass it are immune to Searle's
CRA.]
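(To make the “simply simulate the neural net computationally” step concrete, here is a minimal toy sketch; everything in it, the network, its weights and its inputs, is invented purely for illustration. It shows a “parallel, distributed” threshold net whose input/output behaviour is reproduced by working through the units one at a time, in a fixed serial order, exactly the kind of step-by-step symbol manipulation that could in principle be carried out on paper or in one’s head.)

    def step(x):
        # Simple threshold activation: the unit "fires" (1) if its summed input is positive.
        return 1 if x > 0 else 0

    # Weights of a toy 2-input, 2-hidden-unit, 1-output threshold net (arbitrary values).
    W_hidden = [[1.0, -1.0],   # weights into hidden unit 0
                [-1.0, 1.0]]   # weights into hidden unit 1
    W_out = [1.0, 1.0]         # weights from the hidden units into the output unit

    def serial_forward(inputs):
        """Compute the net's output one unit at a time, in a fixed serial order."""
        hidden = []
        for unit_weights in W_hidden:               # visit each hidden unit in turn
            total = 0.0
            for w, x in zip(unit_weights, inputs):  # accumulate weighted inputs serially
                total += w * x
            hidden.append(step(total))
        out = sum(w * h for w, h in zip(W_out, hidden))
        return step(out)

    # The same input/output mapping, whether the units "fire in parallel" or are
    # worked through one at a time: this toy net happens to compute XOR.
    for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(pair, serial_forward(list(pair)))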

But I’m afraid that your
“Chinese city” analogy is as unavailing as Searle’s Chinese Gym, for
much the same reason. The problem, as always, is of course the “other
minds problem”: The only way to know for sure whether a system has a
mind (feels, understands, whatever) is to *be* the system. And
normally, the only system you can be is yourself. The only exception is
T2, when passed by computation alone. This is what I’ve called
“Searle’s periscope,” penetrating the normally impenetrable other-minds
barrier. And the reason Searle’s periscope works in this one special
case, is that computation is implementation-independent. Therefore, if
something is really a purely computational property, then any and every
implementation of that same computation will have that same
computational property. That’s how Searle can “be” the TT-passing
computational system, and thereby report back to us that it does not
give rise to the property of understanding!

But that fails for the
Chinese Gym. And it fails also for the “Chinese City”: We know no more
about whether a Chinese City does (or doesn’t) understand than we know
about whether Gaia or the solar system does or does not understand.

Besides, even if a Chinese
city or the solar system were to understand — or even, for that matter,
if the brain (which really does understand) or a T3-passing robot
(which probably does understand) — is like a Chinese city, we still
haven’t a clue of a clue as to why and how it understands. Not, that
is, if we don’t lose sight of the fact that understanding is not just
cognitive capacity that is Turing-indistinguishable from that of
someone who really understands, but it also *feels like something* to
understand. In other words: Why on earth would it feel like something
to be like a Chinese city?

SMIRSKY: “Then we have a
system that’s more brain-like in its complexity. And why should we
expect something less from a computational platform than we get from a
brain here?”

Whether the complexity is
in the performance capacity (T2, T3) or in the internal processes
(whether dynamic or computational), there is not a clue of a clue as to
why or how any of it is felt, rather than just “functed.”

SMIRSKY: “your position
is that to get meaning we need both grounding… and feeling (being the
sense of knowing what we know).”

Actually, if you give me a
T3 robot, it’s enough if it feels anything at all; we don’t even need
to fuss about *what* it feels…

SMIRSKY: “Here your
account reaches the unexplainable part then, right? But then it doesn’t
help much except to reafﬁrm a mystery.”

Correct; it reafﬁrms the
mystery. And pinpoints the mystery’s locus: explaining how and why a T3
(or T4) robot feels. And pinpoints also why it cannot be explained:
Because there is no more causal room in the (already successful) T3 or
T4 explanation (as long as one does not allow psychokinesis, for which
all empirical evidence to date is resoundingly negative).

SMIRSKY: “But what if a
different account of meaning (as in it being the outcome of a complex
layered and interactive system that resolves into associative picturing
and linking) can also tell us how we get feelings? Why wouldn’t such an
account, if it covers the bases, be preferable to one that leaves
something (feeling) out?”

Because unfortunately that
would not be an explanation of how and why we feel: it would just be a
Just-So story! What we would need would be a causal explanation — not
mentalistic hermeneutics on top of a complete causal explanation of
cognitive performance capacity.

SMIRSKY: “when we ﬁnd
meaning we do so through an associative process which links different
images in our minds…”

But how and why would “an
associative process which links different images” be a *felt* process?

SMIRSKY: “if mind
features are system level rather than process level, then you need the
right system, don’t you?”

Yes. But the way you ﬁnd
out that you have the right system is to make sure it generates T3
capacity: Then how do you make sure it feels? And what is the causal
role of the fact that it is felt rather than just functed (if it is
indeed felt)?

SMIRSKY: “[T3] certainly
does add an aspect… which is not computation… and… could be essential
for building up the contents of a synthetic mind without relying on a
dump process. But I see no reason… it would be essential to the
brain-like processes that perform the operations which, in the
aggregate, would have the features we recognize as understanding and so
forth.”

T3 adds essential capacities if we are trying to reverse-engineer a causal model that has
all of our cognitive capacities, Turing-indistinguishable from our own.
(The real objective is not to synthesize, but to explain causally: the
synthesis is just to test the causal powers of the explanation.)

If we insist on brain-like
processes, we can scale up to T4. But neither that — nor taking an
aggregate or “system” view of it all — explains why and how the system
(whether T3 or T4) feels (if it does), rather than just “functs,”
Turing-indistinguishably from a system that feels.

SMIRSKY: “I don’t think
any AI researcher thinks that there aren’t ancillary and undergirding
elements to any computational system. There is the platform, of course,
and then the avenues for feeding in information which could be sensory
and motor devices (if we want to give the entity a semblance of real
world existence a la what we experience) or data dumps and/or
information fed in piecemeal over time.”

This all assumes that the
lion’s share of the work is computational, and that the dynamic part is
trivial I/O. I think there’s no particular reason to believe that
that’s so. It more or less insists on computationalism despite the
contrary evidence, and trivializes sensorimotor capacity.

But never mind! Suppose
it’s *true* that T3 and T4 can be passed by a system that is mostly
computational: The explanatory gap (how and why does it feel) remains
just as gapingly wide.

SMIRSKY: “On the view I
have advanced, awareness, which is my take on your term “feeling”,
involves the interplay of subsystems, speciﬁcally including an entity
that has a sense of being what it is”

Is that a felt “sense of
being what it is” or just a functed sense? If felt, then the question
is already begged, by supposing a felt component without explaining how
or why it is felt.

SMIRSKY: “a subsystem
(or subsystems) within the larger system dedicated to differentiating
between internal and external inputs and classifying/grouping certain
internal inputs in a way that forms a picture of the entity FOR the
entity.”

Differentiation,
classiﬁcation, and even “pictures” (meaning: analogs of sensory
projections) are ﬁne. But why and how are any of them *felt* rather
than merely functed? (And in what sense are they more “for” the entity
— which is presumably the system itself — than are any of its other
adaptive T3 or T4 functions?)

Do you see how the
mentalism creeps in without any independent causal explanation? That’s
what makes this decorative hermeneutics (even if it’s true!), rather
than causal, functional explanation (of feeling).

And that’s what makes the
“hard” problem hard (indeed, by my lights, insoluble).

SMIRSKY: “This
subsystem, with its pictures, interacts on an ongoing basis with the
subsystems which picture the external environment in various dimensions
and aspects. Once you have the self subsystem (which works rather like
the others but just involves attending to different inputs), you get
awareness because the self subsystem, in interacting with the other
subsystems, manifests those interactions as what we recognise as
awareness.”

I am lost. The problem is
not the “self-subsystem,” nor any useful information to which it is
privy. The problem is with the fact (if it’s a fact) that any of this
is *felt*: How? Why? It sounds like all the functions you describe
would do their job just as well — in fact, Turing-indistinguishably
well — if they were all just executed (“functed”) rather than felt. So
how and why are they felt? (This is why “awareness” is a weasel-word:
it conﬂates accessing information with feeling something whilst so
doing. I think Ned Block has garbled this even further, by suggesting
that there are two kinds of consciousness, when what’s really happening
is that we’re conﬂating two things — only one of them a matter of
consciousness — in one: access and felt access. One mind/body problem
is surely enough!)

SMIRSKY: “Computational
complexity need not produce grounding since grounding is more likely,
on my view anyway, to be an outcome of the process that imputes meaning
than it is to be the underpinning of that imputation.”

I’m not sure what “imputes
meaning” means. (For me, “P” means something to a person if that person
can use “P” the way someone who understands “P” uses “P” *and* it feels
like something for that person to mean that “P”.)

I don’t really know what
computational complexity is (except in the formal complexity-theoretic
sense). Whatever computation it takes to pass T3, that’s the requisite
complexity; and in virtue of the fact that it passes T3, it is
grounded. Now the problem is to explain how and why it is felt (if it is
indeed felt).

SMIRSKY: “I think it
does, indeed, provide a basis for explaining feeling in this sense,
i.e., sufﬁcient complexity allows for a system that includes subsystems
capable of interacting in certain ways — one of which involves having a
sense of being a something and of being affected by those inputs which
are associated with other subsystems (those that capture and manifest
more external inputs).”

I understand the potential
functional utility of having a subsystem with such a privileged informational and causal role — but I do not understand how or why its functions are felt — or, if you really mean it as a homunculus, how and
why it feels.

Response to Stuart
W. Mirsky’s post on 2/20 at 13:59 Amen to your remarks at the end about
the terminological difﬁculties in discussions like this one! And I
accept that my suggestion about how to regiment “robot” was ill
considered. However, I do think that the fact that a robot could
connect its words both to words in conversation and to things it
encounters in the world would not entail that it had any consciousness
whatsoever. Since I think that’s a possible entity, I think it’s ok to
introduce a term for it. As an abbreviation for “feelingless robot”
I’ll introduce “Frobot”. Ability to do well in Turing’s imitation game
is an interesting property. What I think Searle’s “Minds, Brains and
Programs” showed is that having *this* interesting property is not
sufﬁcient for having another interesting property, namely, ability to
connect words to things (either in detection or action). For that, you
need at least a Frobot – and I’ve been proposing that a Frobot is
sufﬁcient for this latter ability. Ability to be conscious is a third,
distinguishable ability. The fact that Searle himself is unclear about
what his argument in MBP actually showed should not lead us to fail to
make this important distinction. Of course, there might be some good
argument that shows that a fully capable Frobot (like my shopping
assistant) would *have* to also be conscious. Such an argument would
have to show that in aligning inputs from detection devices with the
program that makes good word-word connections (and so provides success
at the imitation game) and in aligning both of these with outputs to
effector devices, we would thereby, necessarily, produce the causes of
consciousness. I don’t think we have any idea of how such an argument
would go. I’ve avoided “intelligence” and “understanding” here. I hope
the result will be clarifying, and that I haven’t inadvertently
introduced some further verbal difﬁculty.

Response to Stevan
Harnad’s post on 2/20 at 15:29 Harnad: “*We* are feeling robots . . .
.” Robinson: I’ve conceded (in my reply to Mirsky’s post of 2/20,
13:59) that my suggestion about the regimentation of “robot” was ill
considered. And I’ve offered “Frobot” as an abbreviation for
“Feelingless robot” (since I think we need *some* term for a device
that has certain interesting abilities, but is not conscious). But I
think that to say that *we* are robots is to simply make “robot” a
useless word. One may hold that we are nothing but material things, but
that doesn’t make us robots. In philosophy and literature, robots
(whether Frobots or robots that also have consciousness) have been
understood to be electromechanical devices. Harnad: “Searle’s basis for
concluding that he didn’t understand Chinese would be the same as mine:
When I hear Chinese (or see it written) I have no idea what it means.
It feels like I’m listening to meaningless vocalizations . . . .”
Robinson: The feeling of not understanding might be illusory. Of
course, that would be very strange! But so are reactive dissociation in
pain phenomena, being under the illusion that one’s leg does not belong
to one, Capgras syndrome, etc. The way Searle knows he doesn’t
understand Chinese is that if he thinks “Maybe if I write ‘I’d like a hamburger,’ they’ll send me one” nothing happens – he has no reason (and
knows he has no reason) to make one mark rather than another on his
paper, and if he were to just write something by guessing, it would not
produce the sending in of a hamburger. Harnad: “. . . if Turing did
just mean T2, and passing via computation alone, then Turing was
mistaken . . . .” Robinson: But Turing can be read as not having been
mistaken about his own project. Instead, he can be taken to have been
interested in the possibility of machine intelligence, which he
conceived of as the property needed to successfully carry on an
unrestricted conversation. I don’t see the point of the reference to
Gallup polls. The imitation game is not remotely like polling. Of
course, you and I are both interested in two additional properties:
groundedness of symbols, and consciousness. I just think that progress
with these depends on recognizing their distinctness, and keeping their
distinctness clear as we go. Harnad: “The shopping assistant robot is a
toy; it already fails T2 and T3 . . . .” Robinson: I don’t see the
reason for this claim. As to T2, it converses intelligently; as to T3,
it applies the right words to objects in its sensors and to actions and
events that take place within its detector ﬁelds; and it suits its
actions to its words. You can call it a toy, but it’s a toy with two
really interesting properties (i.e., conversational ability and correct
word-world connections). The aim to build a robot like my shopping
assistant does not analytically entail the aim to build a device that
instantiates the causes of consciousness. I don’t see any reason to
think that satisfying the ﬁrst aim would automatically guarantee that
we’d satisfy the second. But I agree with you that our discussion would
be more interesting and valuable if we could identify what the causes
of consciousness are. I’ve argued in the reference below that they’re
to be found in properties of patterns of neural events. So, my view is
that it’s likely that a device like my shopping assistant could be made
*without* also instantiating in it anything that has patterns of
activity like those in our neural events. That is, it could do well in
conversation and suit its actions to its words, and yet not have the
causes of consciousness and, therefore, not be conscious. Reference: W.
S. Robinson (2004) _Understanding Phenomenal Consciousness_ (Cambridge:
Cambridge University Press).

Response to
Richard Brown’s post of 2/20 at 20:05 Brown: “(. . . given that I have
had a conscious experience of red I will be able to deduce from some
physical description that it is like seeing red for the experiencer
that has it) and this makes qualitative facts physical.” Robinson: What
we might (I’d even say, probably will) be able to do in the future is
to make inferences of the following kind.

1. Whenever I have a neural event of kind K, I have a red experience, and I never have a K event without having a red experience.

2. Jones is having a neural event of kind K. Therefore,

3. Very probably, Jones is having a red experience.

But the availability of
arguments like this one does exactly nothing to explain *why* 1. is
true. And 1. can just as well be true if the redness of experiences is
a different property from the K-ness of neural events (and different
from every other physical property). So, it would not follow from the
availability of arguments like the above that qualitative facts are
physical facts. Now, if we had an argument like the following, we’d
really be cooking:

a. The K-ness of a
neural event entails the occurrence of a red experience.

b. Jones is having
a K event. Therefore,

c. Jones is having
a red experience. But now we’d have to support a. That, however, is one
way of posing the Hard Problem.

Yes it is! From the
video, but I suppose you asked us not to quote that. Sorry. It just
came to me as I was writing. I thought that part of your thesis was
that there is no true cognition without feeling and so a T3 system that
lacked feeling would not really be thinking. Is this not the case?

Harnad: “Counting
properties, or “kinds” of properties (“property dualism”) is just
numerology or taxonomy.”

I don’t feel this way,
but given that, you shouldn’t mind being a property dualist. If one starts from a certain position, viz. that it is reasonable to expect
macro level truths to be entailed by some relatively small set of micro
level truths, then a property dualist is just one who thinks that there
are properties that exist but that are not entailed by a vocabulary
restricted to that of an idealized physics. It seems to me that you are
a property dualist of this sort, or at least you shouldn’t be bothered
if people say that you are.

Harnad: “Feeling will
remain a take-it-or-leave-it property in any Turing explanation, and
that means the explanation will never be causal — *except* if
psychokinesis were to turn out to be true, and feeling (doing things
because you feel like it) were to turn out to be an extra, independent
causal force in the universe.”

These are the kinds of
things that make it sound like you do endorse property dualism. These two suggestions correspond to epiphenomenalism and interactive (property) dualism. Note also that zombies and the like are not thought
of as real possibilities for our world but as a test case for whether
or not you think phenomenal consciousness is entailed by a completed
physics. But, I guess the overall point I was trying to make is that
you seem to be dismissive of conceivability arguments but then you seem
to give one in support of your claim that there will ultimately be an
epistemic gap. It seems at least conceivable to me that, say, the higher-order thought theory of consciousness turns out to be the right account of phenomenal consciousness. Suppose, just for the sake of
argument, that it is conceivably the correct view. Then we may reason
as follows. Pain is the painful stuff, the painful stuff is some
higher-order thought, these higher-order thoughts just are states of
the dorsal lateral PFC. If this was true then we could conclude that
the painful stuff just was the dlpfc states. Why should those states be
the painful stuff? This question is answered by the higher-order
theory. Conscious pains just are the ones which are painful for me to
have. I have a conscious pain when I am aware of myself as being in
pain. That is what explains why it feels like something for me. Now,
you may not like this theory or its proposed explanation but it is
certainly not an objection to it to say that “Feeling will remain a
take-it-or-leave-it property in any Turing explanation, and that means
the explanation will never be causal,” since, at least on an account
like David Rosenthal’s, consciousness doesn’t have much causal impact
in the ﬁrst place. He does not think that consciousness is
epiphenomenal, quite, but most of the functioning of my mental life
continues on without consciousness. So on the higher-order thought
theory one can take away the feeling and leave the system (*mostly*) undisturbed. Given this, the kind of argument you produce doesn’t show
that consciousness can’t be computational. Rather it shows that you do
not ﬁnd the theories which would allow the kind of explanation you ask
for to be very effective. But what we need are independent arguments
against those theories of consciousness.

I just saw William’s
comment. I agree that what we need is the second kind of argument. That
is exactly where a theory of consciousness, like the higher-order
theory, would come in. For instance premise A might be defended with
‘the appropriate higher-order state’ substituted for ‘K-ness’…

R.BROWN: “Yes… ‘Turing
Machines can explain all that is explainable but do not explain
everything there is’… is [a quote] from the video”

I knew that couldn’t
possibly have been a quote, because I argue the opposite, so I went
back and checked the video, and what I said was more like the
following: “Turing’s contribution was that once you can explain
everything you can *do* with your cognitive capacity, you have
explained all that is explainable, but you have not explained
everything there is…” Big difference. It was about whatever system
successfully passes the Turing Test — which is not necessarily just a
Turing Machine (computer). So I was talking about causal explanation,
not necessarily just computational explanation. (I do agree with the
strong Church/Turing thesis, though, according to which a universal
Turing Machine can simulate just about any physical system and just
about every physical process.)

R.BROWN: “I thought that
part of your thesis was that there is no true cognition without feeling
and so a T3 system that lacked feeling would not really be thinking. Is
this not the case?”

Yes.

R.BROWN: “a property
dualist is just one who thinks that there are properties that exist but
that are not entailed by a vocabulary restricted to that of an
idealized physics. It seems to me that you are a property dualist of
this sort, or at least you shouldn’t be bothered if people say that you
are.”

I’m not bothered, but all
I’m really saying is that we can’t give a causal explanation of how and
why we feel…

R.BROWN: “Note also that
zombies… are not thought of as real possibilities for our world but as
a test case for whether or not you think phenomenal consciousness is
entailed by a completed physics.”

I seem to manage to make my
small point without needing ‘em, though…

R.BROWN: “that you seem
to be dismissive of conceivability arguments but then you seem to give
one in support of your claim that there will ultimately be an epistemic
gap.”

Where’s my conceivability
argument? I’m saying we have no causal explanation of how and why we feel (or of how and why anything does), and that we are not likely to get one, because (except if psychokinetic forces existed, which they do not) feelings cannot be assigned any independent causal power in any causal explanation. Hence the causal explanation will always work just as well
with them or without them. Hence attributing feelings to whatever
system we successfully reverse-engineer (T3 or T4) will just be a
hermeneutic exercise, not an explanatory one — *even if the
interpretation is true*!

R.BROWN: “It seems at
least conceivable to me that, say, the higher-order thought theory of
consciousness turns out to be the right account of phenomenal
consciousness.”

And it certainly does not
seem to me that any “higher-order thought theory of feeling turns out
to be the right account of feeling.” That’s just (to me) trading on the
(usual) equivocation between accessing and feeling: Till further
notice, all accessing is unfelt accessing — until the causal role of
feeling itself has been independently explained.

R.BROWN: “Pain is the
painful stuff, the painful stuff is some higher-order thought, these
higher-order thoughts just are states of the dorsal lateral PFC. If
this was true then we could conclude that the painful stuff just was
the dlpfc states. Why should those states be the painful stuff? This
question is answered by the higher-order theory. Conscious pains just
are the ones which are painful for me to have. I have a conscious pain
when I am aware of myself as being in pain. That is what explains why
it feels like something for me."

All I see is DLPFC activity
(and whatever its actual causal role is in generating T3 or T4
performance capacity), plus a claim that DLPFC activity is felt (pain).
Now I want to know how and why DLPFC activity (and all its causal
doings) is felt, rather than just done. It’s correlated with and
predictive of pain reports? Fine. But correlation is not causation, and
certainly does not answer our question of why pain needs to be felt
rather than just “functed” (executed). It ﬁts with a “theory of higher
order thought”? Then I ask the same question of the higher-order theory
of thought: why does pain need to be felt rather than just functed, to
do its job? (And, I’d add, be careful not to be too fancy about your
higher-order thought theory of pain, lest you price out the amphioxus
and invertebrates, who seem to have their lowly but perhaps equally
aversive “ouch”‘s…)

R.BROWN: “Now, you may
not like this theory or its proposed explanation but it is certainly
not an objection to it to say that “Feeling will remain a
take-it-or-leave-it property in any Turing explanation, and that means
the explanation will never be causal,” since, at least on an account
like David Rosenthal’s, consciousness [feeling!] doesn’t have much
causal impact in the ﬁrst place.”

That may well be, but that
just makes it all the harder to give a causal explanation of it…

R.BROWN: “He does not
think that consciousness [feeling] is epiphenomenal, quite, but most of
the functioning of my mental [internal] life continues on without
consciousness [feeling]. So on the higher-order thought theory one can
take away the feeling and leave the system (*mostly*) undisturbed.”

I would say that was
(*mostly*) bad news for the higher-order thought theory — at least as
an explanation of feeling. (And I would say the news was all bad,
because I am sure that if the higher-order thought theory commits
itself to any realistic adaptive contingencies, it will turn out that
the feeling will prove not just mostly but entirely superﬂuous.)

R.BROWN: “Given this the
kind of argument you produce doesn’t show that consciousness can’t be
computational.”

I argued that (1) cognition
(e.g., understanding) cannot be just computational (T2 passed by a
computer program), but that (2) even for hybrid computational/dynamic
systems that can pass T3 and T4, we cannot explain how and why they
feel.

R.BROWN: “Rather it
shows that you do not ﬁnd the theories which would allow the kind of
explanation you ask for to be very effective. But what we need are
independent arguments against those theories of consciousness
[feeling].”

It is not just that I ﬁnd
all theories I’ve heard to date ineffective: I ﬁnd them *ineffectual*,
because (if they are Turing-like performance-generating theories at
all, rather than just hermeneutic hand-waving), they try to attribute
causal power where it adds nothing, and where the causation works
equally well with or without it.

Professor Harnad, you
have said that: “[T]he point of Searle’s Chinese room argument, and of
the symbol grounding problem, is that without sensorimotor grounding
(“physical embedding”), computation is nothing but the ruleful
manipulation of meaningless symbols. Therefore cognition (not just
“reasoning”: *cognition*) is not just computation.” However, isn’t it
the case that Searle’s argument was rather about understanding, which
by your deﬁnition above is “cognitive capacity + feeling”? The point
would then be that feeling (or consciousness) is not just computation.
I believe Searle would go further than that. He would say that, unlike
consciousness, computation is an observer-dependent phenomenon, not
something that one could discover in nature. That anything of sufﬁcient
complexity can be designated, interpreted and used as a computer. I
understand in this context that any computer can be represented by a
Universal Turing Machine, which is a notional machine, a read-write
head, a tape, a program of instructions. And you I think have said that
there is only one kind of computation, and I take it that that is it?
In that case, as I understand it, the implication of Searle’s argument
is that feeling or consciousness is not computation at all. I would
like to ask therefore how we are to interpret computation in your
sentences: “The brain and body certainly are not a computer plus
peripherals; there’s not just [implemented] computations going on in
the brain, but a lot of analog dynamics too.” and “We are dynamical
systems — probably hybrid analog/computational.” I have to go now but I
would like to come back to say something about sensori-motor grounding
and grounding generally.

B.RANSON: “isn’t it the
case that Searle’s argument was rather about understanding, which by
your deﬁnition above is 'cognitive capacity + feeling'?"

Cognition is cognitive
capacity + feeling.

Understanding,
intelligence, meaning — are all instances of cognition.

B.RANSON: “The point
would then be that feeling (or consciousness) is not just computation.”

No, the points are (at least) two:

(1) Cognition is not just
computation. (You can pass T2 with computation alone, without
understanding.) Moreover, T2 is ungrounded. Solution: scale up to T3
(sensorimotor robot).

(2) Whether for T2 or T3 or
T4, and whether for computation or hybrid dynamics/computation, there
is no explanation of how and why the system feels (if it does).

B.RANSON: “I believe
Searle would go further than that. He would say that, unlike
consciousness, computation is an observer-dependent phenomenon, not
something that one could discover in nature.”

That may well be, but it’s
neither here nor there for the two points above (1) and (2).

B.RANSON: “That anything
of sufﬁcient complexity can be designated, interpreted and used as a
computer.”

That is, I think,
demonstrably false, if it means anything at all. Moreover, it is
irrelevant to (1) and (2) above, which is what my essay was about.

B.RANSON: “I understand
in this context that any computer can be represented by a Universal
Turing Machine, which is a notional machine, a read-write head, a tape,
a program of instructions. And you I think have said that there is only
one kind of computation, and I take it that that is it?”

There is only one
formalization of what computation is: Turing’s, Church’s, Kleene’s and
Goedel’s all turned out to be equivalent.
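
(As a purely illustrative aside, here is a minimal sketch in Python of the kind of machine this formalization describes: a tape, a read-write head, and a finite table of instructions. The rule table below is a made-up example that merely appends a “1” to a unary numeral; it is meant only to make “a notional machine, a read-write head, a tape, a program of instructions” concrete, not to stand in for a universal machine.)

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: cell position -> symbol
    head = 0                          # the read-write head starts at cell 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Rule table: (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),   # scan rightward over the 1s
    ("start", "_"): ("1", "R", "halt"),    # append one more 1, then halt
}

print(run_turing_machine("111", rules))    # prints "1111" (unary 3 -> 4)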

B.RANSON: “In that case,
as I understand it, the implication of Searle’s argument is that
feeling or consciousness is not computation at all.”

Searle’s argument was about
whether cognition (for example, understanding) was computational.
Searle thought he had shown cognition was not computational *at all*,
but all he really showed was that cognition was not *all computational*.

B.RANSON: “I would like
to ask therefore how we are to interpret computation in your sentences:
“The brain and body certainly are not a computer plus peripherals;
there’s not just [implemented] computations going on in the brain, but
a lot of analog dynamics too.” and “We are dynamical systems — probably
hybrid analog/computational.” I have to go now but I would like to come
back to say something about sensori-motor grounding and grounding
generally.”

Interpret it as pointing
out that (contrary to Searle!) (a) not every physical (i.e., dynamical)
system is a computer (although the strong Church/Turing thesis is
correct, that just about every dynamical system can be formally
*simulated* by a computer) and (b) we — as well as all other T3 and T4
robots — are not just computers either: We are hybrid systems, partly
dynamic, partly computational, and both are essential for cognition.
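
(Again as an illustrative aside, a minimal Python sketch of the simulation point: a computer can formally simulate a simple dynamical system, here a damped spring integrated in discrete time, without thereby being one. The parameter values are arbitrary and chosen only for the example; the program manipulates numbers standing in for positions and forces, which is the sense in which simulation differs from implementation.)

def simulate_spring(x0=1.0, v0=0.0, k=4.0, damping=0.5, dt=0.01, steps=500):
    # Discrete-time (Euler) simulation of a damped spring: symbols standing
    # in for position and velocity are updated step by step; no mass, spring,
    # or force is thereby implemented.
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = -k * x - damping * v       # acceleration from spring + damping
        v += a * dt
        x += v * dt
        trajectory.append(x)
    return trajectory

positions = simulate_spring()
print(round(positions[0], 4), round(positions[-1], 4))   # the oscillation decays toward 0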

Thanks for the
thoughtful response, Professor Harnad. I will offer just a little more
to further clarify our respective positions: SMIRSKY: “[Searle] shows…
that the fundamental constituent elements of any computational system
are not capable of cognition” S.HARNAD: I agree: Computation alone (any
computation) is not enough to generate cognition. In other words,
cognition is not all computation. SWM: I don’t think the agreement is
as complete as suggested. My position is that the CR shows that the
constituent elements (the computational processes) of the CR are not,
in themselves, capable of cognition and they are not in the limited
arrangement characterized by the CR’s specs. But where you and I seem
to differ is that you’re saying (if I’ve got you right) that what needs
to be added is something that isn’t computation, whereas I am saying
more computational processes doing more things in an interactive way
could conceivably work if they are doing the right things and arranged
in the right way. Now it can be argued (as some have) that adding
parallelism to the conﬁguration (as I think is presumed by this
position if we are to adequately mimic brains) is to go beyond pure
computation, but I think that’s an odd claim (and hope it’s not yours).
After all, parallel processing involves the arrangement and linkage of
lots of processors doing serial processing in the way computational
processes happen. So yes, there are some hardware requirements here
which aren’t needed in straight serial processing but they don’t change
the qualitative nature of the system. ******* SMIRSKY: “If we expect to
ﬁnd understanding somewhere in [T2 computation's] particular
constituent processes we must end up disappointed. But if we are
expecting to ﬁnd it at a system level, in the way those constituent
processes interact in a particular setup (say the infamous Chinese Gym
or, as I would prefer to put it, a Chinese city where each room in each
building on each street is a processor doing some particular thing but
interfacing with and affecting many others) then we may certainly get
luckier.” S.Harnad: I am afraid these analogies don’t help me at all,
in explaining (or understanding the explanation) of how and why we
feel! SWM: The reason it helps me to see what apparently it does not
help others (including yourself) to see has to do with the analysis of
semantics and the role of complexity. If semantics (meaning) includes
the sense of being aware (your “feeling”) then awareness can, I
believe, be fully explained by a model that includes a world picturing
“machine” (like brains) which develop, maintain and use multiple
pictures or mappings based on different classes of inputs. As I’ve
mentioned elsewhere, if you have a self (or selves) consisting of one
or several categories of inputs and a world consisting of several other
categories or inputs, and the self subsystem represents the full entity
while the world subsystems represent the external environment and the
two systems relate to one another in a way that involves the full
entity operating in the environment, then you have the basic
ingredients for being aware. Actual awarenesses will vary depending on
the particular elements included (is it a vision dominated system, an
auditory dominated system, and so forth?). Even the occurrence of
emotions and physical sensation is probably optional on this view. But
the awareness, I am suggesting, could be sufﬁciently explained in terms
of the make-up and interaction of these two classes of subsystem within
the overarching system. I agree that we typically look into ourselves
and seem to see something else, something mysterious and different from
the rest of what we experience. But what we seem to see doesn’t have to
be what is actually there. S.Harnad: [The "Chinese Gym" argument (CGA),
by the way, was a dreadful anticlimax to the Chinese Room argument
(CRA). The CRA, against pure computation, passing T2, is perfectly
valid, SWM: I disagree. The system argument is compelling once we get
past the confusion of thinking that its claim depends on the CR being
taken to have understanding. Its force lies in the argument for a
generic system approach, not in arguing for the CR system to be what it
manifestly is not. The Chinese Gym/City (the connectionist model)
simply says that to get minds like our own on a computational platform,
you need a system that is sufﬁciently large and complex and can mimic
brain operations. But this still does not go beyond computation per se.
A fellow over in England, Peter Brawley, suggested the best name for
this version of the System Reply to me when, in a discussion, he came
up with the point that expecting the CR to understand is like building
a bicycle and then wondering why it doesn't ﬂy. Since then I have
tended to refer to this version of the system reply as the Bicycle
Reply (a variant which gets at the need to think in terms of systems
and not the CR system in particular). But I agree that the
computational processes themselves do not understand and cannot be
expected to evidence understanding and that's because the feature(s)
we're looking for is a system level feature, not one that we should
expect to see occur on the level of the system's constituents. S.Harnad
(continuing): and Searle easily rebuts the "System Reply" (that not
Searle, but the system as a whole, is the one that is doing the
understanding) by memorizing the computer program and doing all the
computations in his head: He becomes the whole system, and can thereby
say, truly, without remainder, that he does not understand (and there's
nothing and no one else). SWM: The problem with that is that on this
view he is still not the system. Where, in the original CR, he was a
component in the system (one of its constituents), in his rejoinder to
the System Reply, the system has become a component within him. Just as
we don't know what all our own component organs and systems are doing
in our bodies (including our brains), why should Searle's ignorance of
Chinese while running a system in himself argue against that system's
capacity for comprehension? S.Harnad: But consider the CGA, which is a
variant of the CRA, meant to show that even if the T2-passing system is
a neural network rather than computation, it still does not understand.
A neural net is a parallel, distributed network of interconnected
units, passing activation to one another. So Searle asks us to imagine
a bunch of boys in a gymnasium playing the role of the distributed
units, passing papers. Searle waves his hands and says there's
obviously no understanding going on there. But that's not at all
obvious, because here the "System Reply" would apply: The boys don't
understand, but "the system" does. SWM: Yes.

S. Harnad: And,
unlike in the CRA, Searle cannot himself become the system -- if the
parallelness and distributedness of the units are essential to the
neural net's success in passing T2, because those are not
implementation-independent, purely computational properties. SWM: But
they don't change the qualitative nature of the processes which are the
same as in the CR, only now more is going on. If the CRA isn't about
what those processes can do in terms of producing understanding, it's
pointless. S.Harnad: But there is a simple solution to show that the
parallelness and distributedness are not essential to the neural net's
success in passing T2. Simply simulate the neural net computationally.
That's a symbol system that Searle can memorize, and thereby become the
system. And it produces exactly the same T2 I/O performance as the
neural net. And Searle -- now again become the whole system -- does not
understand. QED. SWM: If Searle contains the system it doesn't follow
that he must also be co-extensive with the system. S.HARNAD: But only
with the proviso that the noncomputational aspects -- parallelness and
distributedness -- are inessential to the T2 success, and we stick to
T2. As soon as we move up to T3, sensorimotor transduction becomes
essential, and neither the computational component, nor Searle's
simulation of it, can be the whole system. Hence T3 and the hybrid
system that can pass it are immune to Searle's CRA.] Harnad, Stevan
(1993) Grounding Symbols in the Analog World with Neural Nets.
Think 2(1): 12-78 (Special Issue on “Connectionism versus Symbolism”,
D.M.W. Powers & P.A. Flach, eds.). Reprinted in Psycoloquy
12(034-063). SWM: I don’t think you need to go that far to show
that Searle’s response to the System Reply is inadequate. It’s enough
to show that containing a system is not the same as being co-extensive
with it. Moreover, I don’t see a need to go beyond computational
processes running on computers to achieve synthetic understanding.
Whatever may actually be needed in fact to achieve a real world
operating model (such as embedding in the world) is ancillary to the
question of what it takes to produce understanding in principle.
S.Harnad: But I’m afraid that your “Chinese city” analogy is as
unavailing as Searle’s Chinese Gym, for much the same reason. The
problem, as always, is of course the “other minds problem”: The only
way to know for sure whether a system has a mind (feels, understands,
whatever) is to *be* the system. And normally, the only system you can
be is yourself. SWM: This mixes up what it would take to do it vs. what
it would take to test for it. Of course, as I think you mentioned
elsewhere, we never know in the usual sense of “know” that others have
minds like we do or that they have minds at all. I favor the
Wittgensteinian solution to this “problem” but in a testing regimen, it
certainly remains an issue. That’s overcome, however, when we realize
that all we are required to do is to test in the way we operate in the
real world and that’s to interact with the tested entity in terms which
match its capabilities. If it has only “e-mail” capabilities, then one
type of test obtains. If it can move about in the world, act, and so
forth, then another. The test would need to be open ended, of course,
as you say. But it would demand no more of the subject entity than we
demand of every other person on this planet with whom we may happen to
come into contact. S.Harnad: The only exception is T2, when passed by
computation alone. This is what I’ve called “Searle’s periscope,”
penetrating the normally impenetrable other-minds barrier. And the
reason Searle’s periscope works in this one special case is that
computation is implementation-independent. Therefore, if something is
really a purely computational property, then any and every
implementation of that same computation will have that same
computational property. That’s how Searle can “be” the TT-passing
computational system, and thereby report back to us that it does not
give rise to the property of understanding! SWM: Except that Searle’s
argument doesn’t show that a sufﬁciently complex arrangement of CR
constituents can’t do what the CR can’t do; thus it does not show that
understanding can’t be a “purely computational property”. The problem
is that now we have hit a certain circularity in our discussion. You
say that Searle’s argument proves that understanding isn’t a
computational property but doesn’t work against a certain kind of
System Reply which goes beyond pure computation, while I say that it
doesn’t prove that and that the System Reply, itself, doesn’t go beyond
pure computation. Our problem is that we both likely agree that
something is required in the way of hardware to support a System Reply
of the Chinese City variety that goes beyond the CR model. I take it
that you think that what is required includes the mechanics that enable
a dynamical relation with the world. I certainly do not. But I do agree
that there would be a need for additional components (enabling linkages
between processors and so forth) in a system conceived on the Chinese
City model. S.Harnad: But that fails for the Chinese Gym. And it fails
also for the “Chinese City”: We know no more about whether a Chinese
City does (or doesn’t) understand than we know about whether Gaia or
the solar system does or does not understand. SWM: Of course the only
“Chinese City” we have is the brain and we know it does understand, at
least in some cases! S.Harnad: Besides, even if a Chinese city or the
solar system were to understand — or even, for that matter, if the
brain (which really does understand) or a T3-passing robot (which
probably does understand) — is like a Chinese city, we still haven’t a
clue of a clue as to why and how it understands. SWM: We do if the
thesis I’ve proposed, that it’s a matter of a certain kind of
complexity in arrangement is right. But how do we determine if it is
right? On a purely philosophical level the issue only requires that we
look for an explanation which best accords with other things we know
and which adequately accounts for all the things that need to be
accounted for. But that still doesn’t establish the truth of such a
thesis. For THAT we need to implement and test in the real world. And
such testing need only be designed to 1) accord with the subject
system’s capabilities and 2) look for responses that are evidence of
the presence of comprehension (or other features if we think these
may/should also be present). S.Harnad: Not, that is, if we don’t lose
sight of the fact that understanding is not just cognitive capacity
that is Turing-indistinguishable from that of someone who really
understands, but it also *feels like something* to understand. In other
words: Why on earth would it feel like something to be like a Chinese
city? SWM: Why on earth does it feel like something to be us? Do we
feel like our brains? If the Chinese City is an analogue for a brain,
then it may or may not be the entire entity, in which case what it
feels like to be it is best asked of it. Of course, the “Chinese City”
is just a metaphor for a model of what a brain is so there’s no reason
to think it feels like anything at all because metaphors don’t do that.
But genuine subjects would, of course. ******* SMIRSKY: “Then we have a
system that’s more brain-like in its complexity. And why should we
expect something less from a computational platform than we get from a
brain here?” S.HARNAD: Whether the complexity is in the performance
capacity (T2, T3) or in the internal processes (whether dynamic or
computational), there is not a clue of a clue as to why or how any of
it is felt, rather than just “functed.” SWM: I don’t quite know what
that means. Any subjective entity with the capacity of responding to
this question ought to be able to say this is what it’s like to be me
under the right circumstances. If we’re hypothesizing such an entity,
whatever its physical platform, why assume it would be unable to
respond from the get-go? Doesn’t that prejudge the answer? I think the
problem is that you are focused on the mysteriousness of being a
subject. But that may only be an artifact of language, that we are
equipped to speak about certain kinds of experiences but not others. If
you ask me what is it like to be me, what kind of an answer would I
give? I could say, well I see this or that, remember this or that, have
this or that emotion at this moment, grew up here, went to school
there, lived here, work there. You might then say, no, I meant to be
you instead of a bat or a rock or your pc. How would I answer such a
thing? Isn’t it enough that I CAN answer while the rock and pc can’t?
The bat, presumably, has something like what I have, a subjective life,
however different than mine, and in that sense I am prepared to grant
that it, too, is aware at some level and can imagine what kind of life
it might live, how it would experience the world. But then I can do
that for a computer that has elements of subjectiveness in its
behaviors as well, can’t I? ****** SMIRSKY: “your position is that to
get meaning we need both grounding… and feeling (being the sense of
knowing what we know).” S.HARNAD: Actually, if you give me a T3 robot,
it’s enough if it feels anything at all; we don’t even need to fuss
about *what* it feels… SWM: Right. Same for the non-robotic computer,
no? SMIRSKY: “Here your account reaches the unexplainable part then,
right? But then it doesn’t help much except to reafﬁrm a mystery.”
S.HARNAD: Correct; it reafﬁrms the mystery. And pinpoints the mystery’s
locus: explaining how and why a T3 (or T4) robot feels. And pinpoints
also why it cannot be explained: Because there is no more causal room
in the (already successful) T3 or T4 explanation (as long as one does
not allow psychokinesis, for which all empirical evidence to date is
resoundingly negative). SWM: But what if it can be explained (say as I
have explained it)? Why afﬁrm a mystery as a mystery without ﬁrst
ascertaining that it cannot be de-mystiﬁed? This, of course, brings us
to the question of whether my approach works to explain the “mystery”.
I am assuming that, since you have had a chance to read my explanations
as to what I mean by “complexity” and how that would work to yield
awareness, you don’t think it works. But then we would need to see if
you have deﬁnitively undermined my claim or merely denied it in favor
of yours. ******** SMIRSKY: “But what if a different account of meaning
(as in it being the outcome of a complex layered and interactive system
that resolves into associative picturing and linking) can also tell us
how we get feelings? Why wouldn’t such an account, if it covers the
bases, be preferable to one that leaves something (feeling) out?”
S.HARNAD: Because unfortunately that would not be an explanation of how
and why we feel: it would just be a Just-So story! What we would need
would be a causal explanation — not mentalistic hermeneutics on top of
a complete causal explanation of cognitive performance capacity. SWM:
If by “causal” you mean an explanation of what brings what about, an
explanation in terms of how computer processes produce features that
look like the features we have which constitute our mental lives, then
why isn’t this causal in the way we need? If you mean some set of laws
a la the laws of physics, this is not excluded though perhaps not
essential since this isn’t physics but a different phenomenon which
could, conceivably, require something parallel to the already known
physical laws or something that expands them. At the very least, the
fact that subjectness is a different aspect of our experience than the
forces and matter we encounter in the external world in which we exist
may just be enough to explain why the “causal” answers should not be
expected to be found in chemistry or
physics or astronomy and so forth. As our friend Searle likes to put
it, consciousness is biologically based. Maybe what’s needed is an
understanding of the dynamics of this particular aspect of biology,
i.e., what brains do to make minds. But then that’s the point of
cognitive science. *********** SMIRSKY: “when we ﬁnd meaning we do so
through an associative process which links different images in our
minds…” S.HARNAD: But how and why would “an associative process which
links different images” be a *felt* process? SWM: The feltness is a
function, on this view, of the interplay of layered subsystems in the
overarching operating system of the brain. SMIRSKY: “if mind features
are system level rather than process level, then you need the right
system, don’t you?” S.HARNAD: Yes. But the way you ﬁnd out that you
have the right system is to make sure it generates T3 capacity: Then
how do you make sure it feels? SWM: You ask the right questions and do
the right observations. How do you know that I feel or I know that you
do? S.HARNAD: And what is the causal role of the fact that it is felt
rather than just functed (if it is indeed felt)? SWM: Multiple layered,
interactive subsystems within the overarching system. ****** S.HARNAD:
T3 adds essential capacities if we are trying to reverse-engineer a
causal model that has all of our cognitive capacities,
Turing-indistinguishable from our own. (The real objective is not to
synthesize, but to explain causally: the synthesis is just to test the
causal powers of the explanation.) If we insist on brain-like
processes, we can scale up to T4. But neither that — nor taking an
aggregate or “system” view of it all — explains why and how the system
(whether T3 or T4) feels (if it does), rather than just “functs,”
Turing-indistinguishably from a system that feels. SWM: Complexity =
multiple subsystems interactively operating in a layered way within a
larger system. ****** SMIRSKY: “I don’t think any AI researcher thinks
that there aren’t ancillary and undergirding elements to any
computational system. There is the platform, of course, and then the
avenues for feeding in information which could be sensory and motor
devices (if we want to give the entity a semblance of real world
existence a la what we experience) or data dumps and/or information fed
in piecemeal over time.” S.HARNAD: This all assumes that the lion’s
share of the work is computational, and that the dynamic part is
trivial I/O. I think there’s no particular reason to believe that
that’s so. SWM: And no particular reason to believe you have to add
dynamic interactivity with the world (even you agree a data dump and
individual feeds as needed might be enough). Moreover, it looks like
adding on dynamic connections with the world is superﬂuous. One should
do it only if what one already has under consideration can’t work. But
Searle’s argument doesn’t demonstrate that it can’t. S.HARNAD: It more
or less insists on computationalism despite the contrary evidence, and
trivializes sensorimotor capacity. SWM: If computationalism covers
everything (as I argue — not insist — that it does), why add more to
the explanation? S.HARNAD: But never mind! Suppose it’s *true* that T3
and T4 can be passed by a system that is mostly computational: The
explanatory gap (how and why does it feel) remains just as gapingly
wide. SWM: I don’t see any explanatory gap if the account of a certain
kind of complex system covers everything else and I think it does.
******** SMIRSKY: “On the view I have advanced, awareness, which is my
take on your term “feeling”, involves the interplay of subsystems,
speciﬁcally including an entity that has a sense of being what it is”
S.HARNAD: Is that a felt “sense of being what it is” or just a functed
sense? If felt, then the question is already begged, by supposing a
felt component without explaining how or why it is felt. SWM: Depends
on what “felt” is. Let’s look inside ourselves. Discount emotional and
sensory impressions right off because they’re not what you have said
you’re talking about. What’s left is awareness of this or that. What
does this awareness consist of? Well, it looks like it’s a capacity in us
to associate inputs with an array of stored representations/images.
These are arranged in different groupings. Some involve our sense of
our physical selves, some involve our individual histories (how we were
raised, what experiences occurred to us, how we were schooled, what
kinds of work we’ve done, etc.). Some involve the things we know about
the world, both in our immediate locations, our general locations, the
places in which we live, work, etc., and about the organisation and
geographical information about our planet. The subsystems having to do
with our persons connect in one way, those dealing with the world in
another and all the groupings interact with one another. What you call
feeling and I call awareness can be explained, it looks like, as the
occurrence of instances of recognizing an input in relation to these
subsystems. Not only do we get understanding this way but along with
it, when the self subsystems are pulled into the interaction, we get
impacts on ourselves. And here is all you need to ﬁnd to account for
awareness. ****** SMIRSKY: “a subsystem (or subsystems) within the
larger system dedicated to differentiating between internal and
external inputs and classifying/grouping certain internal inputs in a
way that forms a picture of the entity FOR the entity.” S.HARNAD:
Differentiation, classiﬁcation, and even “pictures” (meaning: analogs
of sensory projections) are ﬁne. But why and how are any of them *felt*
rather than merely functed? (And in what sense are they more “for” the
entity — which is presumably the system itself — than are any of its
other adaptive T3 or T4 functions?) SWM: The entity has various
pictures, many of which relate to a larger subsystem which it
recognizes as itself while others relate to the subsystems that
constitute its mapping of the world outside its “self”. S.HARNAD: Do
you see how the mentalism creeps in without any independent causal
explanation? That’s what makes this decorative hermeneutics (even if
it’s true!), rather than causal, functional explanation (of feeling).
SWM: That we have to refer to the mental when explaining the mental
should not be surprising and is not creeping “mentalism”. If the point of an
explanation is to say how something happens, is brought about, is
caused, then an account that explains how a sense of awareness occurs
is what’s needed. Otherwise all we are doing is holding out for the
mystery which feels nice (it makes us seem special in the universe) but
is not necessarily the way things really are. S.HARNAD: And that’s what
makes the “hard” problem hard (indeed, by my lights, insoluble). SWM:
By my lights the “Hard Problem” is an illusion, a function of our
desire not to reduce the phenomenon in question to physicalistic terms.
****** SMIRSKY: “This subsystem, with its pictures, interacts on an
ongoing basis with the subsystems which picture the external
environment in various dimensions and aspects. Once you have the self
subsystem (which works rather like the others but just involves
attending to different inputs), you get awareness because the self
subsystem, in interacting with the other subsystems, manifests those
interactions as what we recognise as awareness.” S.HARNAD: I am lost.
The problem is not the “self-subsystem,” nor any useful information to
which it is privy. The problem is with the fact (if it’s a fact) that
any of this is *felt*: How? Why? SWM: The issue is what is it to be
“felt”? What is feeling in this way? Above I have endeavored to explain
it as the recognition of a relation between external non-self and
internal self-related representations. Of course there is no phenomenon
of relation but there is the recognition of how one thing affects
another. Insofar as comprehension is seen to occur with the occurrence
of various associations between inputs and stored representations, the
feeling you are focused on occurs when CERTAIN relations (those between
some of the external and some of the internal subsystems) occur. They
are just THOSE associations. S.HARNAD: It sounds like all the functions
you describe would do their job just as well — in fact,
Turing-indistinguishably well — if they were all just executed
(“functed”) rather than felt. So how and why are they felt? (This is
why “awareness” is a weasel-word: it conﬂates accessing information
with feeling something whilst so doing.) SWM: It’s a word we have in
ordinary language. But why should it be more weasely than “feeling” in
this context? Both have other meanings associated with them. *****
SMIRSKY: “Computational complexity need not produce grounding since
grounding is more likely, on my view anyway, to be an outcome of the
process that imputes meaning than it is to be the underpinning of that
imputation.” S.HARNAD: I’m not sure what “imputes meaning” means. SWM:
If there is meaning and not everything has it and not all the time then
somehow what has it must get it. If we agree that a thing has meaning
when that meaning is assigned to it by a subject, then imputing is as
good a term as any for this operation. S.HARNAD: (For me, “P” means
something to a person if that person can use “P” the way someone who
understands “P” uses “P” *and* it feels like something for that person
to mean that “P”.) SWM: So meaning is grounded + feeling like, and
something means something when it is grounded and someone feels (what?
recognition?) that it is so grounded? S.HARNAD: I don’t really know
what computational complexity is (except in the formal
complexity-theoretic sense). SWM: I’ve already described it. It’s more
things going on in interactive ways. But only certain complex systems
are doing it in the right way. S.HARNAD: Whatever computation it takes
to pass T3, that’s the requisite complexity; and in virtue of the fact
that it passes T3, it is grounded. Now the problem is to explain how and
why it is felt (if it is indeed felt). SWM: On my view, passing T3
includes feltness but the testing of course must look for that as well.
How? By formulating questions or challenges that require a sense of
self (with the consequent awareness of differences and impacts on the
self). **** SMIRSKY: “I think it does, indeed, provide a basis for
explaining feeling in this sense, i.e., sufﬁcient complexity allows for
a system that includes subsystems capable of interacting in certain
ways — one of which involves having a sense of being a something and of
being affected by those inputs which are associated with other
subsystems (those that capture and manifest more external inputs).”
S.HARNAD: I understand the potential functional utility of having a
subsystem with such a privileged informational and causal role — but I do
not understand how or why its functions are felt — or, if you really
mean it as a homunculus, how and why it feels. SWM: As noted, this
depends on what we mean by “felt”. I think that if we examine any
instance of feltness in ourselves, we will see that there is no special
feltness that is felt, that is occurring. It’s just a matter of
relations between some aspect of the self-picture and aspects of the
external pictures. What particular images/representations/sensations
are kicked up in the course of the given relations may and likely will
differ. The fact that some ARE kicked up leads us to think there is one
special image/representation/sensation there among all these others.
But why should there be? It’s enough that some ARE kicked up in the
course of these mental events. It’s the set of these relations with
their ever varying content that constitutes the occurrence of
feltness/awareness on this view. We don’t need to invoke the notion of
mystery at all.

I put in a little time
today watching the video presentation (new version) of this paper’s
argument for a second time. Near the end Professor Harnad poses his
challenge to his critics — provide an account of what he calls the
“feeling” part of understanding. He grants up-front that we can
explain, in fact or at least in principle, the causal factors
underlying human and other thinking entities’ actions, up to and
including how their brains do what they do (though full understanding
of this aspect is still a ways off). But, he concludes, there is no
hope for an explanation of the way something physical (whether brain or
computer) produces that sense of awareness he calls “feeling”. (I’m
presuming that if I’ve misstated this– not an impossibility — he’ll
correct as needed.) We’ve seen in these exchanges that this aspect of
his position continuously comes to the fore in his responses although I
think he’s a little less clear on what he means by “grounding” (it’s
not just connecting symbols by reference to physical elements in the
world but also operating effectively in the world on those elements).
But the crux of his argument seems to be that understanding (getting
the meanings of words and other references) requires the occurrence of
the right kind of responses through the entity’s engagement in the
physical world PLUS having the feeling of doing or at least getting it
(which is especially relevant in cases like the one my road trip story
recounts). It’s the “feeling” issue that seems to be the critical one
here. William Robinson has argued for the possibility of intelligent
zombies qua robots (I think that’s a fair formulation, anyway) and Josh
Stern, early in the discussion, has argued that there’s no reason to
look for something extra called “feeling” because a comprehensively
physical account will sufﬁce. My own view, which has been stated here
over the discussion’s course in increasing detail (but perhaps, as yet,
without adequate clarity) is that the idea of Stevan Harnad’s
“feeling”, which I prefer to call “awareness” (that “weasely” word!) is
about right, i.e., that John Searle was correct in focusing our
attention on the role of actually experiencing the understanding when
we understand. As I have already indicated, however, I think Searle
draws a mistaken conclusion from his CR thought experiment, because you
cannot generalize from the failure of an underspecced system to the
failure of all possible systems of the same type. But the Harnad
challenge remains outstanding after all the back and forth here: How to
account for the occurrence of the subjective aspect of understanding
which seems to be outside the realm of objective observation? If we
reject a view like Stern’s, that there is nothing left out in a causal
account which ignores subjectiveness (note that Stern doesn’t, as far
as I know, deny subjectiveness, only the need to account for it
separately from the physical description of what brains and any
equivalents do), then we do have to say how feelingness occurs as
Harnad demands of us. I’d like to suggest at this point that my
response, which invokes the complexity of a brain-like operating system
to account for both meaning and the feeling of understanding that
accompanies instances of human comprehension, does answer his challenge
though I expect he will not agree. At the least, the account I’ve offered is
rather complex and abstract and so hard to get hold of conceptually.
After all, how can we suppose that a computer, however massive and
however sophisticated its operations, ever really can understand as we
do? Aren’t all such entities going to end up as nothing more than
Robinson’s robots? Frobots? And if they weren’t, how, in Harnad’s way of
looking at this, could we hope to tell?

So near the end of this
discussion, what have we got from it? And has the Harnad challenge been
met?

I think we have probably
reached an impasse, and our exchanges are getting rather too long and
repetitive, so I won’t use quote/commentary here, even though I prefer
it.

I think we have one point
of disagreement, and this will not be resolved by further iterations:
You feel that a (“complex”) higher-order componential schema somehow
explains how and why we feel, whereas I cannot see that at all: It
seems to me just to be hermeneutics on top of an (unspeciﬁed) causal
theory of function.

I think there is also a
point of misunderstanding: You seem to think that the problem of
feeling has something to do with cognizers telling us what things *feel
like* (or telling us that they feel like something). But that’s not it
at all. It’s about explaining (causally) how and why they feel anything
at all (if they do).

SWM @ 2/21/2011
17:01 SWM: Josh Stern, early in the discussion, has argued that there’s
no reason to look for something extra called “feeling” because a
comprehensively physical account will sufﬁce. JS: I hope to post again
soon, decided that I should do a little actual composition ofﬂine, and
am ﬁnding the time. Just a quick note here regarding the “feeling”
issues, that apparently SH believes we can do without in the ﬁlling of
some gap that “grounding” fulﬁlls. I also hold that any “feelings”
issue is at least methodologically separable if not moot, but the
feelings (e.g., qualia) issue yet remains to be explained. But most of all
I revert to Stuart’s note that we have to clarify whatever it is we are
actually talking about, or saying. My major and no doubt most annoying
thesis is that much of the traditional discussion of these topics
suffers from “The myth of the given”, and tries to answer questions
that are not coherent. Yes this is hardly new and very Douglas Adams
(“42”), but hardly less valid for being a classic complaint.
Disclaimer: Stuart and I are Internet friends of old, and share many
sympathies on these issues. Hi, Stuart! And others here who I know from
other Internet fora. More ASAP.

Reply to Stevan Harnad @
2/21 18:06 PM Yes, I think you’re right. At some point discussions like
this reach bedrock and I think ours is here. I have no problem seeing
how a computer system (given adequacy of the platform to run the full
panoply of interrelated programs) could, at least in theory, produce a
thinking/feeling synthetic mind that’s not different, in terms of
functionalities, from our own. On the other hand you seem to draw the
line there. As I read it, the idea just makes no sense to you. I have
discussed this with many who are in agreement with that view and with
some who aren’t and it always seems to come down to that, a matter of
seeing the possibility or not. Note that my view is not that a machine
mind would necessarily be just like ours as there could be any number
of variations in terms of differences in the medium, the platforms,
capacities, etc. But I think it quite conceivable that a sufﬁciently
complex/ sophisticated system of the type I’ve described could have the
kind of feeling you talk about. On the “misunderstanding”: I don’t
think you’re right about that. I have never said here that the capacity
to report or describe particular feelings was the issue though I have
occasionally taken some of your points to allude to that (as when you
questioned the value of the Chinese City model on the grounds that we
could never know how it felt to be a Chinese City). I take it from your
response that you would not agree that I have offered a way that meets
the challenge you laid out in your video. While I didn’t think you
would (and ascribe that to this fundamental difference in how we
imagine or conceptualize consciousness), I wonder what it would take,
on your view, to meet that challenge. Perhaps my response doesn’t work,
but then there must be some sort of account that might, on your view.
I’d be interested to know what the parameters might be. To offer the
challenge after all, you must believe there’s some way to meet it, at
least in principle — even if no one could ever actually do so in fact.
As you also note, our exchanges have become much too long so I will
withhold any more extensive detail in any further comments I post unless
speciﬁcally requested to provide that. Thanks for a good discussion and
I would certainly like to learn more about what you would count, on
your view, as explaining how “feeling” (as you have used the term) is
caused.

S.MIRSKY: “I wonder what
it would take, on your view, to meet… the challenge you laid out in
your video. Perhaps my response doesn’t work, but then there must be
some sort of account that might, on your view. I’d be interested to
know what the parameters might be…”

Happy to oblige (though
please note that my own view happens to be that it is impossible to
explain how and why we feel, for the reasons I have given, which I am
happy to repeat when asked).

To meet the challenge of
giving a causal explanation of how and why we feel, you would either
have to:

(1) show that psychokinesis
(“mind over matter”) is possible after all, despite all the contrary
evidence (i.e., show that feeling is a 5th causal force in the
universe); or

(2) show that some of the
functions in a successful T3 or T4 causal mechanism could not do what
they do if they were unfelt functions rather than felt functions; or

(3) show that (for some
reason that needs to be described and supported) there is no need to
give a causal explanation of the fact that we feel.

But what will not work is
hermeneutics, namely, *interpreting* some components or aspects of the
function of a causal mechanism as being felt, and then dubbing that
interpretation as an explanation.

Nor will “wait-and-see”:
There are some challenges on the table that suggest the wait is
destined to be unrewarded. I especially like challenges in category
(2), because it is usually illuminating to show how the alleged need
for and role of feeling in particular cases always turns out to be
defeasible.

But I’m happy to take on
challenges in category (1) or (3) as well.

What I can’t do anything
about is hermeneutic hypotheses, because, like all interpretations,
they are irrefutable, being merely matters of taste. I can only repeat
that the challenge concerns objective explanation — not subjective
exegesis — of the irrefutable fact that we feel.

Hi Istvan, it’s good to
see you here (and see you discuss consciousness with smart
philosophers, as opposed to discussing Hungarian politics with dumb
right-wingers…). And thanks for the great talk. Let me try to give a
reductio argument against your conclusion that even after having
explained all that can be explained about cognition, there still is the
residue of what you call ‘feeling’. Suppose that there is Bence*** who
passes the relevant Turing Tests (whichever you pick – T3, T4…). But
Bence*** lacks any phenomenal character every day between 12pm and 1pm.
At 11.59.59am, it’s all feelings, but at 12.00.01pm, he feels nothing
(but he is still behaviorally and neurally indistinguishable from me).
If your conclusion is correct, Bence*** must be possible. I’m trying to
show that it is not. Suppose that I eat mango pickle for the ﬁrst time
in my life at 12.30pm one day. I would clearly feel something at that
time. Bence*** also tastes mango pickle for the ﬁrst time. I then keep
on eating mango pickle until 1.10pm. So does Bence***. Now what would
Bence*** feel at 1pm, that is, when his feelings come back? He,
presumably, would feel something new – something he’d never felt
before. If you ask him: “do you feel something new, something you’ve
never felt before?”, he’ll say (if he’s honest): “Hell, yeah”. But I
will not feel anything new – I’d been eating mango pickle for half an
hour by then. So there’s a behavioral (and, presumably, a neural)
difference, which contradicts the original supposition that there isn’t
any behavioral or neural difference between Bence and Bence*** (we’ve
both passed the relevant Turing Tests). But then your conclusion leads
to a contradiction: it must be false… Which step of this argument do
you disagree with?

B.Nanay: “Hi Istvan,
it’s good to see you here (and see you discuss consciousness with smart
philosophers, as opposed to discussing Hungarian politics with dumb
right-wingers…).”

Bence, Szia! (For those who
don’t know what Bence was alluding to: something much more important
than puzzles in the philosophy of mind is happening to smart
left-liberal philosophers in Hungary: the “dumb right-wingers” are the
increasingly partisan and authoritarian government of Hungary: See this link.)

B.Nanay: “Suppose that
there is Bence*** who passes the relevant Turing Tests (whichever you
pick – T3, T4…). But Bence*** lacks any phenomenal character every day
between 12pm and 1pm. At 11.59.59am, it’s all feelings, but at
12.00.01pm, he feels nothing (but he is still behaviorally and neurally
indistinguishable from me). If your conclusion is correct, Bence***
must be possible. I’m trying to show that it is not.”

(1) Presumably B* is an
engineered robot.

(2) Let’s say B* is a T4
robot we’ve designed, and we know how he works because we designed him
and understand the internal causality that generates his T4 success.

(3) We cannot, of course,
know one way or the other whether B* feels (for the usual reasons).

(4) You are *stipulating*
that he feels at some times and not at others.

(5) Presumably this is not
something the designers have done (since they just worked to pass T4),
so there’s no observable difference between the times B* is stipulated
to feel and the times he is stipulated to not-feel.

Well then I suggest we are
no longer talking about the causal basis of T4 performance; we are
talking about what omniscient stipulations entail. As such, they have
no bearing at all on the point I am making. What you get out of an
omniscient stipulation is what you put into it…

B.Nanay: “Suppose that I
eat mango pickle for the ﬁrst time in my life at 12.30pm one day. I
would clearly feel something at that time. Bence*** also tastes mango
pickle for the ﬁrst time. I then keep on eating mango pickle until
1.10pm. So does Bence***. Now what would Bence*** feel at 1pm, that is,
when his feelings come back?”

Bence, I think you may be
misunderstanding the Turing Test: B* is not your doppelganger or your
bioengineered clone. He is just another generic entity capable of
passing T4. If you happen to build him so that he is not only
Turing-indistinguishable from a real person (which is all the TT calls
for), but you also make him identical with you up to time T, then (just
as would happen with your clone), he would begin to diverge from you as
of time T.

There’s nothing relevant or
new to be learnt from that. And it’s the same whether or not B* feels.
(He would diverge behaviorally even if he did not feel.) And
omnipotently turning the feeling on and off for an interval has nothing
to do with anything either.

B.Nanay: “He, presumably, would feel
something new – something he’d never felt before. If you ask him: “do
you feel something new, something you’ve never felt before?”, he’ll say
(if he’s honest): “Hell, yeah”. But I will not feel anything new – I’d
been eating mango pickle for half an hour by then. So there’s a
behavioral (and, presumably, a neural) difference, which contradicts
the original supposition that there isn’t any behavioral or neural
difference between Bence and Bence*** (we’ve both passed the relevant
Turing Tests). But then your conclusion leads to a contradiction: it
must be false… Which step of this argument do you disagree with?”

The very ﬁrst step, where
you misunderstand the Turing Test to be one of
Turing-indistinguishability between two individuals, rather than what
it really is meant to be, namely, Turing-indistinguishability in
(generic) performance capacity from any real person…

Reply to Stevan Harnad’s
post of 2/22/11 @ 9:39 AM Okay, thanks for responding. Let me ask this:
Suppose the Star Trek character, Data, actually existed and behaved the
way it’s portrayed in the show. Suppose the science were there to give
us a genuine artiﬁcial mind of Data’s type. Now we know that, at least
in the earlier stories, Data hasn’t got emotions (though the acting and
storyline don’t always hew to that line because we often see Data with
a puzzled look and that is a sort of emotion, isn’t it?). At the least
we know that Data has awareness and it’s an awareness that goes beyond
the responsiveness of thermostats. Now perhaps Data is just a zombie
intelligence as William Robinson has proposed is possible (and why
shouldn’t it be possible in this scenario, too?). But then, to what
extent should we expect the other characters to treat Data like a
toaster or other zombie machines instead of as a fellow person? And
would we treat Data thus, if we were among those other characters, or
if a Data was in our presence? (This is not about whether we would be
justiﬁed in treating him thus but whether it would make sense to treat
this particular entity in that way.) By stipulation, Data passes the
open ended lifetime Turing test. Moreover Data not only acts and speaks
intelligently, Data also reports experiences. We are led to believe, by
these behaviors, that what Data sees, even if it doesn’t look quite
like what we see (perhaps it’s more like what the blind character,
Geordie, sees with his visor), is still somewhat like what we are
seeing. However it looks to Data, it meshes sufﬁciently with the things
the other, normal characters are seeing to be taken as real sight, too.
The fact that we grant that a Data, behaving in this way, is seeing
anything at all means we recognize some degree of perception in Data,
no? And the fact that he can report on what he sees and can operate in
terms of it, etc., suggests that he knows what he is seeing and
understands something about it. But here’s the crux: If he has
understanding, then he must have feeling on your view. If, however, we
deny that the understanding behavior he manifests includes feeling (as
you are using that term), for lack of direct access to it as we have it
in ourselves, then we must also deny that he understands, despite all
the behavioral evidence to the contrary. Such a Data is really just a
very clever automaton, an autonomous puppet, an intelligent zombie.
But this ﬂies in the face of all the behavioral evidence (as it would
if we were applying the same standards to other human beings). So do
you think there must be a test for Data’s feelingness beyond the
ongoing Turing-type test of real life itself? Has something been left
out in this account that would need to be included for us to be sure
that we have, in Data, a real understanding entity? If not, then what
you are calling mere “hermeneutic hypothesis”, “irrefutable” because it
is just “a matter of taste”, looks wrong, wouldn’t you agree?

S.MIRSKY: “Suppose the
Star Trek character, Data, actually existed and behaved the way it’s
portrayed in the show… Now we know… Data hasn’t got emotions. At the
least we know that Data has awareness… that goes beyond the
responsiveness of thermostats.”

Data is a ﬁction, and one
can suppose or stipulate whatever one likes about a ﬁction.

If there were really a
robot that could do what Data can do, then it would pass T3.

You used one of the
weasel-words in your description: “awareness.” Can we please go back to
the only word that doesn’t equivocate between access to information and
sentience (feeling)?

Either Data feels or he
doesn’t feel. That’s all there is to it; and there’s no way we can know
one way or the other (because of the other-minds barrier), no matter
what Data tells us (T2) or does (T3).

If Data feels anything at
all, then he “goes beyond [mere] responsiveness.” If not, not. (All of
T3 could be “mere responsiveness.”) But of course it has always been
pop nonsense to depict (the ﬁctional) Data as having no “emotion”: What
on earth does that mean? (I’ve met a lot of people with apathy or
“blunted affect” — and I’ve met a few sociopaths too. Their emotional
make-up is different from mine. But presumably they still feel pain,
and whatever the right word to describe their affective state, I assume
they would not take to being held under water with great equanimity.)

So, personality aside, it’s
not really about *what* the T3 robot feels, but about whether it feels
at all (and if so, how and why). It not only feels like something to be
angry or afraid; it also feels like something to be in pain, to be
overheated, to smell incense, to hear church-bells, to see a red
triangle, to touch a rough surface, to raise your arm, to say (and
mean) “the cat is on the mat,” and to understand Chinese. Take your
pick. If T3 has any of them, T3 feels. If not, not.

Would it make *practical*
sense to treat a human-built, metal entity differently from any of the
rest of us if it was otherwise Turing-indistinguishable from us? Only
if you could get away with it, I suppose, as with all other forms of
racism. Not if there were enough of them to defend themselves, just as
any of us would.

Would it make “logical”
sense to treat them otherwise? I’d say holding out for T4 — before
being ready to give one’s logical assent to the fact that we can be no
more or less certain that T3 robots feel than that any one of us feels
— would be a rather subjective, ad-hoc stance rather than something
dictated by logic.

S.MIRSKY: “that we grant
that a Data, behaving in this way, is seeing anything at all means we
recognize some degree of perception in Data, no?”

Actually, we are not in a
position to grant or not grant anything since we have no idea whether
or not Data feels when he is successfully displaying opto-motor
performance capacities Turing-indistinguishable from our own. All we
know is that he can do it, not whether it feels like anything to be
able to do it.

S.MIRSKY: “And the fact
that he can report on what he sees and can operate in terms of it,
etc., suggests that he knows what he is seeing and understands
something about it.”

If seeing that someone says
— and acts as if — he feels something were as good as seeing that he
feels something, T3 would already guarantee feeling by deﬁnition
(possibly even T2 would).

But all T3
reverse-engineering does is generate someone about whom you have no
better or worse reason to doubt that he really means what he says when
he says he feels something than you have for doubting it about anyone
else.

The fact that he’s made out
of the wrong stuff? Well, move up to T4 if it really means that much to
you. I confess that scaling up to T4 only interests me if there’s some
T3 capacity that we are unable to generate, and T4 gives us a clue of
how to do it. (Nothing like that has happened yet, but maybe something
we learn from neuroscience will help roboticists, rather than the other
way round.)

As for me, Data would be
enough — not just to prevent me from eating him or beating him, but for
according him the full rights and respect we owe to all feeling
creatures (even the ones with blunted affect, and made of metal).

S.MIRSKY: “But here’s
the crux: If he has understanding, then he must have feeling on your
view.”

Indeed. But for me T3 is
the best we can do for inferring that he has understanding, hence
feeling. Trouble is, we can’t explain how or why he has feeling…

(You seem to think that if
he can report Turing-indistinguishably on what he feels, then he must
have understanding. Seems to me he either does or doesn’t; he’s just
Turing-indistinguishable from someone who does. But that’s good enough
for me — since it’s not really possible to ask for more. T4 is just for
pedants and obsessive-compulsives… T3's already done the job.)

S.MIRSKY: “If… we deny
that the understanding behavior he manifests includes feeling… then we
must also deny that he understands.”

We can neither afﬁrm nor
deny that he has feelings. And since “understands” (like any other
cognitive state) means “T3-indistinguishable cognitive capacity +
feeling”, we can neither afﬁrm nor deny that he understands either,
just that he acts exactly as if he understands, just as any other
person who understands does.

S.MIRSKY: “So do you
think there must be a test for Data’s feelingness beyond the ongoing
Turing-type test of real life itself?”

No. The other-minds barrier
is impenetrable (except in the special case of the hypothesis that
T2-passing computation alone can generate understanding, which
“Searle’s periscope” shows to be false). And brain scans won’t
penetrate the barrier either; all they can do is correlate and predict.

T3 is the best we can do.

S.MIRSKY: “Has something
been left out in this account that would need to be included for us to
be sure that we have, in Data, a real understanding entity?”

Some people think you need
to scale up to T4, but I think that’s just superstition.

But since the name of the
game is not mind-reading but explaining cognition, what has been left
out in the Turing approach is feeling — not just how to test whether
it’s there, but to explain how and why it’s there.

S.MIRSKY: “If not, then
what you are calling mere “hermeneutic hypothesis”, “irrefutable”
because it is just “a matter of taste”, looks wrong, wouldn’t you
agree?”

Hypotheses non ﬁngo. I am
not purporting to answer the how/why question about feeling. It is the
hypotheses that purport to do so that are hermeneutical. If I say
“feeling is explained by the fact that you have a subsystem with
privileged access to analog images etc. etc.” how can anyone ever
refute that? It’s just a decorative description you either like or
don’t like.

Istvan, thanks. I do
indeed talk about Turing indistinguishability, because I took you to
deny it at the very beginning of your talk (the video, not the paper),
when you say that your claim is that appealing to “what we can do with
our cognitive capacities can explain everything that can be explained,
but it will not explain everything”. Now you ask, rightly, to use the
written text, so using the slogans in the written text, this amounts to
saying that what “cognition does (or, more accurately, [what] cognition
is capable of doing)” (p. 1) will not explain everything because it
will leave out feeling. Now, it does follow from this claim that two
individuals’ being exactly identical in terms of what their ‘cognition
does/is capable of doing’ does not entail their being identical when it
comes to feelings. In other words, it does seem to follow from your
claim that Turing-indistinguishability (be it T3 or T4) does not imply
‘feeling’-indistinguishability.

And I was arguing
against this last claim. If you don’t endorse this claim, then I was
misreading you (or, rather, I was reading too much into what you said
in the video).

B.NANAY: “It does seem
to follow from your claim that Turing-indistinguishability (be it T3 or
T4) does not imply ‘feeling’-indistinguishability…”

Turing-indistinguishability
means indistinguishability in (generic) performance capacity, not
indistinguishability in individual state. Indistinguishability in
performance capacity does not even imply feeling, let alone
indistinguishability in individual feeling state. My paper is not on
the ontic aspects of the mind/body problem, just the epistemic ones:
Explaining how and why we can do what we can do does not explain how
and why it feels like something to be able to do what we can do.

Thanks for the response
Professor Harnad. I’ll try to be brief in my responses in deference to
your earlier concerns. But I’ll continue to use the interspersed text
approach since I also want to be sure I don’t misrepresent any of your
statements. (I will cut where I can though): . . .

SH wrote: Data is a fiction, and one can suppose or stipulate whatever one likes about a fiction.

SWM: Yes, of course. So are the Chinese Room and your T-3 and T-4 entities. That’s the point. We are constructing hypothetical, though at least possible, scenarios in order to explore the concepts we apply to them.

SH: If there were really a robot that could do what Data can do, then it would pass T3.

SWM: Yes.

SH: You used one of the weasel-words in your description: “awareness.” Can we please go back to the only word that doesn’t equivocate between access to information and sentience (feeling)?

SWM: I thought I had made clear that I equate your “feeling” with “awareness”. However, I am no more comfortable with “feeling” than you seem to be with “awareness”, but I am quite prepared to stipulate that by “awareness” I mean just what you have said you mean by “feeling”. However, perhaps “sentience” would be a good compromise?

SH: Either Data feels or he doesn’t feel. That’s all there is to it; and there’s no way we can know one way or the other (because of the other-minds barrier), no matter what Data tells us (T2) or does (T3).

SWM: You pointed out above that the solution I offered to your challenge would not pass muster with you because it is just “hermeneutics”, as you put it, and a matter of subjective interpretation (not objectively observable). My response is to suggest that, since Data’s (observable) behavior meets all the usual criteria for understanding as seen in others, either that IS enough to sustain a judgment that a Data programmed with the kind of system I’ve presented has understanding OR your decision to equate understanding with “grounding + feeling” must be mistaken. It hinges, of course, on the fact that observable criteria are all we ever have for judging the presence of “feeling” in others.

SH: If Data feels anything at all, then he “goes beyond [mere] responsiveness.” If not, not. (All of T3 could be “mere responsiveness.”)

SWM: My point, of course, is that the observable evidence attests to the fact that he “goes beyond mere responsiveness.”

SH: But of course it has always been pop nonsense to depict (the fictional) Data as having no “emotion”: What on earth does that mean? (I’ve met a lot of people with apathy or “blunted affect” — and I’ve met a few sociopaths too. Their emotional make-up is different from mine. But presumably they still feel pain, and whatever the right word to describe their affective state, I assume they would not take to being held under water with great equanimity.)

SWM: Yes, but this IS an exercise in hypotheticals. And it seems perfectly feasible that one of William Robinson’s intelligent robot zombies could do a lot of what Data does convincingly (though I doubt it could do it all). On the other hand, you’re arguing that there’s a missing link that must be present in any real instance of understanding but which will forever be excluded from discovery in others. If that’s so, then the Other Minds problem is resurrected, though with no more impact on the actual research project of building conscious machines than it has on our daily contact with other human beings like ourselves. So your challenge will be unmet because you stipulate it so.

SH: So, personality aside, it’s not really about *what* the T3 robot feels, but about whether it feels at all (and if so, how and why).

SWM: Yes, and the Data of this scenario gives every indication of having feelings — just as we do.

SH: It not only feels like something to be angry or afraid; it also feels like something to be in pain, to be overheated, to smell incense, to hear church-bells, to see a red triangle, to touch a rough surface, to raise your arm, to say (and mean) “the cat is on the mat,” and to understand Chinese. Take your pick. If T3 has any of them, T3 feels. If not, not.
Harnad, S. (2001) Spielberg’s AI: Another Cuddly No-Brainer.

SWM: Yes, but you argued that my explanation of how feeling arises in computational terms fails because it’s based on a subjective interpretation of the available information. My response is that it’s no more subjective than our judgment of whether other people, like ourselves, feel. If understanding requires feeling, as you put it, then every judgment we make that anyone else but ourselves has understanding hinges on the same kind of assessment, i.e., that feeling is present. So unless you want to say there’s nothing objective about that kind of assessment, then why worry about whether a Data-like entity has it if it fits the same bill?

S.MIRSKY: “[P]erhaps Data is just a zombie intelligence… why shouldn’t it be possible…?)”

SH: A rather awkward way of saying that “If not, not.” Yes, either outcome looks possible, for all we know. Or maybe it’s not possible to pass T3 without feeling — but if not, then how and why not? Back to square one.

SWM: The issue, I think, is whether anything could meet the challenge you’ve presented and, as of now, I am inclined to think that the only reason it cannot is a stipulated one. But then it wouldn’t be a fair challenge.

S.MIRSKY: “[W]ould we treat Data… like a toaster or other zombie machines instead of as a fellow person? (not about whether we would be justified… but whether it would make sense…)”

SH: Well, the moral question would certainly trouble a vegan like me: I definitely would not eat a T3 robot.
Strauss, S. (1990) Is it an ant? A cockroach? Or Simply a Squiggle? Toronto Globe & Mail.

SWM: I excluded the moral issue above by differentiating between whether we are justified or whether it makes sense.

SH: . . . Would it make “logical” sense to treat them otherwise? I’d say holding out for T4 — before being ready to give one’s logical assent to the fact that we can be no more or less certain that T3 robots feel than that any one of us feels — would be a rather subjective, ad-hoc stance rather than something dictated by logic.

SWM: If we assume that Data has a “positronic brain” that replicates the functionalities of a human brain (as the narrative demands), then we have a case of T-4 being passed, too. But that still leaves the problem that you insist there is something else, not accessible to us, which must be there and which, if it isn’t, undermines the validity of a claim that Data has understanding. Doesn’t your view essentially reduce to the Other Minds problem and, if it does, can there be real implications for the science of cognition?

S.MIRSKY: “that we grant that a Data, behaving in this way, is seeing anything at all means we recognize some degree of perception in Data, no?”

SH: Actually, we are not in a position to grant or not grant anything since we have no idea whether or not Data feels when he is successfully displaying opto-motor performance capacities Turing-indistinguishable from our own. All we know is that he can do it, not whether it feels like anything to be able to do it.

SWM: But isn’t that to beg the question, since the issue is whether the behavioral criteria provide us enough information so that we can have such an idea?

S.MIRSKY: “And the fact that he can report on what he sees and can operate in terms of it, etc., suggests that he knows what he is seeing and understands something about it.”

SH: If seeing that someone says — and acts as if — he feels something were as good as seeing that he feels something, T3 would already guarantee feeling by definition (possibly even T2 would).

SWM: Yes. Why doesn’t it, other than the fact that you demand a test for computational devices (whether pure or hybrid) that you don’t demand for other beings like ourselves?

SH: But all T3 reverse-engineering does is generate someone about whom you have no better or worse reason to doubt that he really means what he says when he says he feels something than you have for doubting it about anyone else.

SWM: Yes. And why isn’t that enough, unless this is just the Other Minds problem imported into cognitive science? (As I’ve previously noted, I think Wittgenstein’s solution in the Investigations effectively undermines the issue of Other Minds as a real problem.)

SH: The fact that he’s made out of the wrong stuff? Well, move up to T4 if it really means that much to you. I confess that scaling up to T4 only interests me if there’s some T3 capacity that we are unable to generate, and T4 gives us a clue of how to do it. (Nothing like that has happened yet, but maybe something we learn from neuroscience will help roboticists, rather than the other way round.)

SWM: I think we effectively have, but even at T-3 we have enough. If a chair suddenly started behaving consciously (and we successfully discounted all other possibilities) then we would have no choice but to deal with it on those terms — and very likely change our ideas about what it takes to be conscious (brains no longer needed, etc.).

SH: As for me, Data would be enough — not just to prevent me from eating him or beating him, but for according him the full rights and respect we owe to all feeling creatures (even the ones with blunted affect, and made of metal).

SWM: So in terms of your own interactions you’re comfortable treating Data as a sentient entity but still want to hold out for something more in the field of cognitive science? Why? What can possibly be gained?

S.MIRSKY: “But here’s the crux: If he has understanding, then he must have feeling on your view.”

SH: Indeed. But for me T3 is the best we can do for inferring that he has understanding, hence feeling. Trouble is, we can’t explain how or why he has feeling…

SWM: We can, if a detailed analysis determines that it is this program, doing these things when run on his “positronic brain”, that results in his behavior (including behavior that attests to the presence of feeling).

SH: (You seem to think that if he can report Turing-indistinguishably on what he feels, then he must have understanding. Seems to me he either does or doesn’t; he’s just Turing-indistinguishable from someone who does. But that’s good enough for me — since it’s not really possible to ask for more. T4 is just for pedants and obsessive-compulsives… T3's already done the job.)

SWM: If it’s good enough in daily life, why shouldn’t it be good enough in terms of cognitive science?

S.MIRSKY: “If… we deny that the understanding behavior he manifests includes feeling… then we must also deny that he understands.”

SH: We can neither affirm nor deny that he has feelings.

SWM: That, I take to be a mistake. You have already said that you would treat him as if he did, i.e., no differently than others like ourselves. That IS an affirmation he has feeling.

SH: And since “understands” (like any other cognitive state) means “T3-indistinguishable cognitive capacity + feeling”, we can neither affirm nor deny that he understands either, just that he acts exactly as if he understands, just as any other person who understands does.

SWM: Same response here.

S.MIRSKY: “So do you think there must be a test for Data’s feelingness beyond the ongoing Turing-type test of real life itself?”

SH: No. The other-minds barrier is impenetrable (except in the special case of the hypothesis that T2-passing computation alone can generate understanding, which “Searle’s periscope” shows to be false). And brain scans won’t penetrate the barrier either; all they can do is correlate and predict. T3 is the best we can do.

SWM: Whether it is or isn’t, it is good enough because it’s no different than how we deal with others like ourselves. And that’s all that’s required to have objective observation of the presence of feeling in an entity.

S.MIRSKY: “Has something been left out in this account that would need to be included for us to be sure that we have, in Data, a real understanding entity?”

SH: Some people think you need to scale up to T4, but I think that’s just superstition. But since the name of the game is not mind-reading but explaining cognition, what has been left out in the Turing approach is feeling — not just how to test whether it’s there, but to explain how and why it’s there.

SWM: I agree with you that we need to account for the presence of what you call feeling and I call awareness, but I disagree with your conclusion that we cannot discern its presence nor account for it. In fact I think the evidence is clear that we can do both.

S.MIRSKY: “If not, then what you are calling mere “hermeneutic hypothesis”, “irrefutable” because it is just “a matter of taste”, looks wrong, wouldn’t you agree?”

SH: Hypotheses non fingo. I am not purporting to answer the how/why question about feeling. It is the hypotheses that purport to do so that are hermeneutical. If I say “feeling is explained by the fact that you have a subsystem with privileged access to analog images etc. etc.” how can anyone ever refute that? It’s just a decorative description you either like or don’t like.

SWM: Build the systems and put them through their paces. Thanks for a good discussion. I now believe I understand your position much better than at the start. I hope I have managed to also make my own clear enough, as I have not always succeeded in doing that.

I think we know one
another’s respective views now, and are unlikely to inspire any
changes. I’d like to close by pointing out a recurrent misunderstanding
that has dogged our exchanges from the outset. You, Stuart, seem to
think that the task is to (1) design a robot that we can mind-read as
reliably and effectively as we mind-read one another (and we do agree
that T3 is that candidate), (2) explain his performance capacity (and we do agree that whatever mechanism successfully passes T3 explains that) and then (3) explain how and why it feels: This is
the point at which we disagree. You think certain interpretations of
the mechanism that successfully generates T3's performance capacity explain how and why he feels. I think what’s missing is a causal
explanation of how and why he feels, along the same lines as the causal
explanation of how and why he can do what he can do — and I give
reasons (not stipulations!) why that causal explanation is not likely
to be possible. You are satisﬁed that your mental interpretation
provides that explanation, and your reason is that our mind-reading is
right. I reply that whether or not we are right that T3 feels, the
mechanism that generates T3 does not explain the fact that he feels, even if he does feel, because that mechanism would (for all we know, or
can know) produce the very same (performance) outcome even if he didn’t
feel. (And if a feelingless T3 [or T4] is impossible, no one can
explain how or why it’s impossible. Mentalistic interpretation of the
T3/T4 mechanism certainly does not explain it.) So this is not just a
manifestation of the other-minds problem, but of the anomalous causal,
hence explanatory status of feeling.

Reply to Stevan Harnad’s
remarks of 2/23/11 @ 6:52 AM:

Yes, we have done one of the only things
discussions like this can do — we have reached the bottom line issue(s)
that divide us. In this case your point that “whether or not we are
right that T3 feels, the mechanism that generates T3 does not explain
that fact that he feels” neatly sums it up I think. Why there is still
a division between us, I believe, is because I think that a description
of what kinds of processes performing what tasks produce feeling
behavior DOES causally account for feeling. This hinges, of course, on
my view that feeling is adequately revealed in behavior, as you note —
something you apparently don’t share. And it looks unlikely I can bring
you around to sharing it. But let me say a bit about why I think mine
is the right view. I think (following Wittgenstein) that language has
its genesis in a public environment and so our words (and thus our
concepts) have their basis in publicly accessible criteria. Thus the
ideas we have about mind (characterized by all those “weasely” words
you bridle at) are necessarily grounded in public criteria. That’s WHY
the words seem “weasely”. They are public phenomenon words at bottom,
being applied in what is intrinsically a non-public context. As a
result, they denote much more imprecisely than public criteria words do
when applied in a public venue, and they often lead us into
conceptualizing in inapplicable ways. Hence the notion that minds are a
distinct kind of thing (often conceived of as souls or other
non-physical entities, whatever that might be). I don’t want to
introduce a debate here about language or Wittgenstein but I do want to
make clear why what doesn’t seem reasonable to you seems perfectly
reasonable to me. On my view, when we speak of mental phenomena we’re
really stepping outside the area in which language works most
effectively. I don’t want to suggest that we cannot reference mental
phenomena because we do all the time and sometimes to very good effect.
But I want to say that, especially in doing philosophy, we often end up
playing in the penalty zone without realizing it. Speaking about minds
and consciousness and feeling and so forth IS, on this view, to speak
about behavioral criteria, not about some ethereal entity hidden from
view in others. But I agree with David Chalmers’ point that we often
have private aspects of our own experience in mind, too, when using
these words. There are public AND private referents involved in mental
words, reﬂecting our own experience of our mental lives as well as our
experience of the public zone we share with others. The problem, as I
see it, lies in the fact of differentiation. Just because when I speak
of “feeling” I have in mind a range of my own personal experiences
(feelings of physical sensations, feelings of emotional events,
feelings of being aware of the things I am aware of and so forth),
doesn’t mean that that is the main use of the word “feeling” for me. Or
the important one in a public context. If our Data creature acts in
every way like us (meets your T3 test) then even you have agreed you
would not eat him (assuming he were edible), chop him up, enslave him
and so forth. You would act toward him as if he were a feeling entity.
And you would be right to do so. So the question is not, on my view,
whether any entity, Data or other human being, has a mental life that
is just like mine but whether it, or he, has a mental life at all. And
behavior tells us whether he does or not. It is the only thing that can
tell us and we have no reason to expect anything more. But if there is
no reason to expect direct access to another’s mind, then identifying
the processes in a given operating platform (brain, computer or
whatnot) which produce the requisite behaviors surely meets the
standard of identifying a cause of those behaviors. Of course, the more
ﬁnely grained the description, the better the understanding of the
causative factors. What else could “cause” mean in this case than that?

By the way, I see that John Searle has penned a piece in this morning’s
Wall Street Journal concerning the recent Watson business on Jeopardy.
He invokes the usual Chinese Room scenario to show that Watson, a
computer, doesn’t understand. Of course he’s right in the sense that
Watson lacks plenty, even at this stage. Watson is still underspecced
like the Chinese Room. Searle’s assertion is that this shows that
Watson-like entities lack the potential to cause understanding though,
as usual, he does not say anything about what it takes to actually
cause it except that “the brain is a causal mechanism that causes
consciousness, understanding and all the rest” and Watson, being a
computer, lacks that causal capacity. But not telling us what that
capacity is becomes, I think, an easy out. What Searle’s argument
continues to miss is that being a limited system, like Watson, doesn’t
say anything compelling about other, less-limited systems like
Watson so long as there is no reason to believe that brains’ causal
powers in this regard are necessarily different from what computers can
do. (His later argument, after the CRA, asserts that computational
processes can’t have any causal power at all because they are man-made
and take their nature of being computational from their makers which
denies them real world causal efﬁcacy — but I think that’s a worse
argument than the original CRA.) I’m thinking of penning a rebuttal if
I have the time this morning but it’s unlikely to be brief enough for a
letter to the editor. So perhaps I’ll just pass for now.

Addendum to recent reply
to Stevan Harnad: In my effort to keep my text short, I neglected to
make an important point in my last response and this could turn out to
fuel further misunderstandings between us. So let me just add it now.
You’ve indicated that you think the problem resides in the fact that a
T3 entity could pass a T3 test with or without the mental stuff we
have, i.e., that it could act as if it feels while, in fact, not
feeling. My point above, that we ascribe feeling to entities that act
in a feeling way, is meant to cover this case though I failed to be
sufﬁciently explicit. That is, on the view I’m espousing, there can be
no difference between two entities that pass T3 in the same way. This
is partly premised on my earlier point that all that we mean by
ascriptions of feeling in such cases is that they behave in a certain
way. But it is also partly reﬂective of my view that the notion of
philosophical zombiedom is untenable. In this (as in many things) I’m
inclined to agree with Dennett. Given an entity that does everything we
do in the same generic way (and Dennett takes this up to what you
characterize as the T4 test), it can make no sense to suppose anything
is missing. Your own acknowledgement that you would not, in your
behavior toward such an entity, treat it as if anything WERE missing,
implicitly conﬁrms this. As I noted earlier on, if a chair or other
inanimate object suddenly began behaving in a conscious way (speaking
intelligently and autonomously to us, reacting to situations in
apparent fear or concern), however hard this might be to imagine, we
would be left with little choice but to treat it as another conscious
entity — once we had successfully eliminated other possibilities (is
there a hidden operator? are we under the inﬂuence, etc.?), of course.
We would likely have to revise our understanding of the world though.
Maybe brains would no longer be thought of as a necessary (or causal)
condition for minds after all. But what we would not have to have, what
we would never require, is access to the chair’s “mind”. We don’t need
it in normal circumstances so why would we need it in abnormal ones?
(Sorry for the extra text but I just wanted to be clear.)

Hi Stevan, I think I
missed your response in all the hubbub. Let me try to sum up the
disagreement here.

1. You
claim that we will never be able to explain how and why we feel. So,
even when we have a T3/T4 robot we will be unable to say whether it
feels or not and that is because we can always ask “what causal power
does feeling confer?”

2. This
looks to me just like saying ‘zombies are conceivable,’ yet you seem to
deny that you are talking about, or even need to talk about, zombies.

3. If
feeling turns out to have relatively little causal role then it cannot
be an argument against a theory that adding feeling doesn’t add any
causal powers. It then becomes an open question whether adding feeling
does add any causal powers, with whatever the answer turns out to be
providing us with evidence for or against various theories of
consciousness. At the ﬁrst online consciousness conference David
Rosenthal gave his argument
that we have overwhelming evidence that adding feeling doesn’t add
(signiﬁcant) causal powers, which then turns out to be
evidence for the higher-order theory.

R.BROWN: “1. You claim
that we will never be able to explain how and why we feel. So, even
when we have a T3/T4 robot we will be unable to say whether it feels or
not and that is because we can always ask ‘what causal power does
feeling confer?’”

No, the causal question of
how/why an entity feels is not the same as the factual question of
*whether* it feels. Even if we had God’s guarantee that a T3 or T4
robot feels, we would only know, from the T3/T4 theory, how/why a T3/T4
system can do what it can do (its know-how), not how/why it feels
(i.e., how/why it feels like something to be able to do what it can do,
to be the T3/T4 system).

R.BROWN: “2. This looks
to me just like saying ‘zombies are conceivable,’ yet you seem to deny
that you are talking about, or even need to talk about, zombies.”

No, it’s not “zombies are
conceivable,” it’s “feelings are inexplicable.” You don’t need zombies
for that.

(But, if you like, they can
be marshalled thus: It is inexplicable how and why we are not zombies;
or how and why there cannot be zombies. Same thing as: It is
inexplicable how and why we — or T3/T4 robots — feel.)

R.BROWN: “3. If feeling
turns out to have relatively little causal role then it cannot be an
argument against a theory that adding feeling doesn’t add any causal
powers.”

We can’t even explain
how/why feelings have relatively little causal role: We can’t explain
how/why/whether they have *any* causal role.

And the only theory is the
theory of how T3/T4 systems can do what they can do. That’s a causal
theory, a causal explanation.

“Adding” something to it
that has no causal role is adding nothing to the *theory*. It’s a datum
— an unexplained datum. (We know T3/T4 feels, because God told us; and
we know we feel, because of the Cogito. But, theoretically speaking —
i.e., explanatorily speaking — that leaves us none the wiser than
T3/T4's causal powers and explanation alone already leave us.)

R.BROWN:
“It then becomes an open question whether adding feeling does add any
causal powers, with whatever the answer turns out to be providing us
with evidence for or against various theories of consciousness.”

You lost me. If we can’t
give any causal explanation of feeling’s causal role, what kind of a
“theory” of feeling (consciousness) is that?

R.BROWN: “At the ﬁrst
online consciousness conference David Rosenthal gave his argument that
we have overwhelming evidence that adding feeling doesn’t add
(signiﬁcant) causal powers, which then turns out to be evidence for the
higher-order theory.”

What is a “higher-order”
theory? What does it explain? Does it tell us how or why we feel? If
not, it is probably just hermeneutics (as I’ve had occasion to suggest
more than once in this discussion!). The reverse engineering of T3/T4
is the explanatory theory. And it explains doing, but not feeling.

You can say that you
don’t need them but every time someone asks why feeling can’t be
explained you appeal to a zombie-like intuition; we can imagine that being done in the absence of feeling. What is more zombie than that? But I guess ultimately this doesn’t matter.

Harnad: You lost me. If
we can’t give any causal explanation of feeling’s causal role, what
kind of a “theory” of feeling (consciousness) is that?

It would be something
like the higher-order thought theory of (feeling) consciousness.

Harnad: What is a
“higher-order” theory? What does it explain? Does it tell us how or why
we feel? If not, it is probably just hermeneutics (as I’ve had occasion
to suggest more than once in this discussion!). The reverse engineering
of T3/T4 is the explanatory theory. And it explains doing, but not
feeling.

Well, I don’t want to
rehearse the higher-order theory here, but the basic gist is that to feel
pain is to be aware of myself as being in pain. This, to be very brief
about it, explains why it feels painful for me. It does so because that
is how my mental life appears to me and that is all that there is to
feeling. This is deﬁnitely not hermeneutics, whatever that is. Now I
get that you don’t accept this as a theory of feeling but the point is
that your argument stands or falls with the success of actual theories
that try to explain feeling and so has to be evaluated by how well
these theories fare. Higher-order theories are particularly relevant
since these theories predict that feeling will have little, if any,
signiﬁcant causal role to play in doing, just as you suggest. If you
are going to claim that something is unexplainable you need to
understand the various proposed explanations in order to say that they
don’t work; especially if you are not trying to give an a priori
conceivability argument.

R.BROWN: “every time
someone asks why feeling can’t be explained you appeal to a zombie-like intuition”

In saying: “Thank you
(T3/T4) for explaining how and why we can do what we can do; now please
explain to me why it feels like something to do all that…,” am I
appealing to a zombie-like intuition? But I don’t even believe in
zombies! I just want to know how and why we feel.

R.BROWN: “the higher-order theory [of (feeling) consciousness]… is that to feel pain is to be
aware of myself as being in pain. This… explains why it feels painful
for me… because that is how my mental life appears to me and that is
all that there is to feeling. This is deﬁnitely not hermeneutics,
whatever that is”

This is one of the (many,
many) reasons I urge dropping all the synonyms, euphemisms, paralogisms
and redundancies and just call a spade a spade:

“to feel (something) is to
feel something. This… explains why it feels like something …because
that is how my feeling feels, and that is all that there is to feeling.”

Pared down to this, it’s
deﬁnitely not hermeneutics; it’s tautology. Put back the synonyms,
euphemisms, paralogisms and redundancies and it becomes hermeneutics: a
Just-So Story.

But it’s certainly not
explanation!

R.BROWN: “Now I get that
you don’t accept this as a theory of feeling but the point is that your
argument stands or falls with the success of actual theories that try
to explain feeling and so has to be evaluated by how well these
theories fare.”

But my argument is that no
one has proposed a (non-psychokinetic) causal theory of feeling, and I
give reasons why I don’t think there can be one (no causal room, and
everything works just as well without feeling). Such an argument does
indeed stand or fall on the success or failure of actual causal
explanations. But the theories (when they are causal theories at all)
seem to fail; so the argument would seem to stand.

Hermeneutics is not a
causal theory, but a Just-So story.

R.BROWN: “Higher-order
theories are particularly relevant since these theories predict that
feeling will have little, if any, signiﬁcant causal role to play in
doing, just as you suggest.”

Well that would be
convenient: I am asking for a causal explanation — of something that it
seemed perfectly reasonable to expect to be causally explained — and
instead I am given a theory according to which feelings have “little,
if any” causal power.

Let’s take these one at a
time: If little, then what’s the causal theory of how and why feelings
have this (little) causal power?

And if feelings have *no*
causal power, for theoretical reasons — well that does seem to be a
rather handy way of deﬂecting the call for a causal explanation,
doesn’t it? Feelings are acausal ex hypothesi ergo propter hypothesem!
(One wonders if there are other things one can get out of explaining
causally by hypothesizing that they are acausal: or are feelings the
only thing? To me that sounds more like explanatory shortfall than
theoretical triumph, if a Just-So Story tells me that that’s just the
way things are…) But does it really work, to say that feelings are
acausal? Our intuitions prepare us, somewhat, to accept that passive
feelings might be just decorative, not functional, and in that sense
acausal — though one can’t help wanting to know why on earth they’re
there at all, then — and not just there, but center-stage.

It’s not just passive
feelings that are at issue, however. There’s also feeling like doing
something; doing something because you felt like it, not because you
were pushed. That’s a harder distinction to wave away as merely
decorative.

Bref: I’d say that any
“higher-order theory” that declared that one should not be troubling
one’s head about how and why we feel — because feelings are acausal —
was simply begging the question.

R.BROWN: “If you are
going to claim that something is unexplainable you need to understand
the various proposed explanations in order to say that they don’t work;
especially if you are not trying to give an a priori conceivability
argument.”

Deﬁnitely no a-priori
conceivability arguments (or zombies!). So I’m all ears: According to
the higher-order theories, how and why do we feel? I’m happy to
consider each theory, one at a time. This, after all, was the challenge
at the end of my video (though I think Richard’s edited version cut it
out!). (All I ask is that the theory should reply without wrapping
feeling inextricably into further synonyms, euphemisms, paralogisms and
redundancies that might camouﬂage the fact that they don’t provide any
answer.)

Adding my two cents, I
have to say that it’s strange to end an argument of this sort simply by
declaration. It smacks of an appeal to intuition, at the least. I think
it’s reasonable to say, as Stevan does, that the concept of feeling is
not the same as the concept of doing and that it follows from that that
feeling does not equal doing. But that doesn’t imply the next statement
in Stevan’s last response to me: “IF GENERATING DOING ALSO GENERATES
FEELING, WE DON’T (AND CAN’T) EXPLAIN HOW AND WHY”. Just being
different notions does not imply an incapacity to explain why the
referent of one of those notions occurs. It’s a separate claim to
insist that we can’t explain the how and why of feeling’s occurrence
and has no apparent logical dependence on the prior statement that
feeling doesn’t equal doing (or vice versa). At least not without some
argument that leads to such a conclusion. Of course, I think I gave a
fair account of how we can be perfectly comfortable that we do have a
way of explaining the occurrence of feeling: 1) Feeling is revealed in
certain behaviors. 2) We can test for and observe those behaviors. 3)
Therefore we can test for and observe the occurrence of feeling in a
behaving entity. The ﬁrst premise stands on a Wittgensteinian analysis
of the concept of feeling (the way we use the word) and on its
implication, that it’s just unintelligible to imagine we could have
perfect replicas of feeling entities (both in terms of actions and
internal functionality) without the feeling part. The rest of the
argument is relatively simple and seems to be self-explanatory; not in
need of extensive support. Based on this, I suggest that an account
which speciﬁes the particular functions that need to be performed for
feeling to occur, coupled with the argument that these functions can
be achieved computationally, can then provide a perfectly acceptable
explanation of how and why feeling occurs, i.e., it is just this and
this and this set of tasks being performed by the entity’s relevant
components. This does not guarantee, of course, that a computationalist
account is the right one. But it does provide an explanation that could
be true.

S.MIRSKY: “I think I
gave a fair account… of feeling: 1) Feeling is revealed in certain
behaviors. 2) We can test for and observe those behaviors. 3) Therefore
we can test for and observe the occurrence of feeling in a behaving
entity.”

(3) We can test for and
observe the correlation of those behaviors (and brain-states) with
(reported) feeling.

(4) Therefore we can
provide a causal explanation of those behaviors (T3) (and brain-states,
T4).

(5) We cannot, however,
provide a causal explanation of the feeling which is correlated with
those behaviors (and brain-states). Correlation ≠ Causation (nor does
correlation explain causation).

S.MIRSKY: “The ﬁrst
premise stands on a Wittgensteinian analysis of the concept of feeling
(the way we use the word) and on its implication, that it’s just
unintelligible to imagine we could have perfect replicas of feeling
entities (both in terms of actions and internal functionality) without
the feeling part.”

Whether a feelingless T3
(or T4) robot is imaginable or unimaginable, intelligible or
unintelligible, possible or impossible is *irrelevant* to the question
of whether we can explain how and why we (or a T3 or T4 robot) feel.

(By the way, if a
feelingless T3 or T4 robot is indeed impossible — as I rather suspect
it is — that is an undemonstrated ontic impossibility, not a formal
proof of impossibility, nor even contradictory to an empirical (causal)
natural-law. Hence it is an unexplained impossibility.)

Wittgenstein on private
states and private language in no way resolves or even casts any light
at all on the problem of explaining how and why we feel.

S.MIRSKY: “an account
which speciﬁes the particular functions that need to be performed for
feeling to occur, coupled with the argument that these functions can be
achieved computationally, can then provide a perfectly acceptable
explanation of how and why feeling occurs, i.e., it is just this and
this and this set of tasks being performed by the entity’s relevant
components.”

And the reason the causal
mechanism underlying “this and this and this set of tasks” needs to be
a *felt* one, rather than just a “functed” (i.e., executed,
implemented) one…?

S.MIRSKY: “This does not
guarantee, of course, that a computationalist account is the right one.
But it does provide an explanation that could be true.”

The problem is exactly the
same for a computationalist, dynamicist, or hybrid computational/dynamic
account: The causal account accounts for doing, not for feeling. The
causal explanation that can generate the doing is hence true (and
complete) for doing, but completely empty for the feeling that is
piggy-backing on the doing (if it is).

SWM: On a Wittgensteinian analysis of the word “feeling”, when applied in a public context (i.e., to entities other than ourselves), the term denotes certain behaviors. The fact that we also use the word to refer to the vague (because impossible to particularize) sense of being aware that we have when we’re attending to (are aware of) things, is not intrinsic to the public usage. And it’s the public usage that’s at issue when we’re trying to determine if another entity has feeling (as in “is aware of” what’s going on around it, what’s happening to it, etc.) This issue, I expect, is the one that really divides us. Stevan seems bent on applying a vague term, for an indistinct referent, in a public usage venue where a different application makes sense. We never need access to other human minds to be assured that they have minds (i.e., that they are feeling, as in having experience) when they are in an awake state. And we don’t need to do anything different with regard to other kinds of entities, whether chairs, tables, computers, arthropods, cephalopods, mammals, or aliens from outer space. Demanding something more looks like a mixing of categories which effectively imports a metaphysical problem, that of Other Minds, into a scientific milieu. There is no question that, absent an analysis like Wittgenstein’s, the Other Minds problem appears intractable. But science isn’t about such concerns but about accounting for the phenomena we deal with in the public domain where things are observable (if not in fact, then at least in principle).

SWM: Reports are only part of the story. Creatures that cannot report “I am feeling this, seeing that, etc.” can still be observed to behave in feeling ways which is why we cringe when we see an animal in pain. Our response is to their behavior, not their reports (though reports ARE a sub-class of behaviors).

SWM: On the view I’ve offered, the behaviors are understood as expressive of feeling, not merely ancillary to them. A perfect imitation (right down to the internals) would not be conceivable, even if partial imitations are. The problem is that, because we can conceive of successful partial imitations (lifelike robotic models, convincing computational question answerers, like highly sophisticated Watsons), we think we can also conceive of the so-called philosophical zombie type imitation, the one that passes an open ended Turing test for a lifetime at all levels (verbal, behavioral, internal functionality). But, confronted with such an entity, even you have assured us you would not eat it, torture it, treat it like an inanimate object, etc. So on one level you recognize the inconceivability of feeling behavior without feeling, while on the other you hold out for the Other Minds solution, which is irrelevant to the scientific question of whether machines can have conscious minds (feeling).

SWM: True. The two concepts are distinct and correlation doesn’t explain causation. What correlation does do is provide a tool for imputing causation.

SWM: If a T3 or T4 robot is unintelligible then having the feeling behavior in the right context IS having the feeling, and any explanation for how that feeling behavior is generated explains how the feeling comes about.

SWM: If Wittgenstein’s point about the publicness of language is right, it renders the question meaningless, hence it can neither be possible nor impossible. This is a different question, however, from whether it is possible that minds, consciousness (or what you want to just call feeling) exist outside a physical framework entirely. But for that kind of claim (dualism) to be upheld, I think we would need different information about the world than we currently have. Barring evidence of minds divorced from bodies, our current information seems to accord quite well with the way our language generally works re: questions of mental phenomena in other entities so there’s no reason to look for something non-physical in explaining feeling behavior. But I don’t think you are saying otherwise, which is why I’m surprised at your insistence on asserting the impossibility of causal explanation for feeling.

SWM: It casts light on what we mean by “feel” in the different contexts. I don’t dispute with you that we have a private sphere of experience, nor did Wittgenstein (he often spoke of mental pictures). The issue revolves, rather, around the question of whether a theory that accounts for the causality of feeling behaviors in a machine (or any other entity) explains the occurrence of feeling itself (my reply to your challenge). I am arguing that it does, in the only meaningful sense of this question, and that to suppose otherwise is to shift the underlying ground from the scientific to the metaphysical. Since the questions of cognitive science are manifestly scientific, metaphysical concerns have no role here. Thus an explanation of the occurrence of feeling (your use of that term), in terms of system operations, could work in cognitive science. I agree, though, that if one doesn’t accept the Wittgensteinian solution to the Other Minds problem, THAT issue persists. But my point is that it isn’t a scientific problem any longer.

SWM: “Needs to be”? The reason the entity achieves feeling is because, on this theory, feeling is that certain set of processes which, combining in a dynamic system, collect information at one level, transform it at other levels and link it with other information maintained by the system. If a machine run by such a system is seen to behave in ways that manifest feeling (expressing, by report or behavioral demonstration, an awareness of itself, of other entities, of what it is doing and thinking, etc.), then that is sufficient to tell us that feeling qua sentience is present. If we turn off aspects of the system and the behaviors cease, then we can say that those aspects (those processes performing those tasks) are the causal elements of the feeling and, if we turn them back on and the feeling behavior returns, we have even stronger evidence for this thesis.

SWM: This brings us back to the original problem, i.e., does “feeling” denote only the vague and difficult-to-pick-out sense we have of being subjects, of experiencing, which you describe as “piggy-backing” (epiphenomenalism)? Or does the term denote something(s) in the shared public sphere of our experiences, i.e., the ongoing complex of behaviors we recognize as feeling in others? My view is that the subjective application of the term which you have made paramount is really secondary to (and derivable from) the public application. But if you don’t share that view, it’s not hard to see why we are at loggerheads. However, in that case, my argument comes down to this: Insofar as this is about questions of cognitive science, the metaphysical problem of Other Minds that you have raised hardly seems relevant.

Feeling means feeling,
whether I’m talking about him or about me. When I say I feel tired, I
mean I feel tired. When I say he feels tired, I mean he feels tired —
not that he’s *behaving* tiredly, but that he’s *feeling* tired. By the
same token, if I say he’s lying, I mean he’s lying. Not that he’s
behaving mendaciously but that he’s lying. I may be mistaken. He may
not be feeling tired. He also may not be lying. But I mean what I mean
in both cases. The only difference is that when I say he’s lying,
there’s a way to settle the matter for sure. When I say he feels tired
there’s not.

There’s nothing vague in
any of these four cases: feeling/lying, me/him.

S.MIRSKY: “We never need
access to other human minds to be assured… they are feeling… Demanding…
more… imports a metaphysical problem… Other Minds, into a scientiﬁc
milieu.”

Good thing we don’t need
access to other minds to infer they are feeling!

The other-minds problem is
an epistemic, not an ontic problem, but never mind.

The scientiﬁc problem is
not to be “assured” that others are feeling, but to explain how and why
they feel.

S.MIRSKY: “the Other
Minds solution, which is irrelevant to the scientiﬁc question of
whether machines can [feel].”

The scientiﬁc question is
not *whether* machines can feel but how and why…

S.MIRSKY: “If a
[feelingless] T3 or T4 robot is unintelligible then having the feeling
behavior in the right context IS having the feeling, and any
explanation for how that feeling behavior is generated explains how the
feeling comes about.”

No. If a feelingless T3 or
T4 robot is impossible then if T3/T4 behaves as if it feels, it must
(somehow) be feeling. Fine. Now we know it feels.

Now: How and why does it
feel? (Please don’t reply that the answer is that it would be
impossible for it not to! That’s not a causal explanation.)

S.MIRSKY: “a theory that
accounts for the causality of feeling behaviors explains the occurrence
of feeling itself”

No, a theory (T3/T4) that
accounts for behavior accounts for behavior. Why and how the behavior
is felt needs an account of its own.

S.MIRSKY: “to suppose
otherwise is to shift the underlying ground from the scientiﬁc to the
metaphysical….Thus an explanation of the occurrence of feeling in terms
of system operations could work.”

How have I shifted from
scientiﬁc to metaphysical in asking for a causal explanation of how and
why we feel? If “system operations” theory answers the question, let’s
hear the answer. The trick will be to show how it does the causal work
that wouldn’t be identically done without feelings.

(Don’t remind me that it’s
impossible for T3/T4 to be able to behave exactly as if it feels
without feeling: explain to me why and how it feels — or, if you like,
why and how it’s impossible. [Assume I already agree that it's
impossible: No contest.] No other-minds problem-problem. T3/T4 is
feeling. It is indeed impossible for it not to be feeling. I get a
headache just thinking about it. Now, just explain, causally, how and
why it’s impossible.)

S.MIRSKY: “The reason
the entity achieves feeling is because, on this theory, feeling is that
certain set of processes which, combining in a dynamic system, collect
information at one level, transform it at other levels and link it with
other information maintained by the system.”

And why and how is that
processing, combining, collecting and linking felt?

S.MIRSKY: “If a machine
run by such a system is seen to behave in ways that [behave as if]
feeling… then that is sufﬁcient to tell us that feeling… is present.”

Indeed. Agreed. And now,
the explanation of how and why this deﬁnitely present feeling is
caused…?

S.MIRSKY: “If we turn
off aspects of the system and the behaviors cease, then we can say that
those aspects… are the causal elements of the feeling and, if we turn
them back on and the feeling behavior returns, we have even stronger
evidence.”

Correlation does not
explain causation. If you turn off the correlates of feelings, the
feelings are gone: Now: how and why do the correlates of feeling cause
feelings?

S.MIRSKY: “does
“feeling” denote only the vague and difﬁcult-to-pick-out sense we have
of being subjects, of experiencing, which you describe as
“piggy-backing” (epiphenomenalism)? Or does the term denote
something(s) in the shared public sphere of our experiences, i.e., the
ongoing complex of behaviors we recognize as feeling in others?”

There’s nothing public or
social (or vague or difﬁcult-to-pick-out) about feeling a migraine. And
until it’s explained how and why it’s felt, the only way it can be
described is as piggy-backing (somehow, inexplicably) on the causal
mechanism of T3/T4 capacity.

SH: None of your
comments (which are only about the Chinese room and symbol grounding)
bear on the problem of feeling (“qualia”); but the target essay does.
JS: Yes sir, that is exactly right. But what this shows is that you
have given up on the symbol grounding problem, and are now completely
engaged in a qualia grounding problem. – OK, I said that earlier, and
then said I would post again. I will clarify the above brieﬂy, because
I now realize it must be misread, then take a different cut at the
whole problem. – First, when I said what I said above, what I meant was
that Harnad has given up on grounding symbols, and is now trying to
ground not feelings in general, but Searle’s claimed feeling that
“something is missing”. What is it like to be John Searle? It must
include a feeling that something is missing from the CR. Harnad takes
this as worth addressing. I don’t. It is not an argument. However,
there are other things that Searle says that constitute actual
arguments that I believe are worth addressing, and in fact, that I
believe Searle gets right, in spite of the fact that on the major
question I believe the “systems argument” is 100% conclusive, always
has been, always will be. OK, what could that possibly mean? I make one
claim here, that Harnad has granted above – that computation is itself
a physical and causal process. We will see that that is enough, that it
justiﬁes T2 as the proper test, and computation as the only issue in
cognition. Searle’s CR is a lovely little intuition pump from back in
the day when functionalism was supposed to be the foundation of AI, of
cognitivism, and of philosophy of mind. “Anything that has the function
of X is X”, it was claimed, giving us multiple realizability as a
conjunct of positivism, of T2 as a behavioral test. But the
functionalist claim was not good enough for Searle, and he protests it
in a number of ways. As in the legal doctrine that it’s OK to claim in
a single defense that, “I never borrowed the pot, it was broken when I
got it, and it was intact when I returned it,” it is not necessary
that everything Searle ever claimed about the CR ﬁts together. What I
believe Searle got right is that the claims of functionalism are not
fundamental, they are descriptive but not explanatory. The T2 test is
descriptive but not explanatory, it does not give us a constructive
solution, “Write this program and it will pass T2”. Until and unless, first, something actually *does* pass the T2 test and, second, we are able to learn from this what it is that allows it to pass T2, Searle has a point.

Harnad shifts the argument in this paper from
understanding generally to symbol grounding: “Let us say that unlike
his Chinese symbols, Searle’s English symbols are “grounded” in his
sensorimotor capacity to interact with the things in the world that his
symbols refer to: they connect his words to their referents.” (p 4)
Harnad then moves from this to arguments in favor of T3, and then
against a T4. If you want to ground a symbol, these might be good
moves. However, it is tendentious to assert that grounding symbols
answers Searle’s complaint. I concede I have no real idea how to
directly answer Searle’s complaint, which I do not consider valid in
the ﬁrst place. Ex Falso Quodlibet. But, besides his complaint, Searle
makes an implicit demand – build me a CR! He simply assumes this in the
paper, pretending to do a reductio ad absurdum on it. The reductio
fails but the demand is valid. To repeat the previous point, the
implicit demand is dual

– ﬁrst, show me, second,
explain yourself. Build a machine that passes T2 (or T3 or TX), and
explain just how you did that, how it can be duplicated. Short of doing
the empirical demonstration, we amuse ourselves with discussions of
principles. What could allow a system to pass T2, or T3, or TX, or what
prevents anything from ever passing T2, or T3, or TX? Searle is
entirely reasonable on this. He agrees that machines can pass T2 (etc),
because humans are machines. That is reasonable, but it is still not
explanatory, it is an observation but still not constructive. Searle
has no constructive insight into that. He observes that humans do (pass
T2), and that computers do not (yet, empirically) pass T2. The
questions of what principles we have, and what valid arguments have
been made, and can be made, pro or con regarding whether digital
electronic computers qua computers can ever pass T2 (etc), are what I
believe need to be argued. I will sketch such an argument. Let us take
a computer system and have it attempt T2. Let it fail.* We add
sensorimotor capacities, and then attempt T3. We succeed. Hurrah! Now
we turn off the computer. The robot’s battery is still charged and the
power switch is on, but the robot doesn’t move and now fails T3. What
has changed? Hasn’t Harnad said that it is the combination which is
needed, and haven’t we broken the combination? Well, I claim that the
critical element is the computational element, and the rest is
gingerbread, and we have just shown this. Let’s unpack it further. What
have we done, by turning off the computer? We have removed the ability
to have causal interactions, textual or mechanical. Harnad claims that
mere textual interactions can never convince anyone of anything, but
this ﬂies in the face of the medium you are reading this on right now,
as it ﬂies in the face of the original idea of the Turing Test. Just as
we have granted that computation is itself a physical and causal
process, so is any textual exchange necessarily a physical and causal
process. There may be some difference in degree between T2 and T3, but
not of kind, even the T2 test has sensorimotor aspects. And, it turns
out, those aspects need embedding in a causal sequence. To summarize,
computers can (in principle) pass a T2 because they are the proper sort
of machine, that which can participate in physical, causal
interactions. All the rest is detail. I suggest a separate issue is
whether we ever wanted to ground symbols in the ﬁrst place. Is even
symbolic computation, really symbolic? Well, no, not really. That is,
the only symbols that a computer might be said to crunch are ones and
zeroes. And, truth be known, there are no ones and zeroes inside of
your computer chip. There are (again) physical machineries, circuits,
ﬂipping around bags of electrons, again physical, in a sequential,
causal manner. The symbols are as much a matter of convention and
degree as arms and eyes might be. The T2 test does not suggest the
computer sends only ones and zeroes in its messages, there is already
some kind of arbitrage and minimal levels of interaction speciﬁed. It
is a methodological convenience to say that computation is symbolic –
although of course, the neural network folks might argue even that. To
make a long story short, Turing got this all right seventy years ago.
The Turing Machine reduction is a canonical form of all such related problems. We don’t have to use the canonical form, it is not generally
convenient, but all roads lead to ones and zeroes. In granting that
some machines pass the test of consciousness, Searle leaves open the
idea that some causal machinery other than biological humans may do
so. I suggest the Turing Test (T2) is actually much stronger than it
looks, what it tests is exactly what needs testing, and its main fault
is only what we demand further of it that goes beyond testing, a
constructive answer to “Just how is that done?” rather than just a
descriptive declaration of, “Congratulations you passed!”. I have seen
no result, and few arguments, that computation cannot pass T2. Most of
all, I want to oppose the tendency of my compatriots in compsci to give
up any philosophical argument in favor of engineering to say “OK, we’ll
just imitate until you can’t tell and pass T2 that way, but sure, it’s
not *real* intelligence”. Searle grants that, too, but I think we can
do better. I see no issues of principle, having explained that the
arguments of physicalism are moot since computation is already
physical. Two things remain, a demonstration, and an explanation, and
yes, the explanation had better come ﬁrst, which is why we need to
focus on the right issues and keep up the discussion. What is called
for is “Do this and this, and the resulting program will be just as
intelligent, for exactly the same reasons, as humans.” That was the
original statement of the cognitivist, computational movement, that if
a machine passed T2 it would be for good reason. Much more needs to be said, but perhaps that is enough for now.

*Harnad expresses doubt
that T2 can be passed, but he does so only by breaking the T2 paradigm,
“In fact, without the sensorimotor capacities of a robot, it is not
clear how even the email T2 could be passed successfully: Would it not
arouse immediate suspicion if our pen pal was always mute about photos
we sent via snail mail?” (p 2). Well, I agree this far, that one can
escalate demands on T2 or the CR to any extreme. I’d suggest such
questions as, “How are you today?”, or “Have you changed your mind yet
about what we talked about yesterday?”, or imperatives like “Jump up!”,
or statements of fact like, “Your shoe is untied”, are interesting
tests, but do not change the minimalist validity of T2. In any case,
until T2 is easily passed we already have enough to worry about.
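
A minimal, hypothetical sketch of the point made above about ones and zeroes: a computer’s “symbols” are interpretive conventions laid over physical states, not something the circuitry itself contains. The same four stored bytes can be read, under different conventions, as an integer, a floating-point number, or text (the byte values below are illustrative only):

import struct

# Hypothetical illustration (not part of the original exchange): the same
# four bytes in memory, read under three different conventions. Nothing in
# the hardware marks them as "really" an integer, a float, or text -- the
# interpretation is imposed by the reader.
raw = bytes([0x48, 0x69, 0x21, 0x00])

as_int   = struct.unpack("<I", raw)[0]   # little-endian unsigned integer
as_float = struct.unpack("<f", raw)[0]   # IEEE-754 single-precision float
as_text  = raw[:3].decode("ascii")       # the characters 'H', 'i', '!'

print(as_int, as_float, as_text)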

J.STERN: “Harnad… is now
trying to ground… Searle’s claimed feeling that “something is missing”

Symbols (not “claims”) are
the things that need to be grounded (causally connected) to the things
in the world they refer to (e.g., via T3 robotic capacity).

Grounding symbols is not
enough to give them meaning (nor to make them understood). For that you
need grounding plus what it feels like to mean, or to understand.

That’s what Searle
(correctly) says he would be missing with the Chinese squiggles and
squoggles.

You don’t have to have
written a computer program that passes T2 in order to be able to see
this.

J.STERN: “The T2 test is
descriptive but not explanatory”

Correct. It would be the
explanation of the causal mechanism of the system that could
successfully pass T2 that would be explanatory.

There’s no such explanation
today. There need not be, in order to discern that passing T2 via
computation alone would not be enough.

J.STERN: “a computer…
[plus] sensorimotor capacities [passes] T3…turn off the computer… The
robot’s battery is still charged and the power switch is on, but the
robot doesn’t move and now fails T3. What has changed?”

You no longer have a system
that can pass T3. (That’s why we don’t try to interact with people
while they’re in delta sleep, or a coma, or brain-dead…)

(BTW, I don’t think that
the hybrid dynamical/computation system that passes T3 will just be a
computer plus peripherals, but never mind.)

J.STERN: “by turning off
the computer…[w]e have removed the ability to have causal interactions,
textual or mechanical.”

Indeed. And your point is…?

J.STERN: “There may be
some difference in degree between T2 and T3, but not of kind, even the
T2 test has sensorimotor aspects.”

The kind of difference in
degree that distinguishes apples from fruit: T3
(Turing-indistinguishable sensorimotor capacity) includes T2
(Turing-indistinguishable verbal capacity).

The sensorimotor I/O for T2
is trivial, and nothing substantive hangs on it one way or the other.

J.STERN: “T2 test does
not suggest the computer sends only ones and zeroes in its messages”

The coding is arbitrary
convention and the hardware is irrelevant. What matters is the
algorithm, which is formal.

Nothing hangs on any of
this, one way or the other, for the points I made about T2, T3,
grounding, meaning and feeling.

J.STERN: “I have seen no
result, and few arguments, that computation cannot pass T2.”

I think you’ve missed the
point of the Chinese Room Argument. The point was not that computation
alone could not pass T2. (I happen to think it can’t, and I give
reasons; but neither Searle’s point about understanding, nor mine about
either grounding or feeling depends on whether or not computation alone
could pass T2.) The point was that if computation alone could pass T2,
it would not be understanding — nor would it be grounded, nor would it
be feeling.

B.RANSON: “The crucial distinction between our capacities and those of computer/robots can be brought into focus by considering the distinction made by Searle between “observer-dependent” and “observer-independent” phenomena… [observer-dependent:… money and computers… observer-independent:… metals and plastics… physical processes in our brains, and consciousness]”

S. HARNAD: “Interesting way to put the artificial/natural kinds distinction (except for consciousness, i.e., feeling, which does not fit, and is the bone of contention here); but how does this help explain how to pass the Turing Test, let alone how and why T3 or T4 feels?”

And later B.RANSON: “a robot/computer combination is no improvement on a computer alone. The crucial, observer-independent features are still missing.”

S. HARNAD: “It’s certainly an improvement in terms of what it can do (e.g., T2 vs. T3). And since, because of the other-minds problem, whether it feels is not observable, it’s certainly not observer-dependent.”

Thank you very much for all your stimulating responses, Professor Harnad. I do think that the observer-dependent/observer-independent distinction is important in clarifying this issue, so I must try to explain why. In the last sentence I quoted I believe you are not using observer-dependent in quite the sense which I am borrowing from Searle. If we consider money (as contrasted with metal in coins) it is highly questionable whether that is observable, but its existence is observer-dependent in Searle’s sense. Money is only money when somebody says it is. Also, I don’t think the other-minds problem is much of a problem. We know what causes feeling: it’s our nervous system, and people and (some) animals with nervous systems feel. You would want really good reason to think that a walking, talking electrified pile of metal and plastic was also capable of feeling, without a nervous system, and in the case of computers and robots there isn’t one. I don’t understand why you think consciousness “doesn’t fit”; for me it does. The existence of consciousness is observer-independent; consciousness would be there (say in early mammals) whatever anybody might say about it. The existence of computation is observer-dependent. Something is only computation when somebody says it is; something is only a computer when somebody says it is.

B.RANSON: “Professor Harnad suggests that our robots and computers are able to do a tiny fraction of what we do. My submission is that they are not able to do even that tiny fraction, that what they do do is not even related to what we can do, or at least, not to the relevant part of what we do.”

S. HARNAD: “Let’s not quibble over how tiny is tiny.”

But I wasn’t saying tiny, I was saying none. Computers don’t do any consciousness or feeling at all; what they do isn’t related to feeling. They are in the wrong ontological category to have that kind of causative power. Essentially, a computer is an idea, and a very complex and developed one, so it can hardly be the cause of simple, basic ideas, or feelings.

B.RANSON: “The syntax in the associated computer is observer-dependent, but also the semantics and even the computer’s very status as a computer.”

S. HARNAD: “Syntax is syntax. The fact that it was designed by a person is not particularly relevant. What is definitely observer-dependent is the syntax of an ungrounded computer program. To free it from dependence on an external observer, it has to be grounded in the capacity for T3 sensorimotor robotic interactions with the referents of its internal symbols. Then the connection between its internal symbols and their external referents is no longer observer-dependent.”

The grounding can’t take place because it doesn’t have any internal symbols; it doesn’t have any “internal” at all, except, again, observer-dependently. It’s the observer who decides what constitutes the computer, where it starts and finishes. Syntax isn’t syntax, not in itself: those symbols are only symbolic in the mind of the observer. The electrical currents and moving components are observer-independent, but their identification as symbols is observer-dependent. A feeling being does have an “internal”; there is the feeling, and the thing being felt. In the definition of “cognitive capacity” you kindly provided you say that this is what an “organism” can do. My Concise Oxford Dictionary defines “organism” as “an individual animal, plant, or single-celled life form, or, a whole with interdependent parts, compared to a living being.” A computer or a computer plus a robot is not an organism, not because it doesn’t fit this definition, but because it isn’t an “individual”, or a “whole”, not in the way that matters. That is what is missing.

Yes, but we don’t know how
or why (and I’ve suggested reasons why we never will).

Our nervous system also
causes our cognitive performance capacity, but there’s no reason to
believe we won’t eventually be able to reverse-engineer that so as to
give a causal explanation of how and why we can do what we can do.

The latter is called the
“easy” problem; the former is called the “hard” problem (insoluble, by
my lights).

Well, the former, hardly!
But even if it were so, the problem is explaining how and why observers
feel.

B.RANSON: “Computers
don’t [feel] at all”

Agreed. So how and why do
the kinds of things that *do* feel, feel?

B.RANSON: “symbols are
only symbolic in the mind of the observer”

Agreed. And symbols are
only grounded (i.e., causally connected to their referents) in a T3
robot (or higher); not in just a T2 computer. But that does not explain
why and how it feels like something to have a symbol in mind.

B.RANSON: “A feeling
being does have an “internal”, there is the feeling, and the thing
being felt”

Internal to its body (or
brain) is not quite the same sense of “internal” as in “having
something in mind,” or as in “the thing being felt”…

B.RANSON: “A computer or
a computer plus a robot is not an organism… because… it isn’t an
“individual”, or a “whole” in the way that matters”

We agree about computers.
Maybe we don’t agree about T3 robots. But the problem is not with being
an “individual” or a “whole” but with explaining how and why an
individual or a whole feels (if it does).

Response to Stevan Harnad’s Comments of 2/25/11 @ 18:32

We obviously disagree on the
linguistic issue. If you think that words like “feeling” are used in
precisely the same way when speaking of our experiences and of others’
experiences, then of course you will expect to be at a loss in
ascertaining that what’s going on in me is what’s going on in you. Is
this ontic or epistemic? The Other Minds problem puts us in an odd
position of never being sure about a great deal of the world in which
we ﬁnd ourselves. But, of course, we ARE sure enough (as even you
attest in your reports of how you would behave). So there is this great
conundrum posed by the supposition that, to know that others have a
subjective life like we do, we have to guess, and guessing isn’t always reliable. And, yes, we can guess wrong — but to be able to guess wrong
we have to know what counts as guessing right (or we cannot know when
we have guessed wrong when we do it). So we’re stuck with a sense that
we’re missing direct access to others’ minds which, if only we had,
would make a difference. But, in fact, there is no difference needed.
The issue of lacking direct access to other minds goes nowhere on a
scientiﬁc level and Wittgenstein unpacks it philosophically so that it
loses its force. If you don’t embrace that view, then I suppose it will
continue to seem an obstacle. But it still has no more bearing on the
scientiﬁc question of what it takes to produce a feeling mind than the
absence of such indubitable certainty has on any other scientiﬁc
question.

You wrote: “The scientific problem is not to be ‘assured’
that others are feeling, but to explain how and why they feel.” But a
description of the processes that lead to the occurrence of feeling in
another entity is enough to explain the how and the why (if the theory
bears out empirically, of course); it’s no more a problem to test for
feeling in a machine than in another human being. We look at the same
kind of phenomena. Now, it just looks like we are each repeating
ourselves though so perhaps we shall just have to accept that ours are
markedly different understandings of what understanding is. You wrote:
” If a feelingless T3 or T4 robot is impossible then if T3/T4 behaves
as if it feels, it must (somehow) be feeling. Fine. Now we know it
feels. “Now: How and why does it feel? (Please don’t reply that the
answer is that it would be impossible for it not to! That’s not a
causal explanation.)” I have replied previously (in response to the
challenge) that the how/why questions can be answered by a theory that
proposes that feeling is just the occurrence of so many layered
processes (of a computational type) performing certain functions in an
interactive way — and I’ve spelled out the kinds of functions and
layering I think such a system would need. Not in great detail, of
course, because all one can do in discussions like this is speak in
general terms. Actual speciﬁcs must come from actual system designers
and implementers and, as there are many ways to do this, it follows
that they might not all work (indeed, none may work). But here I am
only required to say what WOULD sufﬁce to provide an explanatory
description (if it worked) and that is what the thesis I’ve proffered
does. I think this all hinges on what may be the differing ways in
which you and I are conceptualizing this thing you call “feeling”. On
my view, when I consider my own subjective life introspectively, once I
get past the obvious givens of being a self, of having awareness of
things, of recognizing meaning and the like, I see nothing that cannot
be broken down into more basic functions. That is, it seems to me this
can be explained adequately in a systemic way, that “feeling” need not
be presumed to be some special feature of the universe rather than the
outcome of a lot of things a brain does. I have to conclude, given our
failure to agree on this very basic issue that this is NOT how you see
“feeling” at all. So our disagreement boils down to what may just be
competing conceptions of mind.

You wrote: “a theory (T3/T4) that accounts for behavior accounts for behavior. Why and how the behavior is felt needs an account of its own.”

If feeling in others is always known through behavior, and there are certain behaviors which, in the right context, represent or express feelings to us, then all we have to look for in order to test the theory is behavior in context; a theory about what processes like those we run on a computer can do in the right configuration is therefore testable. So one cannot plead non-testability. Of
course, that’s a different issue from whether the theory can explain
feeling as I think it can.

You asked: “How have I shifted from scientific to metaphysical in asking for a causal explanation of how and why we feel?”

But I have
already given such an explanation by proposing that mental features,
say understanding, may just consist of certain processes (refer back to
my road sign anecdote) which are plausibly performed by computational
processes and brain processes. You have deﬁned “understanding” as
grounding + feeling requiring the robotic (T3) model. I have replied
that, on my view, a T2 model can achieve sufﬁcient grounding (an issue
we’ve since left behind). But this doesn’t seem to be the real crux of
our difference. Rather it’s that the feeling part in your deﬁnition,
which Searle and you are (to my mind, rightly) insistent on, can be
explained as the way(s) in which different subsystems within the
overarching system interact, i.e., the self subsystem recognizes
(through the same kinds of associative operations already alluded to)
the occurrence of inputs captured and passed through various other
distinct subsystems. This recognition is pictured as relational to the
self subsystem, i.e., as with the other systems, it builds
representations of its “observations”. I don’t argue that the brain
isn’t complex or that a computer type platform wouldn’t have to achieve
an equivalent complexity to do what brains do. I only argue that we can
conceivably explain even the occurrence of feeling in this kind of
systemic way. The reason I say you’ve shifted from science to
metaphysics is because science depends on observations while you have
imported an unsolvable metaphysical problem (because of the absence of
observability) which only seems to muddy the waters. Of course, this
doesn’t look like a real problem to me because of my Wittgensteinian
orientation but, even if it does look like such a problem to others, it
still has no bearing on what science needs to do in order to ﬁgure out
what it is that brains do which produces consciousness.

You added: “explain to me why and how it feels — or, if you like, why and how it’s impossible. [Assume I already agree that it's impossible: No contest.] No other-minds problem-problem. T3/T4 is feeling. It is indeed impossible for it not to be feeling. I get a headache just thinking about it. Now, just explain, causally, how and why it’s impossible.)”

What I say is that it makes no sense to suppose that an entity behaving
in a feeling way within a sufﬁcient testing regimen isn’t feeling. I
don’t deny we can have behaviors that fool us in limited situations. My
denial is that it can be intelligible to suppose that an entity, ANY
entity, that passes the lifetime Turing Test you have speciﬁed is
missing the underlying feeling — because my position is that that’s all
it means when we ascribe feelings to other entities. I agree that that
ascription implies that we are presuming that the entities in question
experience as we do. But that presumption is a valid one and poses no
problem for cognitive science. Anyway, above I have offered a sketch of
a theory for why and how an entity with feeling behaviors can be
presumed to have feelings. My point is that the absence of direct
access poses no problem for testing the theory.

You wrote: “There’s
nothing public or social (or vague or difﬁcult-to-pick-out) about
feeling a migraine. And until it’s explained how and why it’s felt, the
only way it can be described is as piggy-backing (somehow,
inexplicably) on the causal mechanism of T3/T4 capacity.” There are
different kinds of feeling, of course. A migraine is a physical
sensation while the feeling connected to instances of understanding is
something quite different. I have said previously that a machine mind
might not feel as we do and by that I meant that it might not have
physical sensation or, if it does, it might be nothing like our
physical sensations. After all, its medium and sensory apparatuses will
be of a very different type than ours. But the real issue you’ve raised
is the one about feeling in instances of understanding. And that, as my
experience on the road up from the Carolinas shows (I hope), is
something quite different, i.e., it’s being aware of mental pictures
and how they relate to other mental pictures. So I think we have to be
very careful here, even with your word of choice, “feeling”, since it
doesn’t always denote the same phenomenon. It’s clear we have very
different views and I thank you for the chance you’ve provided for me
to understand yours a little better — and the opportunity to raise some
questions and offer my own view. I suspect that the reason we’re at
loggerheads is that we have a very deep difference in how we think
about consciousness.

S.MIRSKY: “we’re missing
direct access to others’ minds which, if only we had [it], would make a
difference”

Not a difference to what
I’m saying: I think readers will ﬁnd this tedious, but all I can do is
repeat it till it is taken on board: I am talking about explaining how
and why we feel, not just how and why we do. And for that, it would not
help even if we had a God’s-eye view of whether (or even what) others
feel. As I said before: Knowing WHETHER (and WHAT) ≠ knowing HOW and
WHY. So why are we again speaking about the other-minds problem — and
whether?

S.MIRSKY: “a description
of the processes that lead to the occurrence of feeling in another
entity is enough to explain the how and the why”

Till further notice, the
processes that lead to feelings happen to be processes that lead to
doings (bodily doings in T3 and both bodily and brain doings in T4). To
explain how and why we feel requires explaining how and why those
doings are felt doings. Otherwise all you have is a correct, complete
explanation of doings, with which the feelings are inexplicably
correlated.

S.MIRSKY: “I have
replied previously… that the how/why questions can be answered by a
theory that proposes that feeling is just the occurrence of so many
layered processes (of a computational type) performing certain
functions in an interactive way — and I’ve spelled out the kinds of
functions and layering I think such a system would need.”

And I’ve replied previously
that this all sounds like a just-so story, not a causal theory
explaining how and why we feel.

So, really, Stuart, we will
need to stop repeating this. I understand that you feel you have a
causal explanation, and I think you understand that I feel you do not.

S.MIRSKY: “I see nothing
that cannot be broken down into more basic functions… “feeling” need
not be presumed to be some special feature of the universe rather than
the outcome of a lot of things a brain does. I have to conclude, given
our failure to agree on this very basic issue that this is NOT how you
see “feeling” at all.”

Correct.

S.MIRSKY: “feeling in
others is always known through behavior”

Yes, feelings are known
through behavior, but feelings are not behavior.

S.MIRSKY: “feeling… can
be explained as the way(s) in which different subsystems within the
overarching system interact…”

Nope, I don’t see this at
all, and I really don’t think anything is gained by continuing to
repeat it: There is no need to come back and revisit this point…

S.MIRSKY: “you’ve
shifted from science to metaphysics… because science depends on
observations while you have imported an unsolvable metaphysical problem
(because of the absence of observability) which only seems to muddy the
waters.”

And I thought all I’d said
was that we clearly don’t just do but feel; but then it seems a
perfectly reasonable and natural question to ask: how and why do we
feel?

How is that a shift from
science to metaphysics? Even my reasons for thinking the question will
not be answerable do not invoke metaphysics. It just looks as if a
causal theory of doing covers all the available empirical evidence, and
yet it does not explain feeling. I’m not saying feeling is magic or
voodoo: just that we can’t explain how and why we feel. We do feel. I’m
sure our brains cause feeling, somehow: I just want to know how — and
why (because otherwise feeling seems utterly superﬂuous, causally).

S.MIRSKY: “it makes no
sense to suppose that an entity behaving in a feeling way… the lifetime
Turing Test… isn’t feeling… because… that’s all it means when we
ascribe feelings to other entities… I agree… we are presuming that
[they feel] as we do… [A] machine mind might not feel as we do… it
might not have physical sensation or… nothing like our[s]…”

This is beginning to sound
a bit incoherent to me: We can tell that others feel. The lifetime TT
is the way. But they may not feel as we do; they might not even have
sensations at all. (Are we still within the realm of the “intelligible”
here? I thought “behaving feelingly” was all it took, and all there was
to it — on condition that the right internal subsystems interact, etc…)

S.MIRSKY: “But the real
issue you’ve raised is the one about feeling in instances of
understanding.”

No, the issue I raised was
about feeling anything at all.

S.MIRSKY: “I suspect that the reason we’re at loggerheads is that we have a very deep difference in how we think about [feeling]…”

…and about explanation!

S. HARNAD: “Well, the former, hardly!”

No really! That is an important part of the point I was trying to clarify. Feeling is observer-independent in the relevant sense, and if you think it isn’t then you are misunderstanding the intended meaning of “observer-dependent”. Yes, it is an inherent characteristic of feeling that it is felt by someone, but that is not observer-dependency in the relevant sense.

S. HARNAD: “But even if it were so, the problem is explaining how and why observers feel.”

I don’t think there is a “why”. As for how, my point is that it isn’t anything to do with computation. That’s got to be a helpful contribution to the search for an explanation: if you are looking to explain the how of consciousness through computation, you are looking in the wrong place.

B.RANSON: “symbols are only symbolic in the mind of the observer”

S. HARNAD: “Agreed. And symbols are only grounded (i.e., causally connected to their referents) in a T3 robot (or higher); not in just a T2 computer. But that does not explain why and how it feels like something to have a symbol in mind.”

But they are also only “grounded” in the mind of the observer. This grounding, the symbols, the connections with the referents and the referents themselves, are all observer-dependent. Whereas any grounding that is involved in having a symbol in mind is not observer-dependent.

B.RANSON: “A feeling being does have an “internal”, there is the feeling, and the thing being felt”

S. HARNAD: “Internal to its body (or brain) is not quite the same sense of “internal” as in “having something in mind,” or as in “the thing being felt”.”

No, and I’m not talking about that sense, not talking about physical internality. I’m saying that the very occurrence of feeling creates the
distinction we describe as “internal/external”. But because
computers/robots don’t have any feeling at all, they don’t have the
basis for that distinction.

B.RANSON: “A computer or
a computer plus a robot is not an organism… because… it isn’t an
“individual”, or a “whole” in the way that matters”

S. HARNAD: “We agree about computers. Maybe we don’t agree about T3 robots.”

Perhaps we will, when you have taken the point of the observer-dependent/independent distinction.

S. HARNAD: “But the problem is not with being an “individual” or a “whole” but with explaining how and why an individual or a whole feels (if it does).”

The point of nearly everything I’ve been saying, I think, is that it has to be an (observer-independent) individual or whole first. That is a major element of the answer to the “how” question. An organism that can be conscious somehow is the relevant (observer-independent) kind of “whole” or “individual”, and a computer/robot isn’t. That’s what is missing; that’s what we should be looking for. And it is missing in computers and robots, so we aren’t going to find it there.

This reply will illustrate
yet another reason why it is so important to stick rigorously to the
spare Saxon word “feeling” — rather than any of the countless other
elaborate and equivocal synonyms, euphemisms, paralogisms and redundant
variants for “consciousness” — if we want to keep ourselves honest
whilst trying to sort out what’s what, how and why.

In particular, one of the
most common bits of self-delusion one falls into when one lets
entity-names proliferate beyond necessity is the profound equivocation
gap between “observation” and “*felt* observation.”

This is *exactly* the same
thing as the (to my mind) nonsense that has built atop the incoherent
distinction between (AC) “access consciousness” and (PC) “phenomenal
consciousness.”

The difference between the two putative “consciousnesses,” AC/PC, is immediately seen to be merely verbal (and vacuous) when one transcribes it thus: “access” vs. “felt access.” For unfelt access is no kind of “consciousness” at all: it’s just data-flow — or, even more objectively, it’s just dynamics, event-flow.

You could say that a
computer has access to signals from the Mount Palomar telescope; but
that’s no kind of “consciousness” at all. And remains so until and
unless the access becomes *felt* access.

So all you needed was the
“felt”; the “access” has nothing to do with it. (So much for “AC”: and
“PC” just becomes feeling!)

Ditto for “observation”:
You can talk about “experimental observations,” but (as even the
quantum foundationalists will tell you, in puzzling over their own
“hard problem”), an experimental datum really only becomes an
observation if/when a human observes it. And humans feel: It feels like
something to observe. Otherwise there is only data-collection and
data-processing going on — or, rather, just dynamics, events.

(I hope no one will take
this as an invitation to digress into the alleged role of consciousness
in the collapse of the quantum wave-packet: *Please* let’s not go
there: We have enough koans of our own to contemplate!)

The reason for all this
preamble is to prepare the way to pointing out that Bernie Ranson’s
reliance on an “observer-dependent/ observer-independent” distinction
is deeply equivocal when discussing feeling itself, and the causal
status of feeling.

Yes, whether or not
something is a chair or some other object we invent or use is
observer-dependent; so is whether or not we agree to call it a chair.
Yes, whether squiggles and squoggles mean my weekly salary, as
calculated by a payroll program, is observer-dependent, and so is
“salary.”

And, yes, whether or not
“squiggle” refers to chairs (or anything at all) is observer-dependent,
even when it’s grounded in a T3 robot.

But the buck stops with
feeling itself. Feeling *is* observation. Whether what we feel is
veridical is what we usually like to call an “observer-independent”
truth. (Is it really raining outside, or am I misreading what I seem to
see from my window? Do I really have something wrong with my tooth, or
is my toothache just referred pain from conjunctivitis?)

But feeling itself is just
about as observer-*dependent* as you can get, even though the fact that
it *feels* like whatever it feels like is beyond any doubt (as
canonized in the Cogito).

So with every feeling there
are two questions we can ask: (1) Is there an external,
observer-independent state of affairs that *is* like what this feeling
makes it feel as if there is? (Never mind the problem of
incommensurability here; interesting, but not really relevant;
“reliably correlated” will do just as well as “resembles”; and, no,
Wittgenstein on “private language” does not settle this matter either,
because now we are talking about public language and public
correlations.) That’s the thing we’re usually talking about when we
talk about what we do or don’t have “access” to.

But then there’s the
question: (2) Is there anything going on that feels like anything at
all, whether veridical or not? That’s the real question of feeling. And
it’s the causal basis of *that* that my persistent how/why questions
keep insisting on. Not whether we have veridical access to facts about
the world: The T3 story can take care of all that without any need of
extra help from feeling. And that’s the point!

B.RANSON: “Feeling is
observer-independent in the relevant sense, and if you think it isn’t
then you are misunderstanding the intended meaning of
‘observer-dependent’”

Bernie, you seem to be
getting a lot of intuitive mileage out of this notion of
“observer-dependence/independence.” What’s at issue here, however, is
not that, but whether the observation is felt or unfelt.

B.RANSON: “I don’t think
there is a “why” [we feel]…”

Well, it would be odd if
something as ubiquitous as feeling did not have any neural or adaptive
function, would it not? Is it not natural to ask why everything
wouldn’t work just as well without it? (And by “work” I mean causality
— everything for which T3 and T4 *can* explain how and why we do it.
Not only does T3 and T4 explanation leave out the explanation of the
fact that we feel, completely, but there does not seem to be any
(independent, causal) room to include feeling, even if we wanted to.)

B.RANSON: “As for how…
if you are looking to explain the how of [feeling] through computation,
you are looking in the wrong place.”

We’ve already agreed on
computation’s shortfall. Searle’s Chinese Room Argument shows (and the
symbol grounding problem explains) that even if T2 could be passed by
computation alone, cognition is not just computation, because
computation alone leaves feeling out.

But even if you add
dynamics and grounding (T3, T4), you still have not explained feeling.
So if that’s the wrong place to look too, there’s no place else!

B.RANSON: “grounding…
the symbols, the connections with the referents and the referents
themselves… all are observer-dependent… [But symbols] are also only
“grounded” in the mind of the observer… any grounding that is involved
in having a symbol in mind is not observer-dependent.”

The capacity of a T3 robot
to pick out apples when it tokens “apples” internally (along with all
the other word/world interactions empowered by T3) is just the
grounding of robot-internal symbols. Having a symbol “in mind,” in
contrast, requires having a mind, i.e., feeling. That’s no longer just
a matter of symbol-grounding.

And, frankly, the issue of
“observer-dependence/independence” casts very little light on whether
things are felt or unfelt (pace both Wittgenstein and Searle), let
alone how or why.

(I think the
misapprehension that the observer-dependence/independence
distinction might somehow prove helpful here is, once again, a symptom
of the free proliferation of equivocal synonyms, euphemisms,
paralogisms and redundant variants for “consciousness” — in this case,
“intentionality”: In a nut-shell: T3 grounding guarantees only unfelt
“aboutness,” whereas having a mind requires felt “aboutness.” So, as
usual, it is not intentional/nonintentional that is the “mark of the
mental,” but felt/unfelt.)

B.RANSON: “the very
occurrence of feeling creates the distinction we describe as
“internal/external”. But because computers/robots don’t have any
feeling at all, they don’t have the basis for that distinction.”

There are, again, two
(hence equivocal) “internal/external” distinctions. The unproblematic
one (states occurring inside vs outside a computer, robot or person)
and the problematic one (felt vs unfelt states).

And you prejudge the matter
(and beg the question) if you assume that “robots” don’t feel, since
robots are just autonomous causal systems, and hence surely feeling
organisms are robots too.

(If by “robot,” however,
you just mean a computer plus I/O peripherals, I agree that just a
computer plus peripherals probably could not pass T3, let alone feel.
The dynamics in the hybrid dynamic/computational T3 robot will probably
have to be a lot deeper than just add-on peripheral I/O devices.)

B.RANSON: “The point of
nearly everything I’ve been saying… is that it has to be an
(observer-independent) individual or whole ﬁrst. That is a major
element of the answer to the “how” question. An organism that can
[feel] somehow is the relevant (observer-independent) kind of “whole” or
“individual”, and a computer/robot isn’t.”

I hear all the language
about “observer-independence/dependence” but I am not getting any
causal (how/why) insight (or even clues) from any of it…

Reply to Stevan Harnad’s response of 2/26/11 @ 22:02

S.MIRSKY: “we’re missing direct access to
others’ minds which, if only we had [it], would make a difference” SH:
Not a difference to what I’m saying: I think readers will ﬁnd this
tedious, but all I can do is repeat it till it is taken on board: I am
talking about explaining how and why we feel, not just how and why we
do. And for that, it would not help even if we had a God’s-eye view of
whether (or even what) others feel. As I said before: Knowing WHETHER
(and WHAT) ≠ knowing HOW and WHY. SWM: Then, if nothing but a God’s eye
view would help, it’s not a scientiﬁc question and belongs in a
different arena, namely that of metaphysics. Funny, though, but I have
the same sense about this as you: that all I can do is repeat my point
until it’s understood. But I’ve been down this path before and it
doesn’t seem that repetition does any more good than argument. It has
to do with our apparently holding very different conceptions of
consciousness (or feeling or whatever term we choose to use for it).
You speak of a difference between “knowing whether” and knowing why or
how, as if these were simple distinctions. But there are lots of ways we
can ask why and/or how. Why is the sky blue is a different kind of why
question from, say, why is the earth the third planet from the sun or
why do we love our parents or why did you drink the scotch instead of
the bourbon or why do humans have the kinds of brains they do or
opposable thumbs and so forth. So asking why certain kinds of processes
produce a feeling is analyzable in more than one way. In the scientiﬁc
way it’s very much answerable by hypothesizing that feeling itself is
just processes doing certain things at bottom — and then determining
which kinds of things must be done and what kinds of processes it takes
to do them, how they need to be arranged — and then we implement the
system that meets those criteria and test it. But if one starts with
the assumption that feeling is not process, cannot be, then obviously
THAT kind of hypothesis seems excluded and any explanation based on it
seems like it must be the wrong kind. SH: So why are we again speaking
about the other-minds problem — and whether? SWM: Because you bring it
up elsewhere in relation to this (most recently in response to Mr.
Ranson) and because the claim that we cannot see the other’s mind and
therefore cannot explain it depends on that Other Minds problem
presumption. But, of course, we don’t have to be able to directly
access the minds of others to explain their occurrence (how they are
produced and so forth). S.MIRSKY: “a description of the processes that
lead to the occurrence of feeling in another entity is enough to
explain the how and the why” SH: Till further notice, the processes
that lead to feelings happen to be processes that lead to doings
(bodily doings in T3 and both bodily and brain doings in T4). To
explain how and why we feel requires explaining how and why those
doings are felt doings. Otherwise all you have is a correct, complete
explanation of doings, with which the feelings are inexplicably
correlated. ~~~~~~~~~~~~~~~~~~ S.MIRSKY: “I have replied previously…
that the how/why questions can be answered by a theory that proposes
that feeling is just the occurrence of so many layered processes (of a
computational type) performing certain functions in an interactive way
— and I’ve spelled out the kinds of functions and layering I think such
a system would need.” SH: And I’ve replied previously that this all
sounds like a just-so story, not a causal theory explaining how and why
we feel. SWM: I think this has to do with your conception of
consciousness (though you call it “feeling”) which assumes it is a
bottom line type of thing, not analyzable into the physical processes
of the brains which produce consciousness. Because if you could see it
as analyzable, could for a moment step away from the idea that it is
some special feature of the world which requires a “God’s eye” view to
access in all its manifestations, then I think you would have no problem
with the idea that consciousness is unpackable into elements that are
not, themselves, conscious. But I admit your view is very strong in all
of us, a kind of basic intuition we have of ourselves. It is hard to
shake off. SH: So, really, Stuart, we will need to stop repeating this.
I understand that you feel you have a causal explanation, and I think
you understand that I feel you do not. SWM: Yes. I don’t see how either
of us can bring the other around. The difference in our views is deeply
rooted in what I take to be competing intuitions. That is, we all of us
have the intuition that consciousness is a uniﬁed irreducible something
based on how we see our own experience. But I think there is another
intuition at work, i.e., the one that tells us that the physical world
is paramount, that we are elements in that world. What happens when
these two intuitions meet, like opposing polarities that repel?
Sometimes we get philosophy and usually one intuition proves dominant.
The right move, though, is to ﬁnd a way to balance both. S.MIRSKY:
“feeling in others is always known through behavior” SH: Yes, feelings
are known through behavior, but feelings are not behavior. SWM: I
agree. But certain behaviors ARE consonant with what it is we mean when we ascribe feelings to others in public space. S.MIRSKY: “feeling… can be
explained as the way(s) in which different subsystems within the
overarching system interact…” SH: Nope, I don’t see this at all, and I
really don’t think anything is gained by continuing to repeat it: There
is no need to come back and revisit this point… SWM: I repeated it
because you repeated your challenge to me to offer an explanation even
as you focused on my talking about how we recognize feeling. My view of
how we recognize feeling (what we mean by its ascription) certainly
informs my proposal as to how best to explain feeling. But they are not
the same. Hence, I have found myself obliged to repeat my explanation
periodically in response to your suggestion that I haven’t offered one.
I grant that you do not ﬁnd my explanation congenial and I have
suggested why I think that is above. But my only point is that it IS an
explanation and it is based on a series of points that can be
elucidated here in a reasoned way. S.MIRSKY: “you’ve shifted from
science to metaphysics… because science depends on observations while
you have imported an unsolvable metaphysical problem (because of the
absence of observability) which only seems to muddy the waters.” SH:
And I thought all I’d said was that we clearly don’t just do but feel;
but then it seems a perfectly reasonable and natural question to ask:
how and why do we feel? SWM: A question to which I have proposed an
answer (though my answer depends on understanding feeling as the
workings of certain kinds of processes such as those found in brains —
now if brain processes are capable of producing feeling it only remains
for us to see how they do it; computational operations at some level
are at least a reasonable possibility). SH: How is that a shift from
science to metaphysics? Even my reasons for thinking the question will
not be answerable do not invoke metaphysics. It just looks as if a
causal theory of doing covers all the available empirical evidence, and
yet it does not explain feeling. I’m not saying feeling is magic or
voodoo: just that we can’t explain how and why we feel. We do feel. I’m
sure our brains cause feeling, somehow: I just want to know how — and
why (because otherwise feeling seems utterly superﬂuous, causally).
SWM: But if we can explain it physically, in terms of a given set of
processes doing a given set of things, then we do know how and why. The
only reason to think this kind of explanation couldn’t explain it is to
suppose that feeling is inherently mysterious because it is a-physical.
But there is no reason to think it is since it is never discovered
apart from some physical platform (i.e., brains). Therefore there is
good prima facie reason to look at the physical activities of brains.
And if we do that, then unless we want to say that feeling springs like
Athena from the head of Zeus into existence, the only other alternative
is to suppose it is a composite of some things that are not,
themselves, feeling. And what is more manifestly not-feeling than
purely physical things? (Questions of panpsychism being placed on hold
for the moment.) S.MIRSKY: “it makes no sense to suppose that an entity
behaving in a feeling way… the lifetime Turing Test… isn’t feeling…
because… that’s all it means when we ascribe feelings to other
entities… I agree… we are presuming that [they feel] as we do… [A]
machine mind might not feel as we do… it might not have physical
sensation or… nothing like our[s]…” SH: This is beginning to sound a
bit incoherent to me: We can tell that others feel. The lifetime TT is
the way. But they may not feel as we do; they might not even have
sensations at all. (Are we still within the realm of the “intelligible”
here? I thought “behaving feelingly” was all it took, and all there was
to it — on condition that the right internal subsystems interact, etc…)
SWM: I carefully distinguished between feeling as sensation and feeling
as awareness. This is where your insistence on the term “feeling”
misleads. You have claimed that other words are “weasely” while
“feeling” isn’t. But what makes them “weasely”, you have told us, is
that they have different meanings (“awareness” suggests to you paying
attention). But note that “feeling” doesn’t get off scot free in this
analysis. A feeling can be a certain mood we ﬁnd ourselves in, a
particular physical sensation, or the awareness of things that comes
with experiencing. “Feeling” has other uses, too, as in feeling our way
in the dark, i.e., reaching out for familiar things, and so forth. The
distinction I made in the text you quoted above from was feeling as in
having blinding headaches (migraines) and feeling as in being aware of
the information we had taken in (as in Searle’s sense of understanding
what the symbols in the Chinese Room meant). S.MIRSKY: “But the real
issue you’ve raised is the one about feeling in instances of
understanding.” SH: No, the issue I raised was about feeling anything
at all. SWM: As I point out above, there is feeling and there is
feeling and the uses we put that word to don’t all mean the same thing.
The feeling relevant to the Chinese Room scenario, which you have
previously invoked, has to do with the sense of the man in the room
that he gets it. Any of us may lack certain sensation capacities.
Indeed, we may have some of those capacities deadened through
anaesthesia, and yet still retain feeling qua awareness. I think it’s a
mistake to fix too rigidly on a single term when dealing with this mental
area because the referents are slippery here, because of their
non-public provenance. We have to keep repeating and clarifying to
avoid falling into confusion. S.MIRSKY: “I suspect that the reason
we’re at loggerheads is that we have a very deep difference in how we
think about [feeling]…” SH: and about explanation! SWM: That could be
though I think it more unlikely. I suspect we would both agree on most
uses of the term. But it’s clear we cannot ﬁnd a lot of common ground
on matters of mind.

Emendation: Ah, sorry,
in the above I misread your reference to a “God’s eye view” so that
part of my response should be disregarded. I won’t bother to re-write
though as there is quite enough dialogue between us as of now. Unless
it becomes an issue down the road I will leave this as the error it is
and rely on the rest of what I said in my response to you. Thanks for the (hoped-for) forbearance.

(Stuart, could I ask you,
please, if you continue responding, to pare down your quotes to just
the necessary essentials, rather than quoting the entire verbatim
dialogue each time, with a few small interpolated responses? Neither I
nor ConOn readers can wade through all that text every time, especially
after there has already been so much repetition. Quote/commentary is
good, but please apply Occam’s Razor: It might even help focus
thoughts…)

S.MIRSKY: “But if one
starts with the assumption that feeling is not process, cannot be, then
obviously THAT kind of hypothesis seems excluded and any explanation
based on it seems like it must be the wrong kind.”

No causal hypothesis is
excluded. “Feeling is process” is not a causal hypothesis — or if it
is, it calls for a causal explanation: How/ why are some processes
*felt* processes? (With your substitution, this would become “How/why
are some processes *processed* processes?” That, I think, lays the
question-begging bare.)

S.MIRSKY: “I carefully
distinguished between feeling as sensation and feeling as awareness.
This is where your insistence on the term ‘feeling’ misleads.”

And, as I’ve replied many
times, I think this is just equivocation on quasi-synonyms and the
multiplication of pseudo-entities. Sensorimotor “processes” can be felt
(seeing, hearing, smelling, tasting, touching) or merely “functed”
(optical, acoustic, molecular, mechanical input processing), so can
other internal states (emotion, motivation — which, if unfelt, are
merely activations of various functional sorts and sign), including
thoughts (e.g., what it feels like to think and mean that “the cat is
on the mat”).

All feelings feel like
something. If I am aware that I have a toothache or that the cat is on
the mat, that just means I am feeling what it feels like to have a
toothache or what it feels like to see or think or mean that the cat is
on the mat.

It is not my insistence on
the term “feeling” that misleads, but the long tradition of imagining
(and talking as if) “feeling X” and “being aware of X” were two
different kinds of things, rather than one and the same thing. (The
rest is just about whether X is or is not true, and if so, how we
access and act upon that datum; i.e., the rest is just T3!)

Reply to Stevan Harnad’s Comments of 2/28/11 @ 9:27 AM

You have a point about paring down. I
have been trying to walk a ﬁne line, aware that failure to quote fully
may leave something important you have said out — or lead to changes in
meanings (as, I fear, I think some of your elided quotes from things
I’ve said have sometimes done). But I am painfully aware of the
downside of extensive quoting, especially in a forum like this, which
you rightly point out. So I will try to be more selective in
responding. I don’t get your point that it is “question begging” to
suppose that feeling is explainable by treating it as processes and
then to offer an explanation in terms of processes. After all, isn’t it
equally question begging to suppose that it isn’t, and then to deny any
possibility of such an explanation on the grounds that it isn’t? In
either case what’s happening is that an underlying assumption is being
deployed to address a particular thesis, except that they are opposite
assumptions. What’s really in play here is the disputed assumptions,
no? Can a phenomenon like consciousness be explained mechanistically or
must something else be invoked to explain its occurrence?

The Challenge Redux

Your challenge was for someone to explain how and why what you
call “feeling” (and I prefer to call “awareness”) happens in a system
like a T3. Your claim is that no one can do that. I responded by
suggesting, ﬁrst, that the distinction between the T2 and T3 standards
is likely not essential to achieving the desired result because
grounding can be done entirely within a symbols framework. Now we
haven’t really dealt with that and probably don’t need to at this point
because the main issue dividing us really is the one to do with
“feeling” as you have rightly noted. So what’s really key here is the
question of explaining feeling itself, the point of your challenge. I
have argued that feeling can be explained computationally (whether that
is the RIGHT explanation is a different question) since it can be
determined to be present behaviorally, hence non-access to it in other
entities poses no special problem in a testing regimen. Against my
view, that computational explication is at least feasible, you have
suggested that no amount of computational description can satisfy
because all it can ever tell us is what observable events will result
from whatever operations are performed by the computing (or any)
mechanism in question. So here we have the sharp difference between us.
I’ve said that observed operations (behaviors) are all that’s required
to recognize that feeling is present in the behaving entity and
therefore, on my view, a description of a process-based system that
produces what we will both agree is feeling behavior, is enough to tell
us how feeling occurs. But you have demurred, on the grounds (if I am
reading you right) that feeling remains inaccessible to the outside
observer so all we can ever know about the behaving entity is that it
is operating LIKE a feeling entity. On your view, something IS still
left out, namely the feeling of the feeling which only the particular
observed entity itself can ever have — just as only we can feel our
feelings. I think there is some confusion in this. If “feeling” is
ascribed on the grounds of observed behaviors, then we know all we need
to know. So nothing is left out. And if nothing is, then an explanation
that describes how certain processes (whatever they are) combine to
produce feeling behavior (meeting the lifetime TT standard) is enough.
Such an explanation could be the wrong one. But the issue here is not
whether it’s right or wrong but whether it could explain “feeling”.

On the Matter of Terminology

Here I think is where a lot of the difficulty
in sorting out our opposing views may be found. I am aware of your
claim that all words referring to the mental aspect of our subjective
lives, except “feeling”, smack of equivocation. I profoundly disagree.
As I’ve pointed out, your “feeling” works in the same way. We can agree
to stipulate our meaning, whether for “feeling” or “awareness” or
“consciousness” or “intentionality” and so on and so forth and this
will surely help in any discussion, though it probably won’t prevent
our slipping and sliding on the meanings nonetheless. That’s because
words about the mental aspects of our lives, derived as they are from a
public venue, are inevitably going to be slippery because they’re being
used at a remove from that venue. Witness your own shifting between
“feeling” meaning sensations (as in migraine headaches) and “feeling”
meaning comprehending, as in the man in the Chinese Room having the
feeling of knowing what the symbols he is reading mean. It is certainly
fair to say both instances can be characterized as “feeling”, as you
have done, but they are surely not the same thing. Moreover, I think it
is a mistake to create a word like “functed” to substitute for “felt”
when you want to avoid using the latter term. Insofar as “felt” seems
to be called for, it’s arbitrary to make this substitution. And it
makes little sense to deploy a word that has no real purpose of its own
(if it had a real purpose our language would have already included it).
If you just mean to substitute for words like “performed” or “did” or
some other action word, then why not stick with those ordinary uses?
You say “all feelings feel like something”. Well, that’s true by
deﬁnition. Feelings are felt, of course. But the question isn’t whether
it’s true (who denies it?) but whether any given system feels and
that’s a different kind of question. We have seen that “feeling” can
denote a wide array of features including the sensations we are
accustomed to having (heat, cold, pain, pleasure, hard, soft, rough,
smooth, bright, dark, dull, sharp, salty, sweet, sour, loud or barely
audible, etc.). It also includes the states we ﬁnd ourselves in (happy,
sad, ebullient, morose, angry, desperate, etc.), and the senses we have
of being awake (when we are) as well as a sense of understanding things
such as geometry problems, English letters, Chinese ideograms and so
forth. But just because we can refer to all of these phenomena as
having feelings, broadly speaking, doesn’t mean they are the same
thing. Indeed, I’ve suggested previously that any given entity that
feels may have a wide variation in its feelings. That a T3-passing
robot, which passes the Turing Test on a comprehensive and open-ended
(lifetime) basis, may lack certain equipment, or may have equipment of
a different type than ours (and so have a different set of sensations
or, perhaps, no physical sensation that is recognizable to us as such
at all), says nothing about whether it will or will not have awareness
in terms of understanding its various inputs. Feeling qua sensation is
not the same as feeling qua understanding. So “feeling,” on
examination, does not appear to be one simple thing at all but a wide
array of different features that a system of a certain type may have.
The word is fairly broadly deployed, as with most of our notational
words so it can surely be as misleading as any of those other words you
dismiss as “weasely”. What does it “feel like to think and mean that
‘the cat is on the mat’” as you have put it? Well what does it feel
like to suddenly get the meaning of some words on a sign, as in my
drive up from the Carolinas? I have already suggested that the latter
event involved my suddenly having a range of mental pictures that
differed from the previous ones I’d had. In recognizing those for what
they were, I was feeling them, on your use.

This recognition was
accompanied by another and different kind of feeling, in this case a
sense of relaxation, i.e., I no longer had to keep rooting around for
mental images that ﬁt the words on the sign within the context and so I
breathed a mental sigh of relief, you might say. I had a sense that
something that had been unclear to me before was now clear, a feeling
of cessation of perplexity — and, a kind of satisfaction in that
cessation. As I had also recently been debating the implications of
Kripke’s Naming and Necessity for Wittgenstein’s meaning-as-use
insight with a Direct Reference Theorist, my abrupt recognition of what
that moment of understanding consisted of had the further effect of
making clear to me why my DRT interlocutor was wrong about something he
had said. So I had yet another moment of satisfaction, too, feeling
that I had solved a difﬁcult problem that was of more longstanding
concern to me than the momentary confusion on the road. So lots of
feelings occurred in me at the moment of sign recognition and, while
related in various ways, they were not all the same, despite the use of
a single word to name them. Your point has been that feeling is feeling
and I want to suggest that it is not, that we use the word “feeling”
for many things and so it’s no help to jettison the other mental words
(to which you object) in favor of the one you don’t object to on the
grounds that your choice alone is clear. It isn’t any clearer than its
sibling words though our coming to some kind of stipulative agreement
about using it (or one of its alternatives) can certainly aid us in
this particular discussion.

Conclusion

My main point, then, is that we
don’t really need anything more than a process-grounded account to
explain the occurrence of feeling, even if we don’t yet have an
entirely satisfactory one we can accept (the jury is surely still out
on the kind of theory I’ve sketched here). And holding out for
something more just moves us outside the scope of scientiﬁc inquiry. It
sets up an insoluble problem for the scientiﬁc investigation of
consciousness and thinking, a problem which there is no reason to
expect science to solve — or to suffer from not solving.

No such thing. Just
behavior, simpliciter (possibly correlated with what it feels like to
do that). Your Carolina highway experience makes you feel you’ve
explained that via “processes,” but alas it doesn’t make me feel that
way. I think those processes just explain behavior and feeling remains
to be explained (and probably can’t be, for the reasons I’ve suggested).

S.MIRSKY: “It is
certainly fair to say both [headaches and understanding] can be
characterized as “feeling”, as you have done, but they are surely not
the same thing.”

I didn’t say all feelings
were the same; I just said they were unexplained.

Our exchanges are really
just repetition now. I apologize but I won’t be able to reply again
unless something new and substantive is said. Thanks for the exchange.

IGNORANCE OF AN
EXPLANATION DOES NOT ENTAIL LACK OF AN EXPLANATION (reply to Harnad)

SH: “But I don’t even
believe in zombies!”

If you think that you can have a T4 robot that
lacks feeling then you do believe in (the conceivability of) zombies!

SH: “The
(“higher-order”) explanation (now transcribed here minus the synonyms,
euphemisms, paralogisms and redundancies) seems to be: “to feel
(something) is to feel something. This… explains why it feels like
something …because that is how my feeling feels, and that is all that
there is to feeling.” Pared down to this, it’s deﬁnitely not
hermeneutics; it’s tautology. Put back the synonyms, euphemisms,
paralogisms and redundancies and it becomes hermeneutics: a Just-So
Story. But it’s certainly not explanation!”

If one were inclined to
be uncharitable, one could parody your view along the lines of ‘feeling
can’t be explained because feeling is unexplainable’. Luckily, I am not
so inclined. At any rate, these are not the same thing! On the one hand
we have being conscious OF something in our environment and on
the other hand we have feeling. You are right that being conscious of
something in the relevant sense is something that can be functed and
does not have to be felt. On the higher-order view one has feeling when
one is conscious of oneself as being in a toothache state or thinking
that the cat is on the mat. So, when one has those states without being
conscious of them, there is no feeling. When one is conscious of being
in those states there is feeling. This does not amount to ‘feeling my
feeling’ or some other tautology since the relevant way of being
conscious of is just the same one that figured in the story earlier. So,
“when you funct that you funct in the right way you get feeling” is
something like what the theory says. You keep reading things into
‘conscious of’ that aren’t there. At this gross level that doesn’t look
like an explanation. The ‘a ha’ moment comes from the details, if it
comes at all. If you want to see what I take to be the basic argument
for the HO approach you can find it here: HOT
Qualia Realism. I will just say here that we know that
one can be conscious of something in the relevant functing sense
without one being conscious of one’s so functing from subliminal
perception, masked priming, etc. So the basic idea is when you have
that kind of thing directed at one’s own mental functioning you get
feeling. The argument for this starts by considering winetasting type
examples. We know from these kinds of cases that acquiring new concepts
leads to new qualities in one’s phenomenology. So it is not out of the
question that we have a funct-thought to the effect that some mental
quality is present and this results in our feeling. You may not think
that this is true, but why not? It cannot come from the kind of
considerations that you have developed so far. You must show that there
is something wrong with the explanation. But what is it? Just so we are
all on the same page here, I am not claiming that the HOT theory IS
true or even that I THINK that it is true. Rather what I want to claim
is that it COULD be true and IF IT WERE true it would provide an
explanation for feeling. This is enough to show that the argument you
have presented doesn’t work.

R.BROWN: “Ignorance of
an explanation does not entail lack of an explanation”

True. But I did not just
say we lack an explanation; I also gave some reasons why we lack one,
and are unlikely to be able to ﬁnd one. (T3/T4 explanations work
identically well whether or not there is feeling; and there is no room
for assigning feeling an independent role in any causal explanation —
except through psychokinesis, which is false.)

R.BROWN: “If you think
that you can have a T4 robot that lacks feeling then you do believe in
(the conceivability of) zombies!”

I don’t believe in zombies,
and both my suggestion that we do not have a causal explanation of how
and why there cannot be zombies (which would be equivalent to a causal
explanation of how and why we feel) and my suggestions as to why we
will never have one are independent of whether I believe in zombies.

As to the “conceivability”
of zombies: I’d rather not get into that, and I don’t think I need to.

I understand what empirical
evidence is; I understand what deductive proof is; and I understand
what plausibility arguments are. But I do not understand what a
“conceivability” argument is. I can “conceive” Meinongian objects (my
images of them are rather Escherian), but I cannot construct them,
because they are logically impossible. That’s worse than whatever might
be the reason there cannot be zombies: we certainly don’t know what
that reason is, otherwise we’d have an explanation of why and how we
feel! I also don’t believe in psychokinesis, but I’m not sure you would
want to argue that it’s inconceivable. (In fact, I’ll wager that most
people can not only conceive it, but believe it!) Now if psychokinesis
had really existed (along with the requisite independent evidence for
it — a detectable ﬁfth causal force in the universe, empirically
testable), then it would be easy to explain how and why we feel, as
well as to explain how and why there cannot be zombies: because zombies
would lack psychokinesis. There would even be ways of empirically
distinguishing — through ordinary reverse-engineering (the “easy”
problem) — the presence or absence of feeling on the basis of the
presence or absence of the psychokinetic force. And hence it would be
possible to explain, causally, why, if a robot lacked psychokinesis, it
could not manage to pass T3: It could *do* this but not that.

But there is no
psychokinesis. So whether or not it is “conceivable” that there could
be zombies, we are powerless to say how or why not.

R.BROWN: “one could
parody your view along the lines of ‘feeling can’t be explained because
feeling is unexplainable’”

Nope. Feelings cannot be
explained for the reasons I’ve already mentioned. No tautology. (T3/T4
explanations work identically whether or not there is feeling; and
there is no room for assigning feeling an independent role in any
causal explanation — except through psychokinesis, which is false.)

R.BROWN: “On the one
hand we hand we have being conscious OF something in our environment
and on the other hand we have feeling. You are right that being
conscious of something in the relevant sense is something that can be
functed and does not have to be felt.”

Consciousness is *felt*
access. Without the feeling, access is not conscious. So drop the
“conscious” and just worry about explaining feeling.

R.BROWN: “On the
higher-order view one has feeling when one is conscious of oneself as
being in a toothache state or thinking that the cat is on the mat. So,
when one has those states without being conscious of them, there is no
feeling. When one is conscious of being in those states there is
feeling.”

What a complicated story!
Is it not fair to say that I feel the states I feel, and I don’t feel
the states I don’t feel? (The rest is just about what I feel.)

I can’t get my head around
notions like “being in a toothache state… without [feeling that I'm in
a toothache state]”.

Despite the mirror-tricks
and other fun it allows, I do not find higher-orderism any more useful
than any of the other synonyms and paralogisms. Speech-act theorists
may get some mileage out of “knowing that I know that you know that I
know…” but insofar as feeling (and hence consciousness) is concerned,
“feeling that I feel” is just feeling. Explain the bottom-order one,
and you’ve explained it all; leave it out or take it for granted and
you’ve explained nothing.

R.BROWN: “This does not
amount to ‘feeling my feeling’ or some other tautology since… ‘when you
funct that you funct in the right way you get feeling’ is… what the
theory says…. At this gross level that doesn’t look like an
explanation. The ‘aha’ moment comes from the details, if it comes at
all.”

For me, it doesn’t come at
all (and it *does* sound like tautology, dressed up in higher-order
just-so hermeneutics…)

R.BROWN: “we know that
one can be conscious of something in the relevant functing sense
without one being conscious of one’s so functing from subliminal
perception, masked priming, etc.”

That’s not consciousness,
it’s access (detection), and it is indeed just functing, if it’s
unconscious.

R.BROWN: “So the basic
idea is when you have that kind of thing directed at one’s own mental
functioning you get feeling.”

“That sort of thing” is so
far just unconscious access to data. To make it mental, you have to
make it felt. And then you have to explain how and why it is felt, not
just say it is so.

R.BROWN: “consider…
winetasting… acquiring new concepts leads to new qualities in one’s
phenomenology.”

That’s my research area:
category learning and categorical perception. Learning new categories
(new ways to sort things) can alter their appearance: Within-category
differences are compressed and between-category differences are
enhanced.

But that’s all felt (though
our models for it are just neural nets). Plenty of reason why it would
be adaptive to make consequential differences more discriminable — but
not a clue of a clue why any of that should be felt, rather than just
functed.
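
(To make the easy, “functed” part concrete, here is a minimal sketch — my own toy illustration, not a model from the target article or from any commentary — of the kind of neural-net account of learned categorical perception mentioned above: a tiny classifier is trained to sort two artificial categories, and the mean within- and between-category distances in its hidden-layer representation are compared before and after learning. All names, data and parameters are invented for the example; typically the within/between ratio shrinks with training, i.e., compression within and separation between categories. Nothing in the sketch, of course, touches the question of why any of this should be felt rather than just functed.)

```python
# Toy sketch (illustrative only): learned categorical perception in a tiny net.
# Two artificial categories are learned; we then compare mean within- vs
# between-category distances in the hidden representation before and after
# training. Typically within-category distances shrink relative to
# between-category ones, but nothing here explains why any of it would be
# *felt* rather than merely "functed".
import numpy as np

rng = np.random.default_rng(0)
n = 100  # items per category

# Two Gaussian clusters in a 2-D "stimulus" space (hypothetical data).
cat_a = rng.normal(loc=[-1.0, 0.0], scale=0.7, size=(n, 2))
cat_b = rng.normal(loc=[+1.0, 0.0], scale=0.7, size=(n, 2))
x = np.vstack([cat_a, cat_b])
y = np.concatenate([np.zeros(n), np.ones(n)])  # category labels

# One hidden layer of sigmoid units, trained by plain gradient descent.
w1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
w2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden(inputs):
    return sigmoid(inputs @ w1 + b1)

def within_between(h):
    """Mean pairwise distance within each category vs. between categories."""
    ha, hb = h[:n], h[n:]
    within_a = np.mean([np.linalg.norm(p - q) for p in ha for q in ha])
    within_b = np.mean([np.linalg.norm(p - q) for p in hb for q in hb])
    between = np.mean([np.linalg.norm(p - q) for p in ha for q in hb])
    return (within_a + within_b) / 2, between

print("before training (within, between):", within_between(hidden(x)))

lr = 0.5
for _ in range(2000):
    h = sigmoid(x @ w1 + b1)                    # forward pass
    out = sigmoid(h @ w2 + b2).ravel()
    grad_out = (out - y)[:, None] / len(y)      # cross-entropy + sigmoid grad
    grad_w2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ w2.T) * h * (1 - h)    # backprop into hidden layer
    grad_w1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    w2 -= lr * grad_w2; b2 -= lr * grad_b2
    w1 -= lr * grad_w1; b1 -= lr * grad_b1

print("after training  (within, between):", within_between(hidden(x)))
```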

R.BROWN: “So it is not
out of the question that we have a funct-thought to the effect that
some mental quality is present and this results in our feeling. You may
not think that this is true, but why not? It cannot come from the kind
of considerations that you have developed so far. You must show that
there is something wrong with the explanation. But what is it?”

Shall I count the
proliferating equivocations? What is “mental quality” as opposed to
just a property? That it feels like something to perceive it (e.g.,
red, or the fruitiness of Beaujolais). Being able to recognize things
as red, fruity or Beaujolais is one thing, and that’s just functing
(T3). Feeling what it *feels like* to see red, to taste something
fruity, or to recognize something as Beaujolais is another, and it’s
not just that your explanation is wrong — it doesn’t even touch the
problem!

R.BROWN: “I am not
claiming that the HOT theory IS true or even that I THINK that it is
true. Rather what I want to claim is that it COULD be true and IF IT
WERE true it would provide an explanation for feeling. This is enough
to show that the argument you have presented doesn’t work.”

I think not. I think the
HOT theory is no explanation at all. It doesn’t explain how or why some
processes are felt rather than just functed: It projects a mentalistic
interpretation on them, and then is read off as if it had been an
explanation.

SH (@February 26, 2011
at 20:43): The sensorimotor I/O for T2 is trivial, and nothing
substantive hangs on it one way or the other.

Now wait a minute! The
classic argument against computation, that Searle makes and you repeat
here in various ways, is that computation “is just formal”. The
“trivial” aspect of T2, that it is in some modest way physical,
sensorimotor, is thereby _hugely_ important.

I suggest it shows that
the formality argument is moot. It also threatens the entire grounding
argument, which is much stronger as a matter of type than a matter of
degree. You might agree with that. But most of all, it suggests we have
not yet located, enumerated, or addressed whatever prevents us from
passing the modest T2 test. The CR _assumes_ something passes the T2
test. I suggest that, until that is actually done, we don’t know how to
even begin to talk about T3 or T4 tests. And once it is done, we will
ﬁnd there is no need for T3 or T4 tests. – Various implications ﬂow
from this, some of which I have outlined. But would you not also agree,
whatever its merits or faults, that the CR is a claim against type? –
Just a note, that in all of my posts here I am emphasizing differences,
because I believe they matter. There are many points at which any two
out of three of Harnad, I, and Searle may agree, and even a few where
all three may agree. For example, we all agree that a (computational)
system may pass the T2 test without it representing real cognition,
real consciousness. However, we reason very differently from that point
on, and perhaps the greatest points of contention are directly adjacent
to the points of agreement – as is so commonly the case. What if a
system *does* pass T2? I suggest we all still have issues. First, we
have the issue of other minds. Then we have the issue of Humean
skepticism. Then we have various sorites arguments about whether it
really passed, or not. Then we have issues of attribution versus
realism, and on which side must mechanism fall? None of these go away
with T3 or T4 tests. None of them are easy, or easily discussed in
brief. Again, what if a system does pass T2? Some say that is not
enough. Well, and yet, it seems a remarkable challenge that has not yet
been met. My focus remains on considering how it might actually be
done, what it must entail if it is to be done, and what it must imply
if it were done – and when it is done.

J.STERN: “The ‘trivial’
aspect of T2, that it is in some modest way physical, sensorimotor, is
thereby _hugely_ important. I suggest it shows that the formality
argument is moot.”

I’m afraid I can’t agree.
I/O is already part of the Turing machine, but not in the sense of
robotic dynamics; just enough for 1/0 in and 1/0 out.

J.STERN: “What if a
system *does* pass T2? I suggest we all still have issues… other
minds…. Humean skepticism…. sorites arguments … attribution versus
realism… None of these go away with T3 or T4 tests”

If a system passes T2, or
T3, or T4, you still have no explanation of why and how it feels (if it
does). And that’s without any recourse to the other-minds-problem,
Humean skepticism, sorites, or realism…

“T3/T4 explanations work
identically well whether or not there is feeling; and there is no room
for assigning feeling an independent role in any causal explanation —
except through psychokinesis, which is false”

This is false, as the fact
that the HOT theory is possible shows, but to see that you would have to
come to understand the theory in detail. I recommend Rosenthal’s 2005
book Consciousness and Mind.

That sounds a bit
harsher than I intended. I am not trying to be glib; the point is that
there are answers to your questions but they can only be answered by
taking the theory seriously and you seem set to dismiss it from the
outset based on a priori considerations about what you think is
causally superﬂuous in a T4 machine.

R.BROWN: “there are
answers to your questions but they can only be answered by taking the
[HOT] theory seriously”

My questions are simple; if
the answers are complicated and book-length, the questions have not
been understood and the answers are answers to something else.

Besides, I’ve already
understood enough of higher-order approaches to see that they beg the
question: awareness of toothache, or awareness of awareness of
toothache are merely felt states, like toothache itself. Account
(“bottom-up”) for feeling, and you’ve accounted for it all: Try to
account for it top-down, with a higher-order something taking a
lower-order something as its object (of “awareness”), and declare that
“consciousness” and you are hanging from a hermeneutic skyhook.

If the job can be done, it
can be done speaking only about feelings — no synonyms, substitutes or
skyhooks.

Reply to Stevan Harnad’s comments of 2/28/11 @ 23:09

Some behavior of things is feeling-driven, some not. Thus it
makes sense to distinguish between “feeling
behavior” and behavior which isn’t. The Carolina experience involved no
overt behavior on my part, just thoughts which, in your terminology,
are felt. After consideration, it seemed to me perfectly reasonable to
understand the thinking that occurred, in relation to that sign
reading, in terms of what a computer could have done. From the
connection of the words to particular representations, to the
connection of those representations to other representations, all in
terms of mental images (which are, when you think about it, rather
different from visual images), everything that happened could be
replicated in a computer format. Nowhere, on examination, do I see any
special feature that somehow lies outside the kinds of physical
processes computations instantiate. Thus, I concluded that my brain
processes, which underlay the stream of experience I had at the moment
of getting the meaning, could reasonably be replicated computationally,
from the picture of the world in which the sign existed, to the
historical associations in which the meaning of the sign’s words were
found, to the sense of self that found and anchored the meanings in the
array of images generated. But yes, this surely comes down to our
differing ways of seeing our private mental lives. There is a sense in
which one’s experiences look like the whole ball of wax and another in
which they look veridical in some special way that sets them apart from
everything else apprehended within those experiences. There is a sense
that being aware (my version of your “feeling”) seems to be a special
phenomenon among all the vast panoply of phenomena we experience. But
should we take this at face value? Science has often taught us that
things aren’t always what they seem and are often quite a bit more
complicated and/or strange. I understand your feeling that we have
reached the end of our exchange here. In fact, I think we reached it a
while back so I understand your reluctance to continue. There comes a
time when we have dug as deeply as we can, when we hit bedrock and can
dig no further. I appreciate the time you’ve taken and the opportunity
you afforded me here to discuss your thinking with you ﬁrst hand and I
certainly won’t be offended if you don’t reply further. We shall just
have to agree to disagree.

Addendum to my
last response to Stevan Harnad: I meant to address this but slipped up.
You wrote: “I didn’t say all feelings were the same; I just said they
were unexplained.” What you said, which prompted my response
delineating a wide array of feeling referents, was that “feeling” was a
better (as in less confused) term for what we mean by consciousness
than other so-called “weasely” words like “consciousness”,
“intentionality”, “awareness”, and so forth. My reply that “feeling”
refers to a wide range of different things was to show that your choice
of THAT word is no better than others’ choices of words like
“subjectness” or “awareness” or “intentionality”, etc. All have
different uses and referents, which may prompt confusion. Thus none are
less “weasely” in this sense than the others. By responding that you
never said all feelings were the same (only that they were
unexplained), I’m afraid you missed the point I was making. Given the
alleged unexplainability you say characterizes what you call “feeling”,
my point was that since feelings, given their range of types, COULD be
explained in different ways, it was conceivable that a robot
intelligence could have one kind of feeling but not another. That a
robot intelligence might not have migraines (as you put it) would not
then be evidence it did not have the kind of feeling(s) associated with
understanding. If understanding can be explained computationally (which
I’ve argued it can be), then the absence of physical-sensation feeling
in a robot does not pose a problem for this kind of explanation and the
challenge you posed at the outset could be said to have been met. Of
course we have already agreed that agreement that it has been met rides
on a deeper agreement about what feeling, itself, is (to explain
something we have to know what it is we’re explaining). And here we
seem to be in very deep disagreement, i.e., I see “feeling” (awareness)
as reducible to non-feeling constituents like physical processes
whereas you see a gap that cannot be bridged. Note that given this deep
level disagreement, it follows that getting clear on what your term
“feeling” denotes was important enough to warrant some discussion.
Anyway, as long as I’m adding this, I just want to say that I noticed a
small error in my last post to you which I might as well correct here
for the record. At the end of the third paragraph in my preceding
response to you I wrote: “Nowhere, on examination, do I see any special
feature that somehow lies outside the kinds of physical processes
computations instantiate.” Of course, I should have written “outside
the kinds of physical processes which instantiate computations” or some
equivalent. I got careless. Otherwise I believe the points made above
remain sound. Again, thanks for the opportunity to engage on this. I’ve
found it helpful.

J.STERN: “The ‘trivial’
aspect of T2, that it is in some modest way physical, sensorimotor, is
thereby _hugely_ important. I suggest it shows that the formality
argument is moot.”

SH (02/28/2011 16:17): I’m afraid I can’t agree. I/O
is already part of the Turing machine, but not in the sense of robotic
dynamics; just enough for 1/0 in and 1/0 out.

Well, I agree with that (I think), but you have to realize
that many people say things like, “The Turing Machine is just an
abstraction, it works by just syntax, it has no physical realization,”
and the related argument, “Turing Machines have inﬁnite tapes, you
don’t, so obviously all computation is just an abstraction.” They would
hold that I/O of 1/0 is just abstract or formal. Physical and causal
are the points here. If you and I (at least) recognize computation as
physical, we have departed from Searle in what I consider to be a
hugely signiﬁcant way, even at the T2 stage (even before the T2
stage!). You believe there is more to be said by expanding on the
physical aspect to ground symbols, meaning, … something. I understand
the move, but put it in a different game. If you want to understand why
any string in a T2 has a certain meaning, you need to relate it to
something in the real world. In most philosophy this is done by some
variety of hand-waving about reference, correspondence, truth, rigid
designators, acquaintance, etc. Your “grounding” combines many of these
to a physical particular. I salute all of this. However, as I have
(brieﬂy) argued, I believe the crux of the matter stays within the
computational aspect, once that is seen to be physical and causal.

J.STERN: “What if a system *does* pass T2? I suggest we all still have
issues… other minds…. Humean skepticism…. sorites arguments …
attribution versus realism… None of these go away with T3 or T4 tests”

SH: If a system passes T2, or T3, or T4, you still have no explanation
of why and how it feels (if it does). And that’s without any recourse
to the other-minds-problem, Humean skepticism, sorites, or realism…

But the problem of grounding, at least, goes away with T3… I suggest T2 is
necessary and not sufficient, but most of all I note that it is very
difficult, which is at least circumstantial evidence that it may also be
very significant. Let me see if I have your position right on this. You have
posited that perhaps no computer could pass T2, but what about a human?
If we judge that something has just passed T2 and it turns out to be a
human, you would say that is because the human was grounded, had the
right stuff – even if all we ever knew of the contestant was the
“ungrounded” symbols coming across the teletype. Yet I, at least,
stipulate that some kind of really dumb computer system might pass T2
and still not be intelligent because it lacks the right stuff – though
my right stuff is not your right stuff. Certainly *Searle* has conceded
this, and the compsci/ AI contingent takes it as a matter of faith –
“We can fake it well enough to pass your tests.” What is it that
differentiates the fake that passes the test, from the “real” that
passes the test, that is, what *could* it be that differentiates the
one from the other? Won’t you simply say, “Oh, the programmers, who are
human, privileged, and grounded, ﬁnally found a way to capture
grounding and include it in the program”? One can continue along these
lines, but the bottom line for me always comes down to the fact that
Turing saw right through all of this in his combined 1937 and 1950
papers. Searle’s CR expressed some bogus doubts but set up the one
demand that Turing did not address, because neither of Turing’s two
papers was constructive. The CR demands a *constructive* answer. Searle
suggests there cannot be one, but that is ﬂuff. Searle bases this
heavily on the idea that computation is the wrong sort of stuff, not
even physical. You and I have agreed (I think) that computation is
physical – this might be a neo-computationalism, but if so, I’m happy
with it, and have offered a few points of argument for it above. Let me
ﬁnally offer a few notes of support for T3. There is an ancient
question, which is, “We know there is nothing essential in this
squoggle of ink on paper, so how is it that it comes to mean ‘cat’?”
Some answer needs to be made to that, in simply linguistic terms, even
if we are speaking only of humans, not of computers. Some empirical
story needs to be told involving sensorimotor details, also
computational details. Some number of agents have to have some number
of conventions about interpreting squoggles as some reference to distal
objects. T3 respects *all* of that. Excellent. T4 would be another
matter entirely, which I will skip here rather than launch another topic.
Anyway, I will address it from the other side right here.

SH: “T3/T4 explanations work identically well whether or not there is feeling; and
there is no room for assigning feeling an independent role in any
causal explanation — except through psychokinesis, which is false”

RB (@ 2/28/2011 16:53): This is false, as the fact that the HOT theory is
possible shows, but to see that you would have to come to understand the
theory in detail. I recommend Rosenthal’s 2005 book Consciousness and
Mind.

I just want to put my vote in with RB here, and whether this is
exactly his theory (or wildly at odds with it), I would assert that the
feelings (qualia, phenomenalism) can fit within a physical and causal
framework; something along the lines of a HOT in no way precludes
physicality or causality individually or together, and indeed may teach
us more about just how such physical and causal systems need to be
constructed.

Right you are, since T2 is
a subset of T3. (But I also think being able to pass all of T3 is
necessary in order to be able to pass T2.)

J.STERN: “You have
posited that perhaps no computer could pass T2, but what about a human?”

We’re looking for (causal)
*explanations* of T2 capacity (reverse-engineering), not just examples
of it. The model’s capacities have to be indistinguishable from those
of a real human — but it has to be a model that we built so we can
explain how it can do what it can do: Humans are
“Turing-indistinguishable” from one another, but so what, since we have
no idea how they pass the test?

J.STERN: “What…
differentiates the fake that passes the test, from the ‘real’ that
passes the test?”

Feeling. But the whole
point of the test is that once you can’t tell them apart, you can’t
tell them apart. (And I would not call any robot that can pass the T3
for a lifetime a “fake”!)

J.STERN: “[TT] demands a
constructive answer.”

Yup, you’ve got to build
it, so you know how it can do what it can do. Otherwise there was no
point…

J.STERN: “You and I have
agreed (I think) that computation is physical”

I don’t think anyone would
seriously disagree that computation (software) has to be physically
implemented in a dynamical system (hardware), otherwise it’s just inert
code on paper. (But the code is still implementation-independent:
that’s just the hardware/ software distinction.)

J.STERN: “I would assert
that the feelings… can ﬁt within a physical and causal framework,
something along the lines of a HOT…”

Hi Stevan, Unlike some
of the other contributors I don’t feel we have drilled down to the
bottom of our disagreement, it feels to me like we haven’t penetrated
the surface. For example you say that “feeling itself is just about as
observer-*dependent* as you can get”; but that is clearly not the case
if we use the term “observer-dependent” in the way I have tried to
explain and exemplify. It’s not fair to say that you aren’t getting
anything out of the observer-dependency talk if you aren’t thinking in
the same language it uses. The metal in coins and the paper in notes
are observer-independent. They will continue to exist whatever anybody
says or thinks about them. Consciousness is observer-independent in that
sense. “Money” only exists when somebody says it does. That’s what
observer-dependent means here, and consciousness isn’t like that. Yes,
feeling is a special case because of the way we seem to observe our own
consciousness, but that is really not important here, and it isn’t what
“observer-dependent” means here. In the introduction to his book Mind
(2004) Searle says “There are two distinctions that I want you to be
clear about at the very beginning, because they are essential for the
argument and because the failure to understand them has led to massive
philosophical confusion. The ﬁrst is the distinction between those
features of a world that are observer independent and those that are
observer dependent. … we also need a distinction between original or
intrinsic intentionality on the one hand, and derived intentionality on
the other. For example I have in my head information about how to get
to San Jose. I have a set of true beliefs about the way to San Jose.
This information and these beliefs are examples of original or
intrinsic intentionality. The map in front of me also contains
information about how to get to San Jose [...] but the sense in which
the map contains intentionality [...] is derived from the original
intentionality of the map makers and users. [...] These distinctions
are systematically related: derived intentionality is always
observer-dependent. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Steve,
you said that the observer-dependent/independent stuff didn’t seem to
cast much light on the question you are interested in. The thing is, it
doesn’t cast a light, it casts a shadow, it shows you something
negative, it shows you that a robot can’t have feeling any more than a
computer can, because the things that you invoke that are going to give
it feeling, its sensors and actuators say, fall into the
observer-dependent category just as much as the parts of a computer do.
Therefore the intentionality they have is derived intentionality.

S. HARNAD: “There are,
again, two (hence equivocal) “internal/external” distinctions. The
unproblematic one (states occurring inside vs outside a computer, robot
or person) and the problematic one (felt vs unfelt states).” Well no,
the “inside/outside a computer” is problematical. Because in that case
“inside and outside” are observer-dependent. Whereas, with a conscious
being, what we are describing as “inside and outside” does have an
observer-independent existence: “inside” refers to feeling.

S. HARNAD: “And you
prejudge the matter (and beg the question) if you assume that “robots”
don’t feel, since robots are just autonomous causal systems, and hence
surely feeling organisms are robots too.” But robots aren’t autonomous
systems. That’s entailed by their observer-dependent status. Something
is only part of a robot when somebody says it is.

S. HARNAD: “If by
“robot,” however, you just mean a computer plus I/O peripherals, I
agree that just a computer plus peripherals probably could not pass T3,
let alone feel. The dynamics in the hybrid dynamic/computational T3
robot will probably have to be a lot deeper than just add-on peripheral
I/O devices.” The dynamics in the T3 robot will be observer-dependent,
its intentionality derived. You aren’t adding the relevant thing when
you add deeper dynamics, it’s just more of the same. The causes of
feeling are observer-independent. You’ve been asking us to identify
what is missing from a robot that stops it feeling:

well this is the answer.
It’s the wrong category of phenomenon, it doesn’t have the appropriate
causal properties. And this isn’t just intuition, it’s hard practical
fact. That is shown by the multiple realisability of computation for
example. Consciousness is caused by very speciﬁc physical processes. It
isn’t reasonable to believe that it is also going to be caused by
innumerable other quite different processes. Speciﬁc physical processes
in the nervous system cause feeling. They are related to, and a
development of, physical processes in the nervous system that cause
unconscious behaviour. It seems quite plausible to me that both these
levels build on a still lower level. What I mean is that perhaps the
missing thing we are looking for is whatever makes an autonomous
organism autonomous. Perhaps that thing comes along very early in the
development of life, maybe bacteria have it. Or maybe the autonomy only
comes along with, and as an inevitable consequence of, feeling.

But, never mind; let’s see
where you want to take the suggestion that feeling is
observer-independent:

B.RANSON:
“[Searle-Distinction 1] between those features of a world that are
observer independent and those that are observer dependent…
[Searle-Distinction 2] between original or intrinsic intentionality on
the one hand, and derived intentionality on the other… derived
intentionality is always observer-dependent”

“Intentionality” is another
of the weasel-words. If “intrinsic” intentionality’s not just T3
grounding, then it’s “derived” if it is unfelt, and “intrinsic” if it
is felt.

A book’s and a computer’s
“intentionality” is not only unfelt (hence “derived”), but it is also
ungrounded (no connection between internal symbols and the external
things they are about). Sensorimotor grounding remedies that in a T3
robot — but it is still unknown whether (and if so, it is unexplained
how and why) T3 grounding also generates feeling. That’s the “hard”
problem.

B.RANSON: “a robot can’t
have feeling any more than a computer can, because… its sensors and
actuators…[are likewise] observer-dependent [like the] computer…
Therefore… they have [only] derived intentionality”

Unfortunately these are
just expressions of your beliefs about “robots.” It’s not even clear
what you mean by a “robot.” What I mean is any organism-like autonomous
causal physical system (i.e., the ones having T3 or T4 capacities like
those of organisms): That includes both naturally evolved organism-like
autonomous causal physical systems (i.e., organisms) and bioengineered
or reverse-engineered ones (e.g., T3, T4).

The systems are autonomous,
so observers’ interpretations have nothing to do with what they are and
what they can and can’t do.

B.RANSON: “robots aren’t
autonomous systems… [they are] observer-dependent… Something is only
part of a robot when somebody says it is.”

I think you’re wrong on
this one, on anyone’s sense of “autonomy.”

(And, as I said, Searle
went too far when he thought everything was a computer: what he meant,
or should have meant, was just about anything can be simulated by a
computer. But that does not make them identical: A computationally
simulated airplane cannot ﬂy in the real world, and a computationally
simulated organism or robot cannot move in the real world.)

I also think that your
conception of a robot as having to be just a computer plus peripherals
is unduly narrow. A robot (whether man-made or growing on trees) is any
autonomous organism-like dynamical system, and there may be a lot of
dynamics going on inside it that are other than computation. How much
of that could in principle be replaced by internal computer simulations
is an empirical question, but I would be skeptical about, say,
simulating pharmacological effects…

B.RANSON: “The causes of
feeling are observer-independent… what is missing from a robot that
stops it feeling [is that] it doesn’t have the appropriate causal
properties… That is shown by the multiple realisability of computation…
Consciousness is caused by very speciﬁc physical processes. It isn’t
reasonable… that it [can] also… be caused by innumerable other quite
different processes.”

There’s a distinction that
has to be made between (1) “multiple realizability” in general (there
are plenty of examples, both in engineering and in convergent
evolution), in which the same I/O is generated with different
mechanisms, and the special case of multiple realizability, which is the
(2) “implementation-independence” of computation (the hardware/software
distinction).

You are conﬂating them.
There may well be more than one way to pass T3, maybe even T4. Some of
them may feel and others may not. But it won’t be because of the
implementation-independence of computation (because T3 and T4 are not
just computational). And, because of the other-minds problem, we’ll
never know which ones do feel and which ones don’t (if some do and some
don’t). Moreover, we won’t be able to explain the difference (if
any) causally either (and that’s the problem!)
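
(A toy sketch of that distinction, invented purely for illustration: in sense (1), the same input/output behaviour is generated by two quite different mechanisms; sense (2), the hardware/software distinction, is the further fact that one and the same formal program can be realized on different physical substrates — silicon, a virtual machine, or pencil and paper — which code itself cannot display, so it is only noted in the comments.)

```python
# Toy sketch (illustrative only).
# Sense (1): multiple realizability in general -- the same I/O produced by
# two quite different mechanisms.

def sort_by_comparison(xs):
    """Insertion sort: a comparison-based mechanism."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_counting(xs):
    """Counting sort: a different mechanism, same input/output."""
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return [x for x in sorted(counts) for _ in range(counts[x])]

data = [3, 1, 2, 3, 0]
assert sort_by_comparison(data) == sort_by_counting(data) == [0, 1, 2, 3, 3]

# Sense (2): implementation-independence of computation -- this very same
# source text can be run unchanged on different hardware (or worked through
# by hand); that is the hardware/software distinction, a special case quite
# different from (1).
```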

If you think feeling can
only be generated by certain speciﬁc features of T4, please say which
those are, and how and why they generate feeling…

B.RANSON: “Speciﬁc
physical processes in the nervous system cause feeling. They are
related to, and a development of, physical processes in the nervous
system that cause unconscious behaviour.”

Comments for
Bernie Ranson: I know you want to get Stevan Harnad’s response but I
couldn’t help wondering about a few things when reading your remarks.
You wrote: “But robots aren’t autonomous systems. That’s entailed by
their observer-dependent status. Something is only part of a robot when
somebody says it is.” I know you’re taking a Searlean position here
(one I think very much in error) so I was wondering if you could more
clearly state why you think the statement above is correct? You say
robots aren’t autonomous and, certainly, none that I know of at present
are. But the issue isn’t just to do with what currently exists. It’s
about what can be done with computers going forward and, of course,
with robots. So just declaring that robots are unable to achieve what
is being argued they might achieve can hardly be an argument against
their achieving it. You say that something is only a part of a robot
when somebody says it is. Well, yes and no. Certainly whatever we
encounter in the world is what we believe about it in a certain sense.
The coin is money because people believe that about it, treat it that
way and so forth. But in another sense there is what exists independent
of us. If we built a robot and endowed it with a million year energy
supply and it then managed to remain operational for a million years,
but mankind didn’t, would it cease to be what it is (an operating
robot) once all the people who used to know it as a “robot”, capable of
doing this and this and this, were no longer around to think of it in
that way? In what sense is a robot (or a computer) just like money and
in what sense is it like us? “Consciousness is caused by very speciﬁc
physical processes. It isn’t reasonable to believe that it is also
going to be caused by innumerable other quite different processes.” Why
not? We don’t really know that it can only be caused by very speciﬁc
processes such as those found in brains. We only know that brain
processes do seem to most of us to be implicated causally in the
occurrence of consciousness. But that brain processes seem to be causal
of consciousness implies nothing about what other processes can
accomplish. In fact, maybe it’s not the processes at all. Maybe it’s
the functions, i.e., what the processes do. In that case any processes
capable of doing the same things (and, presumably, not all processes
will be) ought to be able to get us to the same result. Are computers
just like read-only devices? Merely books, capable of nothing more than
an Amazon Kindle in the end? All the evidence seems to militate against
that presumption, even if it remains to be seen whether computers can
be built and programmed to do what brains do.

J.STERN: “You have
posited that perhaps no computer could pass T2, but what about a
human?”

SH (@ 3/1/2011 15:30): We’re looking for (causal)
*explanations* of T2 capacity (reverse-engineering), not just examples
of it. The model’s capacities have to be indistinguishable from those
of a real human — but it has to be a model that we built so we can
explain how it can do what it can do: Humans are
“Turing-indistinguishable” from one another, but so what, since we have
no idea how they pass the test?

Well, I have to remind us, we *are*
looking for examples of it. The test, whether it is T2 or T3, is never
explanatory; that comes separately. Perhaps it comes after – we
generate T2 candidates at random, and do not plan to even attempt to
analyze them until they first succeed.

J.STERN: “What… differentiates
the fake that passes the test, from the ‘real’ that passes the test?”

SH: Feeling. But the whole point of the test is that once you can’t
tell them apart, you can’t tell them apart. (And I would not call any
robot that can pass the T3 for a lifetime a “fake”!)

Lost me again. Let
me try to recover. So, T3 for a lifetime equals feeling? Don’t you
disclaim that in the paper? It seems to me this skates the edge of my
own position, that feeling results from causal systems. Come on over to
the dark side! We may be evil, but at least we are consistent.

Harnad: “I’m saying we
have no causal explanation of how and why we feel (or anything does),
and that we are not likely to get one, because (except if psychokinetic
forces existed, which they do not) feelings cannot be assigned any
independent causal power in any causal explanation. Hence the causal
explanation will always work just as well with them or without them.”
I’m sympathetic with this view, which involves two claims: that we’re
not going to get a causal explanation of feelings (conscious
qualitative phenomenal states, or qualia for short) and that feelings
don’t contribute causally in 3rd person explanations of behavior. Re
the second claim, I suggest that since feelings like pain aren’t public
observables, they are logically barred from causal accounts that
involve intersubjectively available objects, such as neurons, brains,
bodies and environments. This solves the problem of mental, or more
precisely phenomenal, causation: there is none. But of course feelings
will still feel causal to the experiencing subject, a perfectly real
phenomenal appearance that tracks 3rd person explanations well enough
such that it’s very convenient (and harmless) to appeal to the causal
role of feelings in folk explanations, even if it’s literally false to
do so, see http://www.naturalism.org/privacy.htm#fiction

Re the 1st claim, I
suggest that qualia might be non-causally entailed by being a complex,
free-standing, behavior-controlling representational system, of which
we are examples. For a number of reasons, any such system will perforce
end up with basic, irreducible, uninterpreted representational surds
that function as the (privately available) vocabulary in terms of which
the world gets represented to the system – qualia. Those reasons,
admittedly sketchy and merely suggestive thus far, are presented at
http://www.naturalism.org/appearance.htm#part5

T.CLARK: “since feelings
like pain aren’t public observables, they are logically barred from
causal accounts that involve intersubjectively available objects, such
as neurons, brains, bodies and environments. This solves the problem of
mental, or more precisely phenomenal, causation: there is none. But of
course feelings will still feel causal to the experiencing subject, a
perfectly real phenomenal appearance that tracks 3rd person
explanations well enough such that it’s very convenient (and harmless)
to appeal to the causal role of feelings in folk explanations, even if
it’s literally false to do so, see
http://www.naturalism.org/privacy.htm#fiction”

All true, but not
particularly helpful, if all one is doing is asking the perfectly
innocent and natural question, for which it is perfectly natural to
expect an answer (yet no answer turns out to be forthcoming, probably
because answering is not possible): Why and how do we feel (rather than
just do)?

T.CLARK: “[feeling]
might be non-causally entailed by being a complex, free-standing,
behavior-controlling representational system… any such system will
perforce end up with basic, irreducible, uninterpreted representational
surds that function as the (privately available) vocabulary in terms of
which the world gets represented to the system – qualia. Those reasons,
admittedly sketchy and merely suggestive thus far, are presented at http://www.naturalism.org/appearance.htm#part5”

Harnad: “No, I’m afraid
that doesn’t do the trick for me…”

For me it at least gets us in the
vicinity of qualia by non-causal means, but I’m not surprised you’re
not impressed. What we all instinctively want, perhaps, is some sort of
causal, mechanistic or even emergentist account of feelings (e.g.,
HOT-type internal observation) involving public objects, since that’s
how 3rd person explanations go. Such explanations likely won’t be
forthcoming since feelings (qualia) are categorically private,
system-bound realities, intersubjectively unobservable (stronger: they
don’t exist intersubjectively). I’m wondering if you have even the
glimmer of an idea of what *would* do the trick for you if you rule out
as nonstarters the sorts of non-causal entailments I suggest.

T.CLARK: “I’m wondering
if you have even the glimmer of an idea of what *would* do the trick
for you if you rule out as non-starters the sorts of non-causal
entailments I suggest.”

I have the same powerful,
convincing causal idea everyone has — psychokinesis — but alas it’s
wrong.

Discovering a ﬁfth force —
feeling — would of course solve the mind/body problem. Trying to ask
“why does the fundamental feeling force feel?” would be like asking
“why does the fundamental gravitational force pull?” It just does. (But
unfortunately there is no fundamental feeling force.)

(Please don’t reply with
the details of space-time curvature. The feeling force could have some
of that too. Substitute “Why is space curved?” if you must, to
illustrate that the explanatory buck must stop somewhere, with the
fundamental forces…)

Apropos the
question of the relation of behavior to “feeling”, and just to add a
little reality to this discussion, here is a link to a site which
purports to decipher canine behaviors in terms of their meanings. Note
that all the descriptions imply (and depend on the notion) that the
dog’s behavior indicates its feelings:

I submit that this
supports my earlier point that behaviors ARE the criteria upon which
our references to feelings in others stand and that it’s inconceivable
that any entity passing the lifetime Turing Test for having feeling
would be thought to lack it. If this is so, then the idea that we don’t
“see” (as in have direct access to) the feelings others have cannot be
an obstacle to supposing feelings are present — and if that’s the case,
then any explanation that accounts for behavioral outcomes in terms of
why the entity, say the dog baring its teeth, is doing what it does is
enough to account for the presence of the feeling(s). The Other Minds
problem has no role in scientiﬁc questions about what brains do and how
they do it, and is misleading philosophically as well.

NOTHING UP MY SLEEVE
(reply to Stevan Harnad) “My questions are simple; if the answers are
complicated and book-length, the questions have not been understood and
the answers are answers to something else.” Surely you can’t be
serious! These are the words of someone who has already decided that
the question they are asking can’t be answered. And what reasons do you
give? An intuition that even once we are able to build a T4 robot we
will not be able to explain why and how it feels. But what reason do we
have to take your intuition seriously? The kind of in principle claim
that you are making is easy to rebut because all you need is one
possible answer and that is where the higher-order stuff comes in
(though there are probably others, I just happen to understand this
one). It is not enough to simply declare it a non-answer because that
already assumes that you are correct and so begs the question. This
insistence on simplicity looks like a smokescreen for lack of an
argument especially since I and others have the contrary intuition. So
given the intuition standoff we need to actually look at the proposed
explanations themselves and see if they hold up. “awareness of
toothache, or awareness of awareness of toothache are merely felt
states, like toothache itself” But this actually shows that you do not
understand “enough of higher-order approaches to see that they beg the
question”. According to the theory toothaches can occur unconsciously
and when they do they are not felt states. They are unfelt states that
none the less have (mostly) all of their causal powers. They are
functed but not felt. But in order to understand that you need to make
some distinctions and get clear on what you are talking about and these
things sometimes require *gasp* extended treatment to get clear on. But
look, I agree that consciousness certainly seems special and mysterious
and that one can get oneself all worked up about it if one tries but
that is absolutely no reason to think that once we are in a position to
build something like T3 or, more likely, T4 we will feel the same. We
don’t need a priori convictions or predictions, we need close
examinations of empirical evidence and proposed explanations.

No, not decided: concluded
(from the falsity of psychokinesis and the superﬂuity of feeling for
doing, or causing doing).

R.BROWN: “And what
reasons do you give? An intuition that even once we are able to build a
T4 robot we will not be able to explain why and how it feels. But what
reason do we have to take your intuition seriously?”

Not intuition, reasons: the
falsity of psychokinesis and the superﬂuity of feeling for doing, or
causing doing.

R.BROWN: “The kind of in
principle claim that you are making is easy to rebut because all you
need is one possible answer…”

But I do agree *completely*
that just one potentially viable explanation (aside from psychokinesis,
which we already know is false) would be enough to refute me. That’s
why I invited candidate causal explanations. But all I’ve heard so far
— it’s hard to resist the obvious naughty pun here — has been thin air!

I have a skill that has not
even been tested yet in this symposium: I am very good at showing how
and why candidate causal explanations for feeling fail. But to air my
skill (and to earn my comeuppance) I need candidate examples. Here’s an
example (alas easily shot down):

“Pain needs to be felt,
because if it was just functed, you might not notice the tissue injury,
or take it seriously enough to act on it.”

Ghost-Buster: Why doesn’t
whatever brings the tissue injury to your attention so you act on it
just act on it?

R.BROWN: “and that is
where the higher-order stuff comes in… we need to actually look at the
proposed explanations themselves and see if they hold up…”

Agreed.

R.BROWN: “According to
the theory toothaches can occur unconsciously and when they do they are
not felt states.”

And they are not
toothaches. At most, they are the physiological consequences of tooth
injury.

R.BROWN: “They are
unfelt states that none the less have (mostly) all of their causal
powers. They are functed but not felt.”

Indeed they are: Assuming
they’re enough to make the organism stop chewing on that side, so as
not to injure the tooth further, and enough to make it learn to avoid
whatever it was that caused its tooth to become injured, etc., all that
functional, adaptive stuff that you’ve kindly agreed with me to call
“functing.”

So my question is: why and
how is all of our adaptive capacity *not* just that — unfelt functing,
getting all the requisite doing done, T3-scale, so we can survive and
reproduce, etc.? Why and how is some of it felt?

(Maybe the clue is in those
residual causal powers you are suggesting are left out: Do you mean
that not all of T3 can be passed without feeling? But then I’m all ears
as to which parts of our T3 capacity require feeling in order to be
doable at all — and how and why they duly become felt.)

R.BROWN: “in order to
understand that you need to make some distinctions and get clear on
what you are talking about and these things sometimes require *gasp*
extended treatment to get clear on.”

Again, I’m all ears. Maybe
we could do a little bit of the extended treatment, in order to be
convinced that there really is causal explanation at the end of the
tunnel? Because all I’ve heard from HOT partisans so far is that
there’s something inside that takes something else inside as its object
and “bingo” the feeling lights go on…

R.BROWN: “once we are in
a position to build something like T3 or, more likely, T4 [things] will
[not] feel [so] special and mysterious. We don’t need a priori
convictions or predictions, we need close examinations of empirical
evidence and proposed explanations.”

Reverse-engineering a
causal mechanism that can pass T3 (or perhaps T4) is the right
empirical program for cognitive science, just as Turing suggested. I
can hardly disagree with that.

(Isn’t it odd, though, that
those who are — rightly — saying we need to wait to conquer T3 or T4
already think they have a potential explanation of feeling now, with
HOT? Makes you wonder whether HOT has much to do with causal
explanation at all.)

Tom Clark
(@3/2/2011 12:03): “For me it at least gets us in the vicinity of qualia
by non-causal means, but I’m not surprised you’re not impressed. What
we all instinctively want, perhaps, is some sort of causal, mechanistic
or even emergentist account of feelings (e.g., HOT-type internal
observation) involving public objects, since that’s how 3rd person
explanations go. Such explanations likely won’t be forthcoming since
feelings (qualia) are categorically private, system-bound realities,
intersubjectively unobservable (stronger: they don’t exist
intersubjectively).” I’m one of the “we all” who want that HOT-type internal
observation, so let’s see how that impacts what we can say about
qualia. Certainly we know about the various “physical correlates” as we
like to say, and those are in principle observable intersubjectively.
The story goes that there is still some phenomenological residue, the
subjective aspect, that is private – but wait a moment, let’s not
mistake what that means. The HOT idea is that we can get very close, at
least, to that phenomenological occurrence, that there is a physical
correlate to the occurrence of the subjective. So if the C-ﬁber
stimulation is blocked, the brain never gets it, no pain. If the
stimulation reaches the brain and ﬁres all the triggers, but some
painkilling drug prevents that last correlate from ﬁring, no pain. You
might even ask the subject, “do you feel that?” and get an answer like,
“now that you ask, yes.” The point is to remind ourselves that the
privacy of qualia is only regarding the subjective aspects. And that’s
the pessimistic view. The optimistic view is that, at the last, it will
turn out that the HOT – or some other simple, causal, explanation –
will after all completely comprise subjective phenomena. We will simply
have to retune our intuitions. Consider the case where we learn how to
do this with mere programming of even common computers. Even then, will
we “feel” the red, when we read about the program “feeling” red?
Probably not. But the problem then is no longer with the mystery of
subjectivity, but with our limited empathic skills.

S. HARNAD: “And feeling only exists when somebody feels it…”

But it doesn’t only exist when
somebody says it does. Knowing that feeling only exists when somebody
feels doesn’t make any contribution to our knowledge, it’s a tautology.
But knowing that consciousness exists whatever anybody says about it,
and that the same is not true of computers, does add to our knowledge.

S. HARNAD: “‘Intentionality’ is another of the weasel-words. If ‘intrinsic’ intentionality’s not just T3 grounding, then it’s ‘derived’ if it is unfelt, and ‘intrinsic’ if it is felt. A book’s and a computer’s ‘intentionality’ is not only unfelt (hence ‘derived’)…”

No, I’m afraid
that is wrong. The book and computer’s intentionality is felt, by the
observer when they have those ideas. Intentionality is a mental
phenomenon, it is always “felt” in your terminology. The book and the
computer’s intentionality is derived because it is felt not by the book
but by the observer.

S. HARNAD cont: “but it is also ungrounded (no connection between internal symbols and the external things they are about). Sensorimotor grounding remedies that in a T3 robot —”

The only intentionality a computer has is derived intentionality. If “grounding” means a connection between the symbols and external things, then that derived intentionality is grounded: those connections are made in the mind of the observer. The computer doesn’t have any intentionality of its own towards the symbols. So adding sensorimotor capacities can’t ground the computer’s intrinsic intentionality, because it hasn’t got any.

B.RANSON: “robots aren’t autonomous systems… [they are] observer-dependent… Something is only part of a robot when somebody says it is.”

S. HARNAD: “I think you’re wrong on this one, on anyone’s sense of ‘autonomy.’”

What they have is “derived autonomy”, which isn’t really autonomy at all. Genuine
autonomy on the other hand is what is crucially missing from computers
and robots, it’s another way of describing the thing that you
challenged us to identify.

S. HARNAD: “(And, as I said, Searle went too far when he thought everything was a computer: what he meant, or should have meant, was just about anything can be simulated by a computer…”

No, he meant what he said, he didn’t mean what
you say at all. He didn’t say everything is a computer, he said that
anything of sufﬁcient complexity could be used as a computer. Anything
with the features of a Universal Turing Machine, which as you know can
be speciﬁed in a few sentences.

S. HARNAD: “I also think
that your conception of a robot as having to be just a computer plus
peripherals is unduly narrow. A robot (whether man-made or growing on
trees) is any autonomous organism-like dynamical system, …” If you
widen the scope of the term “robot” until it covers us, then questions
like “can a robot feel” become entirely uninteresting, and your
questions about what robots lack fail to make sense. I know the kinds
of things that exist today that are called robots, the amazing welding
and assembly machines in car factories, the humanoid ﬁgures that can
run around the room and open doors and appear to be looking at you and
talking plausibly about things they can see or hear. They are just a
computer plus peripherals, or just peripherals. You say we could add
something else, these “dynamics”, but how are they going to be any
different to the existing computer and/or peripherals?

S. HARNAD: “and there may be a lot of dynamics going on inside it that are other than computation. How much of that could in principle be replaced by internal computer simulations is an empirical question, but I would be skeptical about, say, simulating pharmacological effects…”

None of the
relevant dynamics could be replaced by computation. You can simulate
anything you like, what is going on in a simulation is
observer-dependent, it doesn’t have the causal properties of the thing
it is simulating, you don’t get wet in a simulated thunderstorm and you
don’t get feeling from simulating a feeling organism.

S. HARNAD: “There’s a distinction that has to be made between (1) ‘multiple realizability’ in general (there are plenty of examples, both in engineering and in convergent evolution), in which the same I/O is generated with different mechanisms, and the special case of multiple realizability, which is the (2) ‘implementation-independence’ of computation (the hardware/software distinction). You are conflating them.”

I don’t think so; I think from my perspective 2. is, as you say, merely a special case of 1. I would suggest that you are applying the concept of multiple
realisability over-optimistically: there’s an implication that because
the same I/O can be generated by different mechanisms, then the desired
I/O will be generated by different mechanisms. We can see the kinds of
things that are going on in the brain that are related to feeling, we
can manipulate them in many different ways with predictable results, we
know that feeling is caused by our physical brains. Yes, it’s possible
that this could be done using some other mechanism: but we would need
some reason to think that any given speciﬁc mechanism might be able to
do it. And there isn’t any reason to think that something like a
computer would be able to do something like that, any more than a
toaster would. If you think a computer/robot has speciﬁc features that
allow it to generate feeling, please say which those are and how they
generate feeling.

S. HARNAD: “If you think feeling can only be generated by certain specific features of T4, please say which those are, and how and why they generate feeling…”

We don’t
yet know what speciﬁc features of the body cause feeling. I have
somebody working on this (my son the Neuropsychologist). He has
enormous microscopes and all different kinds of rays and imaging
devices and I’ll be sure to let you know if he comes up with anything.
So we don’t know the how yet, but in the meantime: the how is the why.
Evolution found a way to make feeling, it was useful, so it survived.

B.RANSON: “If you widen
the scope of the term “robot” until it covers us, then questions like
“can a robot feel” become entirely uninteresting, and your questions
about what robots lack fail to make sense.”

Not if you widen the scope
of what the robot must be able to do (T3). We can ask for the causal
mechanism of T3, and the causal explanation of how/why a system with T3
power feels (if it does).

This does not widen “robot” so much as to make it uninteresting: It widens what a robot
can do just wide enough to make it interesting to explain how — and
then ask how and why it feels, if it does.

B.RANSON: “You say we
could add something else, these “dynamics”, but how are they going to
be any different to the existing computer and/or peripherals?”

For example, if the only
way to get it to do human-scale Shepard rotation tasks were to build in an
analog rotator, rather than a computation in polar coordinates, that
would be deep dynamics. Or if (as in the brain) 16 internal isomorphic
projections of the retinal projection proved more powerful and useful
than a bitmap and computations…
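To make the contrast concrete, here is a toy sketch (in Python; the function name and sample points are invented, purely for illustration) of what doing the rotation as “a computation in polar coordinates” would look like, as opposed to building in something that itself rotates:

import math

def rotate_via_polar(points, angle_rad):
    # Convert each (x, y) point to polar coordinates (r, theta),
    # add the rotation angle to theta, and convert back.
    # The "rotation" is nothing but arithmetic on symbols.
    rotated = []
    for x, y in points:
        r = math.hypot(x, y)        # radius
        theta = math.atan2(y, x)    # angle
        theta += angle_rad          # rotate by adjusting the angle
        rotated.append((r * math.cos(theta), r * math.sin(theta)))
    return rotated

# Rotating two corners of the unit square by 90 degrees:
print(rotate_via_polar([(1.0, 0.0), (0.0, 1.0)], math.pi / 2))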

B.RANSON: “If you think
a computer/robot has speciﬁc features that allow it to generate
feeling, please say which those are and how they generate feeling.”

But you see, I haven’t the
faintest idea, and that’s precisely why I’m asking how *any* system
could generate feeling. On the other hand, I have more hope for
explaining how it could generate T3 or T4 performance power…

Steve, I am in agreement
with your conclusion, but I really think we’re obliged to take on the
best cutting edge work of our opponents, i.e. a posteriori physicalists
who employ the so called ‘phenomenal concepts strategy’, e.g. David
Papineau, Brian Loar, Joseph Levine. Nothing you say in your paper
really touches them. They agree that there is a conceptual gap between
physical and phenomenal qualities, but they try to explain this in
terms of features of phenomenal concepts. Papineau’s explanation of why
pain and c-ﬁbres ﬁring are correlated is that the concept of pain
refers to the property of c-ﬁbres ﬁring; it’s just that we can’t know a
priori that that’s the case (hence the conceptual gap).

P.GOFF: “a posteriori
physicalists who employ the so called ‘phenomenal concepts strategy’,
e.g. David Papineau, Brian Loar, Joseph Levine. Nothing you say in your
paper really touches them. They agree that there is a conceptual gap
between physical and phenomenal qualities, but they try to explain this
in terms of features of phenomenal concepts. Papineau’s explanation of
why pain and c-ﬁbres ﬁring are correlated is that the concept of pain
refers to the property of c-ﬁbres ﬁring; it’s just that we can’t know a
priori that that’s the case (hence the conceptual gap).”

It may be that nothing I
say touches those who believe that pain refers to the property of
c-ﬁbres ﬁring. But I was asking how and why pain is felt rather than
just functed. That would apply to c-ﬁbres ﬁring, too…

S. MIRSKY: “You say that something is only a part of a robot when somebody says it is. Well, yes and no. Certainly whatever we encounter in the world is what we believe about it in a certain sense.”

Hi Stuart, the important point is that some things are what they are whatever anybody says: gravity, mass, consciousness, digestion. Robots and computers aren’t in that category.

S. MIRSKY: “If we built a robot and endowed it with a million year energy supply and it then managed to remain operational for a million years, but mankind didn’t, would it cease to be what it is (an operating robot) once all the people who used to know it as a “robot”, capable of doing this and this and this, were no longer around to think of it in that way?”

It wouldn’t cease to be anything, it would cease to be described as something.

B. RANSON:
“Consciousness is caused by very speciﬁc physical processes. It isn’t
reasonable to believe that it is also going to be caused by innumerable
other quite different processes.”

S. MIRSKY: “Why not?”

Because there isn’t any reason. There isn’t anything about what computers do that would make you think they are conscious, any more than a toaster.

Reply to Bernie
Ranson’s Response to Stevan Harnad @ 3/2/11 14:21 Bernie Ranson writes:
“The only intentionality a computer has is derived intentionality.”
This is just an expression of a belief. It’s not an argument, it’s
dogma. BR: “(Searle) didn’t say everything is a computer, he said that
anything of sufﬁcient complexity could be used as a computer.” SWM: No,
he said it could be called a computer. (Of course such a broad claim
makes the idea of “computers” pointless since if anything can be
called, and thus treated, as one, then there’s no point in doing so
because you cannot distinguish between what is a computer and what isn’t. If
you cannot, why bother calling something a computer at all?)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ BR: “I know the kinds of things
that exist today that are called robots, the amazing welding and
assembly machines in car factories, the humanoid ﬁgures that can run
around the room and open doors and appear to be looking at you and
talking plausibly about things they can see or hear. They are just a
computer plus peripherals, or just peripherals. You say we could add
something else, these ‘dynamics’, but how are they going to be any
different to the existing computer and/or peripherals?” SWM: The point
is not what current computers and robots do but what computers and
robots CAN BE MADE to do. You can’t deny the possibility of something
just by denying its actuality.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ BR: “None of the relevant
dynamics could be replaced by computation.” SWM: Again, this is
argument by asserting belief. If the point is to ask whether we can do
what this statement denies (which it is), then you have to have a
reason that goes beyond the mere recitation of the denial.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ BR: “We can see the kinds of
things that are going on in the brain that are related to feeling, we
can manipulate them in many different ways with predictable results, we
know that feeling is caused by our physical brains. Yes, it’s possible
that this could be done using some other mechanism: but we would need
some reason to think that any given speciﬁc mechanism might be able to
do it. . . there isn’t any reason to think that something like a
computer would be able to do something like that, any more than a
toaster would.” SWM: One reason: Similarity in the mechanics. Brains
appear to operate by passing electrical signals along particular
pathways formed by neurons, neuronal clusters and synaptic spaces
between while computers operate by passing electrical signals along
particular pathways through various logic gates embedded in chips and
chip arrays. Another reason: Similarity of outcomes. A lot of the
things brains do are also doable using today’s computers (as in playing
chess, answering questions, calculating equations and so forth). If so,
then it is possible computers can do more of the things brains do.
These are strong reasons to think that computers could do what brains
do while chairs and tables would not be able to.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ BR: “Evolution found a way to
make feeling, it was useful, so it survived.” SWM: This is certainly
true on its face but so what? It says nothing about the reverse
engineering called for by Stevan. The issue isn’t what factors in
biological history brought about the development of brains which happen
to produce consciousness or even what purpose consciousness in brains
serves for the organism having it. Rather it’s what do brains do that
produces it and could it be built synthetically, either using computers
or some other inorganic platform?

Reply to Bernie
Ranson’s response to me @ 3/2/11 14:50 Bernie Ranson wrote: “The
important point is that some things are what they are whatever anybody
says. Gravity, mass, consciousness, digestion. Robots and computers aren’t in that category.” SWM: Neither are gravity and mass in the same
category as digestion and, so far, no one really knows what category
consciousness is in. What about robots and computers? I go with Stevan
Harnad’s view, at least with regard to robots. They are any autonomous
operating system “grounded” in real world interactions. Of course, in
ordinary usage we mean by “robots” only those certain mechanical
devices made of inorganic material that are capable of getting about on
their own and performing operations in the world with some degree of
autonomy (but not necessarily with complete autonomy). But the issue
before us is what could a robot be built to do, not what current robots
can now do. You can’t argue that robots can’t be built to be conscious
just because they’re, well, robots and no robots do what is at issue!
That begs the question in a huge way. The same applies to computers.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

S. MIRSKY: If we
built a robot and endowed it with a million year energy supply and it
then managed to remain operational for a million years, but mankind
didn’t, would it cease to be what it is (an operating robot) once all
the people who used to know it as a “robot”, capable of doing this and
this and this, were no longer around to think of it in that way? BR:
“It wouldn’t cease to be anything, it would cease to be described as
something.” SWM: Right. But this isn’t about describing. It doesn’t
matter if we call a human a human or a featherless biped or a
big-brained hairless primate, etc. And it doesn’t matter if we call a
robot a robot. What matters is what the entity in question can DO. A
robot in a world without humans (or any other intelligent species
capable of naming and describing) would no longer be known as a robot.
Indeed it would no longer be known at all — unless it had its own
consciousness and knew itself (or unless we want to assume the presence
of less intelligent species than humans which “know” what they know in
a different way than we do. My late cat which would never have mistaken
a robot for me but might have known it for a bothersome obstacle across
its path or as a nice place to curl up on, once it stopped moving).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

B. RANSON:
“Consciousness is caused by very speciﬁc physical processes. It isn’t
reasonable to believe that it is also going to be caused by innumerable
other quite different processes.”

S. MIRSKY: Why
not? BR: “Because there isn’t any reason. There isn’t anything about
what computers do that would make you think they are conscious, any
more than a toaster.” SWM: This just sounds like dogma. In fact there
are certainly reasons and quite good ones. Here are a couple: 1)
Similarity in the mechanics: Brains appear to operate by passing
electrical signals along particular pathways formed by neurons,
neuronal clusters and synaptic spaces between while computers operate
by passing electrical signals along particular pathways through various
logic gates embedded in chips and chip arrays. 2) Similarity of
outcomes: A lot of the things brains do are also doable using today’s
computers (as in playing chess, answering questions, calculating
equations and so forth). If so, then it is possible computers can do
more of the things brains do. These are strong reasons to think that
computers could do what brains do while chairs and tables would not be
able to.

In replying to
Richard Brown, Stevan Harnad wrote (among other things) that his
position was argued for, contra Richard’s point that he was invoking
intuition. Stevan said: “Not intuition, reasons: the falsity of
psychokinesis and the superﬂuity of feeling for doing, or causing
doing.” And yet Stevan’s denial of my argument, that it’s at least
theoretically possible to explain “feeling” in terms of a process based
system, hinges on his invocation of the Other Minds Problem, a
notoriously insoluble philosophical conundrum (unless one is prepared
to deconstruct the problem and deny it meaning a la Wittgenstein). The
Other Minds Problem cannot be argued against because it hangs on a
certain way of understanding things, i.e., of acceptance of the idea
that knowledge requires indubitability, which is only achievable through
direct access or reasoning from information acquired via direct access.
I suggest that this IS intuition-driven and, therefore, Stevan’s denial
of my proposed answer to his challenge looks to be founded not on
reasons, contra his claim, but on intuition too. Stevan further wrote:
“I invited candidate causal explanations. But all I’ve heard so far —
it’s hard to resist the obvious naughty pun here — has been thin air!”
But, frankly, how can a denial of a thesis based on the Other Minds problem be relevant to a scientific question, or supported by reasons, as
he asserts, when the problem is not, itself, grounded in reasons but in
a certain way of understanding what we mean by knowledge in cases like
this? If we assume that the only way we can know, with any certainty,
about others’ feelings is via direct access, but are denied that
access, then we are obliged to accept the claim that we cannot know
about others’ feelings. And yet, we DO know about others’ feelings.
Even Stevan admits this when he says he would not deny the feeling of
any entity that passed the lifetime Turing Test at, at least, the T3
level. So why demand more of the T3-passing entity scientifically? And has an answer to the proposal offered, one that hinges on an intuition that absence of access amounts to absence of knowledge, satisfied Stevan’s assertion that his is a reasoned stance rather than an intuitive one?

SH (@ 3/2/2011 16:21): “(2) But it’s the best we can do (Turing Indistinguishable). (That’s why I wouldn’t eat or beat a T3 robot.)”

Seems like a rash commitment, not that either eating or beating a piece of metal seems like a good idea in the first place. Would you turn it off for the night? Would you give
it the vote? There’s a fair amount of (science) ﬁction literature on
all of this, of course, much of it more along the lines of whether it
would eat or beat *you*! But seriously, folks, isn’t an appeal to
Turing Indistinguishability moving in the wrong direction? If that was
enough, then Searle’s posit of the CR would not, could not, suffer from
its (alleged) shortcomings. And I thought we were all agreeing that we
want a constructive answer – we don’t have an answer until we have a
constructive answer. Something something about hermeneutics otherwise.
SH: “(3) Even if T3 (somehow) guaranteed feeling, we would not know how, and T3 would not explain it.”

Again, this seems to give up the argument.
What exactly might it be, for us to know how the feelings are
guaranteed? On the one hand we have literature, as in Wittgenstein, on
the role and surveyability of proofs. On the other hand we have the
pragmatic experiences of writing and using computer programs. If I
write a program to compute prime numbers, do I know “how” – “how” what,
exactly? How they are computed in the abstract, or in the particular
case of 2^n-1, n=123456789? If I write a program (or build a robot)
that does X, is there any more to know about generating and
guaranteeing X, than what was done? The Wittgensteinian answer is, to
make a long story short, no, that’s it. The compsci answer, that pesky
Systems Reply again, is also – this is it, all there is. Do we know how
a program puts a picture of Obama on the screen, even though all it
does is move around ones and zeroes? Does that guarantee the image I?
Could there be more to guaranteeing a feeling F? I don’t see how,
unless a priori we distinguish F from all other things that are not F,
but if we do that we’ve prejudged the issue and are just moving around
the pieces. If it comes to prejudgement, mine is that T2 is as good at
Turing Indistinguishability as any other possible TX, and has the
historical advantage that Turing wrote it that way. The only reason to
move away from T2 is if it gives us a more constructive answer, such
that we would have exactly the knowledge, to the degree that one can
ever have that kind of knowledge, of what those feelings are and how
they are generated. That does seem to be a call to move beyond Turing
Indistinguishability, that follows from your argument, and in some way
I also credit Searle for motivating us in that direction.

J.STERN: “What exactly
might it be, for us to know how the feelings are guaranteed?”

We don’t want guarantees.
(This is not about the other-minds problem.) We want causal explanation
for the feeling, just as we would get it for the (T3, T4) doing.

J.STERN: “I write a
program to compute prime numbers, do I know ‘how’”

Yes.

J.STERN: “If I write a
program (or build a robot) that does X, is there any more to know about
generating and guaranteeing X, than what was done?”

No. And what was done with
the prime numbers, was the computation of primes. And with T3, the
generation of the T3 performance capacity. And if that also generated
feeling, we want to know how that was done. Without further notice, if
it generates T3 all we have explained is how it generated T3.

J.STERN: “Do we know how
a program puts a picture of Obama on the screen, even though all it
does is move around ones and zeroes? Does that guarantee the image I?”

Yes.

J.STERN: “Could there be
more to guaranteeing a feeling F?”

Yes indeed.

J.STERN: “I don’t see
how, unless a priori we distinguish F from all other things that are
not F”

Here’s a way they
distinguish themselves: With all those other things, you can explain
how and why your model generates them; with F you can’t.

J.STERN: “but if we do
that we’ve prejudged the issue and are just moving around the pieces”

No, we’ve just called a
spade a spade, and admitted that we have a gaping explanatory gap.

J.STERN: “If it comes to
prejudgement, mine is that T2 is as good at Turing Indistinguishability
as any other possible TX, and has the historical advantage that Turing
wrote it that way.”

Yes, but it might be better
to keep thinking…

J.STERN: “The only
reason to move away from T2 is if it gives us a more constructive
answer”

Your point has been made,
multiple times, Joshua. You want a working model. Fine. Point taken.
But that is not an argument. It is changing the subject.

Intuition and Argument:
Earlier on I posted something suggesting that Stevan’s argument was
intuition-based, contra his response to Richard Brown that he was
relying entirely on reasons which he believed were supportable to make
his case. The case he makes is that what is central to consciousness,
the condition of feeling anything at all (which Stevan holds is the
most basic and least misleading way of speaking about consciousness),
can never be accessed in others and therefore can never be explained
scientiﬁcally. We can, Stevan maintains, scientiﬁcally account for the
performance of an entity through reverse engineering and, if that
entity is capable of acting with apparent autonomy, which looks like
what our autonomous behavior looks like, then we can at least conclude
that we have captured the behavior mechanisms that match our own. But,
he has argued, those mechanisms need not involve feeling like we have
and so we can never have reason to think that such a reverse-engineered
entity feels anything at all. Stevan denies that his argument has to do
with the philosophical zombie question (even though it seems to imply
the possibility of such things) and, while periodically invoking the
historically troublesome philosophical problem of Other Minds, he has
also denied that this is relevant to his position as well. His position
appears to boil down to the claim that cognitive science must forever
limit itself to questions of behaviors, and their mechanisms, while
giving up on questions of “feeling” entirely. In his paper, he invited
others to try to counter his position (that no explanation of the
phenomenon of “feeling” is possible) by offering arguments which showed
that some explanation was possible. Several have since been offered
here but Stevan has rejected them all. His grounds for rejecting them
seem to come down to these: 1) Psychokinesis (mind over matter?) would
need to be possible for feeling and physical entities to be linked and
there is no evidence that it is; and 2) That there is a “superﬂuity of
feeling for doing or causing doing”. (Presumably he means that feeling
and doing are conceptually disconnected.) Richard Brown suggested that
at least the second basis for rejection seems to be consistent with
epiphenomenalism and a commitment to the conceivability and, therefore,
possibility, of philosophical zombies. But Stevan denies this while
continuing to maintain that his claim about feeling and physicality is
reason based, open to argument and, therefore, refutation. My last
response indicated my agreement with Richard’s position, that Stevan’s
approach looked like an intuition-based one to me. (Of course, I could
be said to have an ax to grind as my response to Stevan’s challenge was
one of those he rejected out of hand.) On reconsideration, though, I
think my explanation of why Stevan’s response is intuitionistic was
inadequately explicated. I put it all down to the role of the Other
Minds Problem in Stevan’s argument (which, of course, Stevan has denied
is relevant to his claims) and traced it to the problem of certainty as
it relates to what we can know. While not withdrawing my point about
the role of the Other Minds Problem in Stevan’s argument, I think I
could have been a bit clearer. The Other Minds Problem (which holds
that, because we can’t access the minds of others in the way we have
access to our own, we can’t be sure anyone else has a mind but
ourselves) is, it seems to me, driven by two factors: 1) The
supposition that minds (“feelings”) have an ontic status of the same
type as other objects of our knowledge (i.e., that they have some kind
of existence apart from their physical base — that they aren’t, ﬁnally,
physical, whatever their provenance or genesis); and 2) The supposition
that, to know anything, we must be certain of it and that certainty
amounts to a claim’s being seen to be true in an undeniable way (with
no room for doubt). I focused in my response on the second element of
the Other Minds Problem. But that, I now think, is not a matter of
intuition but of linguistic misapplication, i.e., it’s a case of using
our terms wrongly (expecting “certainty”, which is important to claims
of knowledge, to always be a matter of indubitability — in the way it
is for math equations, deductive syllogisms and so forth). I don’t
think we do this intuitively but only when we aren’t paying close
enough attention to what we actually mean when we use terms having to
do with knowing things, etc. In fact, I should have focused on the ﬁrst
part of the problem which is where the real issue of intuition resides,
I think. Given that our language is publicly grounded, we expect mental
referents to be like referents in the public realm of our experience.
Unlike the issue for #2 above, this is NOT a misuse of our language but
the result of thinking in certain categories. That is, it’s a failure
to see how our language can and often does lead us astray in certain
applications. Unlike the question of what it means to know or be
certain (these uses vary by context and cannot simply be substituted
one for another), we DO routinely refer to our mental lives and this
cannot be an error of use because it’s an important part of our
discourse. Rather it looks like an error of conceptualization in that
we may allow the pictures from one area of our operation to impose
themselves on another. Once we recognize the distinction between public
referents and the private (mental) kind, and how language is used
differently in the two contexts, the expectation that mental phenomena
must be entity-like, and therefore exist in the way public objects
exist, evaporates. But this cannot be argued for. It involves seeing
the way our language works in a new way and that’s why it’s ﬁnally a
matter of intuition — and why, I think, Stevan’s second reason for
denying the possibility of an explanation of how physical phenomena
result in feelings is intuition-based.

S.MIRSKY: “The case he
makes is that… [feeling] can never be accessed in others and therefore
can never be explained scientiﬁcally”

No. Feeling cannot be
explained causally because it is causally superﬂuous to explain doing
and we can only explain doing.

The constant importation of
the other-minds problem (“Would a T3 robot feel or not?”) has next to
nothing to do with it. Whether or not the T3 robot feels, we cannot
explain how or why it does.

S.MIRSKY: “if [T3's]
behavior [is indistinguishable from ours] then we can at least conclude
that we have captured the behavior mechanisms that match our own. But,
[Stevan] has argued, those mechanisms need not involve feeling like we
have and so we can never have reason to think that such a
reverse-engineered entity feels anything at all.”

The constant importation of
the other-minds problem (“Would a T3 robot feel or not?”) has next to
nothing to do with it. Whether or not the T3 robot feels, we cannot
explain how or why it does.

S.MIRSKY: “Stevan denies
that his argument has to do with the philosophical zombie question…
and…[he has also denied that] the historically troublesome
philosophical problem of Other Minds… is relevant to his position.”

Correct.

S.MIRSKY: “His position
appears to boil down to the claim that cognitive science must forever
limit itself to questions of behaviors, and their mechanisms, while
giving up on questions of “feeling” entirely.”

Correct.

S.MIRSKY: “Stevan has
rejected [attempted explanations on the grounds of] “superﬂuity of
feeling for doing or causing doing”. (Presumably he means that feeling
and doing are conceptually disconnected.)”

I’m not sure what
“conceptually disconnected” means, but I mean that feeling is causally
superﬂuous in explaining doing, or the causes of doing.

S.MIRSKY: “Richard Brown
suggested that at least the second basis for rejection seems to be
consistent with epiphenomenalism and a commitment to the conceivability
and, therefore, possibility, of philosophical zombies. But Stevan
denies this.”

That’s right. I have no
idea what “epiphenomenalism” means. I’m just asking for (and not
hearing) a viable causal explanation of why and how we feel. If people
ﬁnd it informative to dub the absence of that explanation
“epiphenomenalism,” they’re welcome to call it what they like. I don’t
ﬁnd that the term conveys any information at all.

And as for zombies: what
more can I say than that I don’t believe in them; I don’t use them; I
don’t regard “conceivability” as any sort of rigorous criterion for
anything; and if (as I believe) zombies are not possible, I certainly
cannot explain why or how they are impossible, because that would be
the same as explaining why and how we feel.

S.MIRSKY: “[Stevan]
maintain[s] that his claim about feeling and physicality is reason
based, open to argument and, therefore, refutation.”

In principle, yes, it is,
since I can no more prove, a priori, that there cannot be a causal
explanation of how and why we feel than I can prove that there cannot
be zombies. All I can do is give the reasons I think such attempted
explanations are bound to fail, and then try to show, one by one, with
actual attempted explanations, how they do indeed fail.

S.MIRSKY: “Stevan’s
response is intuitionistic”

I don’t think so: It no
more appeals to intuitions than it appeals to zombies, the other-minds
problem, or epiphenomenalism. It is not an appeal *to* but an appeal
*for* — causal explanation.

S.MIRSKY: “The
supposition that minds (“feelings”) have an ontic status of the same
type as other objects of our knowledge…”

I don’t understand “ontic
status” or “ontic status type”: There are (roughly) objects, events,
actions, properties and states. I suppose a feeling is a kind of state.
But that doesn’t help if what we want is a causal explanation.

S.MIRSKY: “The
supposition that, to know anything, we must be certain of it and that
certainty amounts to a claim’s being seen to be true in an undeniable
way (with no room for doubt).”

Who asked for certainty? I
can’t even be certain about the T3 explanation of doing (or about the
scientiﬁc explanation of any empirical phenomenon). I’m with Descartes
and the soft skeptics on certainty: it’s only possible with (1)
mathematically provable necessity and (2) the Cogito (“I feel,
therefore there is feeling”). The rest is just probability and
inference to the best explanation.

But with feeling, there is
no explanation at all.

S.MIRSKY: “language
publicly grounded… mental referents… public experience realm… NOT
language misuse… thinking in certain categories… language leads astray
in certain applications… what it means to know or be certain… part of
our discourse… error of conceptualization… pictures from one area
impose… public referents… private (mental) referents… language use…
different contexts… entity-like expectation… public object existence…
language works in a new way… matter of intuition — and why, I think,
Stevan’s second reason for denying the possibility of an explanation of
how physical phenomena result in feelings is intuition-based…”

Such a complicated way to
say: We do and we feel. We can explain how and why we do, but not how
and why we feel. Amen. (And may Wittgenstein rest in peace.)

Great discussion! Thanks
for your cross-post. I take it that what we all are interested in is
what you call feelings. If one is an a posteriori physicalist and
endorses the phenomenal concept strategy that Philip Goff mentioned,
she will explain why your question won’t receive the answer you are
expecting. Pain might be identical to a functional state, but you won’t
be able to deduce phenomenal properties (the way it feels or the fact
that it feels) from its functional description. With regard to the
discussion about HOT. Very roughly, for HOT theories a state is
conscious iff it is the target of the right kind of higher order
thought. Imagine the state you are in when you are in pain, M.
According to HOT M is conscious (it feels “painfully” for me to be in
this state) because it is targeted by another higher-order state of the
right kind. I have understood that M has its “painfulness” independently of whether it is conscious or not; it is a qualitative state. I do not understand very well what a non-conscious qualitative state is, so maybe you can ask what distinguishes non-conscious qualitative states from non-qualitative states (I don’t know the answer). This way you could understand why the HOT is required to get a feeling, and how this could address your question, or whether you think that these qualitative states are already ‘feeling states’.

M.SEBASTIAN: “an a
posteriori physicalist [who] endorses the phenomenal concept strategy
that Philip Goff mentioned… will explain why your question won’t
receive [an] answer… Pain might be identical to a functional state, but
you won’t be able to deduce phenomenal properties (the way it feels or
the fact that it feels) from its functional description.”

A rather complicated
explanation for why we cannot explain how and why we feel. But I would
say it was already apparent that we could not…

M.SEBASTIAN: “for HOT
theories a state is conscious iff it is the target of the right kind of
higher order thought. Imagine the state you are in when you are in
pain, M. According to HOT M is conscious (it feels “painfully” for me
to be in this state) because it is targeted by another higher-order
state of the right kind. I have understood that M has its “painfulness”
independently of whether it is conscious or not; it is a qualitative state.”

Deﬂationary transcription:

“for HOT theories a state
is felt iff it is the target of the right kind of higher order thought.
Imagine the state you are in when you are feeling pain, M. According to
HOT M is felt (it feels “painfully” for me to be in this state) because
it is targeted by another higher-order state of the right kind. I have
understood that M has its “painfulness” independently of whether it is
felt or not, it is a felt state”

Is that supposed to be an
answer to my question of why and how I feel? If so, it doesn’t quite do
the trick. It sounds like process hermeneutics, begging the question
while embedding it in an interpretative system.

M.SEBASTIAN: “I do not
understand very well what a non-conscious qualitative state is”

Nor do I. In my more direct
language it is an unfelt feeling, and I haven’t the faintest idea what
that’s supposed to be!

M.SEBASTIAN: “so maybe
you can ask what distinguishes non-conscious qualitative states from
non-qualitative states (I don’t know the answer).”

Deﬂated: “maybe you can ask
what distinguishes unfelt felt states from unfelt states”

I sure don’t know the
answer either: I can’t even understand the question!

M.SEBASTIAN: “This way
you could understand why the HOT is required to get a feeling and how
this could address your question, or whether you think that these qualitative states are already ‘feeling states’”

I’m afraid HOT only serves
to cool my understanding. My suspicion is that it conceals the lack of
understanding under layers of interpretation.

Harnad: “I invited
candidate causal explanations. But all I’ve heard so far — it’s hard to
resist the obvious naughty pun here — has been thin air!” And: “I have
the same powerful, convincing causal idea everyone has — psychokinesis
— but alas it’s wrong. “Discovering a ﬁfth force — feeling — would of
course solve the mind/body problem. Trying to ask “why does the
fundamental feeling force feel?” would be like asking “why does the
fundamental gravitational force pull?” It just does. (But unfortunately
there is no fundamental feeling force.)” You seem to be stuck on the
notion that to really explain feelings, the explanation has to be
causal, involving mechanisms and/or undiscovered forces that somehow
produce or generate them. But that claim is not obviously true, and
indeed the idea that a causal explanation of consciousness could in
principle exist seems to me unlikely, given that feelings are
categorically private, unlike their neural correlates. Were feelings
generated or caused by observable forces or mechanisms, feelings would
be in the public domain as is every other product of causal processes,
but they aren’t. This suggests they aren’t caused, produced or
generated, but instead entailed (for the system alone) by *being* a
system with certain sorts of representational capacities. There are
philo-scientiﬁc considerations that count in favor of this possibility
adduced at http://www.naturalism.org/appearance.htm
I’m not saying my proposal is correct, only that the
fact that it isn’t causal shouldn’t lead one to reject it out of hand
as an answer to your innocent question: “Why and how do we feel (rather than just do)?”

Response to Tom
Clark’s comments of 3/3/11 @ 13:08 You write that “the idea that a
causal explanation of consciousness could in principle exist seems to
me unlikely, given that feelings are categorically private, unlike
their neural correlates.” But why should this preclude a causal
explanation? What is “causal” but a claim that X brings about Y (as in
makes it happen, results in it, is sufﬁcient for its occurrence, etc.)?
If an account is given of a set of operations which will produce in a
given machine “feeling”-related behavior which is convincingly that (in
every conceivable way we might need to be convinced), then why would
that not be sufﬁciently “causal” to ﬁt the bill? Stevan’s position
seems to be that the inherent subjectivity of feelings, the fact that
they are private to the entity having them, precludes our ability to
ascertain that they are really present and therefore such an
explanation can only explain the occurrence of the behaviors, not the
feelings (which may even be self-reported by the machine but which
still, being private to the machine, cannot be credited with being
there). On these grounds he seems to conclude that a causal explanation
isn’t possible. But if he’s wrong about what’s needed to assure
ourselves of the occurrence of the feelings in the machine, then hasn’t
a causal explanation (in all the usual senses of “causal”) been
provided? What else would be needed, and doesn’t your concession that
“causal” isn’t relevant because it doesn’t apply in this kind of case
amount to an afﬁrmation of Stevan’s position that a “causal”
explanation isn’t possible? After all, that’s all he really seems to be
claiming here.

J.STERN: “I don’t
see how, unless a priori we distinguish F from all other things that
are not F” SH (@ 3/2/2011 20:07: Here’s a way they distinguish
themselves: With all those other things, you can explain how and why
your model generates them; with F you can’t. I have not accepted that
point: I assert that all such things are both caused and causal. I
assert there is nothing but the physical, causal world. Any phenomena
that occur within deserve an explanation, but I am unconvinced there is
a problem here of special concern. If you want to accept this a priori
difference between F and non-F, that’s ﬁne, I am happy to point out,
repeatedly as required, that I don’t accept it, don’t recognize it in the first place as something to be accepted, and take it as just another phenomenon like the rising of the sun or the wetness of water. J.STERN: “If it comes to
prejudgement, mine is that T2 is as good at Turing Indistinguishability
as any other possible TX, and has the historical advantage that Turing
wrote it that way.” SH: “Yes, but it might be better to keep thinking…”
We agree on that. J.STERN: “The only reason to move away from T2 is if
it gives us a more constructive answer” SH: “Your point has been made, multiple times, Joshua. You want a working model. Fine. Point taken. But that is not an argument. It is changing the subject.” It is the only
subject, it is the key to the analysis of the problem, and must be
reviewed at every point. In discussions with SWM ofﬂine, perhaps we’ve
come up with a better way to say this. SWM asks, where is the content,
in the mind, or the symbols? My response is this – look at the CR
problem again. Either there is something missing (“explanatory gap”,
“hard problem”, “feelings”) that must be accounted for, or there is
something present (“cognition”, “intelligence”, “consciousness”)

– that must be
accounted for! Searle presents the parable as if it were in the ﬁrst
category. I applaud the intuition pump, but place the story in the
second category – given the ex hypothesi intelligent behavior, just
what could possibly account for it? Unfortunately Turing never really
touched that, unless you count the TM story as the answer – and even
think that is too broad a jump, talk about your explanatory gaps. So,
we keep thinking.

Reply to Stevan Harnad’s
post of 3/3/11 @ 14:29 I’ll try to be brief (but it’s a challenge). SH:
“The constant importation of the other-minds problem (‘Would a T3 robot
feel or not?’) has next to nothing to do with it. Whether or not the T3
robot feels, we cannot explain how or why it does.” SWM: As I recall,
you mentioned the “other minds” problem a number of times as part of
your justiﬁcation for saying we can’t explain the occurrence of
feelings. I’m merely looking at the implications. SH: “I’m not sure
what ‘conceptually disconnected’ means, but I mean that feeling is
causally superﬂuous in explaining doing, or the causes of doing.” SWM:
I probably didn’t put that well enough. I meant conceptually
disconnected because the referents are radically different. Anyway,
your argument that a causal explanation of the occurrence of feeling is
impossible because “feeling is causally superﬂuous in explaining doing,
or the causes of doing” certainly has the look of circularity, don’t
you think? At the least it begs the question since it assumes “causal
superﬂuity” to justify the claim that a causal explanation can’t be
deployed. One may not need much more than that to read this as a claim
based on intuition. SH: “I have no idea what ‘epiphenomenalism’ means.”
SWM: I’m guessing this is hyperbole but, on the off chance it isn’t —
it’s the notion that consciousness (what you call “feeling”) is
a-causal with regard to the physical workings of brains, merely along
for the ride as a corollary of some physical phenomena in brains (kind
of like the froth on an ocean wave). Certainly a claim that “feeling”
may not accompany what we otherwise recognize as “feeling” behavior in
an entity implies that we could have waves with or without the froth!
And this is in keeping with the idea of philosophical zombiehood, as
Richard Brown has suggested. I am inclined to see your claim in that
way, too, at this juncture. SH: “[My response] . . . no more appeals to
intuitions than it appeals to zombies, the other-minds problem, or
epiphenomenalism. It is not an appeal *to* but an appeal *for* — causal
explanation.” SWM: By “appeals to” do you mean “depends on”? Note that
it’s not my claim that you make any such appeals explicitly to justify
your argument, only that your explicit claims imply these other claims
to varying degrees; if so, they need to be considered in light of those
implications and what can be concluded from arguments for those
positions. SH: “I don’t understand ‘ontic status’ or ‘ontic status
type’: There are (roughly) objects, events, actions, properties and
states. I suppose a feeling is a kind of state. But that doesn’t help
if what we want is a causal explanation.” SWM: My reference was to the
tendency to think of (and so want to talk about) feelings in the same
way as we do other referents. If we make that move then we want to look
for the same kinds of things we expect to ﬁnd in referents in the
public domain (the home of physical things). And now we run aground
because, of course, the referents are in no way alike. The only
similarity is that both occur within our experience. But they play a
different part in that experience so why should we expect to treat them
in the same way, or be surprised when we ﬁnd we really cannot? SH: “Who
asked for certainty? . . . I’m with Descartes and the soft skeptics . .
. it’s only possible with (1) mathematically provable necessity and (2)
the Cogito (‘I feel, therefore there is feeling’). The rest is just
probability and inference to the best explanation. “But with feeling,
there is no explanation at all.” SWM: There is if you don’t demand a
certainty beyond probability and inference (though I go further in
accepting the Wittgensteinian analysis — but given your acceptance of
probability and inference there’s already enough to achieve an
explanation of “feeling”). Anyway, it seems to me that you miss the
point of the Wittgensteinian analysis vis-à-vis how we use language
about mental referents. But it’s not even necessary as long as you
grant that the probability of the presence of “feeling” is as high as
it gets or is ever needed for recognizing the occurrence of “feeling”
in an entity. Once there, we can explain what brings it about in
entities (and test for it by determining the level of autonomous
thought which is dependent on having feeling, etc.). I take it, though,
from the tenor of these discussions that you want an account that shows
how this or that physical event in brains (or some equivalent platform)
just is feeling? Of course, I have suggested that an account which
relies on a description of layered processes (computational ones would
do, here) which produce a multi-level mapping of the world and of the
mapping entity itself (the latter becoming the sense of being a self
which the entity possesses), with sufﬁcient interactivity of processes
to allow the occurrence of the associative events that continuously
update the mappings and depict the elements of the world and
relationships between the elements, could be enough. After all, what is
feeling but the occurrence of various awarenesses of things at varying
levels? Why shouldn’t such an explanation sufﬁce? And yet I think you
will still say it does not. The only question then will be whether you
give as your reason for such denial something that goes beyond
restating the initial claim, that “feeling is causally superﬂuous in
explaining doing, or the causes of doing”.
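
[To give the bare shape of that layered-mapping proposal something to hang on, here is a deliberately crude toy sketch: every name in it is hypothetical, nothing here has actually been specified or built, and it illustrates only the kind of architecture being gestured at, not the theory itself.]

    # Toy sketch only: hypothetical names, no claim that this is the proposed theory.
    from collections import defaultdict

    class LayeredMapper:
        """Keeps a map of the world, a second-level map of itself-as-mapper,
        and associative links between mapped elements, updated continuously."""

        def __init__(self):
            self.world_map = {}               # level 1: features attributed to things
            self.self_map = {}                # level 2: the system's record of its own doings
            self.links = defaultdict(set)     # associations depicting relationships

        def observe(self, thing, feature):
            # Update the map of the world...
            self.world_map[thing] = feature
            # ...and the map of the mapper: what this very system just did.
            self.self_map["last_act"] = ("observed", thing, feature)
            self.links[thing].add(feature)

        def associate(self, a, b):
            # Depict a relationship between elements, not just the elements.
            self.links[a].add(b)
            self.links[b].add(a)

    agent = LayeredMapper()
    agent.observe("stove", "hot")
    agent.associate("hot", "withdraw-hand")
    print(agent.world_map, agent.self_map, dict(agent.links))

[Nothing in the sketch settles the dispute, of course: it is exactly the sort of thing Harnad calls "functing", and the question of how or why such updating would be felt is left untouched by it.]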

S.MIRSKY: “you mentioned
the “other minds” problem a number of times as part of your
justiﬁcation for saying we can’t explain the occurrence of feelings”

I mentioned the other-minds
problem to show we can’t expect to do better than Turing-Testing, but
not in connection with the fact that we cannot explain why or how we
feel.

S.MIRSKY: “your argument
that a causal explanation of the occurrence of feeling is impossible
because “feeling is causally superﬂuous in explaining doing, or the
causes of doing” certainly has the look of circularity, don’t you
think?”

No, it’s open to
refutation. Just show how and why feeling’s not causally superﬂuous in
explaining doing, and instead show how and why feeling is needed in
order to have a successful explanation (T3, or T4).

S.MIRSKY: “At the least
it begs the question since it assumes “causal superﬂuity” to justify
the claim that a causal explanation can’t be deployed. One may not need
much more than that to read this as a claim based on intuition.”

It’s open to
counterexamples, if anyone can come up with one; but it’s not an appeal
to intuition.

S.MIRSKY:
“‘epiphenomenalism’ means… feeling is a-causal with regard to the
physical workings of brains, merely along for the ride”

I know. I think it’s a bit
more ontically circumspect to say that feeling cannot be explained
causally than to say it’s acausal…

S.MIRSKY: “you grant
that the probability of the presence of “feeling” is as high as it gets
or is ever needed for recognizing the occurrence of “feeling” in an
entity”

Yes, the correlates are
reliable enough (in people, if not quite in T3) — if all we’re
interested in is predicting or mind-reading. But if we’re interested in
explaining how and why we feel rather than just do, that’s no help.

S.MIRSKY: “I take it…
that you want an account that shows how this or that physical event in
brains (or some equivalent platform) just is feeling?”

Stuart, it’s a lot more
demanding than that! It’s something more along the lines of wanting an
explanation of how and why this mechanism *could not do what it can do
without feeling*. (But please, please spare me the zombies at this
point where you usually invoke our agreement that they are
inconceivable/impossible as some sort of justiﬁcation for not facing
the real question!).

Think of feeling as being a
“widget” in your causal mechanism. I am simply asking, what would fail,
and why, if this widget were not there?

S.MIRSKY: “I have
suggested… an account which relies on a description of layered
processes… multi-level mapping of the world… the mapping entity itself…
the sense of being a self … sufﬁcient interactivity of processes…
associative events that continuously update the mappings… depict the
elements of the world and relationships between the elements…”

That’s the explanation of
how and why it feels rather than just functs? It all sounds like
functing to me!

S.MIRSKY: “After all,
what is feeling but the occurrence of various awarenesses of things at
varying levels?”

There’s the weasel-word
again:

Deﬂated: “After all, what
is feeling but the occurrence of various feelings at varying levels?”

S.MIRSKY: “Why shouldn’t
such an explanation sufﬁce?”

Because it doesn’t explain.
It just interprets (hypothetical) processes which (if there really
turned out to be such processes) could be described (and causally
explained) identically without supposing they were felt.

(Again, please don’t reply
with our agreement that there are no zombies! And it has nothing to do
with the other-minds problem either: Our supposition that the processes
are felt could be right; that still does not amount to an explanation
of how/why they are right.)

S.MIRSKY: “you will
still say it does not [explain]. The only question then will be whether
you give as your reason… something that goes beyond restating the
initial claim, that ‘feeling is causally superﬂuous in explaining
doing, or the causes of doing’”

There is nothing in your
explanation of the “multi-level mapping” processes etc., that explains
why and how they are felt rather than just functed. You simply assume
it, without explanation. That’s probably because it’s all just
hypothetical anyway. But even if you actually built a successful T3 and
T4 (thereby making Josh Stern happy by providing a “construction”!),
and you could actually point to the internal processes in question, it
would make no difference. There’s still nothing in your account to
explain how or why they are felt rather than functed (even if they
truly are felt!).

In response to Stuart
Mirsky 3/3 14:35: “But why should [the fact that feelings are
categorically private, unlike their neural correlates] preclude a
causal explanation?” It doesn’t necessarily. But in one of my responses
to Stevan, I suggested that were feelings generated or caused by
observable forces or mechanisms, they would be in the public domain as
is every other product of causal processes, but they aren’t. This
suggests they aren’t caused, produced or generated, but instead
entailed (for the system alone) by *being* a system with certain sorts
of representational capacities. There are philo-scientiﬁc
considerations that count in favor of this possibility adduced at http://www.naturalism.org/appearance.htm

“If an account is given of
a set of operations which will produce in a given machine
‘feeling’-related behavior which is convincingly that (in every
conceivable way we might need to be convinced), then why would that not
be sufﬁciently “causal” to ﬁt the bill?” The causal account is of the
feeling-related *behavior*, which isn’t the same thing as feelings
since feelings can exist without any behavior. Feelings (conscious
phenomenal states) seem to closely co-vary with certain behavior
controlling and cognitive processes going on in the head, see http://www.naturalism.org/kto.htm#Neuroscience
But as far as I can tell, there’s no causal story about how feelings
get produced or generated by those processes (one of the difﬁculties
with epiphenomenalism, which has it that feelings *are* thus
generated). “But if Stevan’s wrong about what’s needed to assure
ourselves of the occurrence of the feelings in the machine, then hasn’t
a causal explanation (in all the usual senses of “causal”) been
provided?” Not necessarily, since as I’ve suggested there might be a
non-causal explanation. “…doesn’t your concession that “causal” isn’t
relevant because it doesn’t apply in this kind of case amount to an
afﬁrmation of Stevan’s position that a “causal” explanation isn’t
possible?” Stevan is saying that he hasn’t seen a causal explanation
that he ﬁnds convincing, *and* he seems convinced that only a causal
explanation could be truly explanatory. There may be a causal
explanation of consciousness but I’m skeptical for reasons I’ve
sketched here and that are set out in more detail at http://www.naturalism.org/appearance.htm#part2
I’ve also suggested in part ﬁve of the same paper that there might be
viable *non-causal* options for explaining consciousness.

Stevan: “Not caused
but entailed? How entailed? Why entailed?” My suggestion is that any
complex behavior-controlling representational system (RS) will have a
bottom level, not further decomposable, unmodiﬁable, epistemically
impenetrable (unrepresentable) hence qualitative (non-decomposable,
homogeneous) and ineffable set of representational elements. They
therefore appear as irreducible and indubitable phenomenal realities
for the RS alone. The “therefore” is of course completely contestable;
I don’t claim to have proved the entailment, only suggested it. The
supporting considerations, which you might find of interest and which draw
extensively from Metzinger’s work, are at

TC: It doesn’t
necessarily. But in one of my responses to Stevan, I suggested that
were feelings generated or caused by observable forces or mechanisms,
they would be in the public domain as is every other product of causal
processes, but they aren’t. This suggests they aren’t caused, produced
or generated, but instead entailed (for the system alone) by *being* a
system with certain sorts of representational capacities. There are
philo-scientiﬁc considerations that count in favor of this possibility
adduced at http://www.naturalism.org/appearance.htm

SWM: In this case I
would agree with what I take Stevan’s last response to be, to wit, that
“entailed” sounds like just another way to speak of “caused”. The
question then is just what is it that causes a feeling system (which,
by virtue of being that, feels)!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ “If an account is given of
a set of operations which will produce in a given machine
‘feeling’-related behavior which is convincingly that (in every
conceivable way we might need to be convinced), then why would that not
be sufﬁciently “causal” to ﬁt the bill?” TC: The causal account is of
the feeling-related *behavior*, which isn’t the same thing as feelings
since feelings can exist without any behavior. SWM: Yes, they can. But
we aren’t looking for only non-behavior related feelings but for
feelings period. Thus any feelings will do and behavior-related ones ﬁt
that bill. Stevan’s argument is that that isn’t enough to assure
ourselves that our T3 entity isn’t a robot (that is, isn’t a zombie)
and so, absent such assurance, all we have is an account of caused
behavior. It is there I have mainly disagreed with him. I’m not sure
where you are on this point though. Are feelings in others recognized
through behavior and is that enough to determine that feelings are
present? TC: . . . as far as I can tell, there’s no causal story about
how feelings get produced or generated by those processes (one of the
difﬁculties with epiphenomenalism, which has it that feelings *are*
thus generated). SWM: Epiphenomenalism aside, at least one issue worth
exploring here is whether a process-based systems account is enough to
explain the occurrence of feelings. Note that I am not arguing that any
such account is true, only that, if it were true, it would be enough,
contra Stevan. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ “But if
Stevan’s wrong about what’s needed to assure ourselves of the
occurrence of the feelings in the machine, then hasn’t a causal
explanation (in all the usual senses of “causal”) been provided?” TC:
Not necessarily, since as I’ve suggested there might be a non-causal
explanation. SWM: My only point is that it would be enough IF IT WERE
TRUE. This isn’t about what is true but about what would sufﬁce if it
were. Your non-causal explanation, on the other hand, only looks like a
variation in the terms because it uses “entailment”, which is generally
applied to questions of logic, to substitute for the “cause” word in
this case. Of course, language does allow such ﬂexibility but we need
to be very careful that we don’t let it paper over real distinctions or
suggest distinctions that aren’t there.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ “…doesn’t your concession
that “causal” isn’t relevant because it doesn’t apply in this kind of
case amount to an afﬁrmation of Stevan’s position that a “causal”
explanation isn’t possible?” TC: Stevan is saying that he hasn’t seen a
causal explanation that he ﬁnds convincing, *and* he seems convinced
that only a causal explanation could be truly explanatory. There may be
a causal explanation of consciousness but I’m skeptical for reasons
I’ve sketched here and that are set out in more detail at http://www.naturalism.org/appearance.htm#part2
I’ve also suggested in part ﬁve of the same paper that there might be
viable *non-causal* options for explaining consciousness. SWM: I agree
he is saying he hasn’t seen one that he ﬁnds convincing. Note that he
has also said that he believes none is possible. When presented with
possible explanations of this type, he denies their possibility. I take
that to be a signiﬁcant problem with his argument. But when you
substitute “entail” for “cause”, it seems to me that, insofar as you
aren’t simply substituting one term for another, you are already
granting his claim that causal explanations of feeling aren’t possible.

Reply to Stevan Harnad’s
comments of March 3, 2011 at 16:54 S.MIRSKY: “your argument that a
causal explanation of the occurrence of feeling is impossible because
“feeling is causally superﬂuous in explaining doing, or the causes of
doing” certainly has the look of circularity, don’t you think?” SH: No,
it’s open to refutation. Just show how and why feeling’s not causally
superﬂuous in explaining doing, and instead show how and why feeling is
needed in order to have a successful explanation (T3, or T4). SWM:
Given the “lifetime Turing Test” we’ve discussed, an unfeeling entity could not
keep up the simulation across the full range of behavioral
requirements. That is, feeling in the sense the term is used here is an
essential ingredient in the entity’s actions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “At the least it begs
the question since it assumes “causal superﬂuity” to justify the claim
that a causal explanation can’t be deployed. One may not need much more
than that to read this as a claim based on intuition.” SH: It’s open to
counterexamples, if anyone can come up with one; but it’s not an appeal
to intuition. SWM: An entity that met the Commander Data test would be
the counterexample — unless you assume that absence of access implies
at least the possibility of the lack of presence. If you don’t
then any explanation that accounted for the Data entity would sufﬁce. I
think there’s a confusion in your position, Stevan. It’s as if you
think that, because there’s a difference between having feelings and
“reading” them in others, there’s no reason to grant their presence
when everything else that’s relevant testiﬁes to that presence.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “you grant that the
probability of the presence of “feeling” is as high as it gets or is
ever needed for recognizing the occurrence of “feeling” in an entity”
SH: Yes, the correlates are reliable enough (in people, if not quite in
T3) — if all we’re interested in is predicting or mind-reading. But if
we’re interested in explaining how and why we feel rather than just do,
that’s no help. SWM: If we feel because systems of a certain type have
feeling, then any explanation for how such a system has feeling is
enough. The kind of system example I have offered includes just such an
explanation of what feeling is. Below I see you call that into question
so I’ll delay amplifying my point until then.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “I take it… that you
want an account that shows how this or that physical event in brains
(or some equivalent platform) just is feeling?” SH: Stuart, it’s a lot
more demanding than that! It’s something more along the lines of
wanting an explanation of how and why this mechanism *could not do what
it can do without feeling*. . . . Think of feeling as being a “widget”
in your causal mechanism. I am simply asking, what would fail, and why,
if this widget were not there? SWM: The entity would fail to pass the
lifetime Turing Test at some point, however good a simulation it was.
Why would it fail? It would lack the underpinnings that lead to the
requisite behaviors. The theory proposes that the underpinnings are
accomplished by installing certain functionalities in the system’s
processes. If those functionalities were not added or didn’t work as
speciﬁed or weren’t enough to generate the requisite mental life, the
tested entity would fail. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S.MIRSKY: “I have suggested… an account which relies on a description
of layered processes… multi-level mapping of the world… the mapping
entity itself… the sense of being a self … sufﬁcient interactivity of
processes… associative events that continuously update the mappings…
depict the elements of the world and relationships between the
elements…” SH: That’s the explanation of how and why it feels rather
than just functs? It all sounds like functing to me! SWM: I can’t make
sense of your neologism, “functing”. It just sounds like a way of
avoiding acknowledging feeling to me. I agree that not all processes or
process-based systems will feel. I am only arguing that feeling could
be achievable via such an approach, insofar as a system can be gotten
to the point where it replicates the things brains manage to do. I make
no claim here that this is true or has been done, only that such a
theory COULD explain the occurrence of feeling in an entity. But, as
I’ve suggested before, there is an underlying conception of
consciousness or feeling that may inform our different intuitions about
this (why it seems to work for me but not for you). I have no problem
thinking about feeling in terms of the various processes that it could
break down to and, indeed, when I do that, I see nothing left out. It
seems to me that you either do have a problem with this notion (that
feeling is analyzable into non-feeling constituent processes performing
particular tasks) or you think something is left out that I don’t see.
If the latter, can you say what it is other than just the feeling
itself? If not, then I must assume our difference lies in your
discomfort with the idea that feeling is analyzable into constituents
more basic than it is. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S.MIRSKY: “After all, what is feeling but the occurrence of various
awarenesses of things at varying levels?” SH: There’s the weasel-word
again: Deﬂated: “After all, what is feeling but the occurrence of
various feelings at varying levels?” SWM: A linguistic problem again. I
don’t like the term “feeling” because it seems misleading to me. So I
try to narrow it down by offering to describe it in terms of what I
think is clearer. Since you think “feeling” is the clearest we can get,
you don’t accept further analysis. But let’s stipulate for the moment
that we mean exactly the same thing by “feeling” and “awareness”, and
disregard all the other uses that both words lend themselves to. This
gets to the question of reducibility of the phenomenon. Above I asked
if you disagree with me about the possibility of analyzing feeling into
constituent elements which aren’t feeling, or if you thought something
is left out in my analysis that is not reducible. In either case though
we come to a point where you reject my reducing instances of having
feeling to instances of being aware. Isn’t this just a claim that
feeling is irreducible? And doesn’t this then take us back to the core
problem, that you see consciousness (feeling) as a bottom line basic,
whereas I don’t? But if you do, then isn’t that fundamentally a dualist
position? It would certainly explain why we cannot agree on common
ground here and why you bridle at my suggestion that to describe a
system that causes feeling behavior is to provide a causal explanation
of feeling. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “Why
shouldn’t such an explanation sufﬁce?” SH: Because it doesn’t explain.
It just interprets (hypothetical) processes which (if there really
turned out to be such processes) could be described (and causally
explained) identically without supposing they were felt. SWM: It would
suppose they were felt if the measure is feeling behavior and the
entity passes the lifetime Turing Test.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “you will still
say it does not [explain]. The only question then will be whether you
give as your reason… something that goes beyond restating the initial
claim, that ‘feeling is causally superﬂuous in explaining doing, or the
causes of doing’” SH: There is nothing in your explanation of the
“multi-level mapping” processes etc., that explains why and how they
are felt rather than just functed. You simply assume it, without
explanation. . . . even if you actually built a successful T3 and T4 .
. . There’s still nothing in your account to explain how or why they
are felt rather than functed (even if they truly are felt!). SWM: The
explanation lies in the notion that what we call feeling is just the
conjoining of certain processes performing certain functions in a
certain way. When the entity feels, it is having certain
representations and doing something with them at a system level. We see
the feeling in the feeling behavior in the context of what has been
built into it in terms of process-based systems and we ascertain its
presence to our satisfaction via appropriate testing of the behaviors.
Nothing is left out unless you believe that feeling is irreducible and
then you have the problem of defending a dualist position (which you
have yet to take a position on or defend).

S.MIRSKY: “An entity
that met the Commander Data test would be the counterexample”

Commander Data is just T3.
The counterexample is to provide a causal explanation of why and how we
(or T3) feel (rather than just funct).

S.MIRSKY: “It’s as if
you think that, because there’s a difference between having feelings
and “reading” them in others, there’s no reason to grant their presence
when everything else that’s relevant testiﬁes to that presence.”

Stuart, there’s a point
that you are systematically not taking on board: I am talking about
explaining *how and why* we (or T3) feel (rather than just funct). You
keep answering about *whether* we (or T3) feel.

We will not make any
progress this way…

S.MIRSKY: “If we feel
because systems of a certain type have feeling, then any explanation
for how such a system has feeling is enough.”

Yes indeed; and I am
waiting patiently for such an explanation…

S.MIRSKY: “[An
unfeeling] entity would fail to pass the lifetime Turing Test at some
point, however good a simulation it was. Why would it fail? It would
lack the underpinnings that lead to the requisite behaviors. The theory
proposes that the underpinnings are accomplished by installing certain
functionalities in the system’s processes. If those functionalities
were not added or didn’t work as speciﬁed or weren’t enough to generate
the requisite mental life, the tested entity would fail.”

Well, Stuart, not to pull a
Josh-Stern on you, but that’s rather a mouthful given that we have
neither a working T3 “construction” based on your processes, nor a
demonstration that without your processes it couldn’t pass T3!

But let’s suppose it’s so:
You’ve built T3, and you’ve shown that if you don’t have processes in
it of the type you describe, it fails T3. Now, how have you explained
how and why it feels (rather than just how and why it passes T3)?

(It’s always the same
thing: you conﬂate the ontic question of the causal status of feeling
with the epistemic question of whether or not it feels.)

S.MIRSKY: “I can’t make
sense of your neologism, “functing”. It just sounds like a way of
avoiding acknowledging feeling to me.”

Actually, it’s a way of
avoiding question-begging: Until further notice, whatever internal
processes it takes to pass T3 or T4 are processes required to pass T3
or T4, i.e., processes required to generate the right doing. They do
not explain why or how one has to feel to be able to do all that.
(Reminder, even if the processes are perfectly correlated with feeling
in people, and even if armed with them the robot can pass T3 or T4, and
without them it can’t, it still does not explain how and why the
processes are felt rather than just functed. This is not about whether
they are felt, but about how and why.)

S.MIRSKY: “I have no
problem thinking about feeling in terms of the various processes that
it could break down to and, indeed, when I do that, I see nothing left
out.”

I’ve noticed, and I’m
suggesting that it’s a problem that you have no problem with that…

S.MIRSKY: “It seems to
me that you… think something is left out that I don’t see. If the
latter, can you say what it is other than just the feeling itself?”

The feeling itself — in
particular, how and why it’s there (agreeing for the sake of argument
that it *is* there).

S.MIRSKY: “let’s
stipulate for the moment that we mean exactly the same thing by
‘feeling’ and ‘awareness’”

Ok, but then that means
that all the transcriptions I’ve done, swapping feeling for being
aware, etc. have to be faced. And many of the transcriptions no longer
make sense (e.g., “unfelt feeling”).

S.MIRSKY: “I asked if
you disagree with me about the possibility of analyzing feeling into
constituent elements which aren’t feeling”

I don’t even know what that
means. Even if feeling were reducible to mean kinetic energy I still
wouldn’t know how or why it was felt rather than functed. After all,
heat, which really is mean kinetic energy, is just functed.

S.MIRSKY: “you reject my
reducing instances of having feeling to instances of being aware”

Why would I reject reducing
feeling to feeling (as we’ve agreed)? But at some point I’d like to
move from reducing to explaining….

S.MIRSKY: “Isn’t this
just a claim that feeling is irreducible?”

Give me a “reductive
explanation” that’s explanatory and I’ll happily accept. But the
reduction that “this feeling *just is* that functing” certainly does
not do it for me.

[We're repeating ourselves
dreadfully, and spectators must be really weary, but here's perhaps a
tiny injection of something new: I've always hated fervent talk about
"emergence" -- usually by way of invoking some analogy in physics or
biology, in which a phenomenon -- like heat -- is explained by a
"reductive" explanation -- like mean kinetic energy. Yes, there are
unpredicted surprises in physics and biology. What looked like one kind
of thing turns out to be another kind of thing, at bottom. But one of
the invariant features of this sort of reduction and emergence is that
it's all functing to functing. Heat is functing, and so is mean kinetic
energy. Now I don't want to insist too much on this, but when we say
that the emergent property was "unexpected," I think we are talking
about appearances, which means we are talking about feelings: Heat did
not *feel* like mean kinetic energy. Fine. But then we learn to think
of it that way, as a kind of difference in scale, and the unexpected no
longer looks so unexpected. What the emergent functing feels like is
integrated with what the lower-level functing feels like. Well, I just
want to suggest that this sort of thing may not work quite as well
when it is feeling itself that we are trying to cash into functing.
That just *might* be part of the reason why causal explanation is
failing us here. But, if so, I don't think it's the whole reason.]

S.MIRSKY: “And doesn’t
this then take us back to the core problem, that you see consciousness
(feeling) as a bottom line basic, whereas I don’t? But if you do, then
isn’t that fundamentally a dualist position?”

If it is, then it’s because
I’m an epistemic (i.e., explanatory) dualist, not an ontic one. Just as
I don’t believe in zombies, I fully believe that the brain causes
feeling, with no remainder, causally. It’s just that we don’t seem to
be able to explain — in the usual way we do with all other functing,
whether or not “emergent,” — how and why the brain causes feeling. It’s
duality in what can and can’t be explained causally, rather than a
duality either in “substance” or in “properties” (unless a “property
dualist” is deﬁned as someone who holds that the difference between
feeling and all other properties is that all other properties can be
explained causally, whereas feeling cannot: but in that case I’d still
rather write that out longhand than dub myself a “property dualist”
without an explanation!) In closing, I will leave you with (what I now
hope will be) the last word:

S.MIRSKY: “The
explanation lies in the notion that what we call feeling is just the
conjoining of certain processes performing certain functions in a
certain way. When the entity feels, it is having certain
representations and doing something with them at a system level. We see
the feeling in the feeling behavior in the context of what has been
built into it in terms of process-based systems and we ascertain its
presence to our satisfaction via appropriate testing of the behaviors.
Nothing is left out unless you believe that feeling is irreducible and
then you have the problem of defending a dualist position (which you
have yet to take a position on or defend).”

SH: Commander Data
is just T3. The counterexample is to provide a causal explanation of
why and how we (or T3) feel (rather than just funct). SWM: The causal
explanation lies in Data’s specs.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “It’s as if you
think that, because there’s a difference between having feelings and
“reading” them in others, there’s no reason to grant their presence
when everything else that’s relevant testiﬁes to that presence.” SH:
Stuart, there’s a point that you are systematically not taking on
board: I am talking about explaining *how and why* we (or T3) feel
(rather than just funct). You keep answering about *whether* we (or T3)
feel. SWM: The “whether” question is relevant to the “why” question
since your denial of my answer as to why always seems to hinge on the
notion that we can’t know what’s really going on inside our Commander
Data type entities (or others like ourselves). Of course, I’m arguing
we can know. Therefore the specs of such entities, combined with what
we can know about them from observing behaviors, gives us the full
explanation: We know X causes Y when Y is seen to be present with the
implementation of X. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SH: We
will not make any progress this way… SWM: That’s so if we can’t even
agree on what counts as an explanation and what needs to be included to
reach the conclusion that the explanation works.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “If we feel because
systems of a certain type have feeling, then any explanation for how
such a system has feeling is enough.” SH: Yes indeed; and I am waiting
patiently for such an explanation… SWM: I don’t believe, at this point,
that you would ever agree that an explanation involving the specs of a
Data-type entity, combined with the evidence of feeling behavior in
that entity (when the specs are implemented), would ever sufﬁce. It has
to do with what I take to be the conception of consciousness (feeling)
you hold which differs fundamentally from my conception. I don’t see
how we get past this fundamental difference. I will say this though. I
once held the same view you apparently hold (or one very much like it).
I don’t think I finally shifted until I was confronted with
Searle’s CRA which I initially found convincing. But the more I thought
about it, the more mistaken it seemed to me to be. So Searle was my
catalyst for adopting a different idea of consciousness from the one
you seem to hold. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY:
“[An unfeeling] entity would fail to pass the lifetime Turing Test at
some point, however good a simulation it was. Why . . . ? It would lack
the underpinnings that lead to the requisite behaviors. The theory
proposes that the underpinnings are accomplished by installing certain
functionalities in the system’s processes. If those functionalities
were not added or didn’t work as speciﬁed or weren’t enough to generate
the requisite mental life, the tested entity would fail.” SH: Well,
Stuart, not to pull a Josh-Stern on you, but that’s rather a mouthful
given that we have neither a working T3 “construction” based on your
processes, nor a demonstration that without your processes it couldn’t
pass T3! SWM: It’s not a claim as to what’s true. It’s a basis for
assessing the success or failure of the proposed approach, for
measuring the predictive power of the system theory of mind. That’s how
science works after all. We formulate hypotheses and test them. I don’t
expect any more than that for the approach or claim more for it.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SH: But let’s suppose it’s so:
You’ve built T3, and you’ve shown that if you don’t have processes in
it of the type you describe, it fails T3. Now, how have you explained
how and why it feels (rather than just how and why it passes T3)? (It’s
always the same thing: you conﬂate the ontic question of the causal
status of feeling with the epistemic question of whether or not it
feels.) SWM: The only way to scientiﬁcally arrive at an outcome is to
test it. So what we can know about this, is relevant. Everything else
is secondary. After all, you can’t arrive at a proper theory a priori!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “I can’t make
sense of your neologism, “functing”. It just sounds like a way of
avoiding acknowledging feeling to me.” SH: Actually, it’s a way of
avoiding question-begging: Until further notice, whatever internal
processes it takes to pass T3 or T4 are processes required to pass T3
or T4, i.e., processes required to generate the right doing. They do
not explain why or how one has to feel to be able to do all that.
(Reminder, even if the processes are perfectly correlated with feeling
in people, and even if armed with them the robot can pass T3 or T4, and
without them it can’t, it still does not explain how and why the
processes are felt rather than just functed. This is not about whether
they are felt, but about how and why. SWM: There’s that neologism
again. If feeling is reliably recognized in behavior and the right
system yields the right behavior, then whatever theory underlies
construction of the system explains the feeling manifested in the
behavior. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “I have no
problem thinking about feeling in terms of the various processes that
it could break down to and, indeed, when I do that, I see nothing left
out.” SH: I’ve noticed, and I’m suggesting that it’s a problem that you
have no problem with that… SWM: Yes, I can see that’s your view. It’s
why we’re apparently at opposite ends of this particular opinion
spectrum vis-à-vis what consciousness is. But what you can’t say (or so
far haven’t said) is why it’s a problem except to reiterate your view
that T3 is about functing not feeling, and so forth. But that is just
to take your stand on your bottom line conception of consciousness
which differs from mine. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S.MIRSKY: “It seems to me that you… think something is left out that I
don’t see. If the latter, can you say what it is other than just the
feeling itself?” SH: The feeling itself — in particular, how and why
it’s there (agreeing for the sake of argument that it *is* there). SWM:
I said “other than just the feeling itself”. As to the how and why, the
type of theory I’ve offered gives us both the how and the why, i.e., it
says feeling is just so many physical processes doing such and such
things in the right way. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY:
“let’s stipulate for the moment that we mean exactly the same thing by
‘feeling’ and ‘awareness’” SH: Ok, but then that means that all the
transcriptions I’ve done, swapping feeling for being aware, etc. have
to be faced. And many of the transcriptions no longer make sense (e.g.,
“unfelt feeling”). SWM: You won’t ﬁnd me arguing for “unfelt feeling”.
That’s not my game. I think feeling is explicable in terms of processes
and functions, i.e., that the sense entities like us have of things,
including of having senses of things, all reduce back to perfectly
physical processes and that computational processes running on
computers are as good a candidate for this as brain processes. But I
don’t see how “unfelt feeling” can be intelligible except in some
special sense (like a suppressed emotion that we don’t admit to having
but which has an impact on other things we feel and do as in
psychoanalytical claims). ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S.MIRSKY: “I asked if you disagree with me about the possibility of
analyzing feeling into constituent elements which aren’t feeling” SH: I
don’t even know what that means. Even if feeling were reducible to mean
kinetic energy I still wouldn’t know how or why it was felt rather than
functed. After all, heat, which really is mean kinetic energy, is just
functed. SWM: If feeling is a bottom line thing, irreducible to
anything other than itself, then it is, to all intents and purposes,
non-physical and stands apart from the rest of the otherwise physical
universe. If it’s reducible then it doesn’t. The former is an
expression of dualism, the latter isn’t. If feeling is reducible in
this way, then it’s conceivable that it’s just the result of so many
non-feeling processes happening in a certain way, in which case we have
an explanation of feeling in response to your challenge. If it’s bottom
line, on the other hand, an ontologically basic phenomenon in the
universe, then we would not expect to see it reduce to anything else
and, of course, your thesis that it is not causally explainable would
be sustained. I submit that the only reason you think your challenge
cannot be met is because you think feeling IS bottom line in this way.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “you reject my
reducing instances of having feeling to instances of being aware”

SH: Why would I
reject reducing feeling to feeling (as we’ve agreed)? But at some point
I’d like to move from reducing to explaining…. SWM: I was using “aware”
as a way of more precisely zeroing in on your use of “feeling” in the
statement of mine you are citing here. However, in what followed that
statement, I shifted to the stipulative substitution we had formerly
agreed on. (This is all a function of the imprecision of language in
this arena. You think “feeling” is more precise, I think “awareness” is
but the truth is it just depends on how we each agree to use the terms
we like at any given point — and sometimes, because of the fuzziness of
language, we are going to slip up, despite ongoing agreements.)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ S.MIRSKY: “Isn’t this just a
claim that feeling is irreducible?” SH: Give me a “reductive
explanation” that’s explanatory and I’ll happily accept. But the
reduction that “this feeling *just is* that functing” certainly does
not do it for me. SWM: ‘This feeling is that combination of processes
doing such and such things within the larger system.’ Insofar as the
theory implied by that statement bears out in terms of a successful
implementation (Commander Data), we have a successful explanation. But
you reject that it is successful because you cannot agree that feeling
is reducible to elements that aren’t themselves feeling. SH: [We're
repeating ourselves dreadfully, and spectators must be really weary,
SWM: Yes but it's almost over so a rest is in the cards for all! SH:
but here's perhaps a tiny injection of something new: I've always hated
fervent talk about "emergence" -- usually by way of invoking some
analogy in physics or biology, in which a phenomenon -- like heat -- is
explained by a "reductive" explanation -- like mean kinetic energy.
Yes, there are unpredicted surprises in physics and biology. What
looked like one kind of thing turns out to be another kind of thing, at
bottom. But one of the invariant features of this sort of reduction and
emergence is that it's all functing to functing. Heat is functing, and
so is mean kinetic energy. Now I don't want to insist too much on this,
but when we say that the emergent property was "unexpected," I think we
are talking about appearances, which means we are talking about
feelings: Heat did not *feel* like mean kinetic energy. SWM: That's
because heat and "mean kinetic energy" are different concepts. One
refers to what we feel under certain conditions (what burns us, melts
us, exhausts us, etc.), the other to an atomic level description of
what underlies the phenomenon we feel. They can refer to the same thing
without denoting precisely the same things. That in some contexts we
want to say they do denote the same thing doesn't mean they do in all
contexts in which we can use these terms. SH: Fine. But then we learn
to think of it that way, as a kind of difference in scale, and the
unexpected no longer looks so unexpected. What the emergent functing
feels like is integrated with what the lower-level functing feels like.
Well, I just want to suggest that this sort of thing may not work
quite as well when it is feeling itself that we are trying to cash
into functing. That just *might* be part of the reason why causal
explanation is failing us here. But, if so, I don't think it's the
whole reason.] SWM: I think it’s an article of faith on your part that
“causal explanation is failing us here”. In fact I think it’s perfectly
feasible — as long as one is willing to grant at least the possibility
that feeling is analyzable into constituent elements that aren’t,
themselves, feeling. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S.MIRSKY: “And doesn’t this then take us back to the core problem, that
you see consciousness (feeling) as a bottom line basic, whereas I
don’t? But if you do, then isn’t that fundamentally a dualist
position?” SH: If it is, then it’s because I’m an epistemic (i.e.,
explanatory) dualist, not an ontic one. Just as I don’t believe in
zombies, SWM: But you do believe in at least the possibility of T3
zombies which is the same thing! SH: I fully believe that the brain
causes feeling, with no remainder, causally. SWM: Here we ﬁnd
agreement. The only problem, then, is whether, in believing that, you
are also prepared to grant that in causing consciousness, the brain
does so physically and that it is at least possible that it does this
via its known physical processes. If so, then there are signiﬁcant
similarities between those processes and computer processes — enough,
in fact, to suggest that computers might be able to achieve the same
thing as brains achieve. And if THAT is so, then it’s hardly
unreasonable to suppose they do it via a concatenation of processes
constituting a system as I’ve already suggested. Thus the kind of
system I’ve proposed COULD provide an account for how feeling is
caused. SH: It’s just that we don’t seem to be able to explain — in the
usual way we do with all other functing, whether or not “emergent,” — how and why the
brain causes feeling. It’s duality in what can and can’t be explained
causally, rather than a duality either in “substance” or in
“properties” (unless a “property dualist” is deﬁned as someone who
holds that the difference between feeling and all other properties is
that all other properties can be explained causally, whereas feeling
cannot: but in that case I’d still rather write that out longhand than
dub myself a “property dualist” without an explanation!) SWM:
Understood. Searle rejects the designation too. As far as I can see, if
“property dualism” is just about there being some properties of systems
that are what you call feeling, then it’s a fairly innocuous position.
But if you think there’s no possibility of giving a causal account of
feeling in terms of systems and system-level properties, then the
dualism, admitted or not, looks like it goes a lot deeper and is
certainly more problematic. It all rides, I think, on whether we grant
that feeling is reducible or not to constituents that aren’t,
themselves, feeling. Is feeling a bottom line property that somehow
springs into being or attaches itself to some physical phenomena/events
in an ultimately inexplicable way or is it just the manifestation of
some combination of certain non-feeling constituent
processes/functions? SH: In closing, I will leave you with (what I now
hope will be) the last word: S.MIRSKY: “The explanation lies in the
notion that what we call feeling is just the conjoining of certain
processes performing certain functions in a certain way. When the
entity feels, it is having certain representations and doing something
with them at a system level. We see the feeling in the feeling behavior
in the context of what has been built into it in terms of process-based
systems and we ascertain its presence to our satisfaction via
appropriate testing of the behaviors. Nothing is left out unless you
believe that feeling is irreducible and then you have the problem of
defending a dualist position (which you have yet to take a position on
or defend).” SWM: If you accept this view then we’re on the same page,
which we haven’t seemed to be throughout these discussions. I take it that was not the
purpose of ending your response here! Who knows though, eh? I have
enjoyed our conversations, even if it turns out we have found no common
ground, and I thank you for taking the time and making the no doubt
considerable effort to consider and respond to my comments.

On one version of the
view, the a posteriori physicalists think *the very feeling of pain* is
identical to the functional property. That very phenomenal concept of
pain you form when you reﬂect on your pain and think about it in terms
of what it feels like, that very concept refers to the functional
property. So to your question: ‘Why do we feel pain as opposed to just
funct it?’, Papineau would answer, ‘The property of feeling pain and
the property of functing pain are one and the same property. We just
have two concepts for picking out a single property’. Papineau thinks
it’s in some sense psychologically impossible for us to believe this
identity, just as it may be in some sense psychologically impossible to
believe that time doesn’t ﬂow.

FUNCTIONAL REDUNDANCY:
FAILURE OF IMAGINATION OR FAILURE OF EXPLANATION?

(Reply to Philip Goff)

P.GOFF: “a posteriori
physicalists think *the very feeling of pain* is identical to the
functional property… Papineau would [say] ‘The property of feeling pain
and the property of functing pain are one and the same property. We
just have two concepts for picking out a single property’… it’s in some
sense psychologically impossible for us to believe this identity, just
as it may be in some sense psychologically impossible to believe that
time doesn’t ﬂow.”

I can believe that feeling
and its functed correlates are somehow the same thing (or “aspects” of
the same thing, whatever that means) — in fact, I *do* believe it: I
just don’t know how or why it is so.

But I agree that any view
whose punchline is that the explanatory gap is unbridgeable can be
considered a notational variant of any other view with the same
punchline.

I just think that the
explanatory failure itself is the heart of the problem, rather than
just a symptom of some speciﬁc cognitive deﬁcit we happen to all have
regarding feeling and functing. It’s not our imaginations or intuitions
but *causal explanation* — usually such a reliable guide — that’s
failing us, in the special case of feeling.

And, if anything,
recognizing that explanatory shortfall goes *against* our intuitions:
What we feel intuitively is that we are active causal forces, not just
redundant decorations on functing. Yet, once we learn (and we do
*learn* it: we certainly don’t feel it naturally of our own accord)
that functing is in fact sufﬁcient to do the whole job, and there’s no
causal room left, it’s because we *can* in fact successfully conceive
and believe that, that we are impelled to ask: Well then why on earth
is all that functing felt?

I’d say the “why” question
(which is not a teleological question but a functional and evolutionary
— hence practical, mechanical — question) is even more puzzling than
the “how” question (though it is, of course, just a special case of the
“how” question):

As laymen, we’re always
ready to ﬁnesse the technical details underlying the scientiﬁc answer
to “how.” (How many people really understand the ﬁne points of
electromagnetism or phase-transitions or space-time?) We simply accept
the science’s punchline. But in the special case of feeling, it is the
very superﬂuity and redundancy of feeling in the face of the perfectly
sufﬁcient and complete explanation of the functional substrate (even
with the technical details taken on faith) that makes us ask: Well then
why on earth is all that functing felt?

It’s rather difﬁcult to put
this without inventing a sci-ﬁ scenario so ad-hoc and fanciful that it
deﬁes belief, but here’s an attempt to give the ﬂavor of the sense in
which one could duplicate the puzzle of the redundant causality
underlying the explanatory gap in a domain other than feeling, namely,
doing:

Suppose we were told that
there is a planet in which billiard balls move according to the usual
laws of mechanics — what looks macroscopically like direct contact,
collisions, and local transfer of kinetic energy. But, in addition to
the usual universal laws of mechanics underlying those local
collisions, there was also empirical evidence of a second mechanical
force on that planet, unknown here on earth or elsewhere in the
universe, whereby the outcomes of collisions are determined not just by
the local mechanics, but by an additional macroscopic
action-at-a-distance force, rather like gravity, except a repulsive
rather than an attractive force. So when a billiard ball is headed
toward a collision, long before the local mechanics are engaged by
contact, there is already a (detectable) build-up of the
action-at-a-distance force, likewise sufﬁcient to determine the outcome
of the collision. The collision occurs just as a classical Newtonian
2-body collision would occur on earth, the outcome perfectly predicted
and explained by Newtonian 2-body mechanics, but, in addition, the
outcome is also perfectly (and identically) predicted and explained by
the action-at-a-distance force.
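
[The redundancy can be made numerically vivid with a small toy calculation (purely illustrative; the function names and numbers are hypothetical stand-ins, not part of the scenario's physics). Predict the outcome of a single one-dimensional elastic collision twice: once from the local contact mechanics, and once by a second, independent route standing in for the imagined action-at-a-distance force. The two predictions coincide exactly.]

    # Illustrative toy only: two independent, individually sufficient predictors of one collision.

    def local_mechanics(m1, v1, m2, v2):
        # Standard 1-D elastic-collision formulas (momentum and kinetic energy conserved).
        u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return u1, u2

    def distal_force(m1, v1, m2, v2):
        # Stand-in for the hypothetical action-at-a-distance force: a different route
        # (reflecting each velocity about the centre-of-mass velocity) that fixes the
        # very same outcome before any contact occurs.
        vcm = (m1 * v1 + m2 * v2) / (m1 + m2)
        return 2 * vcm - v1, 2 * vcm - v2

    m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
    print(local_mechanics(m1, v1, m2, v2))   # (0.333..., 4.333...)
    print(distal_force(m1, v1, m2, v2))      # identical: (0.333..., 4.333...)

[Either predictor is sufficient on its own; given one, the other does no further work.]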

I suggest that under those
conditions it would be quite natural for us to ask why there are two
forces when the outcome would be identical if there were just one.

It’s rather like that with
feeling and functing — and that’s not because we have trouble believing
that functing causes feeling, somehow. We can even ﬁnesse the technical
details of the “how”. It’s the causal redundancy underlying the “why”
that is the real puzzle.

Nor is that resolved by the
overhasty invocation of “identity” (pain is identical to c-ﬁbre ﬁring).
At best, pain’s another property of c-ﬁbre ﬁring, besides the property
of ﬁring. And even if thinking of properties of things as being
“caused” is rejected as simplistic — and we think instead of properties
as just being “had” rather than being caused — the redundancy is still
there: Why does c-ﬁbre ﬁring “have” this extra property of *feeling*
like something, when just *doing* the ﬁring is already enough to get
whatever functional/ adaptive job there is to be done, done?

(To advert to your own
thread, Philip: I also think that one of the allures of “panpsychism” —
which I actually think is incoherent — is the wish at least to spread
this intractably and embarrassingly redundant property of feeling all
over the universe, for that would make it somehow more credible,
respectable, as a universal redundant property, rather than just an
inexplicable local terrestrial ﬂuke. But I think that in that case the
cure is even worse than the disease!)

Hi Harnad, One minor
point: what do you mean by an extra-property? IF the property of being
in pain WERE the property of having c-ﬁbers ﬁring, there wouldn’t be
any extra property (there are well-known arguments against this kind of
identification that the phenomenal concept strategy -PCS- tries to
answer). If the former statement were true, every (metaphysically)
possible situation in which one has C-fibers firing is a situation in
which one is in pain. It is not possible to have C-fibers firing without
being in pain. On the plausible view that necessarily coextensive
properties are identical, there is no extra property. According to the
PCS there are not two aspects of one “thing”; there is just one “thing”
and two ways of thinking about this unique “thing” (as pain and as
c-ﬁbers ﬁring).

M.SEBASTIAN: “what do
you mean by an extra-property?” IF the property of being in pain WERE
the property of having c-ﬁbers ﬁring, there wouldn’t be any extra
property (there are well-known arguments against this kind of
identification that the phenomenal concept strategy -PCS- tries to
answer). If the former statement were true, every (metaphysically)
possible situation in which one has C-fibers firing is a situation in
which one is in pain. It is not possible to have C-fibers firing without
being in pain. On the plausible view that necessarily coextensive
properties are identical, there is no extra property. According to the
PCS there are not two aspects of one “thing”; there is just one “thing”
and two ways of thinking about this unique “thing” (as pain and as
c-ﬁbers ﬁring).”

I confess I can’t
understand or follow the metaphysical technicalities. I just mean
something like: A red circle has many properties, among them that it is
red and that it is round. C-fibres, too, could have many
properties, among them that they are firing and that they are felt.

THOSE WEASEL WORDS: I
didn’t want to end without addressing Stevan Harnad’s persistent
complaint about “weasel words”. Repeatedly in this discussion he has
taken me to task for using words other than “feeling” for the
consciousness this is all about (this being the On-line Consciousness
Conference, after all, not the On-line Feeling Conference). I
understand his concern here because there’s little doubt that the words
we fall back on when referring to elements of our mental lives
are notoriously slippery. The only question is whether “feeling” is
intrinsically better than any of the alternatives as Stevan insists
(and often in these discussions a great deal has seemed to ride on that
for him). In his opening video he presented a whole list of words often
used synonymously for this mysterious thing we typically call
“consciousness” including “consciousness” itself, which he lumped in
with all the other allegedly “weasel words”. But Stevan thinks that
“feeling” alone isn’t weasely. It is, on his view, the most direct,
clear, appropriate term available and everything else we want to say
about consciousness seems to him to come down as some variant of what
could be better called “feeling”. In particular, he has often objected
to my use of “awareness” on the grounds that it suggests other things,
such as attending to, which detract from the feeling aspect that he
believes lies at the core of the actual intended meaning. But
“feeling”, too, has other meanings, doesn’t it? Surely these admit of
the same problem in specificity and intent. For instance, offhand I can think of six different ways we use “feeling”:

1) When we reach out and touch something we are said to be feeling it.

2) When we get the sensation that comes with touching it under normal conditions we are said to feel it.

3) When we have any sensation at all, we are said to feel it, too (by analogy with the narrower tactile sensory experience).

4) When we are in a particular emotional state or condition we are said to feel a certain way.

5) When we want to do something or say something we may describe this as feeling like doing or saying it.

6) When we think about anything (have it as an object of thought in our mental sights, as it were) we may be said to feel it (Stevan’s main use, I think) — as in understanding a given symbol’s semantic content (meaning) having the character of our being aware of that symbol’s meaning.

By contrast, my own preferred term, “awareness”, seems to offer just two possibilities:

1) Attending to something that has entered our range of observation (either in a sensory sense or in a conceptual sense — as when some idea may come to our attention).

2) Having a sense of anything at all, as when we experience sensory phenomena or ideas.

In fact, it rather looks like there are better terms yet, such as “experience”, though this, too, has its alternative uses:

1) Whenever we go through any series of events of which we are aware (as in paying at least some level of attention to them) we can be said to be having an experience.

2) The elements of our mental lives characterized by daydreams, streams of consciousness, etc.

3) The phenomenon of being a subject, which seems to contain all other phenomena we encounter, whether objectively observable physical objects or events, or the mental imagery and emotions that characterize our interior lives including imaginary and remembered imagery.

One common
thread here is “subjective” (and its cognates) and another is “mental”, while, of course, “consciousness” keeps recurring. On Stevan’s view such persistent shifting among terms, and the dependence for meaning on others in this group, is indicative of weaseliness. And yet what becomes clear, I think, from all this is that “feeling” is no better, in that it is no more basic a term than the others. We may agree for argument’s sake to stick to a common term, as we have often tried to do here in these discussions in deference to Stevan’s preference, and yet even then this has proved hard. Stevan often took me to be straying from his “feeling” when I lapsed into my preferred term of “awareness”, and I often took Stevan’s “feeling” to be too broad to be helpful. Sometimes Stevan himself erred, as when he mixed “feeling” as in migraine headaches (a sensory phenomenon) with “feeling” as in what Searle says we must have if we are to be said to, in fact, understand the Chinese ideograms fed to us in the Chinese Room. So I guess I want to say that stipulating to a term’s meaning for the purpose of a particular discussion or discussions may be a useful stratagem, but it isn’t a cure-all for the deeper problem of language growing rather loosey-goosey at the margins of its natural habitat in the public realm. And mind words are clearly outside that habitat.

Many feelings, one meaning:
We can feel lots of different kinds of things. What needs to be
explained is how and why any of them feel like anything at all.

S.MIRSKY: “By contrast,
my own preferred term, “awareness”, seems to offer just two
possibilities… Attending to something that has entered our range of
observation (either in a sensory sense or in a conceptual sense — as
when some idea may come to our attention)”

S.MIRSKY: “The phenomenon
of being a subject, which seems to contain all other phenomena we
encounter, whether objectively observable physical objects or events,
or the mental imagery and emotions that characterize our interior lives
including imaginary and remembered imagery.”

Demustelation:
“subject” (feeling or unfeeling?); “phenomena” (felt or unfelt?);
“observable” (felt or unfelt?); “events” (felt or unfelt?); “imagery” (felt or unfelt?); “emotions” (felt or unfelt?); “interior lives” (felt
or unfelt?), etc. All equivocal. All just functing if unfelt; and, if
felt, all of a muchness, and all demanding to know:

how felt rather than just
functed?

why felt rather than just
functed?

S.MIRSKY: “‘feeling’ is
no better”

No better for what? It is
just singling out that which needs to be explained.

S.MIRSKY: “Stevan
himself erred… when he mixed “feeling” as in migraine headaches (a
sensory phenomenon) with “feeling” as in what Searle says we must have
if we are to be said to, in fact, understand the Chinese ideograms fed
to us in the Chinese Room.”

No error: They are both
felt states, and *that* is the fact that needs explaining — not that
they feel like THIS or like THAT.