Sunday, 15 December 2013

Understanding: Discussion Summary

§§138-184
form a complex, interlocking series of remarks. Sometimes they seem
to repeat themselves and sometimes they go off on curious
tangents. Wittgenstein wrote in his preface that “The same or
almost the same points were always being approached afresh from
different directions” and this is certainly a prime example.

It
would be a mistake, however, to think that the discussion is
presented this way simply because Wittgenstein wasn't up to the task
of moulding it into a more conventional form. Rather, the
presentation is bound up with the idea of philosophy as therapy.
It attempts to reflect the experience of being in the grip of a
particular way of looking at things. Our thoughts keep coming back to
the same familiar notions – or, rather, those same notions keep
reappearing in subtly altered forms. They cannot be despatched at a
stroke; dispelling them requires patient work on the part of both the
author and the reader.

At
the same time, there's no doubt that it's easy to get lost in the
maze of remarks that Wittgenstein presents us with. So I thought it
might be helpful to provide a rough overview (which I actually wrote
for my own benefit) to act as a guide when things get difficult.
Obviously these notes aren't intended as a substitute for the text
itself. They do not stand alone. Although here and there I offer some
supporting arguments (usually in square brackets), my main aim is to
highlight the flow of Wittgenstein's discussion and the connections
between the various parts. Hopefully more detailed posts will follow
in due course.

I
have tried to keep things brief but I must admit it's turned into a
bit of a monster. So in case reading it on the blog itself is
tiresome I've prepared a PDF version without all this introductory
waffle.

The
discussion can be divided into four parts:

§§138-150.
This starts with the seeming clash between understanding as
something grasped in an instant and meaning as something explained
through use. It considers (and rejects) the claim that understanding
amounts to having an “inner Something”. The following candidates
are rejected: a mental picture; a mental state; a disposition. It
concludes that “meaning” (and hence “understanding”) is more
akin to having an ability.

§§151-155.
This considers a concrete case of “grasping in an instant” (“now
I know”) which seems to go against the link to ability in §150.
Again, it considers and rejects the claim that understanding
consists of an inner Something. Specifically, it argues against
understanding as a characteristic experience or an inner (mental or
physical) mechanism. It concludes that understanding is logically
(grammatically) bound up with the circumstances
in which it takes place.

§§156-178.
To clarify this point, the case of “reading” is considered. As
with understanding, the notion of reading as an inner experience or
process is rejected. This time, however, the process is seen from
the point of view of “deriving” or “being guided”, and the
experience of reading is considered as an experience of this process
taking place. But it is suggested that the pertinent difference
between reading and (eg) pretending lies in the different
circumstances of the two cases.

§§179-184.
The way in which circumstances enter into the concept of
understanding is clarified. The nature of “now I know” is
identified as a kind of signal rather than a report or description
of an inner experience or state.

Now
let's attempt a more detailed summary.

1.
§§138-150. First Run-Through

§138
points out that when we hear a familiar word we understand it in an
instant. This seems to clash with the idea of meaning as something
established by use,
which is spread out over time. A number of questions are generated as
a result:

What
exactly is grasped when we understand a word in an instant?

How
does what is grasped relate to use? How can it fit or fail to fit
with use?

How
can we grasp the whole use of a word in an instant?

How
can observed use help us in novel situations – ie, how is it that
we understand a word even when it's used in a sentence we've never
encountered before?

Note
how the mere idea of “grasping in an instant” suggests that what
we need to understand is something “inner”. Again and again
during the discussion observations about the public criteria
for using the word “understanding” will be countered by
objections relating to the first-person experience of
understanding. One of Wittgenstein's main concerns is to clarify the
place of such experiences in relation to the concept of
understanding. He will not claim that they're of no importance
(much less that they don't exist), but that we tend to misunderstand
their role. Achieving a proper understanding, however, doesn't
involve discovering a correct theory; what we need is to
remind ourselves of the complex role understanding plays in our
lives.

§§139-142:
Pictures

What
is grasped when we understand a word in an instant? The first theory
considered is that we get a mental picture.
If we have a picture of a cube and we pick out an actual cube then
what is grasped fits the use. If we pick out (eg) a triangular prism
then it doesn't.

But
who is to say that picking out a triangular prism isn't a correct
application of the picture? A picture by itself cannot provide
a standard for its correct use. The same picture might be applied in
two different ways, and we would say that it had a different meaning
on each occasion. So the criterion for understanding is still the use
rather than the picture itself.

[Note
the link between this point, the discussion of ostensive
definition (§§28-36), the discussion of deriving (§§163-164),
and the discussion of “+2” in the rule-following argument
(§§185-190). In fact, the discussion of ostensive definition
foreshadows many of the points Wittgenstein makes about
understanding. I'll group them together as the “anything goes”
argument. I'm not saying they're the same argument each time,
but they do seem connected. Further consideration might be
rewarding.]

We're
tempted to say that the picture forces an application upon us,
but what this boils down to is that when confronted with a picture we
often expect it to be used in a certain way – other possible
applications do not occur to us. So “the picture fits the use”
means “the use was the one we expected”.

Does
all this mean there's no such thing as an application occurring to
someone in an instant? No. It will often make sense to say such a
thing. But we need to clarify the role of a statement such as “the
application came before my mind”. This will happen shortly.

§§143-150:
States, Dispositions, Abilities

As
a prelude, however, Wittgenstein switches to a description
of teaching someone the decimal series of numbers. It outlines
certain broad, public criteria for saying that the pupil understands.
It also emphasises (a) that at any stage the pupil's ability to learn
may break down, and (b) that the type of instruction given will
depend on the type of mistake the pupil makes. The point of this is
to undermine the idea that getting the pupil to understand involves
giving her a specific inner Something (a picture or formula, etc) the
possession of which will be the source of correct performance.
Rather, what the pupil is “given” depends on what she needs in
order to perform correctly – and it's possible that there is
nothing we can give her (ie, no course of instruction) that
will achieve this.

Note
also that the pupil's coming to understand is drawn out over time and
there is no precise moment when we might say “now she knows”.
There is no such thing here as “grasping in an instant”.

At
§146 we get an important objection: applying understanding is not
understanding itself. Understanding is the source of correct
performance. Or, to put it another way, correct performance is
derived from the source.

This
is a variation on the “inner Something” idea; behind it lurks the
notion that the “source” is a mental state – one in
which a formula (or other method of application) occurs to the person
who understands.

Wittgenstein
makes two objections:

A
formula has the same problem as a picture. It does not come with its
method of application built-in.

Understanding
a word is not best categorised as a mental state. Mental states
(such as feeling anxious or euphoric) have specific duration and
varying degrees of intensity. But I understand a word whether or not
I'm thinking about it. My understanding isn't interrupted when I'm
distracted by something, and so on.

Objection
(b) prompts a revised version of the claim (§149): understanding is
a state of an apparatus of the mind (or brain). (Wittgenstein
sometimes calls this a disposition.) So, just as a pocket
calculator can give us the answer to a sum because of its structure
(ie, independent of this or that instance of calculating the answer),
so I can be said to know a word because the structure for correct
performance is in place even if I'm not currently using it.

Wittgenstein
counters by pointing out that this gives us two separate criteria for
saying someone understands: (i) ascertaining the structure, and (ii)
observing performance. He doesn't here elaborate on the implications
of this situation; they will emerge as the discussion continues (I'll
call this the “two criteria” objection).

Instead
(§150), we get a summary of sorts: the grammar of the word “know”
is related to “able to”, but also to “understand”. I think
this is emphasising a logical (grammatical) connection between
meaning/understanding and performance. I also think it needs
considering in conjunction with §155. But it all needs careful
unpacking.

2.
§§151-155: “Now I know!”

Wittgenstein
now switches back to instantaneous grasping when he considers the
significance of phrases such as “now I know” or “now I
understand”. And this marks the start of his attempt to clarify the
idea of an application coming before one's mind, which he mentioned
in §141.

[Why
did he first discuss learning the series of decimal numbers
(§§143-150)? I think because that process is part of the general
scene-setting (the circumstances) without which “Eureka
moments” don't make sense. (Before you can even try to
continue a number series you have to learn to count.) Compare this
with ostensive definition. Viewed in isolation ostensive definition
seemed to do something both fundamental and mysterious: provide an
unmistakable super-bond between word and object. But its function was
only seen aright when we reminded ourselves that actually it took
place within a broader linguistic context. I think Wittgenstein is
saying something similar about instantaneous grasping: if you
overlook the broad context in which it actually takes place then it
can seem to provide the essence of understanding and perform a
truly mysterious function. The mystery is dissolved precisely by
reminding ourselves of that broader context.]

Wittgenstein
first points out that there are various things that might go on in
someone who suddenly understands. This militates against the idea
that understanding is a characteristic experience. But worse still,
any of those experiences might occur and the pupil might still be
unable to continue the series. So the experiences are not the
“essence” of understanding; they are concomitant processes.

And
now, just as when the notion of understanding as a mental state fell
to pieces, we're tempted to posit a hidden process that lies
behind what we actually experience. (Note how we've moved here from
description to theory.) But if the process is hidden then how do I
know that I actually understand? “How can the process of
understanding have been hidden, given that I said 'Now I understand'
because I did understand?” (§153). [This links back to the
“two criteria” situation in §149.] If understanding is a state
of an apparatus then how can I say I understand when (a) I don't know
what that state actually is, and (b) whatever it is, I don't know
whether it obtains?

Against
this it is objected (§154) that there must be a state (or
process). For if (eg) a formula occurring to me is not enough to
provide understanding then something else must be necessary, and what
could that something else be if not a hidden process or state?

Here
we have reached an impasse. We seem forced to accept a theory despite
the fact that it makes things worse (cf §112). This is a radical
breakdown, and Wittgenstein's response amounts to a rejection of the
whole approach that brought it about. Certainly something else is
needed, he says, but not a state or process or disposition or
structure or any type of inner thing. Instead we need to
remind ourselves of the circumstances that warrant someone's
saying “now I know” when the formula occurs to her.

3.
§§156-178: Reading

To
make this clearer, he introduces the analogous topic of reading.
As with understanding, reading is a concept that tempts us to think
its essential characteristic must be something inner. On the one
hand, there is surely something computational about reading –
we derive our words from the text – and this derivation is a
process that takes place in the mind, or perhaps in the brain. On the
other hand, reading is surely also a distinctive experience;
just compare actual reading with pretending to read! These two
characteristics might seem at odds with one another, but aren't they
really different aspects of the same phenomenon? That is to say,
isn't the experience of reading precisely the experience of the
computational process taking place?

Here,
then, we're once again confronted by the two notions that have dogged
us throughout our investigation: process and first-person experience.
And in the context of reading they seem to stand out even more
compellingly than before. Accordingly, Wittgenstein's treatment of
them is both richer and more probing in this part of the discussion.

§§156-158:
Experiences and Mechanisms

As
with §§143-145, we start with a (brief) description of the
circumstances surrounding reading: learning to read, various criteria
for saying someone is reading, the difference between a beginner and
a fluent reader, etc.

This
quickly prompts both the idea that reading “is a distinctive
conscious activity” (§156e) and that some kind of mechanism must
be at work (§156g). Against both, Wittgenstein observes that it
makes no sense to talk about the first word a beginner reads (note
the connection with §145b). The point here is that if reading is
either a particular experience or the state of a mechanism then it
ought to make sense to ask “what was the first word he
read?” Concentrating on the notion of mechanism, Wittgenstein draws
a highly significant distinction between our concepts in relation to
machines and the way we apply them to living beings –
even where the living being is
used as a “reading machine”.

Against
this it is objected (§158) that the different treatment merely stems
from our comparative ignorance of the workings of the brain.
Wittgenstein's response is to underline the claim's theoretical
nature (a point already made at §146g), and to further suggest that
it is a priori:
whatever the evidence (or lack of it), things must
come down to a mechanistic explanation.

[What
is he getting at here? I think §158 is an enigmatic, troubling
section. It seems to come alarmingly close to suggesting that our
belief in causality as a universal principle is a kind of
metaphysical superstition (cf Zettel,
§609). It certainly requires careful, detailed analysis. For the
moment, however, I'll offer a provisional, “middle of the road”
gloss: “Why do you say things must
come down to a mechanistic explanation? What evidence do you have for
this? What we know is
that 'understanding' is not used like a name for a mechanistic
process – and certainly not a hypothetical one.” That's by no
means the last word on the subject, but it'll have to do for now.]

§§159-161:
The Conscious Act of Reading

The
discussion now switches back to the first-person experience of
reading. Isn't that the essential thing which distinguishes actual
reading from merely pretending or the free-association of sounds with
marks on a page?

Wittgenstein
counters with the imaginary case where a drug produces a feeling of
reciting from memory in someone who is in fact reading a passage he's
never seen before. A variation is offered in which the person feels
he is reading when he is actually associating words with signs in a
completely unfamiliar alphabet. In the first case we would say he was
reading despite his feelings to the contrary. In the second case,
classification would depend on his reaction to the signs. If his
words bore no clear relation to them (eg, he read “^#*” as “blue”
on one occasion but as “left” on another) we'd say he wasn't
reading. But if the same words were always associated with the same
signs then we'd perhaps be more inclined to say he was. That is, it
wouldn't be clear whether he was reading or not – it would be up to
us.

[The
point, of course, is that his experiences
are not the decisive factor in either case. But nor is it exclusively
down to what he does (for he might speak exactly the same words both
times). Rather, it is down to what he does given the
particular circumstances in each case.
Wittgenstein underlines this with the experiment in §161. What is
the difference between counting to twelve and reading the numbers
from a watch dial? Again (I think) the implied answer is: the
circumstances.]

§§162-164:
Derivation

Here
we switch back to understanding as a process – specifically the
process of deriving.
It's an idea that has sort of been “in the air”, but not directly
confronted, ever since §146b (the same is true, by the way, of
rule-following, which is closely connected to derivation; see §143a,
§147 and §162).

Wittgenstein
describes a particular case in which we'd be inclined to class
reading as an example of deriving sounds from a text. But immediately
comes the objection that we don't have enough here to be sure
this is derivation; we taught him the alphabet, he read the words –
what right have we to say that the link between the two was deriving?

[Note:
(a) this already treats derivation as an “inner” or “hidden”
thing; and (b) we're at once tempted to look inside ourselves
and search for it there. This goes some way towards explaining why
the discussion constantly switches between third- and first-person
perspectives.]

So
Wittgenstein alters his description to make it an even clearer case
of deriving – that is, he changes the circumstances.
But here it's objected (§163) that even if this is
derivation, we can't assert it simply because the pupil looks at the
chart and writes the “correct” letters. For whatever
he writes might be classed as derivation according to some rule or
other, and thus be “correct” (the “anything goes” argument –
cf §139). And now the very concept of derivation starts to look
empty.

In
§164 Wittgenstein offers the moral of the story: our search for the
essence of deriving
has led us into darkness, for there is no such thing. Instead, we
have a complex family
of circumstances in
which the word “deriving” is warranted.

[A
brief elaboration: when we treated deriving as a hidden essence it
became logically distinct from its consequences. As a result, the
very essence we thought we needed to find became empty. Compare this
to the case of mental pictures (§139) and the “two criteria”
objection to dispositions in §149. Also compare it to the famous
“beetle in a box” example in §293. But our grammar doesn't treat
deriving as if it was a thing in a box; instead it conceptually
links it to various performances in various circumstances. These
circumstances sometimes include
what went through someone's mind (“I recited to myself 'Richard of
York gave battle in vain' and derived the answer from that”). But
sometimes they don't.]

§§165-178:
“Experiencing the Because”

Yet
again we revert to a first-person argument. The discussion
considers a cluster of related claims:

we know from our own experience that
reading is a particular process (§165);

the words come in a distinctive way
(§165);

they somehow cause our utterance
(§169);

we feel the connecting mechanism
between the word and our utterance (§169).

So
finally the third- and first-person arguments come together. Reading
is both a mechanism and a characteristic experience; it is a
characteristic experience of a mechanism at work.

At
each stage, however, Wittgenstein (a) exposes this account as a
picture that we adopt rather than a straightforward
description of the facts, and (b) undermines the temptation to
adopt this picture.

§165. If reading is a
characteristic experience then it doesn't matter what sounds result
as they can all be linked to the text according to some rule or
other. [This brings together the “two criteria” argument (§149,
§153, §§163-164), and the “anything goes” argument (§139,
§§163-164).]

§165. Moreover, it's not enough that
the written words make the spoken ones “occur” to me, or
“remind” me of them. That could happen yet what I utter might
still be incorrect. Mere association is not enough.

§166. The claim that the words come
in a distinctive way is a fiction. Consider normal cases of
reading: we don't even think about how the words come or if there's
something distinctive about it. We see the words and we make the
sounds. What else do we know? Of course, we notice a difference when
we (eg) associate sounds with squiggles but it mis-describes reading
to therefore conclude that the words come in a special way. With
reading it's automatic, with squiggles it isn't. That's the
difference. And it's not an experiential difference; it's a
circumstantial one.

§167. There is no single
characteristic experience of reading. This undermines the idea that
reading is a particular process that we experience. [To put it
another way: even if there is a particular process, we cannot
infer its existence from the experience of reading, for that
experience is extremely varied.]

§169. It makes no sense to say that
we experience the causing. Causation is established by experiment,
tests, etc; it is not something that can be felt. [That would be
like saying “I'm feeling inflation” when I'm shocked by how much
the price of bread has risen.] This is a categorical (grammatical)
distinction. Indeed, we do not say that the text is the cause of our
reading – rather, it is the reason we utter the words that
we do. That is, we appeal to a standard of correctness, not
to causation.

§§170-178
provide a kind of summary. We assume that the difference between
reading letters and associating sounds with squiggles represents
(respectively) the presence and absence of influence. But being
influenced (or guided) is no more a particular experience than
reading is. It forms a wide family of cases, and the important factor
is not the presence of a particular experience but the circumstances
pertaining to any given case.

None
of this bothers us during actual use, but when we reflect on these
things mere circumstances can seem insufficient. We're tempted to
posit a particular source of influence (in other words, we've started
theorising). Maybe it's a strange feature of the words
themselves – as if they exercised a kind of “thought control” –
or a process (perhaps physical) operating behind the scenes. And now
we take our varied experiences to be experiences of this
elusive form of influence. We look at them “through the medium of
the concept 'because'” (§177). Of course, it is right to say that
we're influenced, but not because of any particular experience or
process. Rather, it is correct to apply the word “influenced” in
all these varied circumstances. [We have supposed an essence, but
what we needed was to recognise a family resemblance concept.]

4.
§§179-184. Back to Understanding

Wittgenstein
now applies to understanding the insights gained from considering the
case of reading. Not surprisingly, this involves reiterating the
point made in §§154-155: “The words 'Now I know how to go on'
were correctly used when the formula occurred to him: namely under
certain circumstances. For example, if he had learnt algebra, had
used such formulae before” (§179).

At
the same time, he's aware of the temptation to take this point the
wrong way. We might, for example, suppose that “now I know” (or
“I understand”) is a kind of shorthand description of the
circumstances – as if we somehow deduced that we understood
from the fact that the situation was one in which “now I know”
would make sense. This harks back to the interlocutor's point at
§147: “When I say I understand the rule of the series, I'm
surely not saying so on the basis of the experience of having
applied the algebraic formula in such-and-such a way!” This is
correct, but the interlocutor's mistake is to assume it shows that
the circumstances are irrelevant. Rather, it shows that they do not
connect with the language-game in the way he supposes. We do not
appeal to them as a criterion of application; they are the context
within which our criteria make sense. They “set the stage for our
language-game” (§179).

A
second temptation is to think that the circumstances form part of a
causal explanation. Here the supposition is that if the right
circumstances are in place (general education and other background
features + the formula occurring to the pupil + a characteristic
experience of understanding) then the pupil must continue the
series correctly (§183). That this is mistaken can be shown from the
fact that even where the phrase “now I know” is warranted it is
still defeasible – the pupil might still be unable to
continue the series correctly. In such a case we would normally say
that the pupil's statement was wrong: he didn't in fact know. But it
was still understandable. [Compare this to a case where you
show a formula to someone with no mathematical training whatsoever
and he says “now I know how to go on”. In those circumstances his
claim is not so much wrong as completely bizarre.]

These
considerations throw light on the role of “now I know”. It is
best not thought of as a description of a mental state at all (§180).
Rather it is a signal that the pupil is confident (perhaps certain)
that he can go on correctly. But, of course, being certain you can
give the right answer and actually doing it are not the same thing.
We frequently find ourselves ruefully saying “I was so sure
that answer was right!”

Next
Steps: Rule-Following

I
started this summary with a list of questions, but there's one I've
not really considered so far: how can we grasp the whole use of a
word in an instant? This is raised right back in §139 and is
occasionally glanced at during the discussion (eg, §147: “I surely
know that I mean such-and-such a series, no matter how far I've
actually developed it”). Knowing the whole use of a word
seems a criterion of understanding, just as knowing what a chess pawn
is means knowing how it can move in any given position. But how can
we know the whole use of a word? For however we've used a word up
till now, what happens when we come to apply it in a completely new
situation? How is our past experience supposed to help us?

Surely
when I grasped the rule for using the word “cat” I didn't know
that it could be used in the sentence “The cat sat on Jupiter's
second-largest moon”? And yet in some sense I clearly did
know that, for I was able to form the sentence without any trouble at
all. It didn't come as a surprise to me that “cat” could be used
in that context. The rule, it seems, guides us effortlessly through
countless permutations that weren't envisaged when we learnt it.

How
does it achieve this? Or, to put it more generally, what is the
connection between the rule and its application? What keeps them in
sync? That is the issue which forms the heart of the discussion in
§§185-242.

72 comments:

I read this discussion with interest Philip. It seems to me you have got a very tight hold on this aspect of Wittgenstein's thinking.

I have been kicking around the same issue, the question of understanding and meaning, for a while now myself. I've taken a slightly different approach to the one exemplified by your remarks here -- though, it seems to me, we are likely not so far apart. Rather than comment on your remarks (which doesn't seem the right approach for your comments speak for themselves) I'll just offer this link to one of my pieces on Sean's list dealing with the matter, Can Machines Get It?:

These two pieces offer my own, somewhat Wittgensteinian, approach to the problem though neither is, strictly speaking, exegetical of Wittgenstein nor simply an effort to stay within the boundaries he drew. I have tried, in fact, to take a slightly different tack without sacrificing the important elements of insight he brought to the table. I thought you might have a comment or two since I do diverge in places from a strictly Wittgensteinian perspective (for instance I think the idea of mental pictures very important in getting meaning) and your feedback could be helpful in my figuring out whether, perhaps, I've strayed too far!

There's clearly too much here to address in even an intolerably long comment, so I'm going to focus this comment on the narrow issue of "mental pictures". My take on the integrated issues of intent-meaning-understanding is very much aligned with Stuart's, yet he considers mental pictures "very important" whereas I consider them largely irrelevant and am inclined to totally ignore them. If it's clear that Wittgenstein dismisses pictures, then resolving this might not be relevant to the exegetical purpose of the post. But I anticipate a good bit of discussion on the post, so it might avoid recurring debate if we could agree up front whether mental pictures will or won't have a role in further discussion.

Here's my argument against mental pictures. First, any mental picture that a person blind from birth learns to associate with a word can only be based on verbal descriptions of the entity in question. But then what role in understanding the word could the mental picture play that couldn't be played as well, perhaps better, by the verbal description itself?

Second, implicit in the assumption that mental pictures play a role in understanding is that they have causal efficacy, eg, one hears a word, forms a mental picture, and then based on the picture does something. There are reasons to doubt such causal efficacy, among them the blindsight phenomenon in which a subject claims to form no mental picture of some portion of the field of view but nonetheless can detect motion or avoid obstacles in that portion. (By no means conclusive proof of causal inefficacy, but highly suggestive.) Furthermore, as far as I know there currently are neither convincing physiological arguments for how mental pictures might be implemented nor convincing evolutionary arguments for their necessity. Therefore, it seems doubly speculative to assume some functionality that is based on what may very well be a trick played on us by the brain.

Yes, this post covers a lot of ground. I intend to back it up with more detailed, topic-specific posts and they may provide a better platform for debate. This one's more about knowing your way around the discussion than digging deep into it.

Having said that, I'd be wary of the claim that Wittgenstein dismisses pictures. It's true that he dismisses them as the defining characteristic of understanding, but that's not to say that they have no role whatsoever in the concept. I think Wittgenstein would say that mental images do play a part on some occasions. What they don't (and cannot) do is constitute understanding itself.

It's important to realise, however, that Wittgenstein's point is a conceptual one, not an empirical one. By that I mean it does not represent the discovery of a new fact gained through experiment or testing. Rather, it is shown by a description of how we use the word "understanding" in various situations - for it is that description which shows us how the concept of understanding works. So when you say mental pictures do not have causal efficacy that is correct, but not because we have discovered something hitherto in doubt (by studying blindsight, for example). Rather, it is shown by considering the place mental pictures have in the concept of understanding.

Suppose I'm shown a number of 19th C portraits I've not seen before and asked "which one is Beethoven?" As it happens I'm familiar with a particular portrait of Beethoven so I call it to mind (as best I can) and pick a portrait from the group based on my mental image. Now, is the mental image the cause of my choice? That would be an absurd thing to say because if it was the cause then I didn't make a choice! It would be more a reflex action: given stimulus A I couldn't help but perform action B. But that is not how we describe things in the case of my picking out a portrait. Instead we would say that my mental image was the reason I chose this portrait rather than that one. In other words, it's what I would appeal to if I was asked to justify my selection (and here it should be obvious that my picture's being a mental one is completely inessential - I might just as well have had a photo of the portrait that I consulted instead).

I think Philip is right that ". . . Wittgenstein would say that mental images do play a part on some occasions. What they don't (and cannot) do is constitute understanding itself."

In the piece I wrote involving that road trip and the unfamiliar road sign, I would say no particular mental picture I had at the moment of "getting it" constituted my understanding of the words in question. After all, my experiences are different from everyone else's and so leave me with different "pictures." If my pictures WERE the meaning, no one could ever understand me and I could not understand them. How could the sign maker have had my pictures?

What counts as the meaning in this situation, I would say, is the way various pictures relate within a larger framework, a context. On such a view, understanding between speakers happens when there is a preponderance of similarities in the relations of the pictures, when a certain critical mass is achieved, leading to enough commonality, not picture for picture, but in associative frameworks. On this view understanding is certainly mental and private to a degree but that doesn't exhaust the meaning of THAT term because understanding surely involves propensities to behave and actual behavior, too.

Charles makes a good point about the case of someone who is blind from birth. After all, one doesn't have to have vision to understand. But I think this can be dealt with by recognizing that "pictures" needn't only be visual. A recollection of a certain complex of sounds or tastes or touch experiences can be "pictures", too. I don't know to what extent the brain is wired for visual images primarily (I know that in myself the visual seems to be quite important), or whether being blind from birth is a bar to having any visual images at all. Perhaps research in this area will help answer that question. But I think it can be strongly argued that understanding has a mental side and that it resides in the picturing phenomenon of brains and that we can't limit mental imaging or picturing to the visual paradigm alone.

Why is this important? If we want to build synthetic brains having artificial intelligence and/or consciousness (and I think there's good reason to want to do this -- whether or not it turns out to be a good idea from the perspective of humanity or not!), then it's critical to figure out how entities like ourselves grasp things and make behavioral choices. It's not enough to look only at behaviors or even brain mechanics because a "smart" machine that behaves right will still be limited in ways we are not. Without the ability to image its world the machine's smartness would be severely constrained. So one has to know what goes on in brains to prompt certain behaviors in order to know why brains are able to produce the behaviors they do.

On your road trip a picture occurred to you when you understood. But does that always happen? Does it happen for every single word you hear during a conversation? And when it does happen do you always have to interpret the picture?

My point here is that a picture (of whatever form) is not only not sufficient for understanding, it is also not necessary.

I agree that not every instance of understanding involves pictures. I was specifically addressing the question which Searle raises when he speaks about understanding a bit of writing, in that case Chinese characters.

Not every instance of "understanding" is like that, of course. We understand how to do things without having pictures per se (though we may have pictures when we think about DOING them, e.g., what it's like to be riding on a bike). We may also understand another's feelings or simply assert understanding by way of acknowledging. But when it comes to getting meaning from objects used as signifiers I'm saying that something must be going on in the head, that it's not just about the right behaviors because there was no behavior involved when I recognized the meaning of that sign's words!

A "smart" robot that reacted to the sign in the right way would still not be said to understand if it had no mental life, or at least it could not be said to understand in the way we mean when we speak of understanding in ourselves.

Now do I get pictures for "every single word" in a conversation? (An interesting point.) I would say no, not on a one-to-one correlation anyway. But I certainly do have a stream of thoughts which seems to occur as part of the conversation. In the midst of the conversation I would hardly be attending to them. On that trip, my wife was talking about something else but I tuned that conversation out for a moment to deal with a confusing sign and think about why it confused me. So to some extent it's a matter of where we direct our attention. In the midst of a conversation we're presumably following the words and everything is going so fast we don't look, separately, at whatever images are occurring. But couldn't we?

If someone says X to me, and we're speaking the same language and I'm paying attention and he or she is being clear and there are no intervening events or noises to interfere, then X has meaning for me. Though I may still be temporarily flummoxed by a missed phrase or an unfamiliar one as I was with the sign.

Do we look to pictures as the words flow? No. But do pictures matter? How could they not? Wouldn't a moment's confusion be the same in a conversation as in reading a road sign in an unfamiliar idiom?

If someone wants to build a robot that understands as we do, could he/she do it without also giving it a mental life? I'm inclined to think not, which at least puts a slightly different slant on what I take to be the usual view of Wittgenstein's minimization of the role of mental pictures in getting meanings.

I'm going to focus on one remark you made because I think it's particularly interesting. You said “something must be going on in the head […] it's not just about right behaviors”.

I might be misreading you, but I think those two clauses run together two distinct approaches: the hypothetico-deductive and the conceptual.

The first clause, as I understand it, is about the need for some brain-process to take place if understanding is to happen at all. In other words, it's about the causal underpinnings of the phenomenon.

The second clause, however, seems more directed at the criteria for ascribing understanding: even if X behaviour (the input) is followed by Y behaviour (the output) that's not enough to call Y “understanding”.

With regard to the first clause we can certainly say this: something must be happening in the brain, because without a functioning brain no-one can understand anything (the same, of course, is true of walking, breathing and everything else that people do). It doesn't necessarily follow, however, that there must be discrete, identifiable brain-processes that correlate with instances of understanding; the existence of such processes is a matter for science to investigate.

But with regard to the second clause, insofar as it's about criteria we can say this: often (though not always) we can ascribe understanding without any reference to brain-processes or mental events. I say “pass me the salt”, she passes me the salt, she understood. End of. That is simply how the concept “understanding” works in many cases.

But! The concept doesn't operate in a vacuum. It takes place in the flow of human life and only there does it make sense. And one of the required background conditions is that the people involved are (broadly speaking) normal human beings – and, of course, that includes the possession of a normally functioning brain. If someone exhibited understanding behaviourally but turned out to be brain-dead then... well, I wouldn't know what to say. The concept of understanding is simply not equipped to deal with such an extraordinary situation.

Don't forget, however, that a functioning brain is far from the only requirement when it comes to designating someone as “normal” for this purpose. For example, think of all the varied characteristics we might call upon to distinguish a person from a machine: skittishness, moodiness, a sense of humour, grumpiness, fatalism, optimism, dourness... the list is endless.

To sum up: the first clause seems right as far as causes go (unless one resorts to magic). And the second clause is right insofar as criteria are concerned; behaviour (conceived of as bare bodily movements) is not enough. But the approaches of the two clauses pass each other by. The “extra” needed in the second case is not the brain-processes posited in the first case. What is needed is the context of human practices.

I see your point and do not see much to disagree with in general. I think it's perfectly true that judging understanding when our table mate responds to a request to pass the salt by passing the salt is a perfectly ordinary and legitimate way to use "understanding." But it seems to me that there is another issue here which is not undermined by your point about human context, and that was the issue I was trying to get at.

Here we come back to brain processes. What do those last two words suggest? What is the picture we get? When I read those words I see a brain running variously patterned electrical charges in all their glory. Or I see neurons or groups of neurons firing off little sparks among and between. Neither picture may be just right but they're close enough to make my point. Brain processes are physical phenomena and of course understanding isn't THAT! We don't mean neuronal or brain operations when we speak of someone understanding what I've said, etc., even if such events must be going on for brained entities, like the lady at the table, to pass the salt on request!

But would we say of the lady passing the salt that she understood if she were merely a mechanical device in the form of a person or, perhaps, a comatose person hooked up to a motorized mechanism that activated only when the words "please pass the salt" were uttered?

Technology is getting us to that point after all. So I would want to say that her understanding can't be in the behavior itself or even the behavior within the limited context of words spoken and action initiated. If we rightly cannot appropriately apply the term "understanding" to mean certain neuronal firings, we can no more apply it to certain grosser level physical behaviors, can we?
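The kind of device imagined here can be sketched in a few lines (a deliberately crude illustration of my own, not anyone's actual proposal): a lookup table that produces the right action for an exactly matching utterance and nothing else.

```python
# A toy "mechanical lady": pure stimulus-response, with no claim to model
# how brains work. Every name here is invented for illustration.

RESPONSES = {
    "please pass the salt": "passes the salt",
    "please pass the pepper": "passes the pepper",
}

def react(utterance: str) -> str:
    """Return the canned action for an exactly matching utterance."""
    # Anything outside the table draws a blank, however close in meaning.
    return RESPONSES.get(utterance.lower().strip(), "does nothing")

print(react("Please pass the salt"))  # -> passes the salt
print(react("Hand me the salt?"))     # -> does nothing
```

The table gets the salt passed, but the slightest rephrasing defeats it, which is one way of putting the point that the right movements alone do not settle the question of understanding.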

There's a middle level, it seems to me, that has somehow dropped out of the picture here, the level of mental images which Wittgenstein rightly wanted to downplay in terms of what we mean by words like "understanding." The question is what must the brain DO for the words "pass the salt" to result in passing the salt?

I think you rightly point out that the issue of physical operations is relevant in a scientific sense but not in an everyday one. But the brain's doings aren't just physical operations. There's also the functional question, as in what are the physical operations accomplishing.

And here the question arises: What must BE accomplished by any physical system (whether brain or some equivalent) to result in the understanding that motivates passing the salt when the right stimulus is presented? This, too, is a scientific question though perhaps one more akin to engineering than to physics.

Of course we can ask what understanding is in different senses. In one it amounts to the lady at the table taking the correct actions in response to "pass the salt." But suppose the point is to build the "lady"? Now it can never be enough to say we have given her understanding merely because she makes the right movements in response to "pass the salt". Perhaps an affinity to Wittgenstein can make us too quick to play down the role of the mental life in behaviors of entities like ourselves?

I'll reply substantively to your post in due course, but just to be clear: let's not indulge in veiled ad hominem comments. If I'm right I'm right, if I'm wrong, I'm wrong. Either way it has nothing to do with my "affinity to Wittgenstein". I could say the same about your "affinity to Dennett" but really that wouldn't help matters, would it? We'd just end up shouting slogans at each other.

You're right that neither events in the brain, mental events nor physical actions are by themselves enough to constitute understanding. The "extra" that's needed is not, however, something extra in the mind or brain. How could it be? How could more mental happenings or brain processes give us what we need? That would be like pushing harder at a door marked "pull".

The extra that's needed is that the actions be performed by creatures whose lives are complex enough to warrant the ascription of understanding. Creatures who learn things, get things wrong sometimes, forget things, improvise, like things, dislike others, suffer, feel regret, ask questions, justify their actions by reference to rules - and so on.

Above all, they must be creatures that establish practices and have techniques that are learnt within those practices. For that is what makes rule-following possible and without rule-following you cannot have understanding. And to follow a rule is not simply to act in accordance with a rule. Computers can do the latter but not the former (and that's why mere behaviour is not enough, btw).

In short, the creatures would have to be more or less like us. How much like us? I don't think we can honestly say. It may well be possible for scientists to one day create machines that are incredibly life-like and replicate human behaviour in all sorts of ways. Would we say they genuinely understood things? I think the honest answer to that is: show me one and I'll let you know.

The reference to affinity for Wittgenstein was meant as much for me as for you, nothing personal or ad hom intended. I guess my point in all this is not to dispute the complex picture you've drawn but to say that what I think interesting here is trying to say what happens within the system that makes the complex behaviors you've described occur. This will partly be a question of how brains work (a matter for science) but also of what our subjectiveness is, i.e., can it be fully explained in terms of brain operations understood in straight physical terms? Philosophy sometimes seems to go off the rails here, and perhaps here is a divergence between a classically Wittgensteinian approach and Dennett's. Wittgenstein focused on not getting the language, and therefore the thinking, wrong in order to avoid metaphysical excesses like dualism, idealism, radical skepticism, solipsism and so forth (the classical problems kicked up by traditional philosophy), while Dennett seems to focus on reconciling current science with how we are often tempted to think about minds. It seems to me that these projects are not at odds.

Dennett's affirmative response, of course, is that consciousness is computational at bottom, which puts some people off, but his account hinges on showing that consciousness (including instances of understanding) can be described in terms of the things a computer can perform given sufficient complexity in its design (including in the programming). I don't think Wittgenstein would have had a problem with this because it's not a claim that understanding is merely generic programming but that it's a rather specialized sort. And, right or wrong, that's finally an empirical question, not a logical or linguistic one.

Well, I think we have to address the "mental picture" issue to even get off the dime.

Philip: In the discussion of the previous post I cautioned against getting too specific in discussing events that are "in the mind". The problem is that many quotidian uses of relevant words are incompatible with known brain physiology. To repeat, I appreciate that different perspectives on an issue may require different vocabularies, and I agree that for some purposes - eg, W's in PI - the vocabulary of brain physiology isn't appropriate. But it seems clear to me that when one opts to use the vocabulary of the mental, the use must not involve any assumptions that conflict with known brain physiology or assume unknown "facts" about brain physiology. Assuming that mental imagery plays a role in understanding, decision making, etc - a position Philip suggests Wittgenstein takes "on some occasions" and that Stuart explicitly asserts - seems to me to violate that requirement.

Philip's example of Beethoven's portrait captures some of the dangers of "going inside the mind". It essentially addresses visual "recognition", a word perfectly acceptable in quotidian discourse. However, one speculating about the implementation of "recognition" opens the door to challenges, some of which will be based on inappropriate use of the quotidian vocabulary. For example, we all know (more or less) how "mental imagery" is used in quotidian discourse, but the assumption that "recognizing" a currently viewed portrait is a matter of comparing its appearance with a stored mental image of the appearance of a previously viewed portrait is almost certainly wrong. And as noted in the discussion of determinism in Philip's other blog, denying that "recognition" amounts to a choice (in what I take to be a "free-will" sense) not only isn't obviously "absurd", it's what many thoughtful people are confident is the case. Finally,

my picture's being a mental one is completely inessential - I might just as well have had a photo of the portrait that I consulted instead.

is simply wrong once you go inside the brain. Both presumably amount to some sort of pattern matching, but the former is between patterns of current neural activity (whether directly or indirectly via mental images) and patterns essentially in "long term memory", while the latter is between patterns that are both currently producible from directly available visual sensory stimulation.

You seem to suggest that the quotidian use of various psychological terms should be given a secondary status when it comes to the discussion of certain concepts connected with knowing, understanding and so on. But surely, if anything, it should be the other way round. Where did cognitive science get the term "understanding" from if not our ordinary quotidian language? If it means something else by that term then cognitive scientists ought to be explicit about that and not claim to be investigating the concept as it is normally understood. But it seems to me that that's exactly what cognitive scientists don't do. When it's pointed out that the concept they're using doesn't normally operate the way they say it does they claim they're talking about something else (though they don't say what, nor do they decide it would be easier to give their subject a different name). But when they think they've made a breakthrough they delight in telling us that our "quotidian" understanding is wrong. It seems to me they're having their cake and eating it.

There's no issue of relative status. I'm just suggesting that there are realms of appropriate and inappropriate use for any vocabulary. I consider use of the psychological vocabulary often inappropriate when in the realm where the vocabulary of brain physiology is appropriate; use of the brain physiology vocabulary is almost always inappropriate in quotidian discourse; the chess vocabulary is mostly inappropriate in discussions about cooking; etc.

The (mis)behavior of cognitive scientists isn't relevant to my point, which is merely to suggest a policy that might make for more effective discussions.

Yeah, but... "appropriate" and "inappropriate" sound like weasel words to me. Cognitive Science gets the word "understanding" (and "knowing" and "thinking") from the everyday domain. If it's using those words in a totally novel sense then why not just coin new ones? But it seems to me it claims to be using a specialist vocabulary when it suits its purposes to say that but at other times it talks about "folk psychology" as an understandable yet inadequate theory. Again, having your cake and eating it.

I was a bit stroppy there, and I apologise. I certainly didn't mean to imply you were being dishonest or anything like that.

But nonetheless, look at the discussion we've been having. Have we been talking about understanding in any kind of technical sense? I don't think we have. We've been talking about the perfectly normal term that we all use without difficulty every day.

Consider this: when physics borrowed the term "atom" to refer to the new entities it had discovered it could just as easily have coined a new word (as it more or less did for sub-atomic particles) and that wouldn't have mattered a bit. But now let's try the same with understanding. Let's call it "applaction".

So now I say "The concept of understanding doesn't work like that" and you reply "But I'm not talking about understanding - I'm talking about applaction". And I say "What's applaction?" Well? What do you reply?

There's something very hollow about the claim of cognitive scientists to be using a specialist vocabulary when they use words like "thought", "understand", "know", "mind" and so on. At the very least I think it's fair to say that they don't stick rigidly to their specialised meanings when using the words in discussion.

Not to worry, Philip. I know all too well the frustration of not being "understood", since that's my usual state.

The behavior you're attributing to cognitive scientists is precisely what I'm militating against. I certainly don't think words from the quotidian, psychology, or philosophy vocabularies should be appropriated and redefined for use in specialized areas like physiology; that can only cause confusion. But I also think great care should be observed in using such words at all in those areas. Thus, I'd agree that a statement like "Understanding is [description in terms of neural structures]" is an inappropriate use, whereas I obviously have no objection to "Behavior we associate with 'understanding a sentence' may be implemented in this way: [description in terms of neural structures]" since I have used "understanding" in this exchange in just that way.

I have a bit of an obsession about this issue because being an autodidact in phil of mind and having no personal contacts in that discipline, I've been totally dependent on web posts and comments for interaction on relevant topics. And especially when I was just starting to learn, it was unnecessarily painful because of the imprecision of the vocabulary used in such fora. So, my main objective is to promote more efficient exchange of ideas by using relevant vocabularies more "appropriately". Not that I'm qualified to be an arbiter of appropriate use, but I think I've become pretty good at spotting inappropriate use - sometimes my own!

I'm concentrating on the discussion at the bottom of the page at the moment - if only because I find it bewildering flicking back and forth through an ever-lengthening scroll of comments. Interested to hear your view on how things have been progressing.

Stuart: While I agree wholeheartedly with your emphasis on context, holism, and what I take to be the thinking behind "associative frameworks", I see your "getting" the meaning of the sign as a simple case of understanding the written words. Because the construction is rather awkward, it takes a little more processing than necessary, but I don't see that conjuring up mental images is a necessary adjunct. Unlike you, I'm neither imaginative nor very visually alert, so my mental imagery plays at most a minimal role (as best I can tell) in such situations.

"pictures" needn't only be visual

Understood, as I meant to suggest by "verbal descriptions". Nonetheless, my overarching point remains that once we leave quotidian discourse, words from that vocabulary may be seriously misleading. There are no "pictures" in the brain in the sense in which we commonly use that term, ie, as the name of certain objects. As far as we know, there's only neural activity. Some of that activity may be experienced as what we call "visual imagery", but that's about all I feel comfortable saying about it. Someone may be comfortable saying more, of course, but as far as I know, that would be at best speculation. In particular, I find your last paragraph suggests a certainty that I doubt you can justify. But I'd be delighted to be convinced otherwise, although this probably isn't the proper forum.

You speak of a "simple case of understanding the written words" as if this answered the question but that's the very question I think we have to address, i.e., what does "understanding the written words" amount to? What seems "simple" to us, because we don't see beneath the shell of the program, so to speak, is necessarily less so if you're trying to do the programming.

I'll admit I'm still kicking all this around. I've wondered for a long time about what someone like Searle means when he speaks of "semantics." I think he is woefully vague there, to the detriment of his claims, which is why I wanted to go a little deeper.

While acknowledging that not all instances of understanding will involve visual-type mental pictures, it seems to me that on that occasion I wrote of, it clearly did. I will acknowledge, however, that I am not 100% sure that the image preceded, rather than merely accompanied or actually followed, my "getting" the right meaning (Dennett's hypothesis of a multiple drafts model of self suggests it could have been retroactive, only I wasn't in a position to notice!).

It's at least possible that the shifting pictures were merely a concomitant occurrence. My experience was only suggestive of what I take to be the fact that pictures were involved. But it seems to me that, if the aim is to figure out what brains do (and perhaps we don't all share this aim), then we have to find out how instances of understanding arise and this is the place to start looking.

Searle sees it as somehow ineffable. That could be right but I wouldn't want to take that view right off. And if brains are organic devices (whether they work like computers or not), and we have no reason to think that minds are brain-independent in a causal sense, then we have to presume that there is some physical operation going on when instances of understanding occur.

As I think Philip is suggesting, there's a wide variety of things that happen which we call "understanding", but here I want to focus on the physical side: what is it that brains DO in the different cases to produce all these different things? Sometimes understanding is comprehending words on a page, sometimes recognizing a feeling in another, sometimes realizing how to do something or what an artist intended in his painting or composition, and perhaps sometimes just an experience of everything "clicking into place", or maybe an unexplained insight or intuition. Even if we grant all these and more to be instances of understanding, in the final analysis it's something we attribute to subjects, not objects; and being a subject means having awareness (at some level at least).

So understanding implies a mental life and the issue with that seems to me to try to say what this consists of (and, for the scientists in the house at any rate) how physical things, like brains, do it.

This was intended to elaborate on my previous comment, but I think it addresses the questions you pose in your last comment. I'm pretty slow in composing comments, so I'm often a little behind the flow!

there was no behavior involved when I recognized the meaning of that sign's words!

Strictly speaking, possibly so. But IMO immediate behavior isn't required. In any event, since language is holistic it seems to me that we would do well to address speaker's "intent", linguistic "meaning", and hearer's "understanding" together and in contexts narrow enough to avoid involving multiple possible uses of those words. To that end I'm going to focus narrowly on your road sign example.

Someone composed the sentence on the sign, and we feel confident that we know the composer's intent: the sentence is a command intended to elicit a specific response. That response is the "meaning" of the sentence, and the meaning of the sentence is "understood" by the motorist (M) if the intended response is elicited. (Cf the builder and his helper in PI §6.)

The response can be elicited only if M has a certain kind of background. It isn't necessary that M has learned to associate that specific sentence with the specific intended response, but the background must be adequate to have produced some neurological structure (what I assume you have in mind by an "associative framework") that when excited by the neural activity consequent to the visual sensory stimulation produces a response that is "close" (in some sense) to the intended response. For example, it helps if M is familiar with the fact that various states require by law that one turn on headlights when wipers are necessary since that reduces the task to translating the sentence into a familiar vocabulary. (All very vague, of course, but for more detail, try Kurzweil's "How to Create a Mind".)
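The translation-into-a-familiar-vocabulary step described here can be given a toy rendering (all phrases and mappings below are hypothetical, invented purely for illustration and owing nothing to Kurzweil's actual model): unfamiliar wording is mapped through learned associations onto a command the system already knows how to answer.

```python
# A toy "associative framework": rewrite each unfamiliar phrase via learned
# associations, then look the result up in a table of familiar commands.
# All phrases, tables, and names are invented for illustration only.

ASSOCIATIONS = {  # background knowledge linking phrases
    "wipers in use": "raining",
    "illuminate headlamps": "turn on headlights",
}

KNOWN_COMMANDS = {  # familiar command -> intended response
    ("raining", "turn on headlights"): "switch headlights on",
}

def interpret(sign: list[str]) -> str:
    """Translate each phrase into familiar vocabulary, then look up a response."""
    familiar = tuple(ASSOCIATIONS.get(phrase, phrase) for phrase in sign)
    return KNOWN_COMMANDS.get(familiar, "puzzled")

print(interpret(["wipers in use", "illuminate headlamps"]))  # -> switch headlights on
```

An awkwardly worded sign needs the extra translation step; a motorist whose background lacks the relevant associations ends up, like the function above on unknown phrases, merely puzzled.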

The response presumably is implemented by a subset of the assumed neurological structure, a subset that comprises motor neurons that when properly excited could execute the response, but don't necessarily do so immediately. Thus, the response is "behavioral" if one includes latent behavior in the definition. However, it is not clear to an observer that the "meaning" has been understood until evidenced either by behavior or by some future brain activity monitoring device. Whether M "knows" (in some sense) that the meaning has been understood in the case of latent behavior is an open question. FWIW, my guess is no, but that's based on skepticism that we "know" anything about our internal brain activity, at least in the sense of "know" to which I subscribe.

If this is at all plausible, it seems to call into question the need for mental imagery, which plays no role.

As for the requirement of a "mental life", you need to detail what you intend for that phrase to encompass. I get that you can't imagine this, but the phrase has absolutely no meaning for me when using the physiological vocabulary. In fact, I studiously try to avoid the vocabulary of the mental except in musing about how one might relate physiological explanations of behavior to the way we use such words - eg, the musing above about how behavior that could be taken as evidence of "understanding" might be implemented. But that's a heuristic use for purposes of bridging - or at least narrowing - the gap between the realms of appropriate application of the quotidian and physiological vocabularies.

Thinking a little more about this, I suppose we may be talking here at cross purposes. My interest lies not merely in elucidating what we mean by a term like "understanding" but in getting a fix on what's going on when we understand in certain kinds of cases, especially in those like the one I described or which Searle addresses in his various arguments about the possibility (which he rejects) of computationally driven artificial consciousness.

Because I count myself more Wittgensteinian than not (though I don't share all Wittgenstein's concerns or agree with all his remarks), I think it's important to orient this question of what something like understanding amounts to in relation to his notions of how the idea plays out in philosophical discourse. That was what prompted me to read and then comment on Philip's very detailed and insightful account above of Wittgenstein's thinking on this question.

But perhaps my remarks have taken us too far afield from what Philip intended. It may seem to Philip and Charles that I am actually addressing a different question to the extent that I've been focusing on what Philip rightly characterizes as the causal question instead of the criterial one. On the latter I fully agree with the idea that when we speak of anyone's understanding something we do so with regard to the observed behaviors and the context in which they occur.

But I think it's equally, if not more, important to clarify what understanding amounts to in terms of what we think is going on with us when we have instances of understanding. The issue of how we use the word "understanding" in relation to ourselves seems to me to be at least as important, in terms of science, as is the related concern of how we use "understanding" in relation to others -- in terms of those classic philosophical questions which founder on the rock of metaphysics.

Charles writes: ". . . the response is 'behavioral' if one includes latent behavior in the definition. However, it is not clear to an observer that the 'meaning' has been understood until evidenced either by behavior or by some future brain activity monitoring device."

Ah, but Charles, this is just the point. Whether it was clear to an observer (my wife in the next seat, for instance) or not was irrelevant because at one moment I didn't understand the sign and then, the next, I did and that was clear to me. In essence I was the observer through the exercise of introspection.

In keeping with the Searlean example which had prompted my initial musings, I had had an experience of understanding without any behavioral alterations. One might say, I suppose, that my mental pictures of my behaviors were altered and call these the "latent behaviors" you refer to. That might be a way to get at this. But I think even that would be wrong because now one has expanded the notion of "behaviors" to such an extent that we lose the distinction between behavior and other things like thoughts, feelings, memories and so forth (whatever we want to call that complex of mental features which occurred to me with the occurrence of understanding and which seemed to me to constitute that understanding). If everything can count as behavior, then is anything really distinctly that?

My original point, in commenting here, was to try to highlight the role of mental phenomena qua pictures in this thing (or, better, these things) we call "understanding." I don't think you can dispense with that just by noting that observed behaviors are the criteria for ascriptions of understanding in others. Here, importantly, is an example in which the sense of understanding (in one who understands) is clearly in the mix.

OK, Stuart, I'll concede that there may be some attendant experience that accompanies the formation of a latent behavior due to excitation of the neural structure that I assumed in my speculative description. But I'll nevertheless object to the use of terms like "first person observer" and introspection. An experience is undergone, not "observed", and "introspection" has the same problem. They add nothing, so why describe them using that vocabulary?

Presumably you agree that "behavior" is produced by the activity of motor neurons. It seems logical to assume that in order to be available to engage in that activity when necessary, the motor neurons must be in some configuration awaiting stimulation. And if that's right, prior to stimulation couldn't they reasonably be said to implement a "latent behavior"?

I don't see that adding the possibility of latent behavior necessarily blurs the distinctions you cite, but even were that to turn out to be the case, wouldn't that be useful insight? (I wouldn't even be surprised if there isn't as big a distinction as we currently assume.)

The issue, I suppose, is what role that "attendant experience" plays. Is it merely ancillary? Sometimes it looks that way. Blind sight is, perhaps, an instance. But then it may just be that what we call sight is a complex phenomenon with lots of things happening in the brain and blind sight is just a less complete form of it, i.e., some elements that we usually have when seeing something (the access consciousness parts) fall away for various reasons leaving only enough to demonstrate that some part of the more complex experience is proceeding. These are questions for psychology and neurology though, more than for philosophy (though I think it's incumbent upon philosophy to deal with this in any effort to formulate an account of consciousness).

In the case of understanding, I suspect that, given the highly variegated things we count as understanding, we are also dealing, at least in part, with a number of complex operations in the brain. Understanding the meanings of words in a language probably involves somewhat different neurological operations than understanding mathematical computations or how to build a bicycle or read a map or paint a portrait or why someone's feeling blue. My wife teaches English and loves poetry (she used to write quite a bit of it). She can read a few lines of a poem she's never seen before and catch the drift and tell me what's going on whereas for me it remains opaque. On the other hand, her sense of direction is hideous (we joke in our family that the best way to get where you're going is to ask her and then go in the opposite direction). But I can feel my way through most terrain, rarely getting lost and, if I do, I'm pretty quick to self-correct (even if she's not around to ask so I can go in the other direction).

As to introspection, I think we sometimes overstate what we can do with it, but that should not be taken as evidence that there's no such thing. If all the word means is to pay attention to one's mental goings-on in a thoughtful way, to think about the elements of an experience (in terms of current and remembered inputs and associations), then many of us do it quite often. I did it when I was thinking about that road sign and am doing it again now. I think introspection's gotten a bad rap because so many philosophers in the past have tried to heap so much on it, more indeed than it could, in fact, sustain.

I agree about the neurological stuff to the extent I understand it and that is largely as an educated layman, meaning there's a great deal about the brain and its operations that I'm totally in the dark about (though not for want of interest).

On the issue of "latent behavior" I recall debating a couple of hard-core behaviorists a while back (they were eager to claim Wittgenstein for their camp & I was arguing it was pretty clear he wasn't one of them!) and they made a similar point. One of them even insisted that the brain events that underlie any experiencing were themselves a form of behavior for behaviorism, relevant in the same sense as the organism's behaviors as an organism are. I think they were overreaching but granted that if you redefined "behavior" broadly enough, just about anything could pass muster. But then the point of behaviorism seems to fizzle away, doesn't it?

Don't forget though, that your "Eureka moment" regarding the road sign is not a report or description of a mental event (according to Wittgenstein, at least). It is a signal that you are now confident of having understood. Before you didn't get it, but now you're confident that you can act in compliance with the sign if you want to (whether you do or not is up to you).

However, whether your confidence is justified or not (that is, whether you actually do understand) is dependent on what you do next. What if it had turned out that the sign's intent was the literal one that had first puzzled you? Or a third thing that you'd completely missed? Your "Eureka moment" then wouldn't have counted as understanding after all.

So what we're studying specifically in this case is not a moment of understanding, but the moment when you hit upon an interpretation that seemed compellingly correct to you. It made sense given your broad experience of road signs and even broader experience of human communication and language in general.

But for all that, you might still have been wrong. As I mentioned in my post, I'm sure we've all had instances of "Eureka moments" where it turned out we were wrong.

btw, as I'm sure you know, scientists have been studying what they call "the Eureka effect" for some time. Tests have been devised, brains have been scanned and so on. The results so far have not been particularly encouraging - there's some evidence that different parts of the brain "light up" on different occasions. That doesn't mean that some kind of correlation or structure will never be found, but it doesn't particularly endorse the theory either.

You're right of course. The fact that I felt like or believed I understood is no guarantee that I did. But what was of interest in that situation was what seemed to me to have occurred when it seemed to me I understood. That is, something went on in my awareness, my mental life (a term you seem to be uncomfortable with). The issue is not what the particular brain event correlations might be (this is interesting but not immediately pertinent) but, rather, what kind of mental goings on accompany or constitute instances of understanding. Is it something that's somehow ineffable or can we say what it is if we pay close enough attention? And if we can say, then can it be replicated in some way? I think linguistic analysis can help here but it cannot, finally, answer this sort of question.

I'm not uncomfortable with talking about our mental lives per se, but I do think you have to be very careful. For example, it's all too easy to slip into talking about the mind as if it was a kind of thing (something created by the brain perhaps) and that by itself is enough to generate all kinds of illusions and philosophical problems.

So, just to be clear, from a Wittgensteinian point of view, talk of the mind is basically figurative. If I say "I've got this idea stuck in my mind" that is not to claim that I have a non-spatial mental receptacle in which a non-spatial mental object is trapped and which I cannot extricate despite my best efforts. I'm sure you'd agree such a claim would be absurd, and yet it's surprising how often philosophers seem to assume something of the kind without acknowledging it either to themselves or to others.

Moreover, the temptation here is to think that if talk of the mind is figurative then the literal truth must lie in the brain. But that just deepens the confusion because now the literal truth of what I'm saying is completely hidden from me. I can no longer understand my own words.

Insofar as there is a literal truth here, it lies in such statements as "I'm obsessed by this idea". And the word "idea" here is not the name of any kind of thing. It has neither a spatial nor a non-spatial location (whatever that might mean). It is simply what I tell you when I tell you my idea.

So when you ask (concerning mental goings on) "can we say what it is if we pay close enough attention?" The answer is no - not because we are unable but because there is no such task to be accomplished. You are modelling your conception of the mental on the analogy of a container with things hidden inside it - things that we might become better acquainted with if only we looked harder. But there is no such container, no such objects hidden inside it and no such thing as looking for them. Your analogy, suggested by a figure of speech, has led you on a wild goose chase.

We agree that the language we apply to the mental sphere is often misleading because of the public venue in which language is designed to operate. What we call "mental" is private in that sense and language is inherently public.

Perhaps we are not quite so close though in how we think we can use language for addressing issues revolving around such "private" concerns. You think my doing so has led me on a "wild goose chase" (a not uncommon view of many in the Wittgensteinian camp, though note, I am not excluding myself from that camp, merely noting some variations in our views). You apparently conclude that my interest in and focus on "first person" concerns suggests the kind of confusion Wittgenstein opposed. As I recall from our talk on Duncan Richter's blog earlier, you think Dennett has fallen into the same "trap."

I think this is a mistake. The difference in our views, I would suggest, lies in our interests, not in our understanding of the role and confusions of language use. It's true that to speak of minds or intentions or understanding in a subjective way CAN lead to the unsupportable notion of parallel entities. But I have explicitly discounted for that up front by acknowledging that I don't mean what you suggest, i.e., that I am somehow "modelling [my] conception of the mental on the analogy of a container with things hidden inside it - things that we might become better acquainted with if only we looked harder."

What I'm saying is that between the physical account of what brains do, when they are working properly, and what brained entities like people do in the context of their lives, there is also a subjective domain, as in what is going on in any brained entity's awareness (whatever produces it) that is responsible for the brained entity's behaviors.

If one wants to build a synthetic brained entity, the point would be to build the "brain" part to do what naturally occurring brains do in order for the brained entity to act as we do. Between the mechanism of the brain and the behaviors of the organism there is something else which, while difficult to speak about in an ordinary way, is undeniably present. Talking about the subjective aspect of our experiences is not merely illusory. We DO have a subjective side, a private side which others don't access. We do have thoughts and feelings and beliefs and sensations that are private to ourselves. Any entity that works like us would have to have these too. Nor is it just a way of speaking!

One way to get at this is to disregard it, as behaviorists seem to want to do. Another is to focus on it, as many misguided folks do when they try to reduce everything to the subjective and you get peculiar doctrines like idealism and dualism and the like. I'm suggesting that we can deal with it without disregarding it or misconstruing it though. But it's not easily done as this discussion has shown.

You say "we are unable [to say what it is by paying closer attention] because there is no such task to be accomplished." And yet there manifestly is. If we want to replicate the behaviors of a conscious entity like ourselves in real life (not just in tightly constrained, predictable environments) then we have to build something that can respond as we respond. To the extent we recognize that we have a mental life (and I take your comments to indicate that you do recognize that despite a reluctance to talk a lot about it) then it is there we must look to find the things a brain or some equivalent must do.

Then the question is whether we can explain consciousness (shades of Dennett here!) computationally or in some other perfectly physical way. If we can, then we can build systems to do what brains do. I guess what I'm trying to say here, Philip, is that the choice isn't between some crass subjectivity-denying physicalism and denial of the truths of physics (dualism, idealism). And I don't think Wittgenstein saw it in such either/or terms either.

On the one hand you say "I have explicitly discounted for that up front by acknowledging that I don't mean what you suggest".

But in the next paragraph you say "what is going on in any brained entity's awareness".

Now, maybe you expressed yourself clumsily here, but the second phrase seems an example of what you're denying in the first phrase. What exactly does "in any brained entity's awareness" mean? Do I have an "awareness" which has other things in it? Isn't this just a strange way of saying "what any brained entity is aware of"? Because there's no fundamental problem when it comes to ascertaining that. You ask the brained entity and the brained entity tells you.

Why does this seem strange to you? Isn't something going on? We aren't just robots going through the motions. Whatever motions we make we do BECAUSE of something we want, need, expect, desire, plan, hope for, etc., etc. What do those things consist of?

Perhaps they seem self-explanatory to you, or irrelevant to any question about what they are given that they are just a part of your life, your way of being in the world. But then suppose you're a researcher trying to figure out what you have to do to build an entity that behaves as we do.

Well, you could say I will plan out all the actions and they will LOOK just like ours. And conceivably you could produce a very successful simulation this way, successful because it would be convincing. But it could only convince to the extent that you've already planned all the behaviors out, anticipated all the situations qua stimuli.

Real life, of course, isn't like that and so, eventually, your simulation will be discovered and you know it. So what can you do? If you want the thing you build to really work like us you have to figure out what working like us means. What is it the brain does that produces the actions we see?

What's going on inside the box, in other words? What kind of beetle is in there and how does it do what it does to show itself through its actions?

Certainly, when we talk about minds and the things of minds in the ordinary way we don't really care a lick about the nature of the beetle. But the fellow thinking about building one needs to. Ordinary life and building beetles in their boxes are quite different enterprises.

The fact of the beetle's peculiar status, being forever invisible to observation in the ordinary sense, leads some folks to think the wrong way about the beetle. And then they get into conundrums like how can we know things about the box when we can't even see that beetle!

But perhaps we don't have to see the beetle in the way we see the box (and its behaviors). Not seeing it doesn't deny its existence. It doesn't mean there's nothing really there, in the box, nothing going on at all! If we want to build the box and its beetle, we have to figure out how to build both.

My point is that too strong an emphasis on Wittgenstein's important insights about what we're doing when we talk about mental things can lead us astray just as surely as disregarding those insights can.

It seems strange to me because it seems to take literally the idea that we have a mind (in the same sense that I might have a box) and that this mind has things in it (in the same sense that I might have a beetle in my box). Only here "mind" has been replaced by "awareness". And if you mean it that way then you are doing precisely the thing you've denied you're doing - ie, taking literally the phrase "I have a mind".

But if you don't mean it that way then it doesn't make any sense. "I have an awareness" is nonsense. I am aware of things, I don't "have an awareness".

None of what you say after that addresses this issue. But, anyway, if you want to build a brain-like machine assuming it involves either (a) a patent fiction or (b) a senseless form of words is probably not a good place to start.

Neither form is unintelligible in ordinary language use. The latter phrase suggests a particular character to the awareness in question. For instance, I may be aware of the sign along the road because I see it but I may only have an awareness of it if I was told to look out for it and so know roughly where I should expect to see it.

Language does not prevent us from speaking of "awareness," at times, with the article in front of it. Different instances of awareness may well have different characters. An awareness of a sudden burst of light may be quite intense while an awareness that it's light (not dark) in the room may even go unnoticed. The former awareness may hurt the eyes or shock us because of its unexpectedness while the latter awareness may require someone's specific query about it in order to bring it to our attention.

Can we speak of having awareness in the sense that we speak of having a cup of coffee or a hundred dollar bill in the hand or a good time? Of course not, but then I haven't suggested that. Back when I was much younger I had a dog I was quite partial to (though he was almost as independent in his attitudes as a cat!). I once told my sister that he was my dog and she castigated me for speaking like that because she felt it denigrated the dog I professed to like so much into a mere possession. I then asked if she would object, on the same basis, to my describing her as "my sister"? I think this case is somewhat like that. Having is not one sort of relation, nor is calling certain states of mind we may have "awareness" the same thing as claiming they have entity-like status.

Nor, for that matter, does speaking of minds as things some entities have necessarily imply the possession of some kind of parallel entity attached to the brain!

Is it a "patent fiction" or a "senseless form of words" to speak about the mental features we all find in ourselves when we stop to think about them? Just because they aren't entity-like doesn't mean we can't or don't need to apply names to them or that, when we do, we're just confused. What matters is what we do with those names and what confusions are kicked up, if any. Is it a confusion to suppose there are certain mental features (elements of the subjective side of our lives) which we would expect to be present in a synthetic entity if it were also expected to behave as we do?

Sorry, but this still mangles the grammar of the concept "awareness". I've put your comments in italics.

We can 'be aware of' or 'have an awareness of.'

Yes, this is true. But we can't have “an awareness in which” - and that's the form you actually used rather than the two mentioned above.

The latter phrase suggests a particular character to the awareness in question.

No it doesn't. Any particular character relates to the object of awareness, not the awareness itself.

I may be aware of the sign along the road because I see it but I may only have an awareness of it if I was told to look out for it and so know roughly where I should expect to see it

The first case is fine. The second is wrong. You do not have an awareness of the sign, you are aware that the sign is somewhere in the area. Here "aware" is being used like "know". But to know something about the sign (in this case its rough whereabouts) is not to have an awareness of it.

Different instances of awareness may well have different characters

As before, this is incorrect. The differences in character belong to the object of awareness not the awareness itself.

An awareness of a sudden burst of light may be quite intense

This solecism makes my point. A sudden burst of light might be quite intense but not the awareness. It's true that I might be keenly aware of something, or even intensely aware of it. But this just means I focus my full attention upon it. It doesn't pick out a perceptible quality of the awareness itself. There's no such thing as doing that.

an awareness that it's light (not dark) in the room may even go unnoticed.

I can make no sense of an awareness that goes unnoticed. By definition, if I don't notice something then I'm not aware of it. There can be something in my field of vision (eg a man in the distance) that I don't notice, but to say that is precisely to say that I'm not aware of it. It really makes no sense to say I have an awareness of which I'm unaware (though I can certainly do something without being aware that I'm doing it).

In short, our ordinary linguistic usage budgets for none of the formulations you are attempting to impose upon it.

I don't think we're going to get a meeting of the minds here, even given their non-spatial nature -- or their lack of any nature to speak of at all. You write: "we can't have 'an awareness in which' - and that's the form you actually used rather than the two mentioned above."

I'd say it's the same difference. Why can't we have "an awareness in which" as in "I felt an awareness of something lurking just behind me, an awareness in which a profound sense of dread seemed to creep up from the soles of my feet"!

Most of the time I think it makes sense to speak of perceptions as perceptions OF something and the character we ascribe to them as belonging to whatever is being perceived. But perceptions are not the only kind of awareness. "Awareness" is vaguer, broader in its application than that.

Feelings are also instances of being aware as are what sometimes seem like their unaccountable absences. "I was so frozen in place by the appearance of the murderer I felt nothing!"

'I felt a sense of accomplishment when I worked out that formula.' 'What did it feel like?' 'I was happy and exhausted and it was as if a great burden had lifted.' 'You were aware of all that?' 'And something more! I knew I had passed the final hurdle.' 'In a foot race?' 'No, in life.'

'And how did all this occur?' 'Well I didn't mean them literally! It was just kind of like that, you know?'

Being aware is as complex as the things of which we may be said to be aware. To the extent that we're aware of many things, we are aware in many different ways. In the two cases of being aware of that road sign, the first reports an instance of perception (something seen), the second a bit of knowledge about what I should expect to see. In both I am aware, though of different things and thus there are different characteristics I can report. It's a mistake to suppose that the single word "aware" must denote only one thing, even if you think that the term is without content of its own. The content is in its many manifestations. The question is what's needed for those to happen?

Would building a machine that has the same range of sensory capacities we do, plus the capacity to store and organize inputs, as information, roughly as we do, and to manage the information (both currently received and stored) in order to construct "pictures" of the world in which the sensory inputs originate, as we do, provide enough raw material for the formation of awareness, in all its manifestations, including an aware self?

Philip writes: "A sudden burst of light might be quite intense but not the awareness . . . this just means I focus my full attention upon it. It doesn't pick out a perceptible quality of the awareness itself. There's no such thing as doing that."

I agree but this contradicts your apparent proposal that "awareness" just is, that it cannot be broken down into anything else. You seem to agree that we can explore the varied ways in which we get the sort of thing we call "awareness", and yet you want to say I cannot talk about the many different characteristics we find in instances of awareness. Must we always be aware of the things we are aware of? What about hitting the keys for the right letters as I type this? I could not tell you which key is where and yet my fingers hit the right ones without my awareness of doing that. Walk into a lit room in daylight with no reason to expect darkness and the fact that it's lit may go unnoticed. There are not only different characteristics to awareness, and different things we are aware of, but there are also degrees of awareness.

Anyway, it looks like we've each said our piece. I'll take a break now and leave you time, Philip, to address other interlocutors and explore other issues. Thanks for the helpful exchange.

First, thanks for your comments on this. I hope you'll have thoughts about future posts too.

One last point. You ask why we can't have "an awareness in which", and that's a very pertinent question. The answer, for me, is that it misrepresents the grammar of the concept "awareness" and in the process makes it sound as if awareness was a curious kind of medium in which our experiences take place. In other words, it raises the spectre of dualism (oops! sorry - forgot to put the word "aspect" in front of that to make it sound more modern and respectable), qualia, raw feels and so on.

And now it seems like this strange medium - and in particular its relation to the brain - is what we need to understand. Dennett's tactic for dealing with it is fairly ingenious, but also doesn't make sense. He declares that ordinary talk of perceptions, ideas, thoughts, sensations etc is a useful fiction - it represents how things seem to us and evolution has proved the value of this delusion.

But this is nonsense. The "fiction" Wittgenstein mentions in §307 of the PI (and which Dennett quotes approvingly in CE) is the philosophical notion of perceiving, thinking, etc as something radically "inner". And his point is that our non-theoretical language doesn't actually support this tempting idea. Dennett's mistake (well, one of them) is to assume we're all deluded aspect dualists, but the way we actually talk about perception when we're not theorising about what "perception" is simply doesn't bear this out. That's why philosophers so often end up using terms like "awareness" in idiomatic ways (which they sometimes call a "specialist" use). What they're doing is bending the facts to fit their preconceived theory.

And that's what you're doing when you write "I felt [...] an awareness in which a profound sense of dread seemed to creep up from the soles of my feet". But this is just a rather strange (and patently awkward) way of saying either "I felt a profound sense of dread creeping up [...]" or "A profound sense of dread seemed to creep up [...]". Either way the words "an awareness in which" add nothing yet make it seem like the real action is happening in a kind of private peepshow. (In this context it's worth reading §306 - ie, the section before the one Dennett quotes. He might have been well advised to think more about the implications of §306 than §307.)

Anyway - as you say - enough of this for now. We'd better get on with our lives! :)

Yesterday was Christmas shopping day, so I missed this whole exchange. Just three general comments:

According to dictionary.com, "aware" and "conscious" are synonyms, and they are often used that way. Both are words that I think are fine in casual conversation - as Philip notes, assuming they are used in generally accepted ways - but are obviously problematic when they themselves become the topic of serious discussion. I don't think anything is added by trying to describe one problematic word - eg, "understanding" - in terms of another problematic word. That's why I tend to go physiological faster than Philip perhaps thinks appropriate. Explanations in terms of neural structures may be wrong, but at least they can be more or less unambiguously wrong!!

Another problematic word is "subjective". Whereas "consciousness/awareness" seem to me to be underdefined in that there is no consensus on an explanatory theory, "subjective" seems to be overdefined: private, first person, relative, not objective, et al. Which makes it rather useless unless it is made explicit which of those descriptors one has in mind, in which case those descriptors would seem to suffice. And even then, whatever one has in mind in using the word often turns out to be extraordinarily difficult to grasp. Eg, I've been battling Davidson's essay "The Myth of Subjectivity" for a big piece of two years with only partial success. To some extent that's unquestionably a commentary on my capabilities, but I think the topic shares at least some of the blame. The people he refutes in the essay constitute a who's-who in the field. So, yet again I take the cowardly way out and try to avoid the word entirely, especially if tempted to assert anything based on evidence one might reasonably label "subjective" since such evidence is often suspect.

Finally, I look forward to more exchanges on the rest of Philip's post. Convergent or not, the comments to date have helped me a lot.

First, if we don't know what consciousness is, how can we look for it in the brain? What are we looking for?

Second, can the question "what is consciousness?" be answered by a theory? Is it like "what is dark matter?" After all, dark matter is a "known unknown" - we know that we don't know what it is. But in our every-day, non-theoretical talk we use the word "consciousness" without any trouble at all. There we do know what it is. If someone says "He's regained consciousness" or "I was conscious of how late it was getting" no-one says "I wish we knew what he meant by 'conscious'". Why not?

Well, Philip, your first question is one I've asked ever since I started reading about this stuff. Of course, I have no answer. And so far, the answers of those who do apparently haven't enjoyed widespread acceptance.

As for your second question, the answer is obviously "yes" - there is no shortage of theories about what consciousness is. I eagerly await some degree of consensus among the theorists on one or two theories that are deemed by them to be especially promising. Unlike Stuart (I think), I'd bet that it will be found to be some quite natural by-product of extreme complexity and capability, and that man-made entities of complexity and capability comparable to us would exhibit all the signatures of what we call "consciousness". Whether such entities are feasible if inorganic is another question. And if they are feasible only if to some extent organic, at what extent do we consider them human even if "artificial"?

Rorty distinguishes words that are used in referring to "some portion of reality" and words used in talking about a possibly "nonexistent object". We use the latter kind of words all the time. Whether "consciousness" is one of those remains to be seen, but its turning out to be one should cause no hiccup in its quotidian use.

I'd say there was no chance of a theory answering the question "what is consciousness?". Either that's a question about how the word "consciousness" is used, in which case we don't need a theory to answer it, or it's a question about what sort of thing consciousness is, in which case the question itself is a subtle form of nonsense.

The very idea that there's a question to be asked (in that sense) goes back to Descartes and rests on a thoroughly incoherent definition of "mind". How do you get a correct definition? There's no escaping it: you have to look at how the word (and its associated vocabulary) is used. That's what you call "quotidian language" but there's nowhere else to go. Anything else is just making up words.

"Unlike Stuart (I think), I'd bet that it will be found to be some quite natural by-product of extreme complexity and capability, and that man-made entities of complexity and capability comparable to us would exhibit all the signatures of what we call 'consciousness'."

No, I'm with you on that one Charles. My view is that complexity is the key but not just complexity per se, rather a particular kind of complexity, the kind that can do the things brains do which we recognize as features of our own mental lives, including:

picturing (having mental images)

perceiving (picking up sensory information about the world and putting it to use)

feeling emotions

remembering

connecting (associating)

thinking about

understanding (in a broad range of its many manifestations, many of which we've already discussed above)

having a sense of self representing the processing side of the operation

And so forth. The list isn't complete nor does it all happen, on my view, on the same level (i.e., some of the functions described may be constituted by other functions in the same list, say, or partly constituted by them). As Philip has indicated in the past, I take a Dennettian view on this, i.e., that what we call "consciousness" is just an array of various information processing operations performing certain functions in a certain way and when they perform them we have features like those listed above. Whether a computer can do it is an open question, I think. Dennett argues it can and he may well be right. But that's for researchers to ascertain, not me in discussions like this. Still, I think Dennett's account is the best I've seen so far.

Philip, I take it, is not a great fan of Dennett's but that's a debate for another forum. Count me as a supporter of the complexity thesis though!

Yes, I'm obviously not a fan of Dennett. We've talked a fair bit about him so far, but just to be clear I'm equally against Nagel, the Churchlands, Chalmers, Putnam, Crick, McGinn, Penrose, and pretty much everyone else I've read in the area of so-called Cognitive Science. That includes Searle, though I've a bit more sympathy with him than the others.

Regarding most of the above, I think their theories are not so much wrong as nonsense. But that's hardly surprising, since the very question the theories are attempting to answer is itself a form of nonsense.

For me, Cognitive Science is like advertising: it invented a problem, convinced others that they had this problem and is now attempting to sell them solutions to it.

Interesting that you're more sympathetic towards Searle, Philips, for he strikes me as seriously confused in his treatment of consciousness (but we can discuss that at another time and maybe even in another place).

Is it all just nonsense with no room for theories as you seem to be suggesting? I think that's wrong because it seems pretty clear to me that:

1) brains produce consciousness;

2) brains are physical entities; so

3) physical phenomena produce consciousness; and therefore

4) we can learn what they do and how they do it in the same way we learn what other physical entities do.

If all of this is the case, then there's a scientific question here, not just a conceptual/philosophical/linguistic one, and therefore:

5) some theory about what brains do and how they do it will be successful, given enough time and research. But for that to happen we have to grant that this can be talked about.

I think your case against this, Philip, rests on the notion that we can't really talk intelligibly about consciousness in the scientific way Dennett embraces but that strikes me as more wishful thinking than fact. It pivots on whether we can even speak about consciousness at all or whether, with other words purporting to denote mental phenomena, "consciousness" is an empty term.

But whatever else we want to say about "consciousness," there is no question that we have a mental life in the form of the stuff going on in our minds when we turn our attention there. One can say that this, too, is a confusion because we can't really turn our attention there because the stuff we want to attend to isn't really stuff at all and so isn't really there in any discrete sense. All "mental" words, on this view, really perform other duties, etc.

But I think that is a big mistake. Whatever we choose to call our mental/subjective/experiential lives/world/domain doesn't really matter because there's a subjective aspect to our existence and the names we attach to it are merely conventional. Intelligibility is not precluded because of the special way in which words about our subjective lives operate.

I'm not sure there's really a controversy here. I agree with Philip that developing a theory around a word like "consciousness" that may have no referent isn't obviously a sensible thing to do. OTOH, some "theories of consciousness" seem to be somewhat along the lines of what Stuart suggests: specify some features of the "mental life" and try to figure out how those features might be implemented by the brain (and the rest of the body where applicable). Calling the results a "theory of consciousness" seems just some slightly irritating, but relatively harmless, marketing.

Your argument falls at premise 1 if by "consciousness" you mean what either Nagel or Dennett mean by it, because that is the very thing that's under question here.

Of course science can legitimately investigate all sorts of aspects of our cognitive faculties. Indeed, it is already doing so and, for the most part, with complete disregard for the theoretical fantasies peddled by cognitive science. If you take me to be saying that neuroscientists can't (eg) study what happens in our brains when we track an object in our visual field then you have seriously misunderstood my position.

Let me make it clear: the so-called "hard problem" in philosophy is a direct descendant of the Cartesian ghost in the machine. And attempts to "solve" it revolve around theories that show how this ghost can be produced by physical processes. And this is akin to asking how a non-material substance can interact with a material one. But the whole problem is based on a misunderstanding - and therefore any attempt to answer it is doomed to failure also. You cannot show how a non-material substance can interact with a material one because there is no such thing as interaction between material and non-material substances. Indeed, there's no such thing as a non-material substance. Inflation, as a concept, is not material yet it exists. But it is not therefore a non-material substance that mysteriously interacts with the price of commodities. To put it like that would be to misdescribe the concept of "inflation" and make it look thoroughly mysterious. And that's exactly what cognitive science does with consciousness.

This is not an empirical statement, it's a logical one. For the misunderstanding is ultimately not about the facts of the matter but about the logic of our language. A logical sleight of hand tempts us to think something mysterious is going on here and leads us into nonsense.

It should be remembered that the problem of dualism came about through Descartes' attempt to support science. He was worried that it wouldn't get off the ground unless it had firm (quasi-mathematical) foundations. That's why he wanted to find a bedrock of certainty on which it could rest. Science got off the ground alright, but not by solving the problem of dualism. It did it by ignoring it. Hopefully neuroscience will have the good sense to do the same with cognitive science.

Philips, by "consciousness" I mean the array of mental features (including those listed above) that we find in ourselves when we pay attention to that side of our lives. And that's what Dennett means (though I'm not sure what Nagel means given his "what it is like" scenario and his supposition that THAT consists of some kind of special feature). On the view I see Dennett taking, consciousness just is a bunch of things going on in the brain which look like information processing activities. Indeed, that's why he is so often accused of not explaining consciousness, as the title of his book proclaims, but of explaining it away.

As he has said in response to that kind of criticism, the only way you explain something is by doing it in other terms. If you insist on the irreducibility of consciousness (as Searle does in some cases), i.e., that it cannot be explained except AS it appears to be, then you not only get dualism, you can't explain it at all. Dualism doesn't explain it, it asserts its ultimate separateness from everything else.

I don't take you to be denying the work of neuroscientists, Philip, but I take you to be denying the possibility of theorizing about these issues and that is simply not the case, especially because neuroscientists do theorize (see V. S. Ramachandran or Stanislas Dehaene for starters). Dennett sees a role for philosophy in reconciling work of this sort with philosophical issues and, in doing so, sees room to assist in that work, at least theoretically. Of course that puts him at odds with Wittgenstein's later rejection of theorizing in philosophy but that, in itself, can't be evidence Dennett is mistaken since, to the extent there is a clash here (and frankly I don't see much of one), Wittgenstein could have been the one who is wrong, overreaching by denying the possibility of any theorizing in philosophy at all.

As to the "hard problem" I would just point out that Dennett's approach to it is to deny that there is any such thing. It's folks like Chalmers and even Searle who affirm it. Since Dennett's attempted solution involves denying the "ghost" just as you do, I don't see why you should find his work in this area problematic, even allowing for your differences with him over the role of theorizing in philosophy.

To the extent that we take Dennett to be a member of the class of cognitive science commentators (he counts himself a researcher at times, too, but let's leave that out), his thinking does not fit the mold you've presented of seeking to explain how a mental "substance" affects a physical one or vice versa. He is quite explicitly non-dualist in his approach (which is more than can be said for Searle who, while disavowing dualism, bases his core argument on a dualist presumption).

Anyway, despite your lumping Dennett with Nagel and, earlier, with some other cognitively inclined philosophers you disagree with, your case here turns out to be, in fact, pretty much in keeping with Dennett's approach. His Wittgensteinian bona fides (or lack thereof) aside for the moment, you and he are in the same camp vis-à-vis the untenability of dualism, the nonsensicality of a ghost in the machine and the idea that consciousness is some special sort of non-physical thing-a-ma-jig.

Philip (I don't know why I keep typing an "s" after your name -- I don't notice it when I'm doing it but only after the fact -- must be some cognitive anomaly), here is, I think, a quicker and more direct response to your denial of my argument above:

You say it falls apart in the first premise because my use of "consciousness" assumes what it is intended to explain. But in a previous post I suggested that, by "consciousness," all I mean are those mental features we discover in ourselves on thoughtful examination. Thus my use of that term should not be taken as a tacit endorsement or affirmation of the existence of some unitary feature in the universe called "consciousness" but only as a placeholder for the array of mental features that we find in ourselves. Assuming that consciousness is anything more, at the start, would indeed be circular as you suggest. But I haven't done that, even if I allowed a single word to substitute for the more complex formulation which might have avoided your criticism.

Here is a re-write of my first premise then which hopefully addresses your concern: "Brains produce the array of features we recognize in ourselves upon thoughtful examination which we call, in the aggregate, 'consciousness' in ordinary usage."

by "consciousness" I mean the array of mental features (including those listed above) that we find in ourselves when we pay attention to that side of our lives.

Once again we part company at the very first step! The above definition reflects precisely the sort of notion that leads directly to qualia, the “hard problem” and all the rest of it. For example, what do you mean when you say “find in ourselves”? My thoughts, for example, cannot literally be said to be “inside” me. Where inside me? Do they take place in my head? My stomach? How do I establish it? Might I be wrong about where my thoughts are and subsequently revise my initial claim?

Of course, we often talk about what's going on inside ourselves – meaning our thoughts, emotions and so on. But this is clearly a figure of speech, like jumping out of your skin or knowing in your heart. We won't give science any useful ideas to work on by mistaking figures of speech for literal descriptions.

Likewise, what is it to find a thought (or a sensation or an emotion)? Was it hidden from me? Behind what? Might I think I've found it but it turns out not to be the one I'm looking for? Might I be able to count all the thoughts I have once I've uncovered their hiding places? No. Again, this is a figurative use of language – one that we find apt in various situations but (as the rest of our grammar relating to thoughts, sensations etc makes clear) we are in no way committed to taking literally.

Finally when you say “that side of our lives” what exactly do you mean? Since according to you “that side” includes all our thoughts, emotions and perceptions what other side is there?

Would you deny that you have memories, ideas, feelings, beliefs going on in your experience?

I suppose you might if you want to take the position that these aren't things in any sense of that term and so there's nothing to find. And yet we DO use "things" in this way (for referents) and we DO have subjective states which are "private" in the sense Wittgenstein alluded to.

This brings us back to the old divide: are we to take Wittgenstein as denying ANY possibility of referring to the subjective side of our lives or only denying referring in a way that implies a parallel with the physical entities and their combinations?

A while back I was taken to the emergency room with a heart attack. The doctors and nurses clustered around as they were wheeling me in and asked if I was feeling pain (pointing to a place on my chest). I was not, nothing that I would have called pain anyway. I tried to explain what I was feeling and then realized that was silly and just said yes. The right answer! They promptly kicked into gear.

So we CAN refer to private sensations even if it's hard to be certain both sides in the conversation have the same notion of what's being said. Referring to the subjective sides of our lives is not excluded from our practices. This suggests that we just have to be more careful in our word choices in such cases. However, in a case like talking about the array of subjective features in our lives (avoiding "mental" for now) this becomes hard if both sides aren't going to play. For my explanations to have traction, you have to recognize and use the same rules I'm applying. To the extent you won't do that we have the same sort of problem when speaking of the subjective as we would in playing any other language game by different rules.

As to "qualia," I reject the need to posit anything more than the experiences, in all their variety, in order to explain having them!

Can your thoughts be "literally inside" you? Of course not. And my use of that locution doesn't require that. We speak of things being in other things in more cases than just jars and their contents! We talk about ideas IN a book, propositions IN a theory, rules IN a game, examples IN more cases, a word IN a different sense and so forth. Just using a word like "in" doesn't imply physical containment.

Is it just a "figure of speech" as you suggest? Why would we need to think so? Why, in principle, must we assume that the water-in-a-jar paradigm is the controlling one? Sometimes it may dominate our thinking, but that can be overcome quite easily by a little analysis and comparison with other divergent but entirely acceptable uses.

Nor can I agree that this is the same as speaking of 'jumping out of one's skin.' This is clearly a verbal picture to make us recognize an extreme feeling by invoking an outrageous image, not at all like the other cases.

Is everything we find hidden from us first? Perhaps finding is just noticing what we hadn't noticed earlier. Or perhaps, as in the scientific case, it's elaborating an entirely new and previously unimagined picture which simply never occurred to anyone.

As to "sides," there IS the public side of our lives, where things we refer to can be picked out by others using their sensory faculties, and the private where things we think about, such as chest pain, cannot. Now this brings us back to what I've come to see as a fundamental divergence in how we take Wittgenstein. You seem to want to invoke an exceedingly narrow interpretation of his remarks about private language because of the nature of the subject matter whereas I've concluded that a looser interpretation makes more sense. We DO have a private side (the part of our experiences which are inaccessible in principle to anyone but ourselves) only it's not well suited to linguistic usage so we develop specialized usages and sometimes have to work very hard to share those usages with others.

Would you deny that you have memories, ideas, feelings, beliefs going on in your experience?

Yes. But lose the last five words of that question and I'd answer no.

are we to take Wittgenstein as denying ANY possibility of referring to the subjective side of our lives or only denying referring in a way that implies a parallel with the physical entities and their combinations?

The latter, though it depends what you mean by "referring to". Wittgenstein certainly thinks we have thoughts, feelings, memories, emotions and perceptions. How could he deny it? And he certainly thinks we can talk about them too - we do it all the time. HE does it all the time. But that doesn't make the grammar of the word "pain" any more like the grammar of the word "pebble" - or the grammar of the word "thought" for that matter.

So we CAN refer to private sensations

This makes no sense. Private sensations as opposed to what? Non-private sensations? What are they? You are confusing a grammatical rule "sensations are private" with a description of a fact.

that can be overcome quite easily by a little analysis and comparison with other divergent but entirely acceptable uses.

Good. Then you should have no problem giving me a less figurative definition of consciousness.

there IS the public side of our lives, where things we refer to can be picked out by others using their sensory faculties, and the private where things we think about, such as chest pain, cannot.

I agree. But what I don't understand is how you differentiate between your perceptions as they occur "within" your consciousness (which is what you claim happens) and the public world that you perceive.

You seem to want to invoke an exceedingly narrow interpretation of his remarks about private language because of the nature of the subject matter whereas I've concluded that a looser interpretation makes more sense.

I'm prepared to back up my reading of Wittgenstein with detailed reference to what he actually wrote. But in a way it doesn't matter where my arguments or views come from, does it?

"Private sensations as opposed to what? Non-private sensations? What are they? You are confusing a grammatical rule 'sensations are private' with a description of a fact."

Like chest pain or toothache. If my dentist asks if I have a pain, he, like the cardiologist, wants to know where it is and what it feels like to me. He wants a report, not just an exclamation akin to "ouch". Of course references to pains can play that role, too, but it's a mistake to suppose there is no interest on the part of an interlocutor in the nature of some private experience (which just means the private side of our experiences).

". . . you should have no problem giving me a less figurative definition of consciousness then."

I have: In the present case, and for this analysis, it's just the array of features we recognize as happening on the subjective side of our experience, i.e., what is private to us and thus inaccessible to others, including but not limited to, perceiving, having ideas, memories, mental pictures, beliefs, feelings, recognizing, etc., etc. That many if not all these terms have a public dimension (e.g., we ascribe them to others based on observed criteria of behavior) or that language, being public in genesis and main venue, may be most useful in the public domain, should not preclude us from recognizing occasions when private reference is meaningful, e.g., 'it hurts me right here and it feels like . . . ."

". . . what I don't understand is how you differentiate between your perceptions as they occur 'within' your consciousness (which is what you claim happens) and the public world that you perceive."

Easy. My reference to "within" relates to the aspect of my experiences that have no public presence, cannot be experienced (perceived) by anyone else but me. They are within or in me in the same way that a proposition is in a theory, a meaning is in the use, a particular is in a class, moves are in a game and so forth.

". . . in a way it doesn't matter where my arguments or views come from, does it?"

No, of course not! We would only differ over interpretation of those citations in any case (since it's clear by now that we don't interpret Wittgenstein the same way). Such a debate could only be about who's interpreting the passage correctly. Should you find a passage in Wittgenstein which EXPLICITLY denies the possibility of reference to elements found in the private domain of our lives which cannot be interpreted as I would tend to do, i.e., refer in the way in which I believe we can refer to such things, then I would be moved to reply that you've got Wittgenstein right -- but that then he was wrong.

I don't know that that would help in this discussion unless it's just about what Wittgenstein actually believed. But for now at least that isn't what we're on about, but only about whether one interpretation of his overall view is better than another.

Philip - Although Stuart is using a vocabulary that I eschew, I have little or no trouble understanding his points. Some I agree with, some I disagree with, some I have no opinion on, and I really wish he would take my advice and drop "subjective" which I consider adds nothing but unnecessary confusion. But while you seem to be objecting to something more fundamental than his choice of vocabulary, you attack individual uses of the vocabulary rather than expressing some overarching objection.

You say "We won't give science any useful ideas to work on by mistaking figures of speech for literal descriptions", perhaps suggesting that the vocabulary may mislead them. But I doubt that many scientists are going to be misled in their research by some loose talk from philosophers (although other philosophers may be). Indeed, most of the things on Stuart's list of "features of consciousness" are the subject of on-going research, and many comprise activities that are indeed "going on in the head". So, I remain confused as to what you find so objectionable.

Rereading the sections leading up to §150, especially §149 and the boxed insert between it and §150, leads me to wonder if part of the confusion that "mental" seems to be causing in our discussion stems from the distinction made in that insert between "mental states" à la §149 - behavioral dispositions (what I unfortunately earlier called "latent behaviors") - and those listed in the insert which, with perhaps the exception of pain, we often call "emotions". (Note: I'm using the Anscombe, Hacker, Schulte translation.)

I think of behavioral dispositions just as described in §149 except that I would call them what they presumably are - brain states instead of mental states. Then a behavioral disposition is a neural structure - an "apparatus" - that includes motor neurons that implement learned behavior in response to stimulation. The other criterion mentioned - its effect - is then the responsive behavior. To "know one's ABCs" is to have developed a neural structure that implements the behavioral disposition to recite/write them in response to learned stimuli (ie, recognizable neural activity) that results from verbal requests to do so. Similarly, the answer to "when can you (ie, have the ability to) play chess" is "once a neural structure that implements the minimum required behavioral dispositions has been developed".

Emotions involve other organs (I think it's called interoception), and they can wax and wane. Behavioral dispositions are more static; once developed, they persist for a long time (but can deteriorate over time). On the other hand, their effects are typically short lived. In the boxed insert, Wittgenstein mixes these two aspects of behavioral dispositions, perhaps intentionally.

I also think of "understanding" a simple sentence as being able to respond to verbal stimulation in a way consistent with the intent of the person doing the verbalizing. So, I would alter the question re understanding to be "when did you develop the ability to understand that sentence?", to which the answer would be: once the required neural structure had been developed. When the behavioral disposition is stimulated and the responsive behavior is immediate and public, one could say that the sentence was understood - ie, the ability was exercised - at roughly the onset of the behavior.

In a more complex case of "understanding" like Stuart's awkwardly worded sign, the behavioral disposition may have to be, in a sense, "constructed" - ie, there may be what amounts to a learning step. And that might explain the delay before the "eureka" moment. How this moment becomes manifest - if it does - is a separate question.

This seems a good example of Rorty's distinction mentioned earlier. It's fine to talk about the mental in casual conversation, but in serious discourse referring to it causes confusion. It appears to me that Stuart is ignoring this distinction in his use of the vocabulary of the mental as is Philip in criticizing that mixed use.

I can't speak for Philip but it seems to me that we cannot truly maintain the distinction you want to make in any real effort to address these kinds of questions because something is lost. The recommended way of speaking you want to follow strikes me as turning us into behaviorists. Wittgenstein rejected that characterization of his thinking. He recognized mental features (whatever we want to call them and, admittedly, it's hard to come up with a nomenclature satisfactory to all parties) as Philip has rightly pointed out. Radical behaviorism seems to want to reject talk of anything else but behaviors.

Is behaviorism correct? I think it has much to recommend it methodologically. But I always find myself coming back to the AI question. If someone wants to implement an artificial intelligence he or she could not do it just by setting up the right correspondence between inputs and outputs. It's not that we couldn't do a convincing simulation of this type within definite contexts. I think such a project could succeed. But if we want to create the kind of synthetic brain that can motivate human-like behaviors in an open-ended context, I expect this could only be done by giving the artificial brain the same capacities we recognize in ourselves.

This is why Searle's Chinese Room fails. It explicitly excludes an account of whatever understanding amounts to within the system and so fails to spec it in. It's not surprising, then, when a device that hasn't been specked to do more than mechanistically match symbols to symbols does nothing more than that -- which, of course, is much less than what we mean by "understanding" in ourselves. As Dennett notes, Searle leaves out the requisite complexity.

Now, while we certainly don't recognize the brain operations going on in our brains when we operate consciously, we DO recognize various features we call "mental" (or subjective, a term I know you feel uncomfortable with). We have a mental life and failure to account for that in any effort at AI implementation misses a critical piece.

So we have the behaviors of the entity and the behaviors, on a finer level, of the entity's constituent parts and elements (brains at the neuronal and electric discharge levels). If you get all of the latter right you presumably get the former right. But you can't get all of the latter right only by studying all of the former. You have to also look inside -- at what's going on within the system itself. Hence the need for reference to, and analysis of, the "mental."

The system may be nothing more than physical parts doing physical things but what's being done has a subjective side which, if left out, leaves us unable to complete the picture. It's not enough to know that the neurons are forming this or that pattern of electrical activity in the brain. You have to know what the electrical activity DOES within the system it's part of. It doesn't just trigger the next follow-up pattern although it does that, of course. The point is what do these patterns produce?

At the grossest level they produce the kinds of behaviors the behaviorist is looking at and measuring. But there is still an intermediate level, the level of the functioning system. To get the behaviors the behaviorist wants to measure, you have to have constituents doing the right things within the system they make up.

Anyway, echoing Philip, Merry Christmas and a Happy New Year to you and him both. This discussion has been helpful and enjoyable and I hope we can continue it, going forward. (The best part is the absence of rancor which I have too often found accompanies discussions of this subject.)

You seem to be missing my point about vocabularies. If you want to discover how activities like recognizing, remembering, understanding, believing, emoting, etc (what I assume you mean by the "mental life") work, you can't go looking at the physiological level for the objects of those activities: recognized objects, memories, (mis)understood sentences, beliefs, emotions, etc. As Philip has argued, those entities are linguistic objects, not physical ones. However, you can try to explain those activities in physiological terms, and the explanations often involve behavior. Defining "behaviorism" as making any reference at all to behavior makes anyone who attempts such explanations a behaviorist.

we cannot truly maintain the distinction you want to make in any real effort to address these kinds of questions because something is lost.

I'm merely suggesting the use of different vocabularies for different purposes, and in hypothesizing about how activities we describe as "mental" might work I find no need for the psychological vocabulary. If you think something is missing from my physiological description, it's incumbent on you to explain what that something is. Saying we have a "mental life" or a "subjective" aspect isn't an explanation, it's just using words which then must be explained.

we DO recognize various features we call "mental" (or subjective, a term I know you feel uncomfortable with)

My discomfort is not so much with the particular words, rather it's with the implicit assumption that your assertion is indisputable as worded. It isn't. In any event, I don't see why you think it's a relevant response to my comment. Even supposing it were true, does that mean that in hypothesizing about how understanding might work physiologically either "mental" or "subjective" must appear in the explanation? If so, why?

But you can't get all of the latter [internal "behavior"?] right only by studying all of the former [observable behavior]. You have to also look inside.

I thought that's precisely what I had done in hypothesizing the neural underpinnings. If you mean "inside" something other than the body, what is that something?

The system may be nothing more than physical parts doing physical things but what's being done has a subjective side

My impression is that you're addressing the possibility that some internal activities may be manifest in experiences, which are "subjective" (private?) in that they belong to the experiencer and can't be had by anyone else. If so, I agree, but again don't see how that's relevant to my comment.

The point is what do these patterns [of neurological activity] produce?

They produce numerous things including behavior and phenomenal experience. But when trying to explain how the production process might work, I see nothing gained by labeling any of them "mental".

I don't see how we can avoid the mental label once we grant "phenomenal experience", which you do above. This isn't about competing vocabularies, or at least not ONLY about that. It's about the experiential dimension, including phenomenality, whatever we settle on calling it. I think the approach you want to take leaves that out, even though you can't avoid referencing it, as you do above. I just think it's better to bite the bullet than talk around it.

Describing the vocabularies as "competing" isn't consistent with my point, so we're still talking past one another. In order to have further dialogs on these topics, I think some level of mutual understanding - even if not agreement - on this recurring issue is necessary. So, please bear with me in a last-ditch attempt focused on that narrow objective.

If in cocktail party conversation one recounts having "recognized" an old friend on the street, "remembered" the friend's name, "understood" a point the friend was trying to make, etc, that's what I mean by "talking about" mental events using the nontechnical day-to-day vocabulary. Any literate English speaker will know what you mean. Ie, that vocabulary is appropriate for, and serves well, the purpose of such conversation.

But if one wants to hypothesize technical explanations of how those "mental" events might be implemented in the brain, a different vocabulary is required. A mental event like remembering may involve both a neurological process and an experience (subjective, if you insist) that manifests that process. Presumably we agree that a proposed explanation of the neurological process must use the neurological vocabulary. I additionally contend that for an explanation of the whole mental event (eg, remembering) to be technical, the experience - also a mental event - must be explained in the neurological vocabulary as well. One may, of course, describe an experience using a non-technical vocabulary, but a description isn't an explanation.

So, I see the day-to-day vocabulary (including "mental" words) and a technical vocabulary as complementary rather than competing, each appropriate to a different purpose. Obviously, there has to be some mixing - introducing an explanation of mental event X in neurological terms has to involve saying something like "I propose the following neurological explanation of mental event X". But my main point is that I think considerable care should be exercised in doing so.

Yeah, we are not particularly far apart here. The only thing is that I wouldn't use neurological language except to explain what brains do (which presumably correlates with experiences had). To achieve correlation, of course, we have to be able to speak both of brain operations and experiences had. We cannot substitute one mode of discourse for the other where our aim is to describe both cause and effect. My main point is that the effect must be sought as much in the subjective realm of experiences as in the objective realm of publicly occurring, and so observable, behaviors of the organism (at brain and organism levels). Since talk of both brain operations and the organism's behaviors involves referring to public phenomena, we have to deal with the fact that talk of experiences, to some very significant degree, does not. Do we therefore abandon talk of experiences, the subjective, our "mental" lives? My view is that we don't do that at cocktail parties, so we can't do it when formulating an explanation of the elements which matter in cocktail party discourse either. It is the occurrence of THOSE elements we have to account for in explaining what brains do.

That's great - it appears we now have a common base for further discussion.

we have to be able to speak both of brain operations and experiences had. We cannot substitute one mode of discourse for the other where our aim is to describe cause and effect both.

The problem here is the explain/talk-about distinction. (I previously made a distinction between explain and describe, but explain/talk-about is better since there are problems with "describing" a private experience.) We seem to agree that with respect to a mental event - eg, understanding - we can, at least in principle, describe the brain's publicly accessible operation in the vocabulary of neurology. But it appears that there may be a disconnect when we move to a possible attendant private experience. So, here are some claims I'd make about that experience:

1. Although we may not currently be able to explain the process that produces the experience, the process must be physiological (possibly only neurological).

2. Therefore, any explanations of the process must be in the vocabulary of physiology (possibly, of neurology).

3. The person having the experience can talk about it, and will usually do so in the day-to-day vocabulary.

4. An "experience" as the word is being used here - eg, an "Aha!" moment when something is first understood - is causally inert. Ie, no physical action results from the experience.

I'm not sure number 4 is presently relevant, but since you emphasized cause and effect in your comment, it seemed that I had to throw it in. It's presumably controversial, but I'm convinced it's probably so.

I take your #1 to be the case, but not doctrinally. It's how things look to us as of now. I disagree with the rush to dualism that many, like David Chalmers, take, but wouldn't rule dualism out a priori given certain circumstances (e.g., evidence for the mental as non-physical -- though, on my view, we currently have no such evidence; or a failure of ANY physical explanation to cover what's needed in explaining consciousness -- though I think a model like Dennett's does cover what's needed, so, barring some argument showing such a model to be deficient, the second criterion for considering dualism isn't met).

Hence, my starting point with #1 is the same as yours.

I agree with your #2 because I agree with #1. I would add one caveat, however, and that is the description of the processes must include not just events but functionality.

I agree with your #3 (though I think we can ONLY talk about our experiences as such using the language of motives, beliefs -- in short, of our mental lives). I don't believe we can substitute descriptions of physical behaviors at the brain level for mental talk, and I don't believe talk which excludes our mental lives can fully cover what's going on with us, even though I agree with what I take to be the Wittgensteinian view that language is public, not private, at bottom, and so its usages are best and primarily fitted to public contexts (though, unlike some Wittgensteinians, I grant legitimacy to referential usages regarding private -- subjective -- phenomena; it's just that these uses are generally derivative and can be misleading if not carefully attended to).

I don't agree with your #4 which seems to make experience epiphenomenal which, I think, is a misleading picture.

I would describe experience (which is more than aha moments, on my view, but certainly includes them) as an aspect of the physical system that is realized functionally (what the operations of the system do). On this view, being conscious ("consciousness") is an activity of some brains sometimes (what they are doing under certain circumstances). It's not a thing in the sense of being some kind of entity parallel with physical entities but it's a thing in the sense of being something we can refer to (just as we refer to institutions, traditions and states and so forth). In this case it's operations, implemented functionalities, that we refer to.

But what is implemented is more than just publicly observable physical events. We can't observe the functions themselves (an abstraction after all) but only the physical manifestations of them (which we are sometimes tempted to equate with the functions in their entirety).

But we do experience the functions to the extent that we ARE the particular system being implemented. That's why I think it's important to preserve the notion of subjectivity and the mental in our explanation.

Whatever the physical events going on in brains are doing, they are not JUST making physical movements of the organism in which the brain is situated. They are also producing experiences, the subjective, the many features of our mental lives which we call, in the aggregate, "consciousness."

Hi, Philip, interesting post with a lot to digest. I was thinking about your emphasis on "circumstances" and was a bit unsure what to think - it is easy to start thinking: so we only use these words in very specific circumstances, but what exactly are those circumstances and how is it that I can use the words (hopefully correctly!) and yet have such difficulty specifying the circumstances? The slogan "inner states stand in need of outward criteria" approaches the issue in terms of when and what sense it makes to talk of a specific inner experience, but another way of putting the same point is to say that if the inner did not connect up with the outer, we would lose our interest in it. The issue with the statement "He understands X" is not whether this is an accurate depiction of the state of his soul (or his brain) but what makes us say this about him and what follows from this. So the concept picks out a pattern in our lives and that pattern consists of things that happen (outer things) but links them together in terms of the concept of the inner. So circumstances certainly are important :-)

Implicit in §151 is the distinction between "knowing that" and "knowing how" - the ability to continue the series being clearly an example of the latter. So it seems that "understanding" as related to that example essentially means "acquiring the ability to go on". Several ways a subject might do that are suggested: finding a formula or a rule, recognizing the beginning of a series with which the subject is already familiar. Each of these is clearly a process that is a precursor to understanding but isn't itself an understanding. And that's what I take to be the point of the last paragraph of §154 (not including the parenthetical remark - which seems to echo the boxed note between §149 and §150 in again distinguishing "mental processes" from what I would call "brain processes").

But what is the nature of the relevant brain processes? Well, the ability to do something - a "knowing how" - seems clearly a behavioral disposition (implemented as a neural structure as described in this previous comment), and such a disposition is acquired via a learning process. Consider the process of acquiring the ability to continue the series by searching for a formula or a rule. That presumably amounts to trial-and-error application of previously learned candidates. Once one that works is found, the subject's brain has the material necessary to construct what we might call a "working" behavioral disposition (to emphasize its possibly short-term persistence). At that point the subject might be disposed to say "Now I understand how [ie, have the ability to] go on".
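As a toy illustration of that trial-and-error picture (purely my own sketch - the candidate rules below are hypothetical stand-ins for a subject's previously learned material, and nothing here models actual neural processes), "finding a formula" might look like trying each known rule against the given segment of the series until one fits:

```python
# Sketch: "finding the rule" as trial-and-error over previously learned candidates.
# Hypothetical illustration only; not a model of real neural implementation.

def arithmetic(prefix):
    """If the terms differ by a constant, return the next term; else None."""
    diffs = {b - a for a, b in zip(prefix, prefix[1:])}
    if len(diffs) == 1:
        return prefix[-1] + diffs.pop()
    return None

def geometric(prefix):
    """If the terms share a constant ratio, return the next term; else None."""
    if 0 in prefix:
        return None
    ratios = {b / a for a, b in zip(prefix, prefix[1:])}
    if len(ratios) == 1:
        return prefix[-1] * ratios.pop()
    return None

CANDIDATES = [arithmetic, geometric]  # the subject's "previously learned" rules

def now_i_can_go_on(prefix):
    """Try each candidate in turn; the first that fits the given segment
    becomes the 'working' disposition for continuing the series."""
    for rule in CANDIDATES:
        if rule(prefix) is not None:
            return rule
    return None  # no learned rule fits; the subject cannot yet "go on"

rule = now_i_can_go_on([2, 4, 6, 8])
print(rule.__name__, rule([2, 4, 6, 8]))  # prints: arithmetic 10
```

The point of the sketch is only that success here is the acquisition of an ability (a function the subject can now apply), not the occurrence of any particular accompanying experience.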

In general, behavioral dispositions are a function of the subject's history and current circumstances (ie, they are context-dependent). And as Philip notes, in order to have any luck at finding either, the subject has to have a wealth of a priori knowledge - ie, the circumstances have to be right. Whether this is the same sense in which "circumstances" is being used in these sections of PI isn't clear to me.

Another possible interpretation of "circumstances" is suggested in §155, which discusses the experience attendant to "being able to go on" - ie, having the requisite behavioral disposition. Which raises the question: can subjects be aware of their own dispositions in the absence of overt execution of them? Any such awareness must result from a private experience, perhaps the type I call a "rehearsal" of a behavioral disposition - the silent inner monolog/dialog that sometimes precedes overt execution of a disposition. Such a rehearsal if successful might result in the subject's then being disposed to utter "Now I understand how to go on". One could describe this by saying the "circumstances warrant the utterance", but I'd be inclined not to do so since "warrant" suggests (at least to me) epistemic justification, which seems inapplicable to the example.

W's treatment of circumstances is certainly something I want to say more about - or, to put it more honestly, get clearer in my mind.

Reading the text, I was struck by how W sets up circumstances as the "answer" (note scare quotes!) in 154-155. But the conclusion of the discussion of reading (which is specifically introduced to highlight the importance of circumstances) is more elusive and less emphatic than you might expect given what he says earlier. My summary of the argument probably forces things a bit in this regard. I make it sound like he reaches a "ta-da!" conclusion when he doesn't really do that at all.

In one sense it's natural enough that he doesn't - W hasn't actually finished the analysis of understanding at 184 and the standard division between understanding and rule-following (which I've followed) is somewhat artificial. So we might have to wait a bit until we're in a position to fully appreciate what he's getting at.

All the same, a few thoughts.

First, I think W doesn't reach a "ta-da!" conclusion because his point about circumstances doesn't answer the question in the way that we might assume. It's not the missing "something extra" in that sense. It seems to me there's a connection here with the discussion of meaning and use. Use is not to be thought of as a kind of temporally extended object which is the meaning of the word. It is not an alternative candidate in that sense. Rather, it represents a fundamentally different way of considering meaning - one that places it within the broader flow of our lives. So too with circumstances. He's not so much answering the question "what's the extra thing?" as completely reorienting our approach to the issue. Moreover, this reorientation is of a piece with what has gone before regarding meaning and use. It is about bringing us to see how these concepts are necessarily embedded in our form of life. I don't want to jump ahead, but I think this becomes clearer in the discussion of rule-following (and private language as well).

Secondly, I think it's not so much that it's difficult to say which circumstances are important as that you cannot set out in advance which ones are relevant. That depends (as usual) on the question at hand. There is no set list of circumstances that will transform mere behaviour into understanding. That would be treating circumstances as the "extra thing" in exactly the way W warns us against in 179.

As I mentioned to Frank (above), I'm going to say more on circumstances and the role they play in W's remarks about understanding. That might make things clearer. Then again, it might not. :)

First, however, I'm going to write something on dispositions, brain-states and all that. For, as far as I can see W is pretty clear on this: the concept of understanding does not and cannot rest upon theories about either brain-states or mechanistic accounts of behaviour.

Fair enough. But keep in mind that my comment was explicitly limited to "understanding" in the sense of acquiring an ability, viz, "knowing how to go on". But as you have noted, "understanding" in general has multiple uses. Perhaps other uses either can't be explained in terms of behavioral dispositions or in some contexts are more usefully explained in the psychological vocabulary.

Point taken, Charles, but I still think Wittgenstein would see considerable problems with the line you are proposing. Although he talks a lot about the importance of "behaviour" I don't think he means the same as you by that term.