Thursday, December 06, 2012

The Analytic Functionalists Were (Probably) Right!

The mind-body
problem asks: How are mental states related to physical states of the brain,
the body, and behavioral states more generally? Functionalists claim that
mental states are identical with functional roles, defined as relations between
environmental impingements, external behaviors, and other mental states. Analytic functionalists contend that
these identities are imposed by folk psychological theory. Thus, analytic functionalism makes a specific empirical claim: that ordinary people
conceptualize mental states in terms of their functional roles. Notoriously, this claim has met with intuition-pump-style
challenges, such as Block’s Nation of China and Searle’s Chinese Room, which
purport to demonstrate via our own intuitive assessments of cases that we do
not ordinarily conceptualize
mental states solely in terms of their functional roles. Discussion of these
examples constituted a large chunk of philosophy of mind near the end of the 20th
century, without reaching any very definite conclusions. But, as Lewis (1972)
points out, analytic functionalism’s empirical claim “can be tested, in principle, in whatever way any hypothesis
about the conventional meaning of our words can be tested.”

Recently, an experimental
challenge to this empirical claim has arisen in the form of the embodiment hypothesis (discussed here, here, and here). According to the embodiment
hypothesis, while
information about behavior and other functional cues may be sufficient for ordinary
attributions of intentional states, only entities with the right kind of
biological body are typically thought to have phenomenally conscious
experiential states. Proponents of the
embodiment hypothesis have argued that empirical research on how people
attribute mental states to groups, robots, and God supports their view (here is
a succinct overview). Based on this research, it may have seemed reasonable to
conclude that analytic functionalism rested on a mistaken conception of how
people talk and think about mental states. However, Wesley Buckwalter and I
(along with others) have recently been uncovering a number of results that cast
a more favorable light on analytic functionalism and seem to challenge the
embodiment view. As discussed here, people are perfectly willing to attribute
phenomenal states to robots when they are described in a way that calls to mind
certain key functionalist assumptions. Mental state attributions to groups
constitute evidence for the embodiment view only on the assumption that those
attributions are to groups over-and-above
their members—an assumption that Adam Arico, Shaun Nichols and I have
argued against here. And Wesley and I have recently found that people are no
less willing to attribute happiness and anger to a disembodied ghost than to
his physical human counterpart, provided the ghost satisfies the appropriate
causal roles. These findings led us to re-evaluate crucial studies taken to
support disembodiment, and in many cases we found that those studies fail to
control for function in important ways. As we discuss in the attached paper,
this has led us via abductive argument to conclude that the analytic
functionalists were (probably) right. It seems that ordinary people do conceive
of mental states as functional relations between environmental impingements, external behaviors, and other
mental states.

7 Comments

[Cross-posted at Brains]
Very interesting. Here are some thoughts/questions that could be addressed in future work:

First, the distinction between "individuals" and "groups" seems to be relative to a level of description: On a microscopic scale, our brains are vast groups of neurons that interact and are closely related; but so are group entities like corporations, though admittedly to a lesser extent. One question is then whether the folk, on being given a detailed explanation of how some part of the brain works, or of neurons, dendrites and synapses in general, would feel an intuitive pull towards a dualistic view: they might think such an ensemble couldn't be conscious any more than a corporation could, and posit a more unified agent like a soul (BTW, this resembles Scholastic arguments for the simplicity of the soul).

Another question is, if the folk would still think that brains could be the subjects of intentional and phenomenal states but NOT that corporations could, whether the difference would arise because the members of a corporation have thoughts and experiences of their own, while neurons are thought not to have them. If so, that would seem to show that people fail to ascribe thoughts and experiences to corporations because they think that if the members of a group have thoughts and experiences the group cannot--the mental states of the members "crowd out" any potential mental states of the group--not just because they are a group. On this view groups could still be conscious if their members are not.

Thanks for your comments Jason. In general, I suppose not that people think brains are conscious or have thoughts, but rather that people do. Of course, they may also think (and I suppose they do think) that the brain plays an important role in making possible the biological functional system that is a human being.

On the first point, I think that would be interesting to look at in contrast with thought experiments like Block's Nation of China. I think that thought experiment and your proposed microscopic focus on the brain elicit the responses they do (and I think you're probably right about the responses your proposal would elicit) because they draw our focus away from the relevant functional roles in terms of which we conceive of mental states. This is why I agree with Dennett when he writes in Consciousness Explained that Block and Searle's (Chinese Room) "thought experiments rely on the same misdirection of imagination" (435).

A question about folk-psych and representationalism: have you looked at people's reactions to inversion scenarios, i.e., inverted qualia? My impression is that these types of scenarios are folk-psychologically coherent, suggesting that there's more to the folk view of experiential states than just representation. So robots may have qualia and qualia may be "disembodied" in the relevant sense, but they may still be problematic for analytic functionalism and strong representationalism.

Another question (my apologies if these things are discussed by you in the paper--I did my usual lazy scan of the thing before blogging wildly...): how seriously should we take the term "analytic" here? Might folk notions be revised over time? Perhaps the folk notions are evolving under pressure from science (and science fiction, for that matter) and so what the folk meanings are at present doesn't pin things down quite as tightly as Lewis (or maybe readers of Lewis like Block) think. Thoughts?

Thanks for the comments, Josh. Good suggestion about inverted qualia. I think that is something we need to look at. But it will obviously be difficult to get at in a meaningful way. The difficulty would be in asking about someone (S) whose experiences of bananas are like your experiences of blueberries, for example, without leading them to suppose that when S sees a banana she represents blue. At the same time, we don't want to make cases so complicated that it's no longer reasonable to suppose that data reflect competent application of mental state concepts. I invite suggestions from readers on ways to go about doing this.

As for the second question, it's another good one, and something we don't say a lot about here. We want to emphasize that the process of mental state attribution we are interested in is rational in nature, by which we mean it involves reasoned (if non-conscious) application of concepts. The ascriptions of, for example, pain states that we are interested in are ones that proceed on the basis of evidence--evidence, we maintain, that the entity satisfies the functional role identified with pain. This is why it would be incorrect to refer to what we are interested in with the currently fashionable term "mind perception". People may perceive minds, via systems that are domain specific, informationally encapsulated, and obligatory. But that's not really what we're interested in. (They presumably do the thing we're interested in too, since, for example, they attribute mental states to characters in stories when the relevant perceptual stimuli are not available.)

So, given that we're interested in the application of "person-level" concepts, it's reasonable to ask where these concepts and the theories of their application come from and whether they are, in any sense, fixed. (I also think it's safe to say that these sorts of questions were not at the forefront of philosophy of mind during the heyday of analytic functionalism!) I can't posit specific answers to these questions now. But I will say that we need to be careful in answering these questions to tease apart whether the notion of, say, pain is changing under pressure from science, or whether the notion of what things can satisfy pain conditions is changing.
In other words, the notion of what pain is may remain constant while the notion of what a robot can do and be affected by changes, and Alan Turing could still be right when he said, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

You say, "In general, I suppose not that people think brains are conscious or have thoughts, but rather that people do. Of course, they may also think (and I suppose they do think) that the brain plays an important role in making possible the biological functional system that is a human being."

Maybe, but I think they're not incompatible: It is indeed true that it is persons (or sentient beings in general) that have thoughts and feelings, but it may be that they do so in virtue of having a brain that itself has them. It is also true that if someone kicks a ball, it is *they*, and not merely their leg, that kicked it; but it's still the leg which actually impacts on it and sets it in motion. However, it might be that not a lot hangs on this: if we identify a person with their whole body, we have a group of various kinds of cells which intuitively are on a par with neurons, and which, even when considered as a whole, the folk (I think) would regard as no more capable than neurons of having thoughts and feelings.

And thanks for the references -- I'll try and have a look when I have the time.

Surely it's hard to tell anything from this kind of study, because the philosophical claims are about what constitutes being phenomenally conscious (or is identical to it), whereas the ordinary subjects might simply be treating the presence of the relevant functional properties as evidence for the presence of phenomenal consciousness. Or is the point supposed to be that, if people attribute on the basis of functional characteristics, the burden of proof is on those who claim that they nonetheless might not think of those functional properties as constituting phenomenal properties? Why think that?

Thanks for the comment, David, but I'm not sure what study you mean to be referring to. The paper discusses over a dozen studies, many of which use different methodologies. What specific criticism do you have in mind?