Where is my mind?

Jerry Fodor

If there’s anything we philosophers really hate it’s an untenable dualism. Exposing untenable dualisms is a lot of what we do for a living. It’s no small job, I assure you. They (the dualisms, not the philosophers) are insidious, and they are ubiquitous; perpetual vigilance is required. I mention only a few of the dualisms whose tenability we have, at one time or other, felt called on to question: mind v. body; fact v. value; knowledge v. true belief; induction v. deduction; sensing v. perceiving; thinking v. behaving; denotation v. connotation; thought v. action; appearance v. reality . . . I could go on. It is, moreover, a mark of an untenable dualism that a philosopher who is in the grip of one is sure to think that he isn’t. In such a case, therapy can require millennia of exquisitely subtle dialectics. No wonder philosophers are paid so well.

So, for example, you might have thought that the distinction between, on the one hand, a creature’s mind and, on the other, the ‘external’ world that the creature lives in is sufficiently robust to be getting on with; and that commerce between the two, both in perception and in action, is typically ‘indirect’, where that means something like ‘mediated by thought’. But plausible as that may seem, the thesis of Andy Clark’s new book, Supersizing the Mind, is that the mind v. world dualism is untenable.

The best way through Clark’s book is to start by reading the foreword by David Chalmers and the paper by Clark and Chalmers that is reprinted as an appendix. These are short, informal presentations of the so-called ‘Extended Mind Thesis’ (EMT), of which the rest of the book is an elaboration and discussion. Here, then, is a passage from the foreword: ‘A month ago,’ Chalmers tells us,

I bought an iPhone. The iPhone has already taken over some of the central functions of my brain . . . The iPhone is part of my mind already . . . [Clark’s] marvellous book . . . defends the thesis that, in at least some of these cases the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me . . . When parts of the environment are coupled to the brain in the right way, they become parts of the mind.

Similarly, later on in the book, we’re invited to consider the cases of Otto and Inga, both of whom want to go to the museum. Inga remembers where it is and goes there; Otto has a notebook in which he has recorded the museum’s address. He consults the notebook, finds the address, and then goes on his way. The suggestion is that there is no principled difference between the two cases: Otto’s notebook is (or may come with practice to serve as) an ‘external memory’, literally a ‘part of his mind’ that resides outside his body. Correspondingly, Otto’s consulting his notebook and Inga’s consulting her memory are, at least from the viewpoint of an enlightened cognitive scientist, both cognitive processes:

Such considerations of parity, once we put our bioprejudices aside, reveal the outward loop as a functional part of an extended cognitive machine. Such body-and-world involving cycles are best understood . . . as quite literally extending the machinery of mind out into the world – as building extended cognitive circuits that are themselves the minimal material bases for important aspects of human thought . . . Such cycles supersize the mind.

That’s pretty impressionistic; but unless I’ve missed it, there isn’t an exposition of EMT that is markedly less metaphorical in the book. So, could it be literally true that Chalmers’s iPhone and Otto’s notebook are parts of their respective minds? Come to think of it, do minds literally have parts? If so, do some minds have more parts than others? Roughly, how many parts would you say your mind has? (Notice that the answer mustn’t rely on assuming that your mind is your brain; brains are untendentiously of the inside; so if mind/brain identity is true, it follows that EMT is not.) Or, try this vignette: Inga asks Otto where the museum is; Otto consults his notebook and tells her. The notebook is thus part of an ‘external circuit’ that is part of Otto’s mind; and Otto’s mind (including the notebook) is part of an external circuit that is part of Inga’s mind. Now ‘part of’ is transitive: if A is part of B, and B is part of C, then A is part of C. So it looks as though the notebook that’s part of Otto’s mind is also part of Inga’s. So it looks as though if Otto loses his notebook, Inga loses part of her mind. Could that be literally true? Somehow, I don’t think it sounds right.
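Fodor's reductio here turns on a single logical step, the transitivity of parthood. For readers who like the inference made fully explicit, it can be put as a one-line formal argument; this is a sketch in Lean with hypothetical names, not anything drawn from the book:

```lean
-- A hypothetical formalisation of the transitivity step in the vignette:
-- if A is part of B and B is part of C, then A is part of C; so the
-- notebook that is part of Otto's mind is part of Inga's mind too.
theorem notebook_in_inga {α : Type} (PartOf : α → α → Prop)
    (trans : ∀ a b c : α, PartOf a b → PartOf b c → PartOf a c)
    (notebook otto_mind inga_mind : α)
    (h1 : PartOf notebook otto_mind)
    (h2 : PartOf otto_mind inga_mind) :
    PartOf notebook inga_mind :=
  trans notebook otto_mind inga_mind h1 h2
```

The argument is valid as it stands; whether its conclusion is tolerable is, of course, exactly Fodor's point.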

‘Just the sort of quibble you’d expect from a philosopher. Why don’t you guys loosen up a little? No wonder you’re so badly paid.’ Or, as Chalmers puts it: ‘The proponent of the extended mind should not be afraid of a little revisionism. Even if commonsense psychology marks a distinction here, the question still arises of whether this is an important distinction that ought to be marked in this way.’ Fair enough; in fact, right on. But the worry isn’t that a sophisticated psychology may require us to say things that sound funny. It’s that the stuff about parts of minds and the locations of the parts is all that Clark/Chalmers tell us about what, exactly, the EMT asserts. I suppose they think they could make sense of such talk if they were seriously challenged; but I don’t know how, and Clark/Chalmers aren’t telling. EMT isn’t literally true unless Chalmers’s iPhone is literally an (external) part of his mind; ‘literally’ is among Clark/Chalmers’s favourite adverbs. If minds don’t literally have parts, how can cognitive science literally endorse the claim that they do? That Juliet is the sun is, perhaps, figuratively true; but since it is only figuratively true, it’s of no astronomical interest.

These sorts of consideration are among the staples of courses on campus with names like Phi Mind 101; so it bothers me that they don’t bother Clark and Chalmers. Perhaps it’s because of their optimistic understanding of where the burden of explication lies. ‘To provide substantial resistance’ to EMT, ‘an opponent has to show that Otto’s and Inga’s cases differ in some important and relevant respect.’ This is implicit in what Clark/Chalmers call the ‘parity principle’: ‘If . . . a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process.’ If the parity principle doesn’t exactly tell us what EMT asserts, at least it purports to provide a sufficient condition for its being true that a cognitive process is literally extended. Perhaps that’s good enough for Clark/Chalmers’s expository purposes.

Let me tell you about my new vacuum cleaner. Unlike my old vacuum cleaner, this one is a sort of robot. You don’t have to push it around to get the dirt up: it’s got wheels and it rushes from place to place, vacuuming wherever it happens to be. Which path it decides to take determines what it happens to pick up. It decides what path to take at a given point by executing a ‘random walk’. If it bumps into something (the couch as it might be), the shock of the bump causes the robot to turn an arbitrarily determined number of degrees in an arbitrarily selected direction. It then proceeds to vacuum some more and continues to do so, da capo, until somebody remembers to turn it off. From time to time, it gets trapped behind things, or under things, and sometimes it gets tangled up in a fringe; but, by and large, it works pretty well. The rugs do get cleaner. And my grandchildren adore it. What they particularly like is feeding it. They put bits of detritus in its path, which the robot then duly gobbles up. The cat disdains the thing but the rest of the family is rather fond of it.
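The robot's 'decision procedure' is simple enough to write down. Here is a minimal toy sketch of the behaviour the review describes; all names are hypothetical, and nothing here models any actual product:

```python
import random

# Illustrative sketch only: the review's robot vacuum "decides" its path
# by a random walk. On a bump, it turns an arbitrarily determined number
# of degrees in an arbitrarily selected direction, then carries on.

def bump_turn(heading_degrees: float) -> float:
    """Return a new heading after a collision with, say, the couch."""
    turn = random.uniform(0.0, 180.0)        # arbitrary magnitude
    direction = random.choice([-1.0, 1.0])   # arbitrary left or right
    return (heading_degrees + direction * turn) % 360.0

def run(steps: int, bump_probability: float = 0.2) -> float:
    """Vacuum until somebody remembers to turn it off (a step budget)."""
    heading = 0.0
    for _ in range(steps):
        if random.random() < bump_probability:  # bumped into something
            heading = bump_turn(heading)
    return heading
```

Note that the machine's entire 'mental life' is one bare number, a heading; nothing in the loop represents couches, rugs or anything else, which is the point the review goes on to press.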

Is what my robot does when it ‘decides’ to change course a sort of thing which if it had happened inside the robot, ‘I would have had no hesitation in accepting as part of [a] cognitive process?’ The trouble with this way of phrasing the issue is that, in the crucial cases, one doesn’t know how to apply it. That’s because what one thinks about the parity principle itself depends on what one thinks about EMT. If it’s your view (as I guess it’s mine) that mental events are ipso facto ‘internal’, then you will, of course, deny that something that happens on the outside could be mental. It’s irrelevant whether what the vacuum cleaner did would have counted as making a decision if the robot had done it on the inside because, according to mind/world dualists, making a decision isn’t the kind of thing that can happen on the outside. Clark would doubtless say that this begs the question against EMT, and he would be right. But a mind/world dualist would say that the parity principle begs the question too, only in the opposite direction. And, anyhow, what on earth are we talking about? I guess I understand the thesis that my vacuum cleaner changed course because it collided with the couch in my living-room. But how am I to understand the hypothesis that it would (or wouldn’t) have changed course if it had collided with the couch in my head? All that it literally has inside is rug dust and cat hair: I know because I’ve looked. There isn’t room for a couch in my vacuum cleaner; or in my head.

This would seem to be at best a stand-off for EMT; and, in fact, the line of argument that Clark uses doesn’t rely on the parity principle after all. His real argument is that, barring a principled reason for distinguishing between what Otto keeps in his notebook and what Inga keeps in her head, there’s a slippery slope from the one to the other. That being so, it is mere prejudice to deny that Otto’s notebook is part of his mind if one grants that Inga’s memories are part of hers. That being Clark’s argument, the parity principle doesn’t come into it; which, as we’ve been seeing, is probably just as well. But it does bear emphasis that slippery-slope arguments are notoriously invalid. There is, for example, a slippery slope from being poor to being rich; it doesn’t follow that whoever is the one is therefore the other, or that to insist on the distinction is mere prejudice. Similarly, there is a slippery slope between being just a foetus and being a person; it doesn’t follow that foetuses are persons, or that to abort a foetus is to commit a homicide.

But never mind. I propose, out of sheer goodness of heart, not to cavil at slippery slopes. I can afford not to because there is, after all, a principled difference between Otto’s case and Inga’s; and, more generally, between what’s mental and what isn’t. The general drift of what I’m about to say has been in the literature for centuries; I can’t imagine why Clark doesn’t mention it.

The mark of the mental is its intensionality (with an ‘s’); that’s to say that mental states have content; they are typically about things. And (with caveats presently to be considered) only what is mental has content. It’s thus unsurprising that considerations about content are most of what drives intuitions about what’s mental. For example, Clark (and Heidegger) notwithstanding, tools – even very clever tools like iPhones – aren’t parts of minds. Nothing happens in your mind when your iPhone rings (unless, of course, you happen to hear it do so). That’s not, however, because iPhones are ‘external’, it’s because iPhones don’t, literally and unmetaphorically, have contents. But what about an iPhone’s ringing? That means something; it means that someone is calling. And it happens on the outside by anybody’s standard. And similarly, what about Otto’s notebook? It has lots of content (it contains, for example, the phone numbers of lots of his friends); and it’s about something – it’s about, for example, his friends’ phone numbers. And also, come to think of it, what about iPhones that have had numbers programmed in? So, even if shovels and the like can’t be parts of minds, how does insisting on the intensionality of the mental rule out notebooks and iPhones?

That’s a fair question, and part of what I’ve been saying wasn’t quite true. What I should have said isn’t that only what’s literally and unmetaphorically mental has content, but that if something literally and unmetaphorically has content, then either it is mental (part of a mind) or the content is ‘derived’ from something that is mental. ‘Underived’ content (to borrow John Searle’s term) is the mark of the mental; underived content is what minds and only minds have. Since the content of Otto’s notebook is derived (i.e. it’s derived from Otto’s thoughts and his intentions with a ‘t’), the intensionality of its entries does not argue for its being part of Otto’s mind. So the intensionality of notebooks can be granted by someone who doesn’t think that notebooks are the sorts of thing that could be parts of minds.

Once one has grasped the intimate relation between being mental and being intensional, there are lots of ways of blocking Clark’s slippery slope. Here’s one. The intensionality of notebooks and the like derives from the mental states and processes of people who use them. The inscriptions in Otto’s notebook mean something because there was something he meant (meant to record) when he made them. A thing’s having derived intensionality thus depends on someone’s thinking about it (having beliefs, desires, intentions and so forth in relation to which the thing is, as philosophers say, the ‘intensional object’). This is markedly untrue of mental things. Inga doesn’t have to think about (or, in any literal sense, ‘consult’) her memories; she just has them and proceeds on her way in light of them.

So there is, after all, a principled difference between what Inga’s memories have, and what Otto’s entries have. Maybe Inga’s memories are part of her mind, but Otto’s notebook isn’t part of his. At one point, Clark almost sees this; but then he lets it slip away:

The alternative [to saying that, just as Inga’s memories are part of her mind, so the notebook Otto consults is part of his] complicates the explanation unnecessarily . . . [On the dualist’s account] there will be an extra term . . . We submit that [to explain things the dualist’s way] is to take one step too many. It is pointlessly complex to explain [Otto’s actions in terms of his beliefs about his notebook] in the same way that it would be pointlessly complex to explain Inga’s actions in terms of her beliefs about her memory . . . In an explanation, simplicity is power.

But this is quite wrong. Considerations of simplicity come into play when we are trying to choose between theories that are of otherwise equivalent explanatory power. But supposing that Otto’s notebook is in his head leads to all sorts of explanatory failures that supposing that it isn’t avoids. Here’s a sketch of one of them.

Externalists and internalists share the assumption that representational states and processes (memories and beliefs, for example) play an essential role in cognition. Their disagreement is about where these representational states and processes reside. In Otto’s case, according to externalists, some of them are ‘outside’, in the notebook. That’s where he keeps, for example, his belief that the museum is on 53rd Street. But what about Inga? Suppose we agree, for the sake of argument, that what goes on in her case is that she stores her beliefs in her (internal) memory, which she ‘consults’ when she has a trip to the museum in mind. How does that work? Surely it’s not that Inga remembers that she remembers the address of the museum and, having consulted her memory of her memory then consults the memory she remembers having, and thus ends up at the museum. The worry isn’t that that story is on the complicated side; it’s that it threatens regress. It’s untendentious that Otto’s consulting ‘outside’ memories presupposes his having inside memories. But, on pain of regress, Inga’s consulting inside memories about where the museum is can’t require her first to consult other inside memories about whether she remembers where the museum is. That story won’t fly; it can’t even get off the ground.

There are several morals; one is that there is, after all, a built-in asymmetry between Otto’s sort of case and Inga’s sort. Otto really does go through one more process than Inga: consulting his notebook really is a link in the causal chain that runs from his wanting to go to the museum to his getting there. By contrast, Inga’s ‘consulting her memories’ is a fake; and it’s a particularly naughty fake because 1. it makes Inga’s case look more like Otto’s than it can possibly be, and 2. it obscures the critically important fact that the (derived) intensionality of what happens on the outside depends ontologically on the (underived) intensionality of what happens on the inside. Externalism needs internalism; but not vice versa. External representation is a side-show; internal representation is ineliminably the main event.

Externalists sometimes say that we can do without internal representations in psychological explanations because ‘the external world is its own best model’ and the external world is, of course, on the outside. This remark passes for an insight in some externalist circles, but it’s fatuous. For one thing, as Clark rightly notices, your internal model of the world contains stuff that the world itself does not; this happens not just when your beliefs are false but also when they are hypothetical (‘if there are clouds, there will be rain’ can be true even if there aren’t any clouds); or when they are modal (‘it might rain’ can be true even if it doesn’t rain); or when they are in the past or future tense (‘it used to rain here a lot’ can be true even if it doesn’t rain here anymore). Say, if you like, that my vacuum cleaner uses the world itself as its model of the world; my vacuum cleaner does, after all, change direction when it bumps into the couch, and the couch is, after all, in the world. But then, my vacuum cleaner is very stupid.

It can’t, for example, turn because it thinks that there may be a couch; or bring its umbrella because it thinks it will rain. Doing those sorts of things requires not just a representation of how the world is, but also a representation of how it would be if . . . And the intensional properties of such representations (unlike those of Otto’s notebook) would have to be underived if what my robot does is to count as literally making a decision. And (again unlike Otto’s notebook) they would have to be on the inside where they can cause the machine’s behaviour without courting regress. The world itself satisfies neither of these constraints; so the world can’t play the kind of role in causing behaviour that internalists and externalists both think that representations do. The world can’t be its own best representation because the world doesn’t represent anything; least of all itself. The world doesn’t mean anything and it isn’t about anything; it just is. So, contra EMT, there would seem to be plenty of differences between, on the one hand, Otto’s notebook and my vacuum cleaner and the world, and, on the other, Inga’s memories. If these aren’t the kinds of difference that make the distinction between having a mind and having a notebook ‘principled’, I can’t imagine what kinds of difference would. There is a gap between the mind and the world, and (as far as anybody knows) you need to posit internal representations if you are to have a hope of getting across it. Mind the gap. You’ll regret it if you don’t.

Letters

Jerry Fodor’s amusing, insightful, but fatally flawed review of my book, Supersizing the Mind, seems committed to the idea that states of the brain (and only states of the brain) actually manage to be ‘about things’: to ‘have content’ in some original and underived sense (LRB, 12 February). ‘Underived content,’ he says, ‘is what minds and only minds have.’ That’s why, as Fodor would have it, states of non-brainbound stuff (like iPhones, notebooks etc) cannot even form parts of the material systems that actually constitute the physical basis of a human mind. But just how far is he willing to go with this?

Let’s start small. There is a documented case (from the University of California’s Institute for Nonlinear Science) of a California spiny lobster, one of whose neurons was deliberately damaged and replaced by a silicon circuit that restored the original functionality: in this case, the control of rhythmic chewing. Does Fodor believe that, despite the restored functionality, there is still something missing here? Probably, he thinks the control of chewing insufficiently ‘mental’ to count. But now imagine a case in which a person (call her Diva) suffers minor brain damage and loses the ability to perform a simple task of arithmetic division using only her neural resources. An external silicon circuit is added that restores the previous functionality. Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system. That alone, if you accept it, establishes the key principle of Supersizing the Mind. It is that non-biological resources, if hooked appropriately into processes running in the human brain, can form parts of larger circuits that count as genuinely cognitive in their own right.

Fodor seems to believe that the only way the right kind of ‘hooking in’ can occur is by direct wiring to neural systems. But if you imagine a case, identical to Diva’s, but in which the restored (or even some novel) functionality is provided – as it easily could be – by a portable device communicating with the brain by wireless, it becomes apparent that actual wiring is not important. If you next gently alter the details so that the device communicates with Diva’s brain through Diva’s sense organs (piggybacking on existing sensory mechanisms as cheap way stations to the brain) you end up with what David Chalmers and I dubbed ‘extended minds’.

There is much more to say, of course, about the specific ways that non-implanted devices (iPhones and the like) might or might not then count, in respect of some enabled functionality, as being appropriately integrated into our overall cognitive profiles. Fodor seems to believe that such integration is impossible where parts of the extended process involve what he describes as the ‘consultation’ (and then the explicit interpretation) of an encoding, rather than the simple functioning of that encoding to bring about an effect. This kind of consideration, however, cannot distinguish the cases in the way Fodor requires. Think of the case where, to solve a problem, I first conjure a mental image, then inspect it to check or to read off a result. Imagining the overlapping circles of a Venn diagram while solving a set-theoretic puzzle, or imagining doing long division using pen and paper and then reading the result off from one’s own mental image, would be cases in point. In each case we have a process that, while fully internal, involves the careful construction, manipulation and subsequent consultation of representations whose meaning is a matter of convention.

As a final real-world illustration, consider the trials (at MIT Media Lab) of so-called ‘memory glasses’ as aids to recall for people with impaired visual recognition skills. These glasses work by matching the current scene (a face, for example) to stored information and cueing the subject with relevant information (a name, a relationship). The cue may be overt (consciously perceived by the subject) or covert (rapidly flashed and hence subliminally presented). Interestingly, in the covert case, functionality is improved without any process of conscious consultation on the part of the subject. Now imagine a case in which the same cueing is robustly achieved by means of a hard-wired connection to the brain. Presumably Fodor would allow the latter, but not the former, as a case of genuine cognitive augmentation. Yet it seems clear that the intervention of visual sensing in the former case marks merely an unimportant channel detail. The machinery that makes minds can outrun the bounds of skin and skull.