More Peanuts

Jerry Fodor

‘Dr Livingstone, I presume?’ Stanley was spot on: it was Dr Livingstone. Elsewise his presuming so wouldn’t have become the stuff of legend. A question suggests itself: how did he manage to presume so cleverly? Of all the things that Stanley might have presumed, how did he hit on the one that was both pertinent and true? Why didn’t he presume Queen Victoria, for example? Or Tower Bridge?

At first blush, that sounds like an easy sort of question. In fact, it’s an abyss. Though philosophers and psychologists have been working on such matters for a couple of millennia, the best they’ve got is less a theory than a programme of research. That is the background for José Luis Bermúdez’s book, so let’s start with it.

This seems safe: Stanley must have done some thinking. He must have inferred, on the basis of his beliefs, memories, hunches (etc) about the situation in which he found himself, that it was Livingstone he ought to presume. ‘The situation in which he found himself’ thus included not only whatever was perceptually available at the scene, but also a lot of cognitive commitments that Stanley brought with him. If he inferred that it was Livingstone, it must have been from those sorts of premise that he did so; he had nothing else to go on. I suppose that’s all pretty much truistic; still, it prompts some useful reflections.

Notice, to begin with, the intimate relation between thinking and inferring. At the crucial point, Stanley’s thinking must have consisted of drawing inferences from what he independently believed. It’s plausible that at least some kinds of thinking just are processes of drawing inferences. It’s the same for a lot of other things the mind does, such as learning, perceiving and planning. The picture that emerges is of the mind (or the brain if you prefer) as some kind of inferring machine; perhaps some kind of computing machine, since computations are themselves plausibly construed as chains of inference.

Second, if the mind is in the inference-drawing line of work, there must be symbols in which it formulates its premises and conclusions; there are no inferences without a medium (or media) in which to couch them. That matters because you can’t say just anything you like in whatever kind of symbols you choose. Pictures can’t express negative or conditional propositions – it’s not raining, or if it’s raining that will spoil the picnic. But negative and conditional thoughts play a central role in the kinds of inference that minds routinely carry out. (It’s certainly not Queen Victoria; if it’s certainly not Queen Victoria, then perhaps it’s Dr Livingstone. So perhaps it’s Dr Livingstone.) Such considerations suggest, at a minimum, that the mind doesn’t do all its thinking in pictures. In fact, they suggest a strategy for empirical research: find out what kinds of inference minds can make, then figure out what kinds of symbol they would need in order to make them. You will arrive, if all goes well, at a theory of those kinds of mental representation that figure in thinking, perceiving, learning and the like, insofar as these are inferential processes. It turns out that this kind of research is feasible, and not without significant results.

It seems likely, for example, that the kinds of representation required as the vehicles of thought are not very different from what ‘natural languages’ (English, German, whatever) provide as vehicles of communication: sentences, or something of the sort. Hence the talk in cognitive science of a language of thought in which our cognitive processes are carried out. This seems hardly surprising. English is used to communicate our thoughts, so it must be that English is rich enough to express their content. So English, or something like it, is prima facie plausible as a model of the system of symbols that we think in. That’s very convenient because we already have in hand quite a powerful account of (some of) the kinds of inference that natural languages can be used to formulate: we have logic. So the inferential account of mental processes offers a nexus between the kinds of inquiry that cognitive psychologists pursue, and the kind that logicians do. This is good news: we can all use all the help we can get.

Letters

Jerry Fodor underestimates the complexities of Stanley’s first words to Livingstone (LRB, 9 October). He was referring jokingly to the line ‘Mr Stanley, I presume’ in The School for Scandal, not for Livingstone’s benefit – missionaries are above that kind of thing – but with an eye to posterity. Of course, it may all have been unconscious. But Fodor is surely right on the main point: we all infer like mad from the beginning to the end of life, and are only rarely conscious of the fact. Stanley’s joke is a fair example. He was making very complicated inferences about the impression he would make back home.

However, I question Fodor’s suggestion that Stanley-type inferences could be effected by ‘some kind of computing machine since computations are themselves plausibly construed as chains of inferences’. They are nothing of the sort. They are chains of implemented instructions that only look like inferences to real inferrers, which people are and computers are not.

Bill Myers

Bill Myers writes that computer-generated inferences are ‘nothing of the sort. They are chains of implemented instructions that only look like inferences to real inferrers, which people are and computers are not’ (Letters, 23 October). Alan Turing would have asked: how can anyone (or anything) tell the difference between something that looks like an inference and an inference?

Adrian Bowyer
University of Bath

I have always thought that Stanley was saying, in coded form, that he was being so bold as to speak to a gentleman to whom he hadn’t been introduced (Letters, 23 October).