In philosophy, the brain in a vat is any of a variety of thought experiments intended to draw out certain features of our ideas of knowledge, reality, truth, mind, and meaning. It is drawn from the idea, common to many science fiction stories, that a scientist might remove a person's brain from the body, suspend it in a vat of life-sustaining liquid, and connect its neurons by wires to a supercomputer which would provide it with electrical impulses identical to those the brain normally receives. According to such stories, the computer would then be simulating a virtual reality (including appropriate responses to the brain's own output) and the person with the "disembodied" brain would continue to have perfectly normal conscious experiences without these being related to objects or events in the real world.

The simplest use of brain-in-a-vat scenarios is as an argument for philosophical skepticism and solipsism. A simple version runs as follows: since the brain in a vat gives and receives exactly the same impulses as it would if it were in a skull, and since these are its only way of interacting with its environment, it is not possible to tell, from the perspective of that brain, whether it is in a skull or a vat. Yet in the former case most of the person's beliefs may be true (if he believes, say, that he is walking down the street, or eating ice cream); in the latter case they are false. Since, the argument goes, you cannot know whether you are a brain in a vat, you cannot know whether most of your beliefs are true or completely false. And since, in principle, it is impossible to rule out your being a brain in a vat, you cannot have good grounds for believing any of the things you believe; you certainly cannot know them.

This argument is essentially a contemporary revision of the argument given by Descartes in Meditations on First Philosophy (though Descartes eventually rejects his own argument) that he could not trust his perceptions on the grounds that an evil demon might, conceivably, be controlling his every experience. It is also, though more distantly, related to Descartes's argument that he cannot trust his perceptions because he may be dreaming (a worry anticipated by Zhuangzi's famous dream that he was a butterfly). In this latter argument the worry about active deception is removed.

Such puzzles have been worked over in many variations by philosophers in recent decades. Some, including Barry Stroud, continue to insist that such puzzles constitute an unanswerable objection to any knowledge claims. Others have argued against them, most notably Hilary Putnam. In the first chapter of his Reason, Truth and History, Putnam claims that the thought experiment is inconsistent on the grounds that a brain in a vat could not have the sort of history and interaction with the world that would allow its thoughts or words to be about the vat that it is in.

In other words, if a brain in a vat stated "I am a brain in a vat," it would always be stating a falsehood. If the brain making this statement lives in the "real" world, then it is not a brain in a vat. On the other hand, if the brain making this statement is really just a brain in a vat, then by stating "I am a brain in a vat" what it is really stating is "I am what nerve stimuli have convinced me is a 'brain,' and I reside in an image that I have been convinced is called a 'vat'." That is, a brain in a vat would never be thinking about real brains or real vats, but rather about images sent into it that resemble real brains or real vats. This, of course, makes our definition of "real" even more muddled. This refutation of the vat scenario is a consequence of Putnam's endorsement, at that time, of the causal theory of reference. Roughly, in this case: if you have never experienced the real world, then you cannot have thoughts about it, whether to deny or affirm them. Putnam contends that by "brain" and "vat" the brain in a vat must be referring not to things in the "outside" world but to elements of its own "virtual world"; and it is clearly not a brain in a vat in that sense. Likewise, whatever we can mean by "brain" and "vat" must be such that we obviously are not brains in vats (the way to tell is to look in a mirror).

Many writers, however, have found Putnam's proposed solution unsatisfying, as it appears, in this regard at least, to depend on a shaky theory of meaning: the claim that we cannot meaningfully talk or think about the "external" world because we cannot experience it sounds like a version of the outmoded verification principle. Consider the following quote: "How can the fact that, in the case of the brains in a vat, the language is connected by the program with sensory inputs which do not intrinsically or extrinsically represent trees (or anything external) possibly bring it about that the whole system of representations, the language in use, does refer to or represent trees or any thing external?" Putnam here argues from the lack of sensory inputs representing (real-world) trees to our inability to meaningfully think about trees. But it is not clear why the referents of our terms must be accessible to us in experience. One cannot, for example, have experience of other people's private states of consciousness; does this imply that one cannot meaningfully ascribe mental states to others?

Subsequent writers on the topic, especially those who agree with Putnam's claim, have been particularly interested in the problems it presents for content: that is, how, if at all, the brain's thoughts can be about a person or place with which it has never interacted and which perhaps does not exist.