Imagine a gadget, call it "brain-o-vision," for brain scanning that doesn't create pictures of brains at all. That's right, no orbs spattered with colorful "activations" that need to be interpreted by neuroanatomists. Instead, with brain-o-vision, what a brain sees is what you get: an image of what that brain is experiencing. If the person who owns the brain is envisioning lunch, up pops a cheeseburger on the screen. If the person is reading a book, the screen shows the words. For that matter, if the brain owner is feeling pain, perhaps brain-o-vision could reach out and swat the viewer with a rolled-up newspaper. Brain-o-vision could give us access to another person's consciousness (1).

Technologies for brain-o-vision are beginning to seem possible. We are learning how brain activations map onto emotions, memories, and mental processes, and it won't be long before we might translate activations into Google searches for images of what the brain is thinking. There is a specific brain area linked with face perception (2), for instance, and even a neuron that fires when its owner sees Jennifer Aniston (3). So why, in principle, shouldn't we be able to scan a brain and discover when it is looking at her, and eventually even learn what she's wearing? Of course, it may be many years to the beta version.

But imagine that everything works out and brain-o-vision goes on sale at Wal-Mart. Could the device solve the problem of whether consciousness causes behavior? With direct evidence of a person's consciousness, we could do science on the question. We could observe regularities in the relation between consciousness (say, a thought of sipping coffee) and behavior (the actual drink). If the consciousness always preceded the behavior (and never occurred without being followed by the behavior), we could arrive at the inductive inference of causation and, as scientists, be quite happy that we had established a causal connection.