By taking strides toward understanding how the brain processes images, researchers are figuring out how to project a person's neural activity onto a TV screen.

How Do They Do It?

UC Berkeley professor Jack Gallant and his team use functional MRI to track blood-flow changes in a subject's primary visual cortex, the brain's largest visual processing center, as he or she watches a movie. The researchers then build a model of the visual cortex that matches the blood-flow pattern with the images the subject is viewing. Algorithms compare the brain signals with a catalog of about 5,000 hours of YouTube video, and the clips that most closely correspond to the brain activity are averaged into a composite video that roughly resembles the movie the subject watched.
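The matching step described above can be sketched in miniature. This is a simplified illustration with made-up data, not the team's actual pipeline: it assumes each catalog clip comes with a predicted brain response, ranks clips by how well that prediction correlates with the measured signal, and averages the frames of the top matches into a composite. All names (`reconstruct_frame`, `catalog_features`, etc.) are hypothetical.

```python
import numpy as np

def reconstruct_frame(brain_signal, catalog_features, catalog_frames, k=3):
    """Rank catalog clips by similarity between the measured brain signal
    and each clip's predicted response, then average the top-k frames."""
    # Pearson correlation between the signal and each clip's predicted response.
    sims = np.array([np.corrcoef(brain_signal, f)[0, 1] for f in catalog_features])
    top = np.argsort(sims)[-k:]               # indices of the k best matches
    return catalog_frames[top].mean(axis=0)   # pixel-wise average composite

# Toy demo: 100 clips, 50-dimensional "brain responses", 8x8 grayscale frames.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 50))
frames = rng.uniform(size=(100, 8, 8))
signal = features[42] + 0.1 * rng.normal(size=50)  # noisy copy of clip 42's response
composite = reconstruct_frame(signal, features, frames)
print(composite.shape)  # (8, 8)
```

The key idea is that reconstruction is retrieval plus averaging: the system never generates pixels from scratch, it blends existing footage whose predicted brain responses best fit the measurement.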

"I think it's very impressive that they can get these admittedly crude re-creations of our internal representations of video," says Marcel Just, director of the Carnegie Mellon University Center for Cognitive Brain Imaging, who was not involved in the study.

What's It Good For?

Such links between brain activity and visual imagery could one day aid communication with stroke or coma patients. Gallant says that once his team's technique is refined, it could also record and play back dreams. The obstacle is understanding how the brain's visual processing differs between waking and sleep, but Gallant is confident this will happen. "It's only a matter of time," he says.