Scientists are able to tell what image a volunteer was looking at 92% of the time using fMRI technology

Mind reading is something we often see in movies and television programs. While magicians and psychics pretend to be able to read minds, the fact is that reading minds is impossible — right?

A group of scientists at the University of California, Berkeley, led by Jack Gallant, has developed a way to tell what someone is looking at from a type of magnetic resonance imaging (MRI) brain scan. The researchers accomplish this feat by using functional magnetic resonance imaging (fMRI) to model a volunteer's responses to different types of pictures.

Two volunteers were shown various images while the researchers cataloged their brain activity. The scientists were then able to use the recorded results to identify, with an impressive rate of accuracy, what the volunteers were looking at when shown new images.

Nature reports that the volunteers were shown a series of 1,750 images while their brains were monitored via fMRI scans. The researchers then showed the volunteers 120 new images they had not seen before. According to Nature, the researchers were able to pick out which picture the first volunteer was viewing 92% of the time; with the second volunteer they were accurate 72% of the time. The researchers report that by chance alone they would have chosen the right image only 0.8% of the time.
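The identification scheme described here — matching an observed scan against model-predicted responses for each candidate image — can be sketched roughly as follows. Everything in this example (the pattern sizes, the noise level, the correlation-based matching rule) is an illustrative assumption, not the researchers' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 120 candidate images, each with a model-predicted
# voxel activity pattern (random stand-ins for real fMRI predictions).
n_images, n_voxels = 120, 500
predicted = rng.normal(size=(n_images, n_voxels))

# Suppose the volunteer views image 42; the observed scan is the model's
# prediction for that image plus measurement noise.
true_index = 42
observed = predicted[true_index] + 0.5 * rng.normal(size=n_voxels)

# Identification step: pick the candidate image whose predicted pattern
# correlates best with the observed brain activity.
correlations = [np.corrcoef(observed, p)[0, 1] for p in predicted]
guess = int(np.argmax(correlations))

print(guess == true_index)  # the matcher recovers the viewed image

# Guessing blindly among 120 images succeeds about 0.8% of the time,
# which matches the chance level the researchers cite.
chance_percent = round(1 / n_images * 100, 1)
print(chance_percent)
```

The 0.8% chance figure in the article is simply one correct pick out of 120 candidates; the 92% and 72% results show how much the modeled brain responses improve on that baseline.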

John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences in Germany told Nature, “It’s definitely a leap forward. Now you can use a more abstract way of decoding the images that people are seeing.”

The researchers see future versions of this technology being used to diagnose and treat conditions such as dementia by showing how a patient's brain activity changes over time or in response to a new treatment or medication.

Gallant says that the next step for the technology is to be able to interpret what a person is seeing without having to select from a set of known images. Gallant told Nature, “That is in principle a much harder problem. You’d need a very good model of the brain, a better measure of brain activity than fMRI, and a better understanding of how the brain processes things like shapes and colors seen in complex everyday images. And we don’t really have any of those three things at this time.”