Mind Reading with Functional MRI

Scientists can accurately predict which of a thousand pictures a person is looking at by analyzing brain activity using functional magnetic resonance imaging (fMRI). The approach should shed light on how the brain processes visual information, and it might one day be used to reconstruct dreams.

“[The research] suggests that fMRI-based measurements of brain activity contain much more information about underlying neural processes than has previously been appreciated,” says Jack Gallant, a neuroscientist at the University of California, Berkeley, and senior author of the study.

Functional MRI detects blood flow in the brain, providing an indirect measure of neural activity. Most fMRI studies to date have used the technology to pinpoint the parts of the brain involved in different cognitive tasks, such as reading or remembering faces. The new study, however, reflects an emerging trend in fMRI: using the technology to analyze neural information processing. By applying computer models to the recorded activity, scientists can try to assess how neural signals are processed in different brain areas and ultimately fused into a cohesive perception. Researchers have previously used this approach to show that some visual information can be gleaned from brain-imaging data, such as whether a person is looking at faces or houses.

According to the study, published Wednesday in the online version of the journal Nature, the scientists first gathered information about how the brain processes images by recording activity in the visual cortex as subjects looked at several thousand randomly selected pictures. Neurons in this part of the brain respond to specific aspects of the visual scene, such as a patch of strongly contrasting light and dark, so the activity recorded in each area of the brain scan reflects the visual information being processed by neurons in that area. The researchers compiled this information to develop a computer model that would predict the pattern of brain activity triggered by any image.
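In outline, this kind of "encoding model" maps an image's features to a predicted response for each voxel in the scan. The following is a minimal sketch, not the authors' actual method: it uses random placeholder features (the study used Gabor-wavelet-style visual features) and fits one linear regression per voxel with ordinary least squares.

```python
import numpy as np

# Hypothetical illustration of an encoding model: each voxel's response is
# modeled as a weighted sum of image features. Feature values here are
# random stand-ins for the filter outputs a real model would compute.
rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 200, 50, 30

# Feature representation of each training image.
X = rng.standard_normal((n_images, n_features))

# Simulated "true" per-voxel tuning weights and noisy measured responses.
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))

# Fit the weights by least squares: effectively one regression per voxel.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted model can now predict the activity pattern for any new image.
new_image_features = rng.standard_normal(n_features)
predicted_pattern = new_image_features @ W_fit  # shape: (n_voxels,)
```

With enough training images relative to features, the fitted weights closely recover the simulated tuning, which is what lets the model generalize to pictures it has never seen.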

When volunteers were later shown a new image not included in the first set, the computer model correctly identified which picture, out of 120 or 1,000 possibilities, the person was looking at with 90 and 80 percent accuracy, respectively.
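The identification step itself is simple to sketch: predict the activity pattern for every candidate image, then pick the candidate whose prediction best matches the observed pattern. The toy example below (with made-up patterns, and correlation as the match score, an assumption rather than the paper's exact metric) shows the idea.

```python
import numpy as np

# Hypothetical sketch of identification: given an observed activity pattern,
# choose the candidate image whose model-predicted pattern matches it best.
rng = np.random.default_rng(1)

n_candidates, n_voxels = 120, 30
# Predicted activity patterns for each candidate image (placeholder values).
predicted = rng.standard_normal((n_candidates, n_voxels))

# Simulate viewing candidate 42: its predicted pattern plus measurement noise.
observed = predicted[42] + 0.3 * rng.standard_normal(n_voxels)

def identify(observed, predicted):
    # Correlate the observation with each prediction; return the best match.
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

print(identify(observed, predicted))  # recovers index 42
```

Accuracy falls as the candidate set grows, since more predictions compete with the correct one, which matches the drop the researchers saw going from 120 to 1,000 possibilities.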

“They can do this with a surprising degree of accuracy,” says Frank Tong, a neuroscientist at Vanderbilt University, in Nashville, TN, who was not involved in the research. “People will be struck by how much visual information these researchers were able to extract from the brain.”

Gallant and his team plan to use this technology to better understand how the visual system works by building computational models of various theories and then testing their ability to interpret brain scans. “The most direct way to test theories about how the brain transforms information is to measure what information is stored in different parts of the person’s mind, and how that changes from structure to structure,” says Ken Norman, a neuroscientist at Princeton University, in New Jersey, who was not involved in the research. Similar methods might also be useful in determining how those steps go awry in people with different kinds of cognitive deficits, he says.