Reconstructing Visual Experiences from Brain Activity

On March 10, 2012


Computational models can now "read" brain activity well enough to reconstruct visual experiences from it. The work in Professor Jack Gallant's laboratory (at the University of California, Berkeley) focuses on computational modeling of the visual system. Their goal is to formulate models that describe how the brain encodes visual information, and that accurately predict how the brain responds during natural vision.
They study the visual system and focus on discovering how different areas of the brain represent the visual world, and on how these multiple representations are modulated by attention, learning and memory, in terms of neural coding. In order to “decode” the brain, much of the work in their laboratory involves functional magnetic resonance imaging (fMRI), a rapidly developing technique for making non-invasive measurements of brain activity. Their computational models leverage many different statistical and machine learning tools, including nonlinear system identification, Bayesian estimation theory and information theory. As they state in the article, quantitative modeling of human brain activity can provide crucial insights about cortical representations and can form the basis for brain decoding devices.
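The regression-based "encoding model" idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the Gallant lab's actual pipeline: it fits a ridge regression from stimulus features (standing in for motion-energy filter outputs) to a single voxel's response. All names, dimensions and the regularization value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: stimulus features per timepoint, and one voxel's response.
n_timepoints, n_features = 500, 50
X = rng.standard_normal((n_timepoints, n_features))       # stimulus features
true_w = rng.standard_normal(n_features)                  # unknown "tuning" of the voxel
y = X @ true_w + 0.1 * rng.standard_normal(n_timepoints)  # noisy voxel response

# Ridge (L2-regularized least squares): w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# The fitted weights let us predict this voxel's response to a stimulus.
y_pred = X @ w
corr = np.corrcoef(y, y_pred)[0, 1]
print(f"prediction correlation: {corr:.3f}")
```

In the real study one such model is fit per voxel, so "decoding" amounts to running thousands of these regressions in parallel.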
Their work on reconstructing visual experiences evoked by natural movies was selected as one of Time Magazine's 50 Best Inventions of 2011. You can watch below a clip presenting their results. The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the fMRI scanner. The right clip shows the reconstruction of this segment from the measured brain activity. This is amazing!
The authors of this study present the procedure as follows:

[1] Record brain activity while the subject watches several hours of movie trailers.

[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and the measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured. (The real advance of this study was the construction of a movie-to-brain-activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)

[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.

[4] Build a random library of ~18,000,000 seconds (5,000 hours) of video downloaded at random from YouTube. (These videos have no overlap with the movies the subjects saw in the magnet.) Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction.
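Step [4] above can be sketched as a simple search-and-average over the library. The snippet below is an illustrative toy, not the published code: the clip counts, voxel counts and the use of Pearson correlation as the similarity metric are all assumptions, and each "clip" is reduced to its predicted-activity vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for ~18M library clips and several thousand voxels.
n_clips, n_voxels = 2000, 300
predicted = rng.standard_normal((n_clips, n_voxels))  # encoding-model predictions per clip

# Pretend the subject actually saw something resembling clip 42:
measured = predicted[42] + 0.5 * rng.standard_normal(n_voxels)

# Correlation between the measured activity and each clip's predicted activity.
pz = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
mz = (measured - measured.mean()) / measured.std()
scores = pz @ mz / n_voxels

# The 100 best-matching clips; averaging their frames would give the reconstruction.
top100 = np.argsort(scores)[-100:]
print("best-matching clip:", top100[-1])
```

The key point is that the similarity is computed in brain-activity space, not in pixel space: clips are ranked by how well their *predicted* responses match the *observed* responses.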
The paper (Nishimoto et al., 2011, Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology, doi:10.1016/j.cub.2011.08.031) is available here.
Watch below another video with reconstructions from the brain activity of 3 subjects:

This video is organized as follows: the movie that each subject viewed while in the magnet is shown at upper left. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. (In brief, the algorithm processes each of the 18 million clips through the brain model, and identifies the clips that would have produced brain activity as similar as possible to the measured activity. The clips used to fit the model, the clips used to test the model and the clips used to reconstruct the stimulus were entirely separate.)

The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library.

As the authors state, these reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of brain activity data recorded from each subject.
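The difference between the AHP and MAP reconstructions described above boils down to "average the top 100" versus "take the single best". A hypothetical sketch, with made-up array shapes (each clip reduced to a flat pixel vector):

```python
import numpy as np

rng = np.random.default_rng(2)

scores = rng.random(1000)                  # posterior score for each library clip
clips = rng.standard_normal((1000, 64))    # stand-in pixel vectors for each clip

order = np.argsort(scores)[::-1]           # clips sorted from most to least likely
map_recon = clips[order[0]]                # MAP: the single most likely clip
ahp_recon = clips[order[:100]].mean(axis=0)  # AHP: average of the 100 most likely clips
```

Averaging (AHP) blurs away details that any single clip gets wrong, which is why it tends to look smoother than the MAP clip.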

