Scientists reconstruct brain activity in color. Amazing.

A couple of years ago I called attention to work by Japanese researchers using a functional magnetic resonance imaging (fMRI) machine to reconstruct visual images from brain activity.

Now, in a new approach, California scientists led by Jack Gallant have again measured brain activity with fMRI and reconstructed the visual experience from it, this time in color and with some eerie success.

The results are pretty amazing, as can be seen in the video below:

How did they do it? According to the researchers, they:

1. Recorded brain activity while a subject watched several hours of movie trailers.

2. Constructed models that translated between the shapes, edges, and motion in the movies and the measured brain activity. A separate model was constructed for each of several thousand points at which brain activity was measured.

3. Recorded brain activity in response to a new set of movie trailers, which was used to test the quality of the models.

4. Built a library of 5,000 hours of video clips downloaded at random from YouTube. Each of these clips was put through the models to generate a prediction of brain activity. The researchers then selected the 100 clips whose predicted activity was most similar to the observed brain activity and averaged those clips together.
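The decoding step above can be sketched with toy data. Everything here is a stand-in for the study's actual pipeline: the encoding models are random linear weights rather than fitted models, the "library" is random feature vectors rather than YouTube clips, and the observed activity is simulated. The sketch only illustrates the selection logic: predict activity for every library clip, rank clips by similarity to the observed activity, and average the best matches.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 200      # fMRI measurement points (toy scale; the study used thousands)
n_library = 1000    # candidate clips (toy scale; the study used 5,000 hours of video)
n_features = 64     # feature dimensionality per clip (shapes, edges, motion)

# Hypothetical encoding model: one linear weight vector per voxel, mapping
# movie features to predicted brain activity (stand-in for the fitted models).
encoding_weights = rng.normal(size=(n_features, n_voxels))

# Feature vectors for each clip in the library.
library = rng.normal(size=(n_library, n_features))

# Predicted brain activity for every library clip.
predicted = library @ encoding_weights            # shape (n_library, n_voxels)

# Observed activity while the subject watched an unknown clip -- simulated
# here as the prediction for clip 42 plus measurement noise.
observed = predicted[42] + rng.normal(scale=0.5, size=n_voxels)

# Correlate each clip's predicted activity with the observed activity.
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
obs_z = (observed - observed.mean()) / observed.std()
similarity = pred_z @ obs_z / n_voxels

# Select the 100 best-matching clips and average them as the reconstruction.
top100 = np.argsort(similarity)[-100:]
reconstruction = library[top100].mean(axis=0)
```

Averaging the top matches, rather than taking only the single best clip, is what gives the published reconstructions their blurry, dreamlike character: the result is a blend of many roughly similar videos, not any one clip from the library.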

The researchers have also posted a very interesting FAQ about their study, in which they address some of the biggest questions I have about their work.

Could this be used to build a brain-machine interface (BMI)?

Decoding visual content is conceptually related to the work on neural-motor prostheses being undertaken in many laboratories. The main goal in the prosthetics work is to build a decoder that can be used to drive a prosthetic arm or other device from brain activity. Of course there are some significant differences between sensory and motor systems that impact the way that a BMI system would be implemented in the two systems. But ultimately, the statistical frameworks used for decoding in the sensory and motor domains are very similar. This suggests that a visual BMI might be feasible.

At some later date when the technology is developed further, will it be possible to decode dreams, memory, and visual imagery?

Neuroscientists generally assume that all mental processes have a concrete neurobiological basis. Under this assumption, as long as we have good measurements of brain activity and good computational models of the brain, it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery. The computational encoding models in our study provide a functional account of brain activity evoked by natural movies. It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception. If they are, then it should be possible to use the techniques developed in this paper to decode brain activity during dreaming or imagination.

At some later date when the technology is developed further, will it be possible to use this technology in detective work, court cases, trials, etc.?

The potential use of this technology in the legal system is questionable. Many psychology studies have now demonstrated that eyewitness testimony is notoriously unreliable. Witnesses often have poor memory, but are usually unaware of this. Memory tends to be biased by intervening events, inadvertent coaching, and rehearsal (prior recall). Eyewitnesses often confabulate stories to make logical sense of events that they cannot recall well. These errors are thought to stem from several factors: poor initial storage of information in memory; changes to stored memories over time; and faulty recall. Any brain-reading device that aims to decode stored memories will inevitably be limited not only by the technology itself, but also by the quality of the stored information. After all, an accurate read-out of a faulty memory only provides misleading information. Therefore, any future application of this technology in the legal system will have to be approached with extreme caution.
