Researchers from Purdue University have demonstrated how deep learning technology can analyze patterns in the visual cortex to “read” what a person sees.

Researchers from Purdue University have developed a method using functional magnetic resonance imaging (fMRI) and computer algorithms to map the neural networks of the visual cortex and to build a model of a person's visual experience as it occurs. As recently reported in Science, the researchers have demonstrated technology that once belonged in the realm of science fiction.

During the “training” phase of the research project, women were shown video clips of people, animals, or scenes from nature. Each video clip was shown multiple times to enable the research team to collect data on the neural activity of the visual cortex as it responded to aspects of each clip such as colour, spatial orientation, or size. Researchers then used this data to predict which areas of the cortex would be stimulated when the same person watched a specific video clip. Incredibly, the algorithm identified which of the fifteen possible video clips a person was watching with 50% accuracy.
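The decoding task described above — matching a pattern of brain activity to one of fifteen known clips — can be illustrated with a toy simulation. The sketch below does not use the Purdue team's actual fMRI data or deep learning model; it fabricates noisy voxel responses, averages the repeated viewings into a per-clip template, and classifies a held-out viewing by correlation with those templates. All array sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_voxels, n_reps = 15, 200, 4  # illustrative sizes, not the study's

# Hypothetical "true" voxel response pattern evoked by each clip.
signal = rng.normal(size=(n_clips, n_voxels))

# Simulated noisy fMRI measurements: each clip viewed n_reps times.
data = signal[:, None, :] + 1.5 * rng.normal(size=(n_clips, n_reps, n_voxels))

# "Training": average all but the last repetition into one template per clip.
train_templates = data[:, :-1, :].mean(axis=1)

def decode(pattern, templates):
    # Pick the clip whose template correlates best with the observed pattern.
    corrs = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# "Decoding": classify each held-out repetition against the templates.
held_out = data[:, -1, :]
predictions = [decode(held_out[i], train_templates) for i in range(n_clips)]
accuracy = np.mean([p == i for i, p in enumerate(predictions)])
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/n_clips:.2f})")
```

With fifteen candidate clips, chance performance is under 7%, which is why the study's reported 50% accuracy is striking.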

In the second phase of this research, deep learning algorithms enabled the research team to better understand how the brain segments a visual scene into parts and then reassembles those parts into a complete understanding of the scene. The model constructed from one person’s data was then used to predict and decode the brain activity of other individuals. Even when the network was trained on data from another person, it could still identify which of the fifteen possible video clips a person was watching 25% of the time.
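The cross-subject result can be illustrated the same way. In the toy sketch below (again simulated data, not the study's), each subject's response to a clip is modeled as a shared component plus a subject-specific one; templates built from subject A are then used to decode noisy trials from subject B. The mixing weights and noise level are assumptions chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels = 15, 200  # illustrative sizes, not the study's

# Hypothetical responses: a component shared across subjects plus an
# idiosyncratic component unique to each subject.
shared = rng.normal(size=(n_clips, n_voxels))
subj_a = 0.7 * shared + 0.3 * rng.normal(size=(n_clips, n_voxels))
subj_b = 0.7 * shared + 0.3 * rng.normal(size=(n_clips, n_voxels))

# Noisy single-trial measurements from subject B.
trials_b = subj_b + 1.5 * rng.normal(size=(n_clips, n_voxels))

def decode(pattern, templates):
    # Pick the clip whose template correlates best with the pattern.
    return int(np.argmax([np.corrcoef(pattern, t)[0, 1] for t in templates]))

# Cross-subject decoding: subject A's templates applied to subject B's data.
cross_accuracy = np.mean(
    [decode(trials_b[i], subj_a) == i for i in range(n_clips)]
)
print(f"cross-subject accuracy: {cross_accuracy:.2f} (chance = {1/n_clips:.2f})")
```

Because part of the response is shared across people, decoding with another subject's model still beats chance, mirroring the study's above-chance (25%) cross-subject result.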

The study is important because it demonstrates the potential of the model for studying brain function, even in people with visual impairment. The research team hopes that applying deep learning to the reconstruction of mental imagery could be used to advance the field of neuroscience and to improve artificial intelligence.