UC BERKELEY (US) — Scientists have found that the brain is wired to systematically organize the categories of objects and actions we see daily, and have created the first interactive map of how the brain arranges these groupings.

Maps show how different categories of living and non-living objects that we see are related to one another in the brain’s “semantic space.” (Credit: Gallant lab)


Some relationships between categories make sense (humans and animals share the same “semantic neighborhood”) while others (hallways and buckets) are less obvious. The researchers found that different people share a similar semantic layout.

“Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized. Already, our online brain viewer appears to provide the most detailed look ever at the visual function and organization of a single human brain,” says Alexander Huth, a doctoral student in neuroscience at the University of California, Berkeley and lead author of the study published in the journal Neuron.

A clearer understanding of how the brain organizes visual input can help with the medical diagnosis and treatment of brain disorders. These findings may also be used to create brain-machine interfaces, particularly for facial and other image recognition systems. Among other things, they could improve a grocery store self-checkout system’s ability to recognize different kinds of merchandise.

“Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognize images,” says Huth, who has produced a video and interactive website to explain the science of what the researchers found.

It has long been thought that each category of object or action humans see—people, animals, vehicles, household appliances, and movements—is represented in a separate region of the visual cortex.

In this latest study, researchers found that these categories are actually represented in highly organized, overlapping maps that cover as much as 20 percent of the brain, including the somatosensory and frontal cortices.

To conduct the experiment, the brain activity of five researchers was recorded via functional magnetic resonance imaging (fMRI) as each watched two hours of movie clips. The brain scans simultaneously measured blood flow in thousands of locations across the brain.

Researchers then used regularized linear regression analysis, which finds correlations in data, to build a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips.
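The regression step can be sketched in a few lines. This is a toy illustration, not the study's actual pipeline: the dimensions are shrunk (the real model covered roughly 30,000 cortical locations and 1,700 categories), the data are simulated, and ridge (L2) regularization is assumed as the form of regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real dimensions (~30,000 voxels, ~1,700 categories):
n_timepoints, n_categories, n_voxels = 200, 20, 5

# X: which categories were on screen at each movie timepoint (binary labels).
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)

# Simulate voxel responses as a linear mix of categories plus noise.
true_weights = rng.normal(size=(n_categories, n_voxels))
Y = X @ true_weights + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Ridge (L2-regularized) regression, closed form:
#   W = (X'X + lam*I)^-1 X'Y
# The penalty term keeps the weights stable when features are
# numerous, correlated, and the measurements are noisy.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_categories), X.T @ Y)

# W[i, j] estimates how strongly category i drives voxel j.
print(W.shape)  # (20, 5)
```

Each column of `W` is one voxel's tuning profile across categories; collecting these profiles over the whole cortex is what yields the category maps described above.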

Next, they used principal components analysis, a statistical method that can summarize large data sets, to find the “semantic space” that was common to all the study subjects.
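A minimal sketch of this dimensionality-reduction step, again on simulated data with shrunken dimensions: principal components analysis via the singular value decomposition, projecting each category onto a handful of shared axes. The variable names and component count here are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weight matrix: rows are categories, columns are voxels
# (the real one is ~1,700 categories by ~30,000 voxels).
weights = rng.normal(size=(50, 300))

# Center each voxel's weights, then take the SVD; the principal
# components are the directions of greatest shared variance.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project every category onto the first few components -- these
# coordinates place the categories in a low-dimensional "semantic space",
# where nearby categories activate similar brain areas.
n_components = 4
semantic_coords = U[:, :n_components] * S[:n_components]

print(semantic_coords.shape)  # (50, 4)

# Fraction of total variance captured by the retained components:
explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
```

Coloring each category by its coordinates along the leading components is one way to produce maps like those described next, where similar colors mark categories that activate the same brain areas.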

The results are presented in multicolored, multidimensional maps showing the more than 1,700 visual categories and their relationships to one another. Categories that activate the same brain areas have similar colors.

For example, humans are green, animals are yellow, vehicles are pink and violet, and buildings are blue.

“Using the semantic space as a visualization tool, we immediately saw that categories are represented in these incredibly intricate maps that cover much more of the brain than we expected,” Huth says.

Other co-authors of the study are UC Berkeley neuroscientists Shinji Nishimoto, An T. Vu, and Jack Gallant.