The first map of how our brain organizes everything we see

On December 27, 2012
A study published in the Cell Press journal Neuron on December 20, 2012 (Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron, 2012; 76(6): 1210. DOI: 10.1016/j.neuron.2012.10.014) describes the first map of how our brain organizes everything we see.
While neuromarketers aim to understand how people make sense of the thousands of advertisements that flood their retinas each day, scientists at the University of California, Berkeley have found that the brain is wired to organize all the categories of objects and actions that we see. They have created the first interactive map of how the brain arranges these groupings, shown below (it looks like fractals, doesn’t it?):

Maps show how different categories of living and non-living objects that we see are related to one another in the brain’s “semantic space”. Photo credit: Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant / Neuron

“Humans can recognize thousands of categories. Given the limited size of the human brain, it seems unreasonable to expect that every category is represented in a distinct brain area,” says first author Alex Huth, a graduate student working in Dr. Jack Gallant’s laboratory at the University of California, Berkeley.

Here is a video in which the first author explains the work:

The result was achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips. To conduct the experiment, the brain activity of five researchers was recorded with blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI) while each watched two hours of movie clips. The brain scans simultaneously measured blood flow in thousands of locations across the brain. The researchers then used regularized linear regression analysis, which finds correlations in data, to build a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips. Next, they used principal components analysis, a statistical method that can summarize large data sets, to find the “semantic space” that was common to all the study subjects. The results are presented in multicolored, multidimensional maps showing the more than 1,700 visual categories and their relationships to one another. Categories that activate the same brain areas have similar colors.
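To make the pipeline concrete, here is a minimal sketch of the two analysis steps described above, written in Python with synthetic data and deliberately reduced dimensions. The variable names, the ridge penalty, and the data sizes are illustrative assumptions, not the authors' actual code or parameters; the real study modeled roughly 30,000 cortical locations and 1,700 categories.

```python
# Illustrative sketch (synthetic data, reduced dimensions) of the analysis:
# regularized linear regression per cortical location, then PCA to summarize
# a shared "semantic space". Not the authors' code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_timepoints = 600   # fMRI volumes recorded while a subject watched movie clips
n_categories = 170   # object/action categories labeled in the clips (1,700 in the study)
n_voxels = 3000      # cortical locations (about 30,000 in the study)

# X[t, c] = 1 if category c is on screen at time t (indicator features)
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)
# Y[t, v] = BOLD response of voxel v at time t (purely synthetic here)
Y = rng.standard_normal((n_timepoints, n_voxels))

# Regularized (ridge) linear regression: one weight per (voxel, category) pair,
# describing how strongly each category drives each cortical location.
model = Ridge(alpha=10.0).fit(X, Y)
weights = model.coef_                          # shape (n_voxels, n_categories)

# Principal components analysis over the voxel-wise weight vectors summarizes
# the dimensions of the shared semantic space; each category's loading on a
# component gives its coordinate in that space.
pca = PCA(n_components=4)
voxel_positions = pca.fit_transform(weights)   # each voxel's place in the 4-D space
category_positions = pca.components_.T         # (n_categories, 4) category coordinates
print(category_positions.shape, pca.explained_variance_ratio_)
```

In this sketch, categories whose coordinates lie close together in `category_positions` would be the ones that tend to activate the same cortical locations, which is the sense in which nearby categories share colors in the maps above.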

These findings may be used to create brain-machine interfaces, particularly for facial and other image recognition systems. Among other things, they could improve a grocery store self-checkout system’s ability to recognize different kinds of merchandise. “Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognize images”, said Huth.

It has long been thought that each category of object or action humans see – people, animals, vehicles, household appliances and movements – is represented in a separate region of the visual cortex. In this latest study, it was found that these categories are actually represented in highly organized, overlapping maps that cover as much as 20 percent of the brain, including the somatosensory and frontal cortices.
