Early in life, we humans easily distinguish other humans from trees, and trees from cars. But given that we encounter an endless stream of new objects every day, how can our brains so easily categorize these new things and relate them to things already seen? To answer that question, neuroscientists at UC Berkeley recently developed a map of the brain that shows where on its surface different object types are processed.
Earlier studies used data from MRI brain scans to show that certain object categories are processed in distinct areas of the cortex. But how can brains of finite capacity categorize so many object types? Setting out to create a model both precise and comprehensive, the Berkeley team recruited volunteers to watch several hours of movies while MRI scans recorded which regions of their brains became active as they observed specific objects.
From there, the scientists created a model of the brain’s “semantic space”—a map on which objects that are represented similarly in the brain appear close together. Some similarities were expected—for example, objects that move cluster more closely than objects that are stationary. Other hypothesized clusters, such as one grouping all “place” objects, did not appear in the data.
The study’s lead author, Alex Huth, says future work may explore the difference between visual and conceptual features in the brain: are objects close together on the semantic map because they look the same, or because they have other nonvisual things in common? Such findings could propel the design of smarter computers that can recognize objects the way humans can. Indeed, Huth credits his interest in the brain to a childhood fascination with robots. As an undergraduate, however, he chose neuroscience: “I decided that there was no hope of building truly intelligent machines without understanding how our own intelligence works.”
Permission required for reprinting, reproducing, or other uses.