Damage to a part of the brain that processes visual information—the inferotemporal (IT) cortex—can be devastating, especially for adults. Those affected may lose the ability to read (a disorder known as alexia), to recognize faces (prosopagnosia), or to recognize objects (agnosia), and there is currently little that doctors can do to help.
A more accurate model of the visual system may help neuroscientists and clinicians develop better treatments for these conditions. Pittsburgh-based Carnegie Mellon University (CMU) researchers have developed a computational model that allows them to simulate the spatial organization, or topography, of the IT cortex and learn more about how neighboring clusters of brain tissue are organized and interact. It could also help them understand how damage to that area affects the ability to recognize faces, objects, and scenes.
The researchers—Nicholas Blauch, a PhD student in the Program in Neural Computation, and his advisors David C. Plaut and Marlene Behrmann, both professors in the Department of Psychology and the Neuroscience Institute at CMU—described the model in the Jan. 18 issue of the Proceedings of the National Academy of Sciences. Blauch says the paper may help cognitive neuroscientists answer longstanding questions about how different parts of the brain work together.
“We have been wondering for a long time if we should be thinking of the network of regions in the brain that responds to faces as a separate entity just for recognizing faces, or if we should think of it as part of a broader neural architecture for object recognition,” Blauch says. “We’re trying to come at this problem using a computational model that assumes this simpler, general organization, and seeing whether this model can then account for the specialization we see in the brain through learning to perform tasks.”
To do so, the researchers developed a deep learning model endowed with additional features of biological brain connectivity, hypothesizing that the model could reveal the spatial organization, or topography, of the IT.
“The brain doesn’t have unlimited volume,” Blauch says. “It needs to try to keep the amount of white matter—used to connect different areas of the brain—to a minimum required for effective communication, so there is space for more gray matter—or neurons—to compute information.”
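One way to capture this wiring constraint in a model is a penalty that scales the strength of each connection by the physical distance between the units it links, so that long connections are expensive. The sketch below is purely illustrative, not the paper’s formulation; the two-dimensional grid layout, the Euclidean distance metric, and the exponent are assumptions.

```python
import numpy as np

def wiring_cost(weights, coords, p=1):
    """Distance-weighted penalty on connection strengths.

    weights: (n_units, n_units) connection matrix
    coords:  (n_units, 2) assumed positions of units on a 2-D cortical sheet
    Returns sum_ij |w_ij| * distance(i, j)**p, which grows when strong
    connections span long distances.
    """
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    return float(np.sum(np.abs(weights) * dists ** p))

# Toy example: 16 units on a 4x4 grid with random recurrent weights.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.arange(4), np.arange(4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
print(wiring_cost(rng.normal(size=(16, 16)), coords))
```

Minimizing a term like this alongside a task loss pressures a network to rely on short connections, mirroring the trade-off between white matter and gray matter that Blauch describes.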
Blauch also explained that most of the connections between areas in the brain are from excitatory neurons, whereas connections within a brain area are mediated by both excitatory and inhibitory neurons. In most deep learning models, artificial neurons can individually both excite and inhibit other neurons.
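In modeling terms, that difference amounts to a sign constraint on the weights: each unit’s outgoing connections are all positive (excitatory) or all negative (inhibitory), rather than a free mix. The snippet below shows one common way to impose such a constraint; the population sizes and the absolute-value parameterization are assumptions for illustration, not the paper’s implementation.

```python
import numpy as np

def sign_constrained(raw_weights, signs):
    """Map unconstrained parameters to weights with fixed signs per unit.

    raw_weights: (n_units, n_targets) unconstrained parameters
    signs:       +1 for excitatory source units, -1 for inhibitory ones
    Taking the absolute value guarantees every outgoing weight of a unit
    shares that unit's sign, as with biological neurons.
    """
    return np.abs(raw_weights) * signs[:, None]

rng = np.random.default_rng(1)
n_exc, n_inh = 12, 4                          # assumed within-area split
signs = np.concatenate([np.ones(n_exc), -np.ones(n_inh)])
W = sign_constrained(rng.normal(size=(n_exc + n_inh, 8)), signs)
assert (W[:n_exc] >= 0).all() and (W[n_exc:] <= 0).all()
```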
Following these principles, the researchers set up a basic network architecture and a cost function that emphasizes learning to recognize images while keeping connections short. The scientists trained the model, referred to as the Interactive Topographic Network, to recognize images from different domains: faces, objects, and scenes. Once the model had learned to recognize these images, they found that it had developed spatial areas selective for each domain, as seen in the brain.
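Schematically, such a cost function couples a recognition loss with a wiring penalty like the one sketched earlier. The toy training step below illustrates the combination; the layer sizes, the single step of lateral interaction, and the trade-off weight are assumptions for illustration, not details of the Interactive Topographic Network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_feat, n_units, n_classes = 128, 64, 3       # assumed sizes; 3 domains

# Feedforward map into a "sheet" of units on an 8x8 grid, plus lateral
# connections among those units.
ff = nn.Linear(n_feat, n_units)
lateral = nn.Parameter(torch.randn(n_units, n_units) * 0.01)
readout = nn.Linear(n_units, n_classes)

xs, ys = torch.meshgrid(torch.arange(8), torch.arange(8), indexing="ij")
coords = torch.stack([xs.ravel(), ys.ravel()], dim=1).float()
dists = torch.cdist(coords, coords)           # pairwise unit distances

opt = torch.optim.Adam([*ff.parameters(), lateral, *readout.parameters()], lr=1e-3)
lam = 1e-3                                    # assumed trade-off weight

x = torch.randn(32, n_feat)                   # toy batch of image features
y = torch.randint(0, n_classes, (32,))        # toy domain labels

h = torch.relu(ff(x))
h = torch.relu(h + h @ lateral)               # one step of lateral interaction
loss = F.cross_entropy(readout(h), y) + lam * (lateral.abs() * dists).sum()
opt.zero_grad()
loss.backward()
opt.step()
```

Raising the trade-off weight makes long, strong connections costlier, the pressure thought to push units with similar selectivity to cluster together on the simulated sheet.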
Next, they simulated lesions, or brain damage, in each area. When they lesioned the area of the model selective for face recognition, they saw a large deficit in its ability to recognize faces, and they got the same result for the object and scene domains. They also discovered that the damage was not entirely domain-specific.
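A simple way to mimic such a lesion in a trained network is to silence the units inside a chosen patch of the simulated sheet and then re-measure accuracy separately for each domain. The sketch below illustrates that general idea with toy stand-ins; the patch definition, the stand-in predictor, and the domain labels are assumptions rather than the paper’s procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def lesion_mask(coords, center, radius):
    """Boolean mask of units within `radius` of `center` on the sheet."""
    return np.linalg.norm(coords - np.asarray(center), axis=1) <= radius

def accuracy_by_domain(predict, x, y, domains, mask=None):
    """Per-domain accuracy, optionally with the masked units silenced."""
    preds = predict(x, lesion=mask)
    return {d: float(np.mean(preds[domains == d] == y[domains == d]))
            for d in np.unique(domains)}

# Toy stand-ins so the sketch runs end to end (not the trained model).
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # 64 units

def toy_predict(x, lesion=None):
    h = np.maximum(x, 0)                       # stand-in unit activations
    if lesion is not None:
        h = h.copy()
        h[:, lesion] = 0.0                     # silence lesioned units
    return h.sum(axis=1).astype(int) % 3       # arbitrary 3-way "decision"

x = rng.normal(size=(300, 64))
y = rng.integers(0, 3, 300)
domains = np.array(["faces", "objects", "scenes"])[y]

mask = lesion_mask(coords, center=(2.0, 2.0), radius=2.0)
print(accuracy_by_domain(toy_predict, x, y, domains, mask))
```

With a trained topographic model in place of the stand-ins, centering the mask on the face-selective patch should depress face accuracy most, with smaller spillover onto the other domains, which is the pattern the researchers report.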
“There is some residual damage to the other domains,” Blauch says. “It’s small compared to the preferred domain, but it shows us that the specialization within these networks can be strong but also somewhat mixed. That, combined with the general principles employed by the whole system, implies that it may be better thought of as one system with internal specialization rather than a collection of independent modules.”
A general, flexible system might be more capable of reorganization after damage, as is seen in children who largely recover visual function after damage early in life, in contrast to adults with similar damage. The researchers plan to expand the model to investigate further questions about the visual system, including interactions between the organization of IT and other areas of the brain.