For years, neuroscientists have been working to design computer networks that can mimic the visual skills the human brain performs so efficiently. In a new study, researchers from the Massachusetts Institute of Technology found that a class of computer models called "deep neural networks" can match a primate brain's ability to recognize objects.

According to James DiCarlo, head of the Department of Brain and Cognitive Sciences at MIT and the study's senior author, the latest networks are built on neuroscientists' current understanding of how the brain recognizes objects. The success of deep neural networks at the task therefore suggests that neuroscientists have gained a reasonably accurate picture of how object recognition happens.

In creating vision-based neural networks, scientists took inspiration from the brain's hierarchical approach to visual information, which processes data through several stages until an object is identified.

To mimic this process, neuroscientists arranged the computations into several layers, with each layer performing a mathematical operation on its input. As information flows through the layers, the representation of the visual object grows more complex, while data that is not needed is discarded.
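To make the layered idea concrete, here is a minimal sketch in Python. The layer sizes, random weights, and input here are illustrative assumptions only; the networks in the study are far larger and learn their operations from large collections of images rather than using random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    """One illustrative layer: a linear operation followed by a simple nonlinearity."""
    return np.maximum(0, weights @ x)

# Illustrative layer sizes only: the representation shrinks as it passes
# through the stack, keeping less and less of the raw input.
sizes = [256, 128, 64, 32]
weights = [rng.standard_normal((out_dim, in_dim)) * 0.1
           for in_dim, out_dim in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[0])   # stand-in for pixel input arriving from the retina
for i, w in enumerate(weights, start=1):
    x = layer(x, w)
    print(f"layer {i}: representation now has {x.size} values")
```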

For the study, the researchers first measured the brain's ability to recognize objects. Electrode arrays were implanted into the inferior temporal (IT) cortex and area V4, allowing the researchers to see which neuron populations were activated by the objects the animals were viewing.

The results were then compared with the representations produced by deep neural networks, with accuracy judged by how similar the two were. The researchers found that the most accurate network, whose performance was comparable to a macaque brain, was created by a team from New York University.
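The article does not spell out the exact comparison metric, but one common way to score how similarly a model and a brain represent objects is representational similarity analysis. The sketch below uses made-up stand-in data for the neural recordings and the network's features; only the comparison logic is the point.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Made-up stand-ins: responses of 100 recorded neurons and 500 model features
# to the same 50 objects. Real data would come from the electrode arrays and
# from one of the network's layers, respectively.
n_objects = 50
neural_responses = rng.standard_normal((n_objects, 100))
model_features = rng.standard_normal((n_objects, 500))

# Representational dissimilarity: pairwise distances between object responses.
neural_rdm = pdist(neural_responses, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# One common similarity score: rank correlation between the two dissimilarity
# patterns. A higher value means the model represents objects more like the brain.
score, _ = spearmanr(neural_rdm, model_rdm)
print(f"representational similarity (Spearman rho): {score:.3f}")
```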

Two factors are seen to have driven this success: first, significant computational power is now available, and second, researchers have access to massive datasets that can be used to "train" the networks.

DiCarlo's lab is now planning to create neural networks capable of other types of visual processing, such as recognizing objects in three dimensions and tracking motion. The researchers also hope to build models that incorporate the feedback projections typical of the human visual system. The current networks model only "feedforward" projections, which run from the retina to the IT cortex; beyond the IT cortex, the connections to the rest of the system grow roughly tenfold in number.
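To illustrate the distinction between feedforward and feedback processing mentioned above, here is a hypothetical sketch. The weights and the particular form of the feedback step are assumptions made for illustration; they are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(0, x)

# Illustrative weights: one "early" layer, one "late" (IT-like) layer, and a
# hypothetical feedback connection from the late layer back to the early one.
w_early = rng.standard_normal((64, 128)) * 0.1
w_late = rng.standard_normal((32, 64)) * 0.1
w_feedback = rng.standard_normal((64, 32)) * 0.1

x = rng.standard_normal(128)        # stand-in for retinal input

# Feedforward pass only: what the article says current models capture.
early = relu(w_early @ x)
late = relu(w_late @ early)

# One hypothetical feedback step: the late representation is sent back to
# modulate the early one, and the late representation is then recomputed.
early_refined = relu(w_early @ x + w_feedback @ late)
late_refined = relu(w_late @ early_refined)
print("late representation changed by feedback:",
      not np.allclose(late, late_refined))
```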

Published in the journal PLOS Computational Biology, the study received funding from the Defense Advanced Research Projects Agency, the National Eye Institute and the National Science Foundation. Other authors include Charles Cadieu, Ha Hong, Diego Ardila, Daniel Yamins, Nicolas Pinto, Ethan Solomon and Najib Majaj.
