The brain is a beautiful, complex thing. Georgia Institute of Technology researchers have created an algorithm that helps demonstrate how the brain processes large amounts of data simultaneously.
The research team found that the human brain is capable of categorizing data using just 0.15 percent of the original information. The brain can recognize objects despite wide variations in how they appear. For example, the brain easily identifies the letter "A" as the letter "A" regardless of variations in size, color and contour. The team hypothesized that one of the ways humans learn is through exposure to random variations of a single object.
Random Projection Test
For the experiment, the team prepared three groups of abstract images measuring 150 x 150 pixels. Next, they created very tiny "random sketches" of these images. Abstract images were used to ensure that neither the humans nor the artificial intelligence had prior knowledge of what the images depicted.
The human participants were presented with the "whole" abstract images for 10 seconds each. Next, they were shown 16 sketches of each whole image at random. The participants were then asked to identify the original images by viewing just a fraction of the same abstract images.
The team developed an algorithm based on the concept of random projection, wherein information is compressed into a much smaller representation. The technique sacrifices some accuracy in exchange for faster processing.
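The idea behind random projection can be sketched in a few lines of code. This is a minimal illustration assuming a standard Gaussian random matrix (the Johnson-Lindenstrauss construction), not the study's own algorithm, which used a localized variant of the technique:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(image, k, rng):
    """Compress a flattened image to a k-dimensional 'sketch'
    by multiplying it with a random Gaussian matrix."""
    d = image.size
    # Entries drawn from N(0, 1/k) approximately preserve
    # distances between images after projection.
    R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    return R @ image.reshape(-1)

# A 150 x 150 image has 22,500 pixels; 0.15 percent of that
# is roughly 34 numbers -- the size of the compressed sketch.
image = rng.random((150, 150))
sketch = random_projection(image, k=34, rng=rng)
print(sketch.shape)  # (34,)
```

Because the projection matrix is random rather than tuned to the data, the compression is cheap to compute, at the cost of some accuracy in the compressed representation.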
Using the algorithm, artificial intelligence successfully completed the same random projection tests given to the human participants. The findings suggest that artificial neural networks and human brains behave similarly. The two also found the same images difficult to process.
"The design of neural networks was inspired by how we think humans learn, but it's a weak inspiration. To find that it matches human performance is quite a surprise," said Santosh Vempala, Georgia Institute of Technology's distinguished computer science professor. Vempala was part of the research team that included Rosa Arriaga, David Rutter and Maya Cakmak from Georgia Tech's College of Computing.
University of California San Diego's engineering and computer science professor Sanjoy Dasgupta commented on the research by highlighting its introduction of a localized random projection. In this technique, images are compressed yet still allow both humans and artificial intelligence to identify broad categories. Dasgupta, who was not part of the study, is an expert on random projection and machine learning.
The research was published in the journal Neural Computation. While the initial findings are not sufficient to conclude that the human brain relies on random projection to process data, the team believes they provide a "plausible explanation" of how the mind works.
Photo: Allan Ajifo | Flickr