Researchers say they've developed an algorithm, based on how humans learn, that allows a computer to think more like us and recognize simple visual concepts.

Such a technique could dramatically reduce the time a computer requires to "learn" a new concept, they say.

Until now, computers have required large amounts of data, grinding through it tirelessly before "learning" to produce a desired outcome, the researchers point out.

Humans, on the other hand, can do the same thing with very little data, often requiring just a single visual example to recognize and learn an entire general class of objects, they explain.

In a new study appearing in the journal Science, researchers say they've made a small but important advance that brings computers closer to human-like learning and intelligence.

"For the first time we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners," says senior study author Joshua Tenenbaum, a professor at M.I.T.

While computers can recognize patterns, something central to human learning, they've typically needed hundreds or even thousands of examples to approximate what humans can do after seeing just a few, or even one.

"You show even a young child a horse or a school bus or a skateboard, and they get it from one example," Tenenbaum says.

The researchers say they set out to improve the computer learning process and make it more like the way humans gather and process new knowledge.

"It has been very difficult to build machines that require as little data as humans when learning a new concept," says Ruslan Salakhutdinov, a professor of computer science at the University of Toronto. "Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science."

The researchers turned to probabilistic modeling to create an algorithm that allows a computer to perform the seemingly simple task of recognizing, identifying, and copying handwritten characters from a number of the world's alphabets.

When shown a letter from the Tibetan alphabet, for example, the algorithm allowed the computer to identify other examples of the same character written in different hands, analyze the strokes needed to create the letter, and redraw it, the researchers report.
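To make the one-shot idea concrete, here is a minimal sketch in Python. It is not the model from the Science paper, just a toy illustration: each character is represented as a list of strokes, and a query drawing is assigned to whichever single training example explains it best under a simple Gaussian noise model. All function names, parameters, and the noise model itself are assumptions made for this example.

```python
# Toy sketch of one-shot classification by probabilistic stroke matching.
# This is NOT the authors' Bayesian program-learning model; it only
# illustrates the core idea of scoring a new drawing against a single
# training example per character.

import numpy as np

def resample(stroke, n=20):
    """Resample a stroke (list of (x, y) points) to n evenly spaced points."""
    pts = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    if t[-1] == 0:                                       # degenerate one-point stroke
        return np.repeat(pts[:1], n, axis=0)
    u = np.linspace(0, t[-1], n)
    x = np.interp(u, t, pts[:, 0])
    y = np.interp(u, t, pts[:, 1])
    return np.stack([x, y], axis=1)

def log_likelihood(example, candidate, sigma=0.05):
    """Log-probability of the candidate strokes under a Gaussian noise
    model centered on the single training example."""
    if len(example) != len(candidate):   # crude penalty for a stroke-count mismatch
        return -np.inf
    ll = 0.0
    for s_ex, s_cand in zip(example, candidate):
        diff = resample(s_ex) - resample(s_cand)
        ll += -0.5 * np.sum(diff ** 2) / sigma ** 2
    return ll

def classify(one_shot_examples, query):
    """Return the label whose single example best explains the query drawing."""
    return max(one_shot_examples,
               key=lambda label: log_likelihood(one_shot_examples[label], query))

# Demo: two "characters," each defined by exactly one example drawing.
examples = {
    "vertical_bar": [[(0.5, 0.0), (0.5, 1.0)]],
    "horizontal_bar": [[(0.0, 0.5), (1.0, 0.5)]],
}
query = [[(0.48, 0.05), (0.52, 0.95)]]   # a slightly wobbly vertical bar
print(classify(examples, query))          # -> "vertical_bar"
```

In this toy version, one example per character plays the role of the single "shot." The actual system described in the paper goes further, inferring the sequence of strokes that generated the example, which is what lets it redraw the character as well as recognize it.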

Although the system is still a far cry from true artificial intelligence, it is another step toward computers that mimic many abilities of human cognition, Tenenbaum suggests.

"We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts — even simple visual concepts such as handwritten characters — in ways that are hard to tell apart from humans," he notes.
