Columbia University researchers have developed a robot that can learn a model of its entire body from scratch, without human intervention. The researchers show how their robot constructed a kinematic model of itself and then used that model to plan motions, reach goals, avoid obstacles, and identify and compensate for damage to its body in a range of situations.

The body model of humans vs robots

Our body image influences how we operate in our environment. The human brain is constantly planning ahead so that we can move without bumping into things, stumbling, or falling. We begin building these body models as infants, and now robots are doing the same.

A Columbia Engineering team recently announced that it has built a robot that, for the first time, can learn a model of its entire body from scratch, entirely without human assistance.

ScienceDaily reported that the researchers describe how their self-aware robot generated a kinematic model of itself, then used that self-model to plan motions, reach goals, and avoid obstacles in a range of situations. It can even detect damage to its own body and compensate for it, much as humans do.

How robots became self-modeling

In a new article published in Science Robotics, the researchers placed a robotic arm inside a ring of five streaming video cameras. Through the cameras, the robot watched itself as it moved freely, squirming and twisting to learn how its body responded to different motor commands, much like an infant exploring itself for the first time in a hall of mirrors.

After roughly three hours, the robot came to a halt. Its internal deep neural network had finished learning the relationship between its motor actions and the volume it occupied in its environment.
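
The article describes the self-model as a learned link between motor commands and the volume the body occupies. One common way to realize such a model is as a learned occupancy function that answers a spatial query: given a joint configuration and a 3D point, is that point inside the robot's body? The sketch below is a minimal illustration of that idea only, not the authors' architecture; the network sizes, the toy two-link geometry used to generate labels, and the training loop are all assumptions for illustration.

```python
# Minimal sketch of a self-model as an implicit occupancy function:
# input = joint angles + a 3D query point, output = "is this point occupied?"
# All architecture choices and the synthetic data below are illustrative assumptions.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, n_joints=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # logit for occupancy at the query point
        )

    def forward(self, joint_angles, query_points):
        # joint_angles: (B, n_joints), query_points: (B, 3)
        return self.net(torch.cat([joint_angles, query_points], dim=-1))

def synthetic_batch(batch_size=512, n_joints=4):
    """Stand-in for camera-derived labels: a deliberately crude toy geometry
    where a query point counts as 'occupied' if it lies near the midpoint of
    a single 0.5-long link rotating in the x-y plane by the first joint angle."""
    q = (torch.rand(batch_size, n_joints) - 0.5) * 3.14   # random poses
    p = (torch.rand(batch_size, 3) - 0.5) * 2.0           # random query points
    link_end = 0.5 * torch.stack(
        [torch.cos(q[:, 0]), torch.sin(q[:, 0]), torch.zeros(batch_size)], dim=-1)
    occupied = ((p - 0.5 * link_end).norm(dim=-1) < 0.15).float().unsqueeze(-1)
    return q, p, occupied

model = SelfModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):          # the robot's "three hours", compressed to a toy loop
    q, p, y = synthetic_batch()
    loss = loss_fn(model(q, p), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, a planner could query such a model to check whether a candidate pose would overlap an obstacle occupying a known region of space, which is the kind of use the researchers report for motion planning and obstacle avoidance.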

"We were really curious to see how the robot imagined itself," said Hod Lipson, mechanical engineering professor and head of Columbia's Creative Machines Lab, where the research was conducted. The research is part of Lipson's decades-long effort to develop ways to give robots self-awareness.

The work also explores how robots can cope with their own wear and tear, and detect and compensate for damage. The authors argue that this capability is critical because we need autonomous systems to become increasingly self-sufficient.

 "Self-modeling is a primitive form of self-awareness," he added. Lipson believes that having a realistic self-model allows a robot, animal, or human to operate better in the world, make better judgments, and has an "evolutionary advantage."

Is it an advantage or a threat?

Still, Lipson and his team are aware of the limits, risks, and controversies that come with giving machines greater autonomy through self-awareness. Lipson readily admits that the kind of self-awareness demonstrated in this study is minimal compared to that of humans, but he argues that we have to start somewhere, and proceed cautiously and carefully, so we can enjoy the benefits of robotics while limiting the risks.

Written by Thea Felicity
