Following the success of drones in warfare and national security operations, plans are now underway to develop morally upright robots that may even outdo humans' decision-making abilities in life-and-death situations. The ambitious project will be funded by the U.S. Office of Naval Research, which has offered a $7.5 million grant to five universities that accepted the challenge.

Over the next five years, researchers from Tufts University, Brown University, Yale University, Georgetown University and Rensselaer Polytechnic Institute aim to come up with a robot that can reason like a human while still strictly adhering to its programmed rules of engagement.

"Even though today's unmanned systems are 'dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," said cognitive science program director of the Naval Research Paul Bello, in an interview with Defense One.

While some autonomous machines, such as Google's self-driving cars, are already legal in some states, the Office of Naval Research would still have to address the "ethical and legal implications" of robots possessing human-like reasoning, especially since human lives are at stake.

Under the U.S. Department of Defense directive on autonomy in weapon systems, all robots must be designed so that humans remain in control, whether the machines operate in autonomous or semi-autonomous mode. This is to ensure that, should communications with the robots fail, they would not "autonomously select and engage individual targets or specific target groups" that were not specified by the person in charge of them.

The same concern was raised by Human Rights Watch in its 2012 report, which said that while these autonomous killer robots' lack of emotion could be a big factor in the success of a mission, it could also make them more likely to kill innocent civilians.

"Robots cannot identify with humans, which means that they are unable to show compassion, a powerful check on the willingness to kill," the report said. "For example, a robot in a combat zone might shoot a child pointing a gun at it, which might be a lawful response but not necessarily the most ethical one."

The project's premise, however, is that equipping robots with a programmed sense of morality would put such fears of out-of-control machines to rest.

"Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved," said Steven Omohundro, an Artificial Intelligence researcher. "The military has always had to define 'the rules of war' and this technology is likely to increase the stakes for that."
