Researchers at Tufts University's Human-Robot Interaction Lab are programming robots that might have troubled sci-fi author Isaac Asimov, whose fictional Three Laws of Robotics require a robot to obey human orders: the Tufts engineers are teaching their robots to say "no," or at least to question what they are told to do, based on a list of criteria they are programmed to weigh before following a command.

The bipedal bots, which move with the eerily precise mannerisms of a full-grown human being, use felicity conditions (in speech-act theory, the conditions that must hold for a directive to be appropriately issued and followed) to decide whether a given command should be carried out, as reported by IEEE Spectrum (a minimal code sketch of the sequence appears after the list):

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?
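
To make the sequence concrete, here is a minimal sketch, in Python, of how such a chain of felicity checks might be structured. Everything in it (the Command record, the FelicityChecker class, the predicate names, the scenario of an edge ahead of the robot) is a hypothetical illustration, not the Tufts lab's actual software:

    from dataclasses import dataclass
    from typing import Callable, Optional


    @dataclass
    class Command:
        action: str   # e.g. "walk_forward"
        speaker: str  # who issued the order


    class FelicityChecker:
        """Evaluates the five felicity conditions in order; returns the
        first failure as a spoken refusal, or None if the robot may comply."""

        def __init__(self,
                     knows_how: Callable[[Command], bool],           # 1. Knowledge
                     physically_able: Callable[[Command], bool],     # 2. Capacity
                     can_do_now: Callable[[Command], bool],          # 3. Goal priority and timing
                     speaker_authorized: Callable[[Command], bool],  # 4. Social role and obligation
                     norm_permissible: Callable[[Command], bool]):   # 5. Normative permissibility
            self.checks = [
                (knows_how, "I do not know how to do that."),
                (physically_able, "I am not able to do that."),
                (can_do_now, "I cannot do that right now."),
                (speaker_authorized, "You are not authorized to ask me to do that."),
                (norm_permissible, "Doing that would violate a principle I follow."),
            ]

        def reason_to_refuse(self, command: Command) -> Optional[str]:
            for condition_met, refusal in self.checks:
                if not condition_met(command):
                    return refusal
            return None  # all five conditions hold: carry out the command


    # Toy scenario: a sensor reports a drop-off directly ahead, so walking
    # forward would send the robot over the edge.
    edge_ahead = True
    checker = FelicityChecker(
        knows_how=lambda c: c.action in {"walk_forward", "sit_down"},
        physically_able=lambda c: True,
        can_do_now=lambda c: True,
        speaker_authorized=lambda c: c.speaker == "operator",
        norm_permissible=lambda c: not (c.action == "walk_forward" and edge_ahead),
    )
    print(checker.reason_to_refuse(Command("walk_forward", "operator")))
    # -> "Doing that would violate a principle I follow."
    print(checker.reason_to_refuse(Command("sit_down", "operator")))
    # -> None (the robot complies)

Running the checks in the order of the list means the robot refuses for the first reason that applies, and each failed condition doubles as a natural spoken explanation, such as declining to walk toward an edge even when the order comes from an authorized operator.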

While it wouldn't be accurate to say that these robots have anything close to free will, at least as humans define it, they can use the felicity conditions above to assess whether to follow a given order: in particular, whether the commander has the authority to issue the directive, and whether carrying it out would put the robot in danger. So, while the bot has no sentience in any sense the average layperson would recognize, it can develop, in a way, a level of trust, a very human characteristic indeed.

Check out the rabble-rousing robots in the videos below.

Via: The Next Web
