Unlike conventional polygraph machines, new lie-detecting software being developed by researchers at the University of Michigan can detect deceit without ever touching the subject.

The software senses dishonesty by examining both the subject's words and gestures. It was trained on a set of 120 video clips taken from media coverage of actual high-stakes court trials.
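To give a rough sense of how such a system could work (a minimal sketch under assumptions, not the team's actual code), a classifier of this kind can be trained on per-clip features once each clip has been reduced to simple counts of words and gestures plus a truthful-or-deceptive label. The feature names, values and choice of model below are illustrative only:

    # Illustrative sketch only: train a classifier on hand-labeled clips.
    # The feature names and the random-forest model are assumptions,
    # not the researchers' published method.
    from sklearn.ensemble import RandomForestClassifier

    # One row per clip: [filler_count, certainty_word_count,
    #                    hand_movement_count, eye_contact_ratio, scowl_count]
    X = [
        [2, 5, 7, 0.8, 1],   # hypothetical deceptive clip
        [6, 1, 2, 0.5, 0],   # hypothetical truthful clip
        # ... in the study, 120 labeled clips were used ...
    ]
    y = [1, 0]  # 1 = deemed deceptive at trial, 0 = truthful

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    print(clf.predict([[3, 4, 6, 0.7, 1]]))  # predicted label for a new clip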

Some of the clips were obtained from The Innocence Project, an organization that works to exonerate people who have been wrongfully convicted, the researchers said. The clips contained testimony from witnesses and defendants, and in half of the videos the subjects were deemed to be lying.

After testing the prototype, the developers found it was 75 percent accurate at identifying deception, as defined by the outcome of the trial. When they compared the software's accuracy with that of human observers, human accuracy was just above 50 percent.

The team of scientists identified several behaviors that indicated when a person was lying.

Individuals who were lying tended to move their hands more, they said. They also tried to sound more certain and looked their questioners in the eye a bit more often than those presumed to be telling the truth.

Lying individuals were also seen scowling or grimacing during questioning. This behavior appeared in 30 percent of the deceptive videos versus 10 percent of the truthful ones.

Computer science and engineering professor Rada Mihalcea, who leads the project, said it is difficult to create a laboratory setting that motivates people to truly lie, because the stakes are not high enough.

"We can offer a reward if people can lie well — pay them to convince another person that something false is true. But in the real world there is true motivation to deceive," said Mihalcea.

During testing, the researchers compared the testimony of witnesses and defendants with the trial's verdict to determine which individuals were being deceptive.

They transcribed the audio, counting vocal segregates such as "uh," "ah" and "um," and assessed how frequently the subjects used certain words. They also counted the gestures in the clips using a standard coding scheme for interpersonal interactions, which rates nine different motions of the hands, eyes, head, mouth and brow.
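As a rough illustration of that kind of text processing (a sketch for this article, not the project's code), filler words and word frequencies can be counted from a clip transcript along these lines; the function name and output format are assumptions:

    # Illustrative sketch: count vocal segregates ("uh," "ah," "um") and
    # overall word frequencies in a transcript. Gesture counts would come
    # from a separate, manual coding step.
    from collections import Counter
    import re

    FILLERS = {"uh", "ah", "um"}

    def transcript_features(transcript):
        words = re.findall(r"[a-z']+", transcript.lower())
        counts = Counter(words)
        return {
            "filler_count": sum(counts[f] for f in FILLERS),
            "total_words": len(words),
            "word_frequencies": counts,
        }

    print(transcript_features("Um, I was, uh, at home the whole night, I swear."))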

Meanwhile, Mihalcea's co-researcher Mihai Burzo said the team will soon integrate physiological parameters such as body temperature fluctuations, respiration rate and heart rate into the software, all gathered through non-invasive thermal imaging.

The study was presented at the International Conference on Multimodal Interaction and was funded by the National Science Foundation, the Defense Advanced Research Projects Agency and the John Templeton Foundation.

