It's not always easy to interpret what people are saying. If you struggle with social cues, you might one day find the help you need in a wearable.

According to researchers from MIT's Computer Science and Artificial Intelligence Laboratory and Institute for Medical Engineering and Science, the wearable features an artificially intelligent system capable of predicting whether a conversation is happy, sad, or neutral based on an individual's vitals and speech patterns.

Tuka Alhanai and Mohammad Ghassemi detailed their research in a paper they will present at the Association for the Advancement of Artificial Intelligence's conference in San Francisco next week.

According to Alhanai, it might not be long before people can carry an AI social coach in their pockets, and the researchers believe their wearable is the first experiment to collect both physical and speech data in a passive yet robust manner, even while subjects are engaged in natural interactions.

"Our results show that it's possible to classify the emotional tone of conversations in real-time," said Ghassemi.

How A Wearable AI Social Coach Will Work

As the wearer tells a story, the wearable's system analyzes audio, physiological signals, and text transcriptions to determine the conversation's overall tone, which it did with an accuracy of 83 percent in the researchers' experiments.

Using deep-learning techniques, it can also assign a "sentiment score" to specific five-second intervals within the story. If more people in the conversation wear the device, the AI system's performance could improve further, since that would give the algorithms more data to analyze.
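To make the windowed scoring concrete, here is a minimal, purely illustrative sketch. The researchers used deep neural networks trained on audio, physiological, and text features; the logistic regression, random features, and toy labels below are stand-ins chosen only to show how per-interval scores could be produced and then averaged into an overall tone.

```python
# Illustrative sketch only: a simple classifier scores each five-second
# window of a conversation, and the window scores are averaged into an
# overall label. All data and features here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per five-second window, columns
# standing in for features such as pitch variance, speech energy,
# heart rate, and word count.
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, size=200)        # 0 = sad, 1 = happy (toy labels)

clf = LogisticRegression().fit(X_train, y_train)

# Score every five-second window of a new story ...
X_story = rng.normal(size=(36, 4))            # e.g. a three-minute story
window_scores = clf.predict_proba(X_story)[:, 1]   # "sentiment score" per window

# ... then aggregate the window scores into an overall conversation tone.
overall = "happy" if window_scores.mean() > 0.5 else "sad"
print(window_scores.round(2), overall)
```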

The researchers pointed out that the wearable was designed with privacy in mind: the algorithms run locally on the user's device. Should a consumer version be released, it would include clear protocols to ensure that anyone whose conversation is being analyzed has given consent.

Wearable Systems Research

Alhanai and Ghassemi tested the wearable AI system by capturing 31 conversations, each several minutes long, from subjects telling their own stories. They did this with help from a Samsung Simband, a device that gathers high-resolution physiological waveforms to assess features such as blood pressure, heart rate, blood flow, skin temperature, and movement. The system also captured audio and text data, allowing it to analyze the subject's vocabulary, energy, pitch, and tone.
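The sketch below shows the kind of summary features that can be derived from such waveforms; it is not Samsung's SDK or the authors' pipeline, and the signal names and sampling rates are assumptions made for illustration.

```python
# Assumed example: turning raw wearable signals into simple summary
# features (a "fidgeting" proxy from the accelerometer, and basic
# cardiovascular statistics from heart-rate samples).
import numpy as np

FS_ACC = 50  # assumed accelerometer sampling rate in Hz

def movement_features(acc_xyz):
    """Fidgeting proxy: variance of the accelerometer magnitude."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    return {"movement_var": float(np.var(mag))}

def cardio_features(hr_series):
    """Cardiovascular activity: mean and spread of heart-rate samples."""
    return {"hr_mean": float(np.mean(hr_series)),
            "hr_std": float(np.std(hr_series))}

# Toy signals standing in for a few minutes of Simband-style data.
rng = np.random.default_rng(1)
acc = rng.normal(size=(FS_ACC * 180, 3))       # 3 minutes of accelerometer data
hr = 70 + 5 * rng.normal(size=180)             # one heart-rate sample per second

features = {**movement_features(acc), **cardio_features(hr)}
print(features)
```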

The researchers then used two algorithms to analyze the data and found that the results largely aligned with what a human listener would perceive from a conversation. For example, monotonous tones and long pauses were associated with sadder stories, while varied, more energetic speech patterns pointed to happier ones. In terms of body language, sadder stories were strongly associated with frequent fidgeting, increased cardiovascular activity, and certain postures.
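As a hedged illustration of those audio cues, the snippet below computes two simple measures, pitch variation and the fraction of time spent silent, and applies a made-up rule of thumb. The thresholds and the rule are invented for the example and are not the paper's trained model.

```python
# Toy example of the audio cues described above: low pitch variation and
# long pauses lean "sadder", varied energetic speech leans "happier".
import numpy as np

def audio_cues(pitch_hz, energy, silence_thresh=0.01):
    """pitch_hz, energy: per-frame arrays from any pitch/energy tracker."""
    voiced = energy > silence_thresh
    pitch_var = np.var(pitch_hz[voiced]) if voiced.any() else 0.0
    pause_frac = 1.0 - voiced.mean()          # share of frames that are silent
    return pitch_var, pause_frac

def rough_tone(pitch_var, pause_frac):
    # Made-up thresholds, purely for illustration.
    return "sadder" if (pause_frac > 0.4 or pitch_var < 100.0) else "happier"

rng = np.random.default_rng(3)
pitch = 180 + 40 * rng.normal(size=1200)      # lively, varied pitch
energy = rng.uniform(0, 1, size=1200)         # mostly voiced frames
print(rough_tone(*audio_cues(pitch, energy)))
```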

The wearable AI system is a step in the right direction, but the researchers said it is not yet reliable enough to be deployed as a social coach. As a next step, they plan to refine the system's emotional granularity so it can identify exciting, tense, and boring moments rather than simply labeling interactions as positive or negative.
