Stanford Medicine researchers are developing a machine learning system that uses Google Glass to help children diagnosed with autism.
The Autism Glass Project uses software designed specifically for the Google wearable to help children with autism identify human emotions and recognize social cues.
Children with autism spectrum disorder often struggle to recognize even basic facial expressions, such as smiling or frowning. Difficulty reading the emotional language of the face and body can keep them from communicating effectively with other people and developing lasting relationships.
"Gaining these skills requires intensive behavioral interventions that are often expensive, difficult to access, and inconsistently administered," states the Stanford Autism Glass Project team.
Funded by Google and the David and Lucile Packard Foundation, the initiative is aimed at children under 10 years old living in the U.S. It will help them read the feelings signaled by specific facial expressions. Google Glass runs software that applies facial pattern recognition to analyze and label socio-emotional cues.
The project grew out of an emotion recognition app developed by Autism Glass Project founder Catalin Voss. The software allows anyone wearing Google Glass to instantly read people's feelings from their facial expressions.
The app was impressive enough that it was acquired by a Japanese company. Its face- and eye-tracking technology is now being deployed in cars to enhance safety features by detecting whether the driver is falling asleep or looking away from the road.
"What we're doing is giving children with autism superpowers," Voss stated.
The Autism Glass Project is now in its second phase after a successful 40-person pilot run. The second trial will test whether the technology is an effective at-home treatment for 100 children diagnosed with autism.
Stanford is still looking for participants to help the team with the project.