In a remarkable breakthrough, a woman named Ann, who had been left severely paralyzed by a brainstem stroke at the age of 30, has regained the ability to speak with the assistance of artificial intelligence (AI). 

Ann's ordeal began with a sudden, unexplained brainstem stroke that left her unable to control her muscles, including those needed for speech.

For the next five years, she lived with the fear of not waking up because she could not breathe on her own. After years of dedicated physical therapy, she regained some movement in her facial muscles, but the ability to speak remained out of reach.

Ann is now collaborating with researchers from UC San Francisco and UC Berkeley to pioneer a cutting-edge brain-computer interface that could revolutionize communication for people facing similar challenges.


New Tech Transforming Brain Signals

Using this technology, Ann's brain signals are transformed into natural speech and facial expressions. It marks the first time both speech and facial movements have been synthesized from brain signals.

The system can decode these signals into text at an impressive rate of nearly 80 words per minute, a substantial improvement over her current communication device's rate of 14 words per minute.

Edward Chang, MD, the chair of neurological surgery at UCSF and a driving force behind the development of this brain-computer interface (BCI), envisions an FDA-approved system that could one day enable speech from brain signals.

"Our goal is to restore a full, embodied way of communicating, which is the most natural way for us to talk with others... These advancements bring us much closer to making this a real solution for patients," said Chang, a member of the UCSF Weill Institute for Neurosciences and the Jeanne Robertson Distinguished Professor.

Ann's journey began in 2005 when she suffered a brainstem stroke, dramatically altering her life. As she slowly regained control over her muscles and learned to breathe independently, she discovered an inner drive to help others in similar situations.

This determination led her to participate in the current study, in which researchers implanted electrodes on the surface of her brain to intercept the signals that, before the stroke, would have controlled her speech muscles.

These electrodes were connected to computers via a cable attached to her head. Training the AI algorithms to recognize her speech-related brain signals was a collaborative process in which she repeated phrases drawn from a 1,024-word vocabulary.

This training allowed the system to recognize the distinct patterns of brain activity associated with different speech sounds. The key innovation is that the system decodes words from smaller phoneme components, the building blocks of spoken language, making decoding faster and more accurate.
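To make that idea concrete, here is a minimal sketch in Python of why phoneme-level decoding scales better than whole-word recognition: a small phoneme inventory can be recombined into any word in a large vocabulary. The lexicon, the decoded phoneme stream, and the greedy matcher below are hypothetical stand-ins for illustration, not the researchers' actual model.

```python
# Illustrative sketch only: a toy pronunciation lexicon and a greedy matcher
# standing in for the study's trained neural decoder.

# Hypothetical lexicon mapping words to phoneme sequences (ARPAbet-style).
LEXICON = {
    "hello": ("HH", "AH", "L", "OW"),
    "how":   ("HH", "AW"),
    "are":   ("AA", "R"),
    "you":   ("Y", "UW"),
}

def words_from_phonemes(phonemes):
    """Greedily match the longest phoneme prefix that spells a known word."""
    inverse = {seq: word for word, seq in LEXICON.items()}
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):  # try longest match first
            if tuple(phonemes[i:j]) in inverse:
                words.append(inverse[tuple(phonemes[i:j])])
                i = j
                break
        else:
            i += 1  # no word matched; skip this phoneme
    return words

# Suppose the neural decoder emitted this phoneme stream from brain activity:
decoded = ["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]
print(words_from_phonemes(decoded))  # -> ['hello', 'how', 'are', 'you']
```

The point of the sketch: with roughly 39 English phonemes, a decoder that classifies phonemes can cover a 1,024-word vocabulary, or a much larger one, without having to learn each word as a separate class.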


Wireless Version Soon

In addition, Ann's speech was synthesized by an algorithm tailored to replicate her pre-injury voice.

By combining AI-driven facial animation software with the brain signals she produced while attempting to speak, the researchers animated an avatar that mirrored Ann's facial expressions and mouth movements during conversations.
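As a rough illustration of that last step, the sketch below maps decoded phonemes to "visemes," the mouth shapes a facial-animation engine renders for each sound. The phoneme-to-viseme table and the animation loop are assumptions for illustration, not the study's software.

```python
# Illustrative sketch only: driving avatar mouth parameters from a decoded
# phoneme stream. Real systems use richer mappings and smooth interpolation.

# Hypothetical phoneme-to-viseme table (articulation parameters in 0..1).
VISEMES = {
    "HH": {"jaw_open": 0.2, "lip_round": 0.0},
    "AH": {"jaw_open": 0.8, "lip_round": 0.1},
    "L":  {"jaw_open": 0.3, "lip_round": 0.0},
    "OW": {"jaw_open": 0.5, "lip_round": 0.9},
}
NEUTRAL = {"jaw_open": 0.0, "lip_round": 0.0}

def animate(phonemes, frames_per_phoneme=3):
    """Expand a phoneme stream into per-frame mouth parameters."""
    frames = []
    for p in phonemes:
        # Hold each viseme for a few frames; unknown phonemes fall back
        # to a neutral mouth shape.
        frames.extend([VISEMES.get(p, NEUTRAL)] * frames_per_phoneme)
    return frames

for frame in animate(["HH", "AH", "L", "OW"], frames_per_phoneme=1):
    print(frame)
```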

While the technology holds immense promise, the researchers' next aim is a wireless version that would eliminate the need for physical connections.

According to co-first author David Moses, PhD, an adjunct professor in neurological surgery, this advancement could grant individuals like Ann greater independence and enhance their social interactions. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.