Exciting breakthroughs in AI are happening at an unprecedented rate, and a team of researchers at the University of Texas at Austin has just added another one to the list!

According to a recent study, they have developed a new artificial intelligence system that can read people's thoughts and convert them into a continuous stream of text.

AI Brain (Photo: Gerd Altmann/Pixabay)

The Semantic Decoder

Dubbed the semantic decoder, the AI system can translate an individual's brain activity into text without requiring surgical implants or prescribed word lists. Instead, it uses an fMRI scanner to measure brain activity once the decoder has undergone extensive training.

To train it, participants listened to several hours of podcasts while being scanned; afterward, the machine could produce text based solely on their brain activity.

Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, led the study.

The researchers built the AI system on a transformer model comparable to the ones that power OpenAI's ChatGPT and Google's Bard.

What sets the semantic decoder apart from other language-decoding systems is that it can decode continuous language, including complicated ideas, for extended periods.

"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," Huth remarked.

The Decoding Process

The AI system does not provide an exact transcript of the original words spoken or thought. Rather, it is designed to capture the essence of the message, although it is not perfect.

Despite this imperfection, the machine-generated text can closely match the intended meaning of the original words about half of the time, according to the team.

During the experiments, a participant who listened to someone saying, "I don't have my driver's license yet," had their thoughts translated as, "She has not even started to learn to drive yet." 

Similarly, when someone said, "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'" their thoughts were decoded as, "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"
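
How might "matching the intended meaning" be quantified when the exact words differ, as in the examples above? As a purely illustrative stand-in for the study's actual evaluation metrics, the sketch below scores two sentences by the overlap of their character n-grams, which rewards shared word stems such as "driv-" in "driver's" and "drive".

```python
def char_ngrams(text, n=4):
    """Lowercase the text, strip punctuation, and collect character n-grams."""
    cleaned = "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ")
    return {cleaned[i:i + n] for i in range(len(cleaned) - n + 1)}

def gist_similarity(reference, decoded, n=4):
    """Jaccard overlap of character n-grams: a crude proxy for semantic
    similarity, not the metric used in the study."""
    a, b = char_ngrams(reference, n), char_ngrams(decoded, n)
    return len(a & b) / len(a | b) if a | b else 0.0

reference = "I don't have my driver's license yet"
decoded = "She has not even started to learn to drive yet"
print(f"gist similarity: {gist_similarity(reference, decoded):.2f}")
```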
 
As with any technological breakthrough, there are concerns about the possibility of misuse. To address them, the researchers confirmed that the decoding process worked only with cooperative participants who had willingly taken part in training the decoder.

For individuals on whom the decoder had not been trained, the results were unintelligible; likewise, the results became unusable when previously trained participants put up resistance, for example by thinking about something else.

The system's reliance on fMRI machines makes it impractical for use outside the laboratory. However, the researchers believe that their work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

According to Huth, fNIRS measures the changes in blood flow in the brain over time, which is the same kind of signal that fMRI measures. He also mentioned that their approach should work for fNIRS, although the resolution would be lower. 
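
To picture the trade-off Huth describes: both techniques track blood-flow signals, but fNIRS offers coarser spatial resolution than fMRI. The toy snippet below, with made-up dimensions, mimics that loss of resolution by averaging blocks of simulated voxels.

```python
import numpy as np

rng = np.random.default_rng(0)
fmri_like = rng.standard_normal((64, 200))  # 64 simulated voxels x 200 time points

# Average blocks of 8 neighboring voxels to mimic fNIRS's coarser
# spatial resolution; the underlying blood-flow signal is the same.
fnirs_like = fmri_like.reshape(8, 8, 200).mean(axis=1)

print(fmri_like.shape, "->", fnirs_like.shape)  # (64, 200) -> (8, 200)
```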

This breakthrough in AI has massive implications for people who are unable to speak, and it will undoubtedly be exciting to see how the technology develops in the coming years.

The study was published in Nature Neuroscience. 
