In an exciting leap at the intersection of neuroscience and artificial intelligence (AI), researchers at Google and Osaka University reported achieving something extraordinary: the ability to translate human brain activity into music.

Does this mean we could one day compose songs directly from our thoughts?

(Photo: INA FASSBENDER/AFP via Getty Images - A woman dances while listening to music through headphones at a 'Headphone Disco' event on the Kennedyplatz plaza in Essen, western Germany, July 9, 2021.)

AI Model Brain2Music: From Thoughts to Music

Dubbed "Brain2Music," Science X Network reported that this cutting-edge AI model has the power to convert thoughts and brainwaves to reproduce music.

To accomplish this feat, the researchers played music samples spanning 10 genres, including rock, classical, metal, hip-hop, pop, and jazz, for five subjects while monitoring their brain activity with functional MRI (fMRI).

Unlike standard MRI scans, which capture static images, fMRI records metabolic activity over time, offering a window into how the brain functions.

These fMRI readings were then used to train a deep neural network that identified patterns of brain activity associated with various characteristics of music, such as genre, mood, and instrumentation.

Additionally, the researchers integrated Google's MusicLM model into their study. MusicLM generates music from text descriptions, incorporating factors such as instrumentation, rhythm, and emotion.

Combining MusicLM with the fMRI readings, the resulting AI model, named Brain2Music, reconstructed the music the subjects had heard. Instead of text prompts, it used the subjects' brain activity to provide context for the musical output.
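For readers curious how such a pipeline fits together, below is a minimal, hypothetical sketch in Python. It is not the authors' code: it substitutes synthetic data for real fMRI recordings and a simple ridge regression for the deep network, and it stops at predicting a music embedding, since MusicLM offers no public API to turn that embedding into audio.

# Conceptual sketch only: map fMRI responses into a music-embedding space,
# then (in the real system) let a generative model such as MusicLM turn
# the predicted embedding into audio. All data below is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(seed=0)

# Stand-ins: 500 music clips, 2,000 fMRI voxels, 128-dimensional music embeddings.
n_clips, n_voxels, embed_dim = 500, 2000, 128
fmri_responses = rng.normal(size=(n_clips, n_voxels))     # brain activity per clip
music_embeddings = rng.normal(size=(n_clips, embed_dim))  # embeddings of the clips heard

# Regularized linear regression is a common choice for fMRI decoding.
decoder = Ridge(alpha=1.0)
decoder.fit(fmri_responses[:400], music_embeddings[:400])

# Predict the music embedding for a held-out brain recording.
predicted_embedding = decoder.predict(fmri_responses[400:401])

# In the real pipeline this embedding would condition the music generator;
# here we simply report its shape.
print("Predicted music embedding shape:", predicted_embedding.shape)

The key idea the sketch illustrates is that decoding is framed as regression from voxel responses into an embedding space a music generator already understands.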


Original Music Stimulus

According to Timo Denk, one of the paper's authors and a researcher at Google, their "evaluation indicates that the reconstructed music semantically resembles the original music stimulus."

The AI-generated music closely resembled the original samples in genre, instrumentation, and mood. The researchers also identified specific brain regions that reflect the kind of information found in text descriptions of music.

Examples shared by the team revealed excerpts reconstructed by Brain2Music from the subjects' brain activity that were strikingly similar to the originals. Notably, the model reconstructed segments of Britney Spears' hit song "Oops!... I Did It Again," capturing the essence of its instruments and beat with precision, although the lyrics were unintelligible.

The potential applications of Brain2Music are vast and intriguing. As AI technology continues to advance, it could revolutionize music creation, enabling a composer to simply imagine a melody while a device wirelessly reading activity from the auditory cortex prints out the sheet music automatically.

The research team's findings were published on the preprint server arXiv.

