Google may have struggled to compete with OpenAI's ChatGPT after the launch of its Bard AI didn't quite go as planned. However, the company hasn't given up on artificial intelligence and is venturing into new areas like generating sound from brain activity.

Google Can Use AI to Create Music from Brain Activity With Its MusicLM Tool

According to the story by ZDNet, it's important to clarify that Google isn't using AI to compose music out of thin air; instead, it takes a music-from-text approach. This comes with its tool MusicLM, which generates audio from written descriptions.

Google is stepping up its efforts to use AI to read a user's brain and produce sound based on that brain activity. In a new research paper titled Brain2Music, Google reportedly used AI to reconstruct music from recordings of brain activity.

The AI Tool Looks at a User's Functional Magnetic Resonance Imaging Data to Come Up With New Music

The company was able to do this by analyzing data from functional magnetic resonance imaging (fMRI). The fMRI data of five test subjects was studied as all of them listened to the same music.

The participants listened to 15-second music clips from different genres like hip-hop, blues, country, classical, metal, jazz, pop, rock, and reggae. The data was then used to train Google's deep neural network.
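The paper describes this setup only at a high level. As a minimal sketch, the training data might be organized as pairs of a music clip and the fMRI response it evoked, along the lines of the Python snippet below; the field names and placeholder values are illustrative assumptions, not details taken from the Brain2Music paper.

```python
# Hypothetical layout for the (music clip, fMRI response) training pairs
# described above. Field names and values are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingExample:
    subject_id: int             # one of the five test subjects
    genre: str                  # e.g. "blues", "classical", "jazz"
    clip_seconds: float         # each stimulus is a 15-second music clip
    fmri_response: List[float]  # flattened voxel activations for that clip

dataset: List[TrainingExample] = [
    TrainingExample(subject_id=1, genre="jazz", clip_seconds=15.0,
                    fmri_response=[0.12, -0.03, 0.58]),  # placeholder voxels
    TrainingExample(subject_id=2, genre="rock", clip_seconds=15.0,
                    fmri_response=[-0.21, 0.44, 0.07]),
]

subjects = {example.subject_id for example in dataset}
print(f"{len(dataset)} examples from {len(subjects)} subjects")
```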

Deep Neural Network Learns More About the Relationship Between Music and Brain Activity

The deep neural network then tried to learn the relationship between elements of the music, such as rhythm and emotion, and the corresponding patterns in brain activity.

After training was finished, the model was reportedly capable of reconstructing music from an fMRI scan. This was done using MusicLM, which is known to generate music from text.
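At a high level, that pipeline has two stages: predict a music embedding from the fMRI data, then let MusicLM turn the embedding into audio. As a rough sketch of the first stage, a regularized linear regression from flattened voxel data to a fixed-size embedding could look like the following; the array shapes, the 128-dimensional embedding, and the choice of ridge regression are assumptions for illustration, not confirmed details of Google's pipeline.

```python
# Hedged sketch of the fMRI-to-embedding stage: map flattened voxel
# activity to a fixed-size music embedding. The shapes, embedding size,
# and the use of ridge regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_clips, n_voxels, emb_dim = 500, 2000, 128  # real fMRI has far more voxels

# Placeholder data: in practice X holds measured fMRI responses and
# Y holds embeddings computed from the 15-second music clips.
rng = np.random.default_rng(0)
X = rng.standard_normal((n_clips, n_voxels))
Y = rng.standard_normal((n_clips, emb_dim))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

# Fit one regularized linear map from brain activity to embedding space.
model = Ridge(alpha=1.0)
model.fit(X_train, Y_train)

# Predicted embeddings for held-out clips would then condition the
# music generator in place of a text prompt.
pred_embeddings = model.predict(X_test)
print(pred_embeddings.shape)  # (100, 128)
```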

MusicLM Is Used to Generate Music Similar to the Original Music Stimuli

Since MusicLM's main purpose is to generate music from text, it was instead conditioned to produce music similar to the original music stimuli at a semantic level. In testing, the generated music came close to the stimuli that the participants initially listened to.
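One simple way to picture that "semantic level" comparison is to score how close a reconstruction's embedding sits to the embedding of the original stimulus. The cosine-similarity check below is a generic illustration of the idea, not the exact metric reported in the paper, and the embeddings are random placeholders.

```python
# Generic illustration of a semantic comparison: cosine similarity
# between the original clip's embedding and a reconstruction's.
# The metric and the placeholder embeddings are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
original = rng.standard_normal(128)        # embedding of the stimulus
reconstruction = rng.standard_normal(128)  # embedding of the AI output

print(f"semantic similarity: {cosine_similarity(original, reconstruction):.3f}")
```

A score near 1.0 would indicate a reconstruction that is semantically close to the original clip.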

The music resembled the originals in genre, mood, instrumentation, and more. The Google research page also revealed more about the project, allowing users to listen to clips of the original music stimuli for comparison.


MusicLM Generated Reconstructions Similar to the Original Sound Clips Used

These could be compared to the MusicLM-generated reconstructions, with interesting results. One example: a 15-second sound clip of Britney Spears' "...Baby One More Time" yielded three AI-generated reconstructions that sounded pop-like and upbeat, close to the original.

The reconstructed audio reportedly didn't reproduce the original song's lyrics because the study focused on musical elements rather than the lyrical component.

