The mysterious Neuralink brain chip has been making headlines recently as its CEO, the very famous Elon Musk, reveals more information about the 'secret' project. One of the most interesting features we have heard so far is that it could be used to play music directly in your brain! How could that be possible?

On YouTube, we spotted a really interesting video that discusses what it might look like and, more importantly, what it might sound like.

The roughly eight-minute video was uploaded by Joshua Valor and has generated over 4,000 views as of this writing. His channel has almost 60,000 subscribers to date.

Mr. Valor noted that while he thinks the 'super hearing' chip isn't something we're going to see in the first generation, it is something that could work its way into Neuralink, or that a competitor of Neuralink could offer, in the future.

Here are some points raised by the YouTuber:

Frequency Response Versus Illusion-Based Sound

One of the strange things about [the super hearing chip] is that it doesn't follow the current rules of audio. There are two sides to the audio you actually hear. There's the more factual side of things, which covers qualities like frequency response: how much output a speaker or a headphone produces at a particular frequency is something you can measure.
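To make "measurable" concrete, here is a minimal sketch of the math behind a frequency-response plot. It assumes NumPy and an already-measured impulse response; neither comes from the video, and a real measurement rig is considerably more involved.

```python
import numpy as np

SAMPLE_RATE = 48000  # samples per second

def magnitude_response(impulse_response: np.ndarray):
    """Return (frequencies_hz, magnitude_db) for a measured impulse response.

    In practice the impulse response would come from measuring a speaker or
    headphone with a test signal; here we only illustrate the math.
    """
    spectrum = np.fft.rfft(impulse_response)
    freqs = np.fft.rfftfreq(len(impulse_response), d=1.0 / SAMPLE_RATE)
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # avoid log(0)
    return freqs, magnitude_db

# Example: a perfect impulse has a perfectly flat frequency response.
ideal = np.zeros(1024)
ideal[0] = 1.0
freqs, mags = magnitude_response(ideal)
```

A perfect output stage would plot as a flat line; real speakers and headphones never do.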

Other things are more illusion-based, such as soundstage and imaging. Those are tricks of perception: much of the time they work by fooling your brain into thinking that a sound is coming from a place where it is not.

"See if you were in front of me talking to me, I would have an actual center image, and you'd be able to hear my voice coming from the center. With speakers and headphones, what they do in the most basic sense to trick your brain into thinking there's something coming from the center is they play the same signal out of the right and left, and that kind of combines down the center. But this is an illusion; it's not an actual thing."

Things that are not frequency response, the more illusion-based qualities like soundstage and depth, require a trick of the actual hardware to happen. Usually, the better the hardware, the better the illusion. Neuralink throws a wrench into this because it doesn't work on the same principles.
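As an illustration of the trick Mr. Valor describes, here is a minimal sketch of the "phantom center" in code. It assumes NumPy and a simple constant-power pan law; this is our own illustration, not anything from Neuralink or the video.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def constant_power_pan(mono_signal: np.ndarray, pan: float) -> np.ndarray:
    """Pan a mono signal between left (-1.0) and right (+1.0).

    A pan of 0.0 sends the identical signal to both channels, which the
    brain fuses into a 'phantom center' image between the two speakers.
    """
    angle = (pan + 1.0) * np.pi / 4.0       # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono_signal      # equal power, not equal amplitude
    right = np.sin(angle) * mono_signal
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

# Example: a 440 Hz tone 'placed' dead center purely by the panning trick.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = constant_power_pan(tone, pan=0.0)
```

Feeding the same tone to both channels at pan 0.0 is the entire trick: there is no center speaker, yet the brain places the sound in the middle.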

"Now I think what will happen most likely is, in the early stages of this we'll see just mono sound in our heads - like we'll just hear something that doesn't have right and left orientation Tt may not even be that clear I'm not sure exactly what it's going to look like, but I'm assuming that the early stages like with most things are going to be pretty rough."

But when it gets more advanced, things get potentially more interesting, because the biggest source of distortion right now is the speakers and the headphones - basically the final output stage of your audio chain. That is your single greatest point of distortion, and it has been that way for many years.

When that point of distortion goes away, it will be interesting to see, for one, what music sounds like from a factual, frequency-response perspective. But it is also going to be crazy to see the potential benefits on the illusion side. Playback in your head would no longer require a trick of the speakers, so your brain could process an audio signal as if it were coming from 25 feet away at exactly 37 degrees. That's pretty crazy, and it potentially gets even more insane.
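For a rough idea of what placing a sound "25 feet away at 37 degrees" involves even in today's software, here is a minimal sketch of binaural placement. It assumes NumPy, the classic Woodworth approximation for interaural time difference, and a deliberately crude level and distance model; these are all our own simplifications, not anything Neuralink has described.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # meters per second
HEAD_RADIUS = 0.0875     # meters, a common average for a human head
SAMPLE_RATE = 44100

def spatialize(mono: np.ndarray, azimuth_deg: float, distance_m: float) -> np.ndarray:
    """Crudely place a mono source at a given azimuth and distance.

    Uses two classic cues that physical speakers can only fake:
      * interaural time difference (ITD), via the Woodworth approximation
      * interaural level difference plus simple 1/distance attenuation
    """
    az = np.radians(abs(azimuth_deg))
    # Woodworth ITD approximation: (r / c) * (theta + sin(theta)).
    itd_seconds = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    shift = int(round(itd_seconds * SAMPLE_RATE))

    gain = 1.0 / max(distance_m, 1.0)                 # distance attenuation
    near_ear = gain * mono                            # ear facing the source
    far_ear = gain * 0.7 * np.concatenate(
        [np.zeros(shift), mono])[: len(mono)]         # delayed and quieter

    # Positive azimuth means the source is to the right, so the right ear is near.
    left, right = (far_ear, near_ear) if azimuth_deg >= 0 else (near_ear, far_ear)
    return np.stack([left, right], axis=1)            # shape: (samples, 2)

# Example: a 440 Hz tone roughly 25 feet (~7.6 m) away at 37 degrees to the right.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = spatialize(tone, azimuth_deg=37.0, distance_m=7.6)
```

Real spatial audio uses full head-related transfer functions rather than just these two cues, and it still has to be played back through physical drivers; the point of the video is that a brain interface could skip that last step entirely.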

Distortion

So let's say, from a frequency perspective, a few generations down the line there is no distortion: "I don't know how you would measure it in your brain, but let's just say, for the sake of example, that there is no distortion in terms of things like frequency response. But that's where imaging and soundstage could potentially really shake up how things sound."

The crazy thing about this is that it isn't just standard imaging and soundstage, which usually happen on a single plane, sometimes with a sense of scale or height added. It could trick your brain into thinking that sounds are coming from dead center, or from any angle you can think of.

And it isn't just the typical gamut of left-right placement and front-to-back depth. It could also include height and scale: you could hear something far below your feet or far above your head.

But let's just say frequency response were perfect: this could potentially make your music a real-life sound experience, assuming the recordings can keep up. That's a whole other side of things, because if you can get the delivery system perfect, or as close to perfect as it can be, the problem after that becomes the technology for actually capturing the audio.

Impact on Audio Companies

Now, unluckily for audio companies, this could be the death of many of them. Luckily for them, though, it's probably going to be a very long time before the technology reaches a level of advancement where it sounds better than a really good pair of headphones or a really good pair of speakers.

"Now I may be wrong on this and one thing that is completely possible."

With this is that the system itself is going to be so advanced and is going to play by such different rules than we're used to. That processing audio in a way that sounds amazing is just going to be completely elementary for it; it's going to be so simple. That it's just able to do it without breaking a sweat. That is possible, and that could have happened early on. We're really in this guessing game of how clear and how good things are going to be at what stage.

"I think it's fair to say that they'll get good, but it's just a matter of what kind of time scale are we looking at there and what is it going to look like in the early stages and when do those early stages really begin."

"So I think the process that they're going to go through is they're probably going to try and acquire government funding, which means that they're probably not going to be introducing things like audio that are really not all that important in the grand scheme of things. "

They're probably going to focus, to begin with, on fixing medical issues or on connecting people to the internet in a way that is specifically beneficial to the government. That way, they can bring in more revenue to advance the technology further. You could think of it the same way as Boeing or Formula One: top-end technology gets built because there is funding to do it, and eventually it trickles down to the consumer end, where everybody benefits.

The Outlook

So, to wrap this up: theoretically, if everything works properly (in a few generations, probably not the first one), this could make for the closest-to-life listening experience we have. Think of the difference in positional realism, in terms of hearing things behind you, between something like a good 7.1 surround sound system and a two-channel system.

Two-channel systems can sound great, but they're not going to be able to throw a sound behind you the way an actual physical speaker can. Now imagine something that doesn't have the physical limitations of speakers at all.

In fact, one of the fundamental problems with speakers, and something they've been very good at overcoming but not good enough, is that they're trying to make sound produced by a physical driver appear to come from precise points in space. They want a sound to seem like it's coming from one exact spot rather than from the speaker itself, and speakers that are good at this cost thousands and thousands of dollars and still pale in comparison to actually hearing the real thing.

A real-life source, like an actual guitar, never sounds like a high-end speaker.

"There's a fundamental difference mostly I imagine having to do with the exacting precision of location, something that speakers are good at but not real-life good."

Neuralink doesn't play by those rules. Your ears are just a capture device; your brain does the actual processing of the information. So if you can bypass the ear system and go straight to the processing, if you can send a correct enough signal to the processing unit, it's going to be like real life, and that's going to be insane.

Watch the full video here:
