Researchers have brought new insight into how the brain understands speech by discovering that the superior temporal sulcus (STS), a long groove in the temporal lobe of the brain, is responsible for processing speech rhythms.

The brain's ability to process the timing of speech is crucial to understanding language. Phonemes, the shortest and most basic speech units, last anywhere from 30 to 60 milliseconds, while whole syllables last longer, from 200 to 300 milliseconds. Whole words, of course, last longer still. To understand language, the brain first has to process every rapidly changing piece of information it gathers from the speech sounds it hears.

Hypothesizing that the brain breaks speech sounds into small chunks to understand the whole, researchers from Duke University and the Massachusetts Institute of Technology sliced recordings of speech into chunks lasting anywhere from 30 to 960 milliseconds and patched them together into new arrangements they call speech quilts. Quilts made from shorter chunks heavily disrupted the natural structure of speech, while quilts made from longer chunks left it largely intact and easily recognizable by the brain.
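To make the manipulation concrete, here is a minimal Python sketch of the basic chunk-and-reorder idea. It is not the researchers' actual stimulus pipeline (the published quilts were stitched together more carefully to smooth the seams between segments), and the function name, sample rate, and purely random reordering here are illustrative assumptions.

```python
import numpy as np

def make_speech_quilt(signal, sample_rate, chunk_ms, seed=0):
    """Slice a 1-D audio signal into equal-length chunks and reorder them at random.

    A simplified stand-in for the study's 'speech quilt' stimuli: local
    acoustic detail within each chunk is preserved, while temporal structure
    longer than one chunk is scrambled.
    """
    chunk_len = int(sample_rate * chunk_ms / 1000)   # samples per chunk
    n_chunks = len(signal) // chunk_len              # drop any trailing remainder
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_chunks)                # the new, shuffled arrangement
    chunks = [signal[i * chunk_len:(i + 1) * chunk_len] for i in order]
    return np.concatenate(chunks)

# Example: quilt a mock 10-second recording at 16 kHz with short vs. long chunks.
sr = 16000
speech = np.random.randn(10 * sr)                    # stand-in for a real recording
short_quilt = make_speech_quilt(speech, sr, chunk_ms=30)   # heavily scrambled
long_quilt = make_speech_quilt(speech, sr, chunk_ms=960)   # structure mostly intact
```

With 30-millisecond chunks, almost nothing longer than a single phoneme survives intact; at 960 milliseconds, whole syllables and words pass through unbroken, which is the contrast the researchers exploited.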

As they played their speech quilts to subjects lying in a functional magnetic resonance imaging (fMRI) scanner, the researchers found that the STS became highly active in response to quilts made from chunks of 480 milliseconds or longer, compared with its response to quilts made from shorter chunks.

To ensure that their findings were specific to speech, the researchers also created quilts from different lengths of non-speech sounds, including sounds that mimicked the frequency of speech but not its timing, speech with the pitch removed, and environmental sounds. When exposed to these control sounds, the STS did not respond as it did to the speech quilts.

"We really went to great lengths to be certain that the effect we were seeing in STS was due to speech-specific processing and not due to some other explanation, for example, pitch in the sound or it being a natural sound as opposed to some computer-generated sound," says co-author Tobias Overath, a psychology and neuroscience professor at Duke University.

In a related study, researchers headed by David Poeppel of New York University's Department of Psychology and Center for Neural Science found that the STS is dedicated exclusively to processing speech sounds. Playing the same speech quilts and other sounds to subjects, the researchers discovered that while all sound-processing areas of the temporal lobe lit up during listening, only the STS became active specifically in response to the chopped-up and reordered speech sounds.

The speech quilts were made from German words, and reordering the chunks of sound ensured that subjects were responding to the acoustic cues rather than trying to guess the language, which would have activated other parts of the brain responsible for processing language.

"We now know there is at least one part of the brain that specializes in the processing of speech and doesn't have a role in handling other sounds," says Poeppel.

Both studies are published in the journal Nature Neuroscience.

