Researchers from Duke University and MIT have discovered the part of the brain that is responsible for recognizing human speech.
Until recently, the question remained shrouded in mystery. Scientists already knew that certain parts of the brain respond to particular types of sound, yet the region that is sensitive to the timing of speech, and that therefore plays a crucial role in human language, had not been pinpointed.
One of the particularities of the human brain is its efficiency in perceiving, producing, and processing finely tuned rhythmic information. This holds particularly true for music and speech. Any type of musical sound triggers a reaction in the auditory cortex of the temporal lobe.
The findings of the Duke University and MIT team, published in the journal Nature Neuroscience, identify the part of the brain that is triggered when rhythmic speech is heard: the superior temporal sulcus, or STS, located in the temporal lobe. The findings help settle a long-running debate over whether specific neurological functions are tied to specific brain regions.
According to the researchers, the auditory system differs from other sensory systems. In order to swiftly process the massive inflow of information, it has to “cut corners.” It does so, they explain, by sampling chunks of sound roughly the length of an average consonant or syllable.
Rhythm and timing are part and parcel of human speech. For one person to understand what another is saying, the brain needs to interpret several different timescales at once.
The experiment conducted by the Duke University and MIT team solved a riddle that had haunted neuroscience for decades: yes, there are brain regions dedicated to recognizing speech as distinct from music or animal noises.
One thing the research team had to do was rule out other possible causes of superior temporal sulcus activation. They therefore tested control sounds that mimicked speech but were not speech, differing from it in frequency, pitch, or rhythm.
Speech unfolds on three timescales. Phonemes are the shortest, lasting between 30 and 60 milliseconds. Syllables come in the middle, lasting between 200 and 300 milliseconds. Words span the longest stretches.
The team therefore edited recordings of speech in foreign languages into segments ranging from 30 to 960 milliseconds, then reassembled the segments into what they called “speech quilts.”
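To make the idea of a speech quilt concrete, here is a minimal sketch of how such a stimulus could be assembled from a recording: the audio is cut into equal-length segments, and the segments are stitched back together in random order. The function name, the NumPy-based approach, and the example values are illustrative assumptions, not the researchers' actual procedure.

```python
import numpy as np

def make_speech_quilt(waveform, sample_rate, segment_ms, rng=None):
    """Chop a mono waveform into equal-length segments and reassemble
    them in random order -- a rough sketch of a 'speech quilt'."""
    rng = rng if rng is not None else np.random.default_rng()
    seg_len = int(sample_rate * segment_ms / 1000)    # samples per segment
    n_segs = len(waveform) // seg_len                 # drop any trailing remainder
    segments = waveform[:n_segs * seg_len].reshape(n_segs, seg_len)
    return segments[rng.permutation(n_segs)].ravel()  # shuffled segments, end to end

# Hypothetical usage with a 44.1 kHz recording loaded as a 1-D NumPy array:
# quilt_30  = make_speech_quilt(audio, 44100, 30)     # 30 ms segments
# quilt_960 = make_speech_quilt(audio, 44100, 960)    # 960 ms segments
```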
Tobias Overath, the study's lead author, explains that the team had to go to great lengths to ensure that the STS activity they were observing had in fact been caused by speech processing.
The speech quilts were played to participants while the researchers measured their neural responses via functional magnetic resonance imaging (fMRI). The data revealed that the STS became highly active during the 480- and 960-millisecond quilts; the 30-millisecond quilts did not have the same effect.
The researchers also designed a control batch of quilts, made from other sounds such as animal calls, to make sure the STS did not respond to them in the same way. As expected, it did not, indicating that the STS is specialized for detecting spoken language.