Science & Technology

Sounds are better indicators of emotions than words

Researchers have discovered two separate pathways through which the brain processes emotions conveyed in speech. The work, led by Dr. Marc Pell, associate dean and director of McGill University’s School of Communication Sciences and Disorders, is the first of its kind to directly compare speech-embedded emotions with vocalizations. It was recently published in the journal Biological Psychology.

Pell has been studying the human voice for more than 20 years and is a world leader in the field of social neuroscience. To study these pathways, his team used event-related brain potentials (ERPs), measures of brain activity derived from electroencephalography (EEG), to record responses to two different types of vocal cues.
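For readers curious about the method, the following is a minimal sketch of how ERPs are typically computed from raw EEG: the continuous recording is filtered, cut into short epochs time-locked to each stimulus, and averaged across trials. It uses the open-source MNE-Python library; the file name, trigger channel, and event codes are illustrative assumptions, not details from Pell’s study.

```python
# Hypothetical ERP pipeline sketch (not the authors' actual analysis),
# using the open-source MNE-Python library.
import mne

# Load a continuous EEG recording (hypothetical file name).
raw = mne.io.read_raw_fif("participant_01_raw.fif", preload=True)

# Band-pass filter to remove slow drifts and high-frequency noise.
raw.filter(l_freq=0.1, h_freq=30.0)

# Find stimulus-onset triggers recorded alongside the EEG
# (channel name is an assumption; it varies by recording system).
events = mne.find_events(raw, stim_channel="STI 014")

# Hypothetical event codes for the two cue types described in the study.
event_id = {"vocalization": 1, "speech_embedded": 2}

# Cut epochs from 200 ms before to 800 ms after each stimulus onset,
# baseline-corrected to the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0),
                    preload=True)

# Averaging across trials yields the ERP waveform for each condition.
erp_vocal = epochs["vocalization"].average()
erp_speech = epochs["speech_embedded"].average()
```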

The first vocal cue, called “non-linguistic vocalizations,” involved sounds produced by the human voice without semantic or linguistic meaning, such as growls, moans, cries, or laughter. The second, called “speech-embedded emotions,” consisted of short sentences spoken with emotional changes in pitch, loudness, rhythm, and voice quality, although the words themselves carried no meaning, as in the phrase “he placktered the tozz.”

After hearing the auditory stimuli, participants were immediately shown either a face expressing an emotion or a digitally manipulated face conveying no emotion, called a “grimace.”

“Analyses of the data collected provide new neurophysiological evidence that vocalizations, as evolutionarily primitive signals, are accorded precedence over speech-embedded emotions,” the paper reads.

Based on these differences, the researchers have postulated the existence of discrete brain regions for processing emotions in vocalizations as opposed to speech.

According to Pell, the study contains only half of their findings; the faces were part of a companion study designed to investigate “emotional priming,” or the conditioning of a subject to a particular emotion via vocal stimuli.

The study of the human voice and vocal stimuli in the field of social neuroscience was, until recently, overshadowed in popularity by research on the human face. The voice, however, has become an increasingly attractive subject of research in recent years, due in part to technological advances.

“Historically, the voice was much harder to study because researchers are dealing with dynamic stimuli, whereas the face is static,” Pell said. “Improvements, particularly in real-time imaging and measuring techniques have really pushed [the voice] into the spotlight.”

All participants in Pell’s study were native English speakers. As the work was conducted in Montreal, a city famous for its bilingualism, speaking English as a mother tongue was a criterion in the selection of test subjects, because emotional processing differs depending on a speaker’s language.

“In our lab, we’ve found evidence that certain aspects of language systems affect the expression and identification of emotion,” he stated.

Pell has examined these differences with respect to Mandarin, a tonal language, and English, a non-tonal one.

In addition to language, Pell said that an even more important factor shaping emotion in speech is culture. Social norms governing when, and to what extent, an individual expresses an emotion, termed “display rules,” contribute greatly to individual differences. However, many questions about the nature of vocal processing remain unanswered.

“We have [to] learn the similarities before we can learn the differences,” Pell said.
