The ability to interpret other people’s emotions is vital for social interactions. We recognize emotions in others by observing their body language and facial expressions. The voice also betrays one’s emotional state: words spoken in anger have a different rhythm, stress and intonation than those uttered with a sense of joy or relief. But how the emotional content of a voice is encoded in the brain has been unclear.

Now though, Swiss researchers report that they have decoded the neural activity in the voice-sensitive regions of the brain, and demonstrate that this activity can be analyzed to predict which vocal emotion is being heard. The study, which is published in the journal Current Biology, is the first to show that different vocal emotions are encoded by distinct patterns of brain activity, and could lead to a better understanding of psychiatric disorders in which the ability to recognize the emotional information in voices is compromised.

The brain contains a number of voice-sensitive regions. All of these are found within the auditory cortex in the superior temporal gyrus, and several of them are also responsive to the emotional content of voices. Previous neuroimaging and electrophysiological studies, in which these regions have been investigated individually, have shown that their activity increases in response to emotion-laden, but not to neutral, voices. These findings therefore suggest that the voice-sensitive regions of auditory cortex do not discriminate between different emotions, but that their processing primarily reflects a distinction between emotional and neutral voices. The categorization of the emotion being expressed may occur at later stages, possibly in the frontal cortex.

Thomas Ethofer of the University of Geneva and his colleagues investigated whether vocal emotions might be represented in a spatially distributed pattern of activity in the auditory cortex. They recruited 22 healthy participants for their study, and scanned their brains whilst they listened to recordings of actors pronouncing a pseudosentence (“Ne kalibam sout molem”) in one of five emotional categories – anger, sadness, relief, joy, or neutral. The researchers used a method called multivariate pattern analysis to record the activity of all voice-sensitive regions of the auditory cortex, in both hemispheres, simultaneously. In all, they analyzed the activity in nearly 2,000 volume units of brain tissue. (Each unit is a volumetric pixel, or “voxel”, and corresponds to around 10,000 neurons.)

They found that each emotional category was associated with a distinct pattern of neural activity across the voice-sensitive regions of the auditory cortex (above). The five categories could be reliably discriminated from the activation patterns in each hemisphere alone, but the accuracy of the decoding improved significantly when data from both hemispheres were used. The accuracy also improved with the number of voxels analyzed, with optimal decoding obtained from analysis of 1,000 voxels in the right and 600 voxels in the left hemisphere. Having optimized their decoding algorithm, the researchers were thus able to predict which category of vocal emotion the participants were listening to, from the observed pattern of neural activation.
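Measuring how accuracy changes with the number of voxels can be sketched as ranking voxels by how well each one separates the categories, then decoding from the top-k at increasing k. The ANOVA F-score ranking below is an illustrative stand-in (the study’s actual voxel-selection procedure may differ), again on synthetic data:

```python
# Sketch: decoding accuracy as a function of how many voxels are analyzed.
# Voxels are ranked by a univariate ANOVA F-score inside each training fold;
# only the first 100 of 500 simulated voxels actually carry class information.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_classes, n_trials, n_voxels = 5, 200, 500
means = np.zeros((n_classes, n_voxels))
means[:, :100] = rng.normal(size=(n_classes, 100))  # informative voxels
y = rng.integers(0, n_classes, size=n_trials)
X = means[y] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

accuracies = {}
for k in (20, 100, 500):
    clf = make_pipeline(SelectKBest(f_classif, k=k), LinearSVC(max_iter=5000))
    accuracies[k] = cross_val_score(clf, X, y, cv=5).mean()
print(accuracies)
```

Wrapping the selection step inside the cross-validation pipeline matters: ranking voxels on the full dataset before splitting would leak test information into the selection and inflate the accuracy estimate.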

In line with earlier studies, the analysis showed that parts of the middle superior temporal gyrus had the highest sensitivity to vocal emotions. When other regions of the auditory cortex, which are less sensitive to vocal emotions, were used, the accuracy rate of the decoding was much lower. A key characteristic of emotions is the level of arousal associated with each: anger and joy are associated with high levels of arousal, and sadness and relief with low levels. Indeed, the researchers observed the biggest differences between the brain’s responses to vocal emotions with different levels of arousal (e.g. sadness vs. anger); but even so, they could still distinguish between the responses to vocal emotions which produce similar levels of arousal.

Vocally expressed emotions differ in a number of properties, most notably the fundamental frequency of the vocalization. However, it is still unclear exactly which acoustic properties of voices the auditory cortex is most sensitive to. Ethofer and his colleagues were able to distinguish between vocal emotions with a similar fundamental frequency, suggesting that other properties, such as the intensity and duration of the vocalization, may be more important. Further experiments using artificial stimuli in which these parameters are manipulated individually are likely to determine exactly which characteristics of the voice are most important for the recognition of vocal emotions.

The ability to recognize emotions in voices is compromised in various psychiatric disorders. Schizophrenics, for example, have an impaired ability to recognize anger and sadness in others’ voices, and depressed patients fail to recognize surprise. Future research based on these new findings may therefore shed new light on the deficits seen in these conditions. It could clarify, for example, whether they are linked to abnormal activity in the auditory cortex or elsewhere in the brain.

Comments

Another psychiatric diagnosis where these findings may have application is psychopathy. Recent work suggests that psychopaths have impaired recognition of fearful vocal affect, a finding consistent with the low-fear and violence inhibition mechanism models of psychopathy. See:

I’m not an expert in reading these images, but am I wrong, or is there no connection with the amygdala? It’s rather surprising, as the amygdala is responsible for creating emotional responses, so I always believed there had to be some connection with recognizing emotions as well…

@wybory: Yes, the amygdala plays a crucial role in emotion, especially in recognizing facial expressions, but it is also known to respond to auditory stimuli. This study focused specifically on the voice-sensitive parts of the auditory cortex, but the amygdala is likely to be part of the wider network that encodes vocal emotions.

Interesting… what other applications could such research have, other than psychopathy? Any ideas?

On a different note, maybe other animals do not have a complex language like ours but a more simplified language based on emotions… so perhaps we could learn to communicate with them through simple emotions…