Inspiration, ideas and information to help women build public speaking content, confidence and credibility. Denise Graveline is a Washington, DC-based speaker coach who has coached nearly 200 TEDMED and TEDx speakers—including one of 2016's most popular TED talks. She also has prepared speakers for presentations, testimony, and keynotes. She offers 1:1 coaching and group workshops in public speaking, presentation and media interview skills to both men and women.

Thursday, July 15, 2010

Human speech and music seem to share an “acoustic code” when it comes to conveying sadness, according to an intriguing new study by Tufts University psychologists. It turns out that the pattern of pitch that brings the melancholy to a melody like “Greensleeves” is the same pattern in a voice filled with sorrow.

Composers have long used the relationship between pitches—called an interval—to convey emotion. “Greensleeves,” “Brahms’s Lullaby,” and even the opening bars of “Hey Jude” rely on the interval called the minor third for their sweetly sad sound. (Listen to the samples to see if you can hear the similarity.)

Now listen to these speech samples from the study by Meagan Curtis and Jamshed J. Bharucha, in which Tufts acting students were asked to read two-syllable lines with different emotions. The interval in a sad “Let go” or “Come here,” it turns out, is also the minor third.

The researchers were able to connect other emotions and musical intervals, although none showed the same sort of strong correlation as the link between the minor third and sad speech. Some angry speech, for instance, shares the same interval—an ascending minor second—with the iconic “Jaws” theme. (That shark did seem a little peeved…)

Curtis says intervals are only one part of the “enormous range of acoustic cues” that speakers can use to communicate emotion, including loudness, pitch, and the “texture” or timbre of a voice. In her study, speakers also lowered the volume of their voices and used low pitch as part of their strategy to sound sad. Gestures and facial expressions can also affect the emotional content of speech, she notes. “You can hear a smile in someone’s voice, because smiling changes one’s vocal timbre.”

Speakers can “certainly learn to control the acoustic features of their voice, even the pitch patterns,” says Curtis. “It may take a little practice to produce it consciously, as it’s something that happens so automatically that it might become harder if you actually think about it.”

And if you’re looking for a gloomy role model? Curtis suggests listening to Eeyore from the classic Winnie-the-Pooh cartoons. “Eeyore constantly uses a sad vocal pattern, with a low pitch range, low sound intensity, slow articulation, and a pitch pattern that tends to have a downward minor third,” she says. “His vocalizations are really exaggerations of sad speech, but they’re incredibly effective at communicating sadness.”

So far, the acoustic code has only been tested in English speech, but Curtis’s next study will examine how emotion is communicated in Hindi speech and in Indian music. What do you think—will the code hold up across cultures?

(Editor's note: This post was contributed by freelance writer Becky Ham, who reports and writes for The Eloquent Woman on the science behind public speaking.)

Are you a member of The Eloquent Woman on Facebook? Hit "like" when you get there to join the discussion, see things before they appear here, and share your slides or questions. Subscribe to the Step Up Your Speaking newsletter—it's free and monthly—using the box at right.