How the brain learns to understand degraded speech

March 9, 2016

A new paper, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), explains why and how a commonly used training method helps people with cochlear implants to understand speech. In the research, Ed Sohoglu and Matt Davis used brain imaging (combined magneto- and electro-encephalography, i.e. M/EEG) to study what happens when adults with normal hearing listen to degraded sounds similar to speech processed by a cochlear implant. They found that volunteers learned to understand degraded speech best when given written subtitles. Measures of brain activity before, during and after learning showed that subtitles helped immediate understanding and longer-term learning in the same way: by reducing brain responses associated with “prediction error”. A simple computer model of these processes helps explain the brain mechanisms responsible for learning to understand degraded speech.
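The general idea behind prediction-error-driven learning can be illustrated with a toy sketch. The code below is not the model from the paper; it is a minimal delta-rule example (all names and values are illustrative) showing how an accurate prior prediction, analogous to reading subtitles while listening, yields smaller prediction errors from the outset, and how updating predictions to reduce error supports learning over time.

```python
# Toy illustration of prediction-error-driven learning (a simple delta rule).
# Illustrative only -- not the model from the Sohoglu & Davis paper.

def learn(signal, prediction, rate=0.5, steps=10):
    """Repeatedly update a prediction toward a target signal.

    Returns the absolute prediction error at each step; the update
    rule shrinks the error, mimicking learning.
    """
    errors = []
    for _ in range(steps):
        error = signal - prediction      # prediction error
        prediction += rate * error       # update reduces future error
        errors.append(abs(error))
    return errors

# With an accurate prior (prediction near the signal), error starts small --
# loosely analogous to hearing degraded speech while reading subtitles.
errors_with_prior = learn(signal=1.0, prediction=0.9)

# With no prior knowledge, error starts large and must be learned away.
errors_no_prior = learn(signal=1.0, prediction=0.0)
```

In this toy, a better prior both lowers the immediate prediction error and leaves less to learn, which loosely parallels the finding that subtitles aid immediate understanding and longer-term learning through the same error-reduction mechanism.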

This work with healthy volunteers has implications for people who lose their hearing in later life and are fitted with cochlear implants. A cochlear implant is a neural device that can restore functional hearing to otherwise deaf individuals, but the speech signal it provides is degraded and hard to understand. It can take weeks or months for a new cochlear implant user to get the best out of the degraded sounds that they hear. Understanding why and how subtitles help with learning therefore supports a simple intervention that could benefit many of the half a million cochlear implant users around the world.