UCSF study shows how the brain sorts sound to make language

New research reveals complex process of turning meaningless sounds into meaningful words

Scientists at UCSF have uncovered tantalizing clues about the complex process by which the brain hears and interprets human voices, transforming an influx of meaningless sounds into language.

Their work, which was published online Thursday, involved studying the brains of patients with epilepsy undergoing testing to help stop their seizures.

Neuroscientists have known for more than a century that one small part of the brain - called Wernicke's area, located in a region called the superior temporal gyrus - plays a critical role in how humans process language. But it's been difficult to develop a deeper, more detailed understanding of that process, partly because scientists lacked the tools to study in real time how the brain responds to split-second sounds.

The UCSF team, which also included linguists from UC Berkeley, found that when patients listened to random sentences read out loud, their brains quickly and with great precision sorted the sounds based on very clear criteria.

The brain, it seems, immediately filters language sounds into broad groupings, with small neighborhoods of neurons activating in response to certain sounds. The scientists were able to build brain maps of these sound neighborhoods, showing that the same neurons "lit up" each time patients heard, for example, a specific type of vowel or consonant.

"When we hear sounds or language, our brain is actually organizing this information through very particular filters - neurons that are detecting certain sounds," said Dr. Edward Chang, a UCSF neurosurgeon and lead scientist of the brain research. "The cool thing about it is you see this real clear heterogeneity in how those neurons correspond to speech. There's definitely an organization to it."

Sorting it all out

The work offers new insight into "functional organization," or how the brain collects, sorts and analyzes the massive amounts of data it's constantly flooded with. With language in particular, humans are bombarded by sounds and the brain must instantaneously sort out what's meaningful from what's not, and then collect and process the important data into familiar words and sentences.

All of that work is done in seconds, and for the most part without any conscious effort on the listener's part.

The UCSF study points to what may be the first step in that complex process. The research is a "beautiful example" of the kind of discoveries scientists can make by taking electrical recordings directly from the brain, said Dr. Josef Parvizi, a Stanford neurologist who has done similar work on patients with epilepsy.

'Remarkable' findings

"Of course there have been tons of studies in the past, but the tools were not sophisticated and not precise," Parvizi said. "With this type of direct recording from the human brain, what Eddie (Chang)'s group is finding is remarkable."

The study involved six patients with epilepsy who were already scheduled to undergo a procedure at UCSF to treat their seizures. The procedure involves removing a piece of the skull and applying electrodes to the surface of the brain.

Seizure activity

The electrodes measure seizure activity and help surgeons identify which parts of the brain are involved in seizures. After several days of collecting measurements, surgeons then remove the part of the brain affected by seizures, assuming it's not critical to survival or quality of life.

Over the past five or so years, scientists have been using patients undergoing these procedures to study other brain activity. Since the patients are already exposing their brains for therapeutic purposes, and since they're going to be stuck in a hospital for several days with not much to do, they make rare but ideal subjects for real-time studies of the brain.

The study

In Chang's study, the patients listened to 500 English phrases recorded by 400 different speakers. While they listened, doctors recorded the electrical activity in their brains and mapped when and where neurons fired.

The scientists found that sounds were organized based on how they're formed in the mouth. So, for example, sounds like "S" and "Z" that linguists call "fricatives" - formed by partially obstructing the airway and creating friction in the vocal tract - were grouped together. Also grouped were sounds called "plosives" - including the consonants "P" and "B" - that involve using the lips or tongue to block air before releasing it all at once.
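To make that grouping idea concrete, here is a minimal sketch, in Python, of the kind of comparison the article describes: bin speech sounds by phonetic feature (fricative vs. plosive) and ask whether a single recording site responds more strongly to one bin than the other. The feature table and the "electrode" numbers below are invented for illustration; they are assumptions, not data or code from the study.

```python
# Hypothetical sketch: does one recording site prefer fricatives or plosives?
# All phoneme groupings and response values here are illustrative assumptions.

from statistics import mean

# Assumed phonetic-feature labels for a handful of English consonants.
PHONETIC_FEATURES = {
    "s": "fricative", "z": "fricative", "f": "fricative", "v": "fricative",
    "p": "plosive", "b": "plosive", "t": "plosive", "d": "plosive",
}

# Toy responses: one electrode's activity (arbitrary units) each time a
# given phoneme was heard. Entirely made up for this example.
electrode_responses = {
    "s": [0.9, 1.1, 1.0], "z": [0.8, 1.0, 0.9],
    "f": [1.0, 0.9, 1.1], "v": [0.7, 0.9, 0.8],
    "p": [0.2, 0.3, 0.1], "b": [0.3, 0.2, 0.2],
    "t": [0.1, 0.2, 0.3], "d": [0.2, 0.1, 0.2],
}

def mean_response_by_feature(responses):
    """Average an electrode's responses within each phonetic-feature group."""
    grouped = {}
    for phoneme, values in responses.items():
        feature = PHONETIC_FEATURES[phoneme]
        grouped.setdefault(feature, []).extend(values)
    return {feature: mean(vals) for feature, vals in grouped.items()}

if __name__ == "__main__":
    by_feature = mean_response_by_feature(electrode_responses)
    for feature, value in sorted(by_feature.items()):
        print(f"{feature}: mean response {value:.2f}")
    preferred = max(by_feature, key=by_feature.get)
    print(f"This toy electrode responds most strongly to {preferred}s.")
```

In this made-up example the site favors fricatives; repeating the same comparison across many electrodes is, loosely, how a map of feature-selective "sound neighborhoods" could be assembled.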

Chang said he and the other neuroscientists he worked with were surprised by the results. Intuition would suggest, he said, that the brain sorts sounds much the way people read them - so, perhaps, the consonant sound "B" would have its own spot in the brain, and so would the vowel sound "ah."

Affirms assumptions

But linguists have suspected for decades that the broader groupings - the fricatives, plosives and other descriptors known as "phonetic features" - were at the foundation of language comprehension. The new work is among the first to offer physiological evidence of an old linguistic assumption, said UC Berkeley linguist Keith Johnson, who worked with Chang on the language research.

'How learning occurs'

"We've had this notion that the brain must be organized this way, because language seems to pattern this way when we look at how it changes over time," Johnson said. "So the finding that the brain is organized around (phonetic) features like that is significant. It's firming up theories about how language is organized in the brain."

Chang said their work could someday help doctors and linguists better understand language disorders and problems like dyslexia, and may even help people improve their ability to learn a second language.

"Our hope is that with this more complete knowledge of the building blocks and fundamental aspects of language, we can meaningfully think about how learning occurs," Chang said. "We can maybe even explain why some of this goes awry."