Single tone alerts brain to complete sound pattern

September 3, 2013

Illustration 1. Method used in experiment. The experimental subjects heard noise for an hour, with a tone of 1000 Hz (the cue) being sounded every few seconds. Then they heard a second tone somewhere between 250 Hz and 4000 Hz during one of two intervals. They indicated on a computer when (in which of the two intervals) they heard the test tone.

The processing of sound in the brain is more advanced than previously thought. When we hear a tone, our brain temporarily sharpens its sensitivity not only to that tone but also to any tones separated from it by one or more octaves. A research team from Utrecht and Nijmegen published an article on the subject in the journal PNAS on 2 September.

We hear with our brain. The cochlea picks up sound vibrations but the signals produced as a result are processed by the brain, using known patterns. If, for example, you briefly hear a weak tone, your hearing focuses on that tone and suppresses any frequencies around it. This makes it easier to notice any relevant sounds in your surroundings. The present research has shown that this 'auditory attention filter' is much more complex than believed until now: frequencies that have an octave relationship with the target tone are also heard better.

John van Opstal, professor of Biophysics at Radboud University: 'This test proves that the brain prepares for a more extensive pattern of tones, even if the person just hears a single test tone or if he has a tone in mind. These extra tones in the pattern were not sounded during the experiment, but the brain complements the information received from the cochlea. This is scientifically interesting. Audiology, for example, at present places great emphasis on the cochlea.'

Octave relationship

The subjects undergoing the experiment did not have an easy time. For an hour they listened to unstructured noise containing very soft tones that they had to detect. Every few seconds they were presented with a tone of 1000 Hz, the cue. Then during one of two time intervals, a very quiet, short second tone was sounded. The subject had to indicate in which of the two intervals they had heard the second tone. It became apparent that tones having an octave relationship with the cue were all heard better, and those around the cue were heard less well. An octave is a well-known term in music, indicating the distance between two tones, the frequencies of which have a 2-to-1 relationship.
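As a sanity check on the design, the octave relatives of the 1000 Hz cue that fall inside the tested 250 Hz to 4000 Hz range can be enumerated in a few lines of Python. This is an illustrative sketch, not the researchers' code; the function name and interface are ours.

```python
import math

def octave_relatives(f0, f_lo=250.0, f_hi=4000.0):
    """Return all frequencies related to f0 by a whole number of octaves
    (i.e. by factors of two) that fall within [f_lo, f_hi]."""
    k_min = math.ceil(math.log2(f_lo / f0))   # lowest octave step in range
    k_max = math.floor(math.log2(f_hi / f0))  # highest octave step in range
    return [f0 * 2 ** k for k in range(k_min, k_max + 1)]

print(octave_relatives(1000.0))  # [250.0, 500.0, 1000.0, 2000.0, 4000.0]
```

These five frequencies are exactly the peaks visible in the results (Illustration 2): the cue itself plus its octave relatives two octaves down and two octaves up.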

Illustration 2. Results of experiment. The frequencies of the tones sounded are shown in Hz on the X-axis. The Y-axis has the percentage of correct responses given by the subjects (50% is pure guesswork). Whenever the second tone was the same as the cue – 1000 Hz – detection was better. The subjects detected any tones around that cue tone less well; they more often indicated the wrong time interval. Peaks also appeared at 250 Hz, 500 Hz, 2000 Hz and 4000 Hz; that was surprising.

Voice

Van Opstal: 'We wanted to gather data on the auditory attention filter around the target tone. When we made the range larger than other researchers had done previously, more peaks suddenly appeared. This was a complete surprise to us. One possible explanation could be that the hearing system has evolved in order to hear sounds made by members of an animal's own species (voices in the case of humans) in noisy surroundings. Vocalisations always consist of harmonic complexes of several simultaneous tones having an octave relationship with each other.'

Hearing aid

The researchers, who work at Utrecht University, the UMC Utrecht Brain Center and Radboud University Nijmegen, can easily think up applications for this fundamental research. If, for example, someone no longer hears high tones because of damage to the cochlear hair cells, the hearing aid can be adjusted in such a way that it converts those tones so they sound one or more octaves lower. Since the brain itself 'fills in' tones with an octave relationship, that person's perception should then become more normal. It is also important for commercial sound producers to know how tones are perceived. That is why Philips Research is involved in this research in their department 'Brain, Body and Behavior'.
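The hearing-aid idea described above, transposing tones that have become inaudible down by whole octaves so that the brain can 'fill in' the octave relationship, could be sketched as follows. This is a hypothetical illustration of the principle only; the function and the 4000 Hz cutoff are our assumptions, not part of the published work.

```python
def transpose_down(f, cutoff=4000.0):
    """Halve the frequency (drop it by one octave at a time) until it falls
    below the listener's hearing-loss cutoff. Because octave relatives are
    perceived as equivalent, the transposed tone should still evoke the
    original pitch class."""
    while f >= cutoff:
        f /= 2.0
    return f

print(transpose_down(6000.0))  # 3000.0: one octave down, now audible
print(transpose_down(3000.0))  # 3000.0: already below the cutoff, unchanged
```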



7 comments

This is great progress towards a better understanding of the central role of octave equivalence in general cognition.

Far from being a trifling harmonic property, a mere "note" on an invented scale, the equivalence of octaves is an absolutely fundamental aspect of informational binding, much more significant than currently recognised.

While good news, the article falls somewhat short in describing the octave as "the distance between two tones" - its defining characteristic is the seemingly paradoxical percept of equivalence with the fundamental. It is no arbitrary increment.

This cognitive 'parity' is not cultural or learned in any way; furthermore, it is not uniquely human, nor unique to audition. It relates informational and processing entropies, and is an inevitable consequence of multi-cellular processing per se.

We process and encode all information as spatio-temporal modulation of factor-of-two symmetries. Not just music, but language, all sensation and motor control etc.

NB - sorry to prattle on, however I've been banging this drum for 20 years now and it's so frustrating to see the snail's pace of progress... but can we get away from the whole "ratio of 2-1" thing? It's horribly inaccurate, and most certainly IS a vestigial artifact of music theory; the article here makes very clear that the response pertains to all factors of two of the fundamental within discriminable range: 250 Hz - 4 kHz is NOT a 2:1 ratio!!! It's 16:1... it's ludicrous to try and describe octave equivalence in terms of ratios - we have ~10 octaves of pitch differentiation, hence you'd need many discrete ratios to describe all the octave relationships possible for any given fundamental.

It's a FUNCTION, not a ratio. The function is all factors of two of a given frequency, upper and lower, within the processing bandwidth. Hence; factor of two symmetry, not "2:1 ratios".
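The factor-of-two relation the comment describes can be expressed compactly in code: two frequencies are octave-equivalent exactly when the base-2 logarithm of their ratio is an integer, positive or negative. This sketch is ours, added for illustration, and is not from the article or the comment.

```python
import math

def octave_equivalent(f1, f2, tol=1e-9):
    """True when f1 and f2 are related by an exact power of two,
    i.e. when log2(f1 / f2) is an integer (of either sign)."""
    x = math.log2(f1 / f2)
    return abs(x - round(x)) < tol

print(octave_equivalent(250.0, 4000.0))   # True: ratio 16 = 2**4
print(octave_equivalent(1000.0, 1500.0))  # False: ratio 1.5 is not a power of two
```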

All consonance and dissonance is but degrees of equivalence. Ditto all the information we process...

The octave is the 'distance' (difference) between two frequencies. An absolute. This distance, as processed in the brain, is relative to the auditory stimuli processed before and after the stimulus event the subject is asked to identify.

The commentary from Mr. is worse than speculation. I suggest he keep banging on his drum to rid himself of the aggression stemming from his 'frustration'. The source being ignorance.

The octave is the 'distance' (difference) between two frequencies. An absolute.

No, it is not any arbitrary distance, or difference. It is absolutely a factor of two of the first freq.

Indeed, the very basis of the equivalence is that the difference is processed as 'zero' - i.e. C1 and C6 are still both C; and this equivalence underlies pitch class, all music, language and possibly much else besides...

And FWIW, the thing that is 'zero' is meta-information - i.e. the information about the relationship between the frequencies. It means we have a parity dimension at the heart of our processing system, and as such, any single tone is automatically evocative of all factors of two of its frequency. Hence the fundamental 'qualia' of, say, C-ness, or F#-ness, are merely arbitrary positions on this parity plane.