The Neural Basis of Perceptual Integration and Expertise in the Auditory Processing of Music

Sprecher, Kate Emily

This item is not available in full-text via OUR Archive.

Cite this item:
Sprecher, K. E. (2012). The Neural Basis of Perceptual Integration and Expertise in the Auditory Processing of Music (Thesis, Master of Science). University of Otago. Retrieved from http://hdl.handle.net/10523/2236

Music is a collection of sounds perceived as a meaningful whole. The neural basis of this perceptual integration is poorly understood. Evidence from neuroimaging studies and patients with brain damage suggests that early pitch and time properties are processed independently, but the evidence is less clear for the higher-order percepts of rhythm and key. Additionally, musical training can lead to changes in neurophysiology and cognitive abilities, and these effects can be exploited for therapeutic purposes. However, the evidence regarding training effects is conflicting. These conflicting results may stem from the use of inappropriate, unmusical stimuli. Several frameworks propose that sound patterns are represented on hierarchical levels and that higher representations can influence early processing. Previous studies have used simple stimuli and have manipulated basic acoustic features. In contrast, we explored the processing of higher-order musical features (rhythm and key) within a musical context.

This thesis is focussed on two questions: Is tonal and temporal information integrated at the stage of rhythm and key perception? Are the neural correlates of this stage of processing modified by training? Importantly, our stimuli were embedded within continuous 9-minute melodies, creating a realistic musical context. We investigated the influence of expertise and attention on electroencephalographic (EEG) responses to music by comparing musicians and non-musicians in a mixed-effects design with expertise as the between-subjects factor and attention (attended or unattended) and deviant type (key, rhythm, or both) as within-subjects factors. Participants listened to continuous melodies and responded to occasional deviations in key, rhythm, or both. Each block contained 300 melodies, 60 of which were deviant. In the unattended condition, participants ignored the music and watched a silent movie.

EEG responses to standard notes were compared with responses to deviants. A primary finding was that key and rhythm deviants elicited topographically different EEG responses. This suggests that temporal and tonal information is not yet fully integrated at the stage of music perception at which key and rhythm are extracted. The second major finding was a lack of differences between musicians and non-musicians in the EEG and behavioural responses to deviance. Thus we have shown that previous findings of no training effects still hold in a more ecologically valid context. These findings suggest that each stage of music processing may be differently affected by expertise; therefore, caution should be used when developing musical therapies. Taken together, these results provide an important step forward both for understanding the fundamental mechanisms of higher-order auditory processing and for translational research regarding musical therapy.