Summary

Natural sounds such as communication signals and music are composed of complex temporal features spanning a wide range of time scales. Our auditory system excels at resolving this temporal structure, and temporal features are crucial for identifying sound qualities and for understanding speech. Yet how auditory cortex neurons encode rapid sound features remains under investigation.
To shed light on this issue, we recorded the activity of neural populations in the caudal auditory cortex of alert macaque monkeys during stimulation with a rapid sequence of synthetic random chords and with naturalistic stimuli. We quantified stimulus discriminability by means of both decoding and information-theoretic measures. We used these data to investigate two important questions related to temporal coding in auditory cortex: 1) At what time scale do spike patterns of auditory cortex neurons carry sensory information? And 2) Is it possible to ‘read’ such temporal activity patterns without exact knowledge of external stimulus timing?
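The decoding approach mentioned above can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the analysis actually used in this work: it classifies single-trial binned spike-count responses by their distance to per-stimulus mean templates, and reports decoding accuracy as a simple measure of stimulus discriminability. All data, stimulus labels, and the template-matching rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def decode_accuracy(train, test):
    """Nearest-template decoding: classify each test response by the
    stimulus whose mean training response (template) is closest."""
    templates = {s: np.mean(r, axis=0) for s, r in train.items()}
    correct = total = 0
    for s, responses in test.items():
        for r in responses:
            guess = min(templates, key=lambda k: np.linalg.norm(r - templates[k]))
            correct += (guess == s)
            total += 1
    return correct / total

# Toy responses: per-trial binned spike counts for two stimuli (hypothetical).
def trials(mean_pattern, n=100):
    return [rng.poisson(mean_pattern) for _ in range(n)]

A = np.array([0, 3, 1, 0])  # assumed mean response to stimulus A
B = np.array([0, 1, 3, 0])  # assumed mean response to stimulus B
train = {'A': trials(A), 'B': trials(B)}
test = {'A': trials(A), 'B': trials(B)}
print(f"decoding accuracy: {decode_accuracy(train, test):.2f}")
```

Because the two toy stimuli evoke the same total spike count but differently timed responses, above-chance accuracy here arises purely from the temporal pattern of the counts.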
With regard to the first question, we find that auditory cortex responses are highly precise and can encode stimulus information in spike patterns at the millisecond scale. Importantly, ‘reading’ responses at precisions coarser than 4 ms causes a significant loss of information, which already reaches 10% at an effective precision of 6 ms. This information loss induced by ignoring millisecond-precise spike patterns was more prominent during stimulation with random chords, but for a subset of neurons it also prevailed during stimulation with natural sounds.
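The dependence of information on read-out precision can be sketched as follows. This is a hypothetical toy example, not the estimator used in this work: spike times are binned at several temporal precisions, and a plug-in estimate of the mutual information between stimulus identity and the binary response pattern is computed for each precision. The two simulated stimuli, their latencies, and trial counts are all illustrative assumptions (plug-in estimates are also upward-biased on limited data).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def bin_spikes(spike_times, window, precision):
    """Discretize spike times in [0, window) into bins of the given width."""
    edges = np.arange(0.0, window + precision, precision)
    counts, _ = np.histogram(spike_times, bins=edges)
    return tuple(counts.clip(0, 1))  # binary pattern of occupied bins

def pattern_information(responses_by_stim):
    """Plug-in mutual information (bits) between stimulus id and response pattern."""
    joint = Counter()
    n = 0
    for s, patterns in responses_by_stim.items():
        for p in patterns:
            joint[(s, p)] += 1
            n += 1
    ps, pr = Counter(), Counter()
    for (s, p), c in joint.items():
        ps[s] += c
        pr[p] += c
    return sum((c / n) * np.log2((c / n) / ((ps[s] / n) * (pr[p] / n)))
               for (s, p), c in joint.items())

# Toy data: two stimuli evoke spikes at slightly different latencies (hypothetical).
stims = {
    'A': [rng.normal(0.010, 0.001, 3) for _ in range(200)],  # ~10 ms latency
    'B': [rng.normal(0.014, 0.001, 3) for _ in range(200)],  # ~14 ms latency
}
for precision in (0.002, 0.004, 0.008, 0.016):
    binned = {s: [bin_spikes(t, 0.032, precision) for t in trials]
              for s, trials in stims.items()}
    print(f"{precision*1000:.0f} ms bins: {pattern_information(binned):.2f} bits")
```

In this toy setting the two stimuli differ only by a 4 ms latency shift, so fine bins separate them well while 16 ms bins collapse most trials onto the same pattern, mimicking the information loss at coarse effective precisions.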
With regard to the second question, we find that the timing of stimulus onset can be detected directly from the population of neural responses with a precision of about 8 ms. This population-derived timing defines an internal reference frame that can be used for temporal response decoding without reference to an external clock. Quantifying the information lost by using internal rather than external reference frames suggests that the auditory system can achieve fine temporal stimulus encoding even without precise knowledge of external stimulus timing.
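The idea of recovering an internal timing reference from the population itself can be sketched with a toy detector. This is a hypothetical illustration, not the detection method used in this work: the summed population rate is compared against a baseline-derived threshold, and the first sustained threshold crossing is taken as the estimated onset, which could then anchor response decoding in place of an external clock. The Poisson background, burst shape, baseline window, and threshold rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def population_onset(spike_trains, bin_ms=1.0, window_ms=200.0, k=3.0):
    """Estimate stimulus onset as the first of 3 consecutive bins where the
    summed population rate exceeds baseline mean + k SD (hypothetical rule)."""
    edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
    rate = sum(np.histogram(t, bins=edges)[0] for t in spike_trains)
    baseline = rate[:50]  # assume the first 50 ms are pre-stimulus
    thresh = baseline.mean() + k * baseline.std()
    for i in range(len(rate) - 2):
        if np.all(rate[i:i + 3] > thresh):
            return edges[i]
    return None

# Toy population: background Poisson spikes plus a burst after onset at 100 ms.
true_onset = 100.0
trains = []
for _ in range(50):  # 50 simulated neurons
    background = rng.uniform(0.0, 200.0, rng.poisson(4))
    evoked = true_onset + rng.exponential(5.0, rng.poisson(6))
    trains.append(np.concatenate([background, evoked]))

est = population_onset(trains)
print(f"estimated onset: {est:.0f} ms (true: {true_onset:.0f} ms)")
```

Requiring several consecutive supra-threshold bins is one simple way to keep isolated background fluctuations from triggering a false onset; the resulting estimate can serve as the internal reference frame for aligning subsequent spike patterns.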