Current third-party funded projects

DFG Project "Intention-based and sensory-based predictions"

Human action serves two purposes: accommodating to environmental demands (stimulus-driven action) and achieving effects in the environment (intention-based action). While stimulus-driven action has been researched extensively, intention-based action has not, although it is of utmost importance in everyday life (we gain control over environmental stimuli only through the control of our actions). Research on this type of action-effect cycle has developed a theoretical perspective that places particular emphasis on predictive brain processes: by being able to predict the effects of our actions on the environment, we know which action to choose for a desired effect. The present project investigates the impact of intentional action on action-effect predictions, studies its modulation by attention, and compares this kind of intention-based prediction to sensory-based predictions.

The extraction and learning of recurring patterns from our complex sound environment is key to the segregation of multi-source sound input and to the recognition and identification of meaningful auditory events. The human brain has the remarkable ability to learn and recognize recurring sound patterns even when instances of the same pattern vary considerably. This project investigates the mechanisms underlying the learning of sound patterns whose spectrum changes over time and tests which nodes of the auditory system, including cortical and subcortical levels, contribute to pattern learning.
Recent behavioral studies have confirmed that spectrotemporal sound structure can be rapidly extracted and learned through repeated exposure, even for unpredictable and fully meaningless sounds (Agus et al., 2010). According to the classical view, the human planum temporale in particular acts as a computational engine for the segregation and matching of spectrotemporal patterns (Griffiths & Warren, 2002). The current project proposes and critically tests an expanded theory claiming that hierarchical mechanisms of point-by-point spectrotemporal matching underlie pattern learning and sound recognition. Such hierarchical mechanisms involve matching and identification processes at the cortical level, but likewise a modulation of how subcortical auditory areas process input patterns that previously occurred with increased statistical probability. A series of combined behavioral and EEG studies will measure auditory event-related potentials and oscillations of cortical origin as well as the frequency-following auditory brainstem response in healthy human subjects. Paradigms of statistical learning and repetition priming using random complex spectrotemporal stimuli will determine the time course and the neuroarchitecture of the pattern-matching strategies involved in sensory learning. Further, we will characterize the tolerance of pattern learning to different types of input variability. The outcome of this project will shed light on how “object templates” emerge from everyday auditory experience, which is essential, for instance, to language learning, and may in the longer run contribute to the identification of substantially refined early risk markers for language learning impairments.
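As a toy illustration of the kind of point-by-point spectrotemporal matching described above (our own hedged sketch, not the project's actual analysis pipeline; all names and parameters are assumptions), the following code slides a "frozen" spectrotemporal pattern along a longer spectrogram-like matrix and scores each alignment by bin-wise Pearson correlation — a recurring pattern embedded in noise is detected, while pure noise is not:

```python
import numpy as np

def spectrotemporal_match(template, sound):
    """Slide `template` (freq x time) along the time axis of `sound` and
    return the best point-by-point (bin-wise) normalized correlation."""
    n_freq, t_len = template.shape
    assert sound.shape[0] == n_freq
    t_norm = (template - template.mean()) / (template.std() + 1e-12)
    best = -1.0
    for start in range(sound.shape[1] - t_len + 1):
        window = sound[:, start:start + t_len]
        w_norm = (window - window.mean()) / (window.std() + 1e-12)
        # Pearson correlation over all time-frequency bins ("point-by-point")
        r = float(np.mean(t_norm * w_norm))
        best = max(best, r)
    return best

rng = np.random.default_rng(0)
pattern = rng.normal(size=(32, 10))   # a random "frozen" spectrotemporal pattern
noise = rng.normal(size=(32, 50))
# embed the pattern in the middle of the noise
embedded = np.concatenate([noise[:, :20], pattern, noise[:, 20:]], axis=1)

print(spectrotemporal_match(pattern, embedded))  # high: the pattern recurs
print(spectrotemporal_match(pattern, noise))     # low: no recurrence
```

The exhaustive sliding correlation stands in for whatever matching computation cortical and subcortical stages actually perform; its only purpose here is to make the notion of "point-by-point matching" concrete.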

When perceiving the surrounding world, humans attend to multiple pieces of information (individual properties of objects, or cues) and perceptually integrate them to identify familiar objects as a whole. For speech, this means that listeners integrate multiple auditory cues (e.g., duration, frequency) to comprehend the sounds of their language. Surprisingly, however, when humans encounter unfamiliar objects, they readily use a single cue to identify them but have difficulty using multiple cues for that purpose. The difficulty of relying on multiple cues for novel categories runs contrary to the widely attested integration of multiple cues in the perception of familiar ones. This project aims to resolve that controversy and find out how humans acquire the ability to integrate cues in speech perception. The goal is to identify the developmental trajectory of cue integration, determine which factors affect it, and reveal whether cue integration in speech is driven by general mechanisms of auditory learning. Adults will learn novel sounds, and their neural activity will be measured to uncover how cue-integration learning proceeds. The data will be modelled with artificial neural networks. The findings will contribute to a better understanding of the learning mechanisms that form a crucial part of human cognition.
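To make the contrast between single-cue and multiple-cue strategies concrete, the following minimal sketch (a single logistic unit trained by gradient descent; our own illustrative assumption, not the project's actual neural-network model) learns a category that is defined only by the combination of two cues, so a learner relying on one cue alone performs measurably worse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two standardized auditory cues per stimulus (e.g., duration and frequency);
# category membership depends on their sum, so neither cue alone is diagnostic.
n = 2000
cues = rng.normal(size=(n, 2))
labels = (cues.sum(axis=1) > 0).astype(float)

# Train one logistic unit (weights for both cues) by full-batch gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(cues @ w + b)))
    w -= lr * (cues.T @ (p - labels)) / n
    b -= lr * np.mean(p - labels)

pred = ((cues @ w + b) > 0).astype(float)
accuracy = np.mean(pred == labels)          # integrating both cues: high accuracy

single = (cues[:, 0] > 0).astype(float)     # single-cue (duration-only) strategy
single_acc = np.mean(single == labels)      # noticeably lower accuracy

print(accuracy, single_acc)
```

The gap between the two accuracies is the point of the sketch: a category defined jointly by two cues cannot be mastered by attending to either cue in isolation, which is exactly the situation in which cue integration must be learned.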

DFG Project (Dr. Mathias Scharinger)

The human brain constantly maintains sensory predictions about upcoming events, thereby optimizing its performance. Such predictive mechanisms have been investigated mainly in the visual and, to some degree, the auditory neurosciences. For speech comprehension, and in particular for single-segment processing, a systematic account of predictive mechanisms has been somewhat neglected. To address this gap, the proposed research project aims to systematically examine predictive mechanisms in speech at the level of speech segments, words, and meanings. Importantly, predictions regarding the timing of speech events (temporal predictions) will be distinguished from predictions regarding the lexical features of speech events (featural predictions). This distinction relates to the question of the degree to which synchrony between speech events benefits processing and, on the other hand, the degree to which language-specific properties, such as the frequency of speech sound occurrence, influence predictive processing. While previous studies focused on either temporal or featural predictions, the 7 experiments in this project will systematically assess the interaction of temporal and featural predictions, and also distinguish between predictions generated by a word context and predictions generated by a sentence context. Methodologically, all experiments will be based on event-related brain potentials (ERPs) obtained from electroencephalography (EEG) recordings. This guarantees an established measure of brain activity with very good temporal resolution. Furthermore, previous studies have shown that predictive effects occur at early ERP latencies, making this method optimal for the purpose of this project. The overarching questions for all proposed experiments are the following: (a) To what degree do predictive mechanisms in speech processing differ from predictive mechanisms in general auditory processing?
(b) How can existing models of speech perception with interactive levels of processing integrate such mechanisms, the characterization of which this project aims to refine?

DFG Project "The Significance of Distractor Information for Processing in Children and Adults" (Dr. Nicole Wetzel)

New or unexpected events outside the current focus of attention can involuntarily capture attention and impair performance in the task at hand. The DFG project aims to examine the developmental time course of the underlying processes during childhood. We are especially interested in the factors that affect the orienting of attention and behavioral distraction.