Multisensory Perception and Action

We use all our senses to construct a reliable and robust percept of the world with which we interact. The view we take in our group is that in many aspects of behaviour, motor actions and multisensory
processing are inseparably linked and therefore have to be studied in a closed action/perception loop. We believe that human perception and action are tailored to the statistics of the natural environment, and that when the environment
changes, our perception follows through a process of adaptation that minimizes the potential costs of interaction.

In neural processing, such statistics are represented as probability
distributions. We follow Hermann von Helmholtz in our belief that human perception is a problem of inference, for which the sensory data are often insufficient to uniquely determine the percept. Thus, prior knowledge has to be used
to constrain the inference from ambiguous sensory signals. A principled way to describe the probabilistic combination of prior knowledge with sensory data is the Bayesian framework. We therefore regularly use this
framework to construct "ideal observer" models: models that use the available information optimally, given a task and cost function. These models can then be used as a benchmark against which human
performance is tested. To do so, our group uses quantitative psychophysical and neuropsychological methods together with Virtual Reality techniques. Quantitative psychophysical methods are important for determining the
relevant perceptual parameters while minimizing uncertainty and unknowns. Virtual Reality is important because it provides a tool to precisely control the perceptual situations under investigation, while at the same time allowing
a degree of interaction, which is necessary for studying the action/perception loop. Often, however, today's Virtual Reality techniques and Human-Computer Interaction devices are not sufficiently developed to be readily used in the
study of human perception and action. Therefore, some of our work concentrates on the development of human-machine interfaces, mostly within the framework of European projects. For example, the European projects Touch-Hapsys
and ImmerSence focused on the development of haptic interaction devices, whereas the European project CyberWalk had the goal of developing an omnidirectional treadmill enabling near-natural locomotion in Virtual Reality.
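The core of such an ideal-observer model can be sketched in a few lines. The example below is a minimal illustration of Bayes-optimal fusion of independent Gaussian estimates (e.g., a visual and a haptic size cue plus a prior), where each estimate is weighted by its precision; all numerical values are hypothetical and chosen only for illustration, not taken from any experiment.

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Bayes-optimal fusion of independent Gaussian estimates:
    the combined mean is the precision-weighted average, and the
    combined precision is the sum of the individual precisions."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(precisions * np.asarray(means, dtype=float)) / np.sum(precisions)
    variance = 1.0 / np.sum(precisions)
    return mean, variance

# Hypothetical cues: visual estimate 10.0 cm (low noise), haptic
# estimate 12.0 cm (higher noise), and a prior centred on 11.0 cm,
# which enters the computation as just another Gaussian "cue".
mean, var = fuse_gaussians([10.0, 12.0, 11.0], [1.0, 4.0, 9.0])

# The fused estimate is pulled towards the most reliable cue (vision),
# and its variance is smaller than that of any single cue.
```

Behaviour measured in a cue-combination experiment can then be compared against this benchmark: systematic deviations from the precision-weighted prediction indicate sub-optimal, or differently constrained, integration.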

News

Can we trust our senses?

Tricks of perception research

10:00-11:30 and 13:00-14:30 W3-242

Natural auditory scene statistics shapes human spatial hearing

Have you ever wondered why most natural languages invariably use the same spatial attributes, high vs. low, to describe auditory pitch? Or why, throughout the history of
musical notation, high notes are represented high on the staff? According to a team of neuroscientists from the University of Bielefeld and the Max Planck Institute for Biological Cybernetics in Tübingen, high-pitched sounds feel
"high" because, in our daily lives, sounds coming from high elevations are indeed more likely to be higher in pitch. The study just appeared in the journal PNAS.

Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First, they recorded and analyzed a large sample of sounds from the natural environment and found that high-frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear and found that, due to the convoluted shape of the ear (the pinna), sounds coming from high positions in space are filtered such that more energy remains at higher frequencies. Finally, in a behavioural experiment, they asked humans to localize sounds of different frequencies and found that high-frequency sounds were systematically perceived as coming from higher positions in space.

The results from these three lines of evidence were highly convergent and suggest that such diverse phenomena as the acoustics of the human ear, the universal use of spatial terms for describing pitch, and the representation of high notes higher in musical notation ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. "These results are especially fascinating because they do not just explain the origin of the mapping between frequency and elevation," says Parise, "they also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide precious guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments."

The mapping between pitch and elevation has often been considered a metaphorical one, and cross-sensory correspondences have been theorized to be a basis for language development. The present findings demonstrate that, at least in the case of pitch and elevation, such a metaphorical mapping is in fact embodied and grounded in the statistics of the environment, raising the intriguing hypothesis that language itself might have been shaped by statistical mappings between naturally occurring sensory signals. Beyond pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as the association of yellow-reddish colors with warm temperatures, or of sour foods with a sharp taste. This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities found in the natural environment.