Special sessions

In addition to traditional CMMR topics, we propose a number of special sessions for CMMR 2013.
The aim of these sessions is to encourage more specific contributions.
Submissions to these sessions are made by selecting the title of the special session in the topics section of the standard paper submission page.

Cognitive Inspiration is defined as the presentation of any result, theory or idea related to cognition, in particular memory and attention,
although all aspects of cognition, and even perception, may be eligible for this session.
We believe that after fifty years (or more) of perceptual inspiration, the time has come for cognition to take its place in music research.
This session particularly welcomes applied cognition, such as the use of cognitive components in algorithms,
but all work within cognitive inspiration is invited, and any researcher with knowledge of cognition to share with the CMMR community is encouraged to submit.
The invitation also extends to work on measuring brain activity (fMRI, EEG, MMN, etc.) in relation to audio and music.

Among the sensory inputs usually reported as having a clear impact on motor planning, online control and adaptive mechanisms,
auditory cues are often considered negligible compared to other available information, such as visual or proprioceptive feedback.
In this session, we aim to reconsider sound as a powerful way to shape movement organization.
Specifically, we would like to elicit contributions on the auditory influence on motor learning and motor control, with a special focus on the sound characteristics that are crucial for yielding substantial effects on movement accuracy and kinematics.

Sonification can arguably be considered from two different viewpoints: that of sound design, where audio form is tailored to best respond to function and to human perception,
or as an artistic principle, a way of anchoring musical composition or sound art to the 'real world', thus modifying the artist's position in the creation, which becomes a 'collaboration' with the sonified environment.
John Cage incorporated the everyday into art to create 'experimental music', which the composer discovers at the same time as the audience.
With sonification art, at least real-time sonification, this might be considered the default mode of reception.
What artistic horizons are opened by sonification, in particular when sonification can be real-time, real-place (in situ), permanent or mobile?
These two approaches to sonification are far from contradictory, and we propose to discuss ways in which concepts relating to the origin of data and the ergonomics of sonification design articulate with one another.

This special session is about interactive sound synthesis applied to music, sound design, and procedural audio for virtual reality and games (diegetic sounds such as environmental textures, contact and interaction sounds).
It covers both gesture capture, analysis, modeling and recognition, and either pure synthesis or making recorded sound available for interaction, e.g. by resynthesis, granular or corpus-based approaches.
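As an illustration of one of the techniques mentioned above, the following minimal sketch shows granular resynthesis: short windowed grains drawn from random positions in a recorded buffer are overlap-added, making a recording available for interactive control. All function and parameter names here are illustrative choices, not taken from any particular system.

```python
import numpy as np

def granular_resynth(source, fs, grain_ms=50, density=100, duration=1.0, seed=0):
    """Minimal granular resynthesis sketch (illustrative parameters).

    Hann-windowed grains taken at random positions in `source` are
    overlap-added at random output positions; interactive control would
    expose grain size, density and read position to the performer.
    """
    rng = np.random.default_rng(seed)
    grain_len = int(fs * grain_ms / 1000)          # grain length in samples
    window = np.hanning(grain_len)                 # smooth grain envelope
    out = np.zeros(int(fs * duration) + grain_len)
    for _ in range(int(density * duration)):       # number of grains
        src_pos = rng.integers(0, len(source) - grain_len)
        out_pos = rng.integers(0, len(out) - grain_len)
        out[out_pos:out_pos + grain_len] += window * source[src_pos:src_pos + grain_len]
    return out / max(1e-9, np.max(np.abs(out)))    # peak-normalize

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
recording = np.sin(2 * np.pi * 220 * t)  # stand-in for a recorded sound
texture = granular_resynth(recording, fs)
```

Randomizing the source read position scrambles the original time structure into a texture; constraining it around a moving playhead instead would give time-stretching-like behaviour.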

Towards a definition of the notion of sound atmosphere.
Over the past few decades, the notions of sound atmosphere and soundscape have gradually emerged as key concepts for approaching, beyond music itself, questions of architecture, environment, geography and acoustics.
Different scientific disciplines, such as phenomenology, anthropology, sociology and physics, have taken an interest in sound atmosphere in order to understand perception, the sound material itself, or the world that surrounds us.
In the arts, cinema increasingly relies on work on sound atmospheres and the layers of sound that constitute them.
This session aims precisely to cross-compare various definitions and approaches of the notion of sound atmosphere, in order to identify common features.

The relationships between images and sounds in audiovisual works have yet to reveal all their secrets.
It remains very difficult to explain, and to make understood, how what is perceived as "a whole" generates perceptions, sensations and feelings that cannot be produced if image and sound are considered separately.
What takes place in this famous interaction, not from a neuro-physiological point of view, but from the point of view of the meanings and sensations that arise within us?
How can sounds and images together produce such an amplification phenomenon?
The objective of this session is, through one or several simple examples, to present, describe and analyze forms of association of image and sound that produce a productive interaction.

Non-stationarity in audio signals often originates from the presence of some underlying dynamics that modifies the intrinsic clock of a background stationary sound.
Many examples of such situations can be mentioned, among which Doppler-modified signals, engine sounds during accelerating motion, and audio echoes from moving targets.
Estimating non-stationarity from such sounds generally yields useful information on the underlying dynamics.
The goal of this special session is to focus on non-stationarity estimation methods, from various points of view, in the context of audio signal processing.
Relevant contributions can include (but are not limited to) estimation techniques based on time-frequency analysis (or more general transforms), non-uniform sampling methods, time-warping approaches and statistical modeling.
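The "intrinsic clock" idea above can be sketched in a few lines: a stationary tone observed through a time warp becomes a frequency-modulated signal, and estimating its instantaneous frequency recovers the derivative of the warp. The warp function and all parameter values below are illustrative assumptions, not prescriptions from the session.

```python
import numpy as np

def warped_tone(duration=1.0, fs=8000.0, f0=440.0, rate=2.0):
    """Stationary tone seen through an illustrative time warp
    gamma(t) = t + sin(2*pi*rate*t) / (20*pi), which alternately
    compresses and dilates the tone's intrinsic clock."""
    t = np.arange(0.0, duration, 1.0 / fs)
    gamma = t + np.sin(2 * np.pi * rate * t) / (20 * np.pi)
    return np.sin(2 * np.pi * f0 * gamma)

def instantaneous_frequency(signal, fs):
    """Crude instantaneous-frequency estimate from the phase of the
    analytic signal (Hilbert transform computed via the FFT)."""
    n = len(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(np.fft.fft(signal) * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

x = warped_tone()
f_inst = instantaneous_frequency(x, 8000.0)
# f_inst oscillates around 440 Hz following the warp's derivative,
# 440 * (1 + 0.2 * cos(4*pi*t)); a stationary tone would give a flat estimate.
```

A flat instantaneous-frequency track thus signals stationarity, while its fluctuations carry the underlying dynamics; the time-warping approaches mentioned above aim, roughly, at inverting such a warp.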

organised by Mathieu Barthet (Centre for Digital Music, Queen Mary University of London)

Research on music and emotions (or moods) is a burgeoning field in the Music Informatics and Music Perception communities, with an ever-increasing number of studies, as exemplified by CMMR 2012's program.
Although pioneering work on the empirical analysis of music-related emotions dates back to the 1940s, the nature of listeners' and performers' emotional responses to music is still poorly understood.
From a technological perspective, computational models for music emotion recognition (MER) have the potential to revolutionise the way music is delivered to recreational and professional consumers.
While promising context- and content-based MER models have already been proposed, little has been done to adapt these models to cultural and user-centered considerations.
Original contributions to this special session are encouraged in, but not limited to, the following topics: perceptual studies on music and emotions, music emotion modeling, music emotion recognition, relationships between musical genre and emotions,
mood-based music recommender systems, user-centered studies, cross-cultural studies, analysis of performers' emotions, computational musicology, and new musical interfaces.