Session: "Sound & Music"

LoopMaker: Automatic Creation of Music Loops from Pre-recorded Music

Paper abstract:
Music loops are seamlessly repeatable segments of music that can be used for music composition as well as backing tracks for media such as videos, webpages, and games. They are regularly used both by professional musicians and by novices with very little experience in audio editing and music composition. The process of creating music loops can be challenging and tedious, particularly for novices. We present LoopMaker, an interactive system that assists users in creating and exploring music loops from pre-recorded music. Our system can be used in a semi-automatic mode, in which it refines a user's rough selection of a loop. It can also be used in a fully automatic mode, in which it creates a number of loops from a given piece of music and lets the user explore these loops interactively. Our user study suggests that our system makes the loop creation process significantly faster, easier, and more enjoyable than manual creation for both novices and experts. It also suggests that the quality of these loops is comparable to that of loops created manually by experts.
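The abstract does not describe LoopMaker's algorithm, but the core problem of finding a seamlessly repeatable segment can be illustrated with a common baseline approach: search candidate (start, end) pairs and pick the pair whose boundary frames are most similar, so the jump from the end back to the start is least audible. This is a minimal sketch under that assumption, not the authors' method; all parameter values are illustrative.

```python
import numpy as np

def find_loop_point(samples, frame=2048, min_gap=44100):
    """Toy loop-point finder (not the LoopMaker algorithm).

    Scans candidate (start, end) positions on a coarse grid and returns
    the pair whose audio frames match most closely, i.e. the pair where
    wrapping playback from `end` back to `start` is least noticeable.
    """
    n = len(samples)
    best, best_err = None, np.inf
    for s in range(0, n - min_gap - frame, frame):
        for e in range(s + min_gap, n - frame, frame):
            # Mean squared difference between the frame that follows the
            # loop start and the frame that would follow the wrap point.
            err = np.mean((samples[s:s + frame] - samples[e:e + frame]) ** 2)
            if err < best_err:
                best, best_err = (s, e), err
    return best
```

A real system would refine this with finer search grids, spectral (rather than waveform) similarity, and beat alignment, but the brute-force version above conveys the objective being optimised.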

Paper abstract:
Graphical displays are a typical means for conveying awareness information in groupware systems to help users track joint activities, but are not ideal when vision is constrained. Understanding how people maintain awareness through non-visual means is crucial for designing effective alternatives for supporting awareness in such situations. We present a lab study simulating an extreme scenario in which 32 pairs of participants use an audio-only tool to edit shared audio menus. Our aim is to characterise collaboration in this audio-only space in order to identify whether, and how, audio by itself can mediate collaboration. Our findings show that the means of audio delivery and the choice of working styles in this space influence the types and patterns of awareness information exchange. We thus highlight the need to accommodate different working styles when designing audio support for awareness, and extend previous research by identifying types of awareness information to convey in response to group work dynamics.

Investigating Perceptual Congruence between Data and Display Dimensions in Sonification

Paper abstract:
The relationships between sounds and their perceived meaning and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and can cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise, and pitch relate to the perceived magnitude of stress, error, and danger. These parameters were chosen due to previous findings that suggest perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preference depend on the data:sound mapping. This work provides polarity and scaling values that may be used directly by sonification designers to improve auditory displays in areas such as accessible and mobile computing, process monitoring, and biofeedback.
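The polarity and scaling values the paper reports slot naturally into a standard parameter-mapping sonification: a normalised data value is raised to a power-law exponent (scaling) and optionally inverted (polarity) before being mapped onto an acoustic dimension such as pitch. The sketch below illustrates that structure only; the default exponent, polarity, and frequency range are placeholder assumptions, not the values measured in the paper.

```python
def map_to_pitch(value, exponent=1.0, polarity=+1,
                 f_min=200.0, f_max=2000.0):
    """Map a normalised data value in [0, 1] to a frequency in Hz.

    `exponent` is a Stevens-style power-law scaling factor and
    `polarity` flips the mapping direction (+1: larger data values give
    higher pitch; -1: larger data values give lower pitch). All numeric
    defaults are illustrative placeholders.
    """
    v = value ** exponent          # apply perceptual scaling
    if polarity < 0:
        v = 1.0 - v                # invert for negative polarity
    return f_min + v * (f_max - f_min)
```

A designer would substitute the empirically derived exponent and polarity for a given data:sound pairing (e.g. danger mapped to roughness rather than pitch) in place of these defaults.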