Modeling and Continuous Sonification of Affordances for Gesture-Based Interfaces

Abstract

Sonification can play a significant role in facilitating continuous, gesture-based input in closed-loop human-computer interaction, where it offers the potential to improve the user experience, making systems easier to use by rendering their inferences more transparent. The interactive system described here provides a number of gestural affordances that may not be apparent to the user through a visual display or other cues, and offers novel means of navigating them with sound or vibrotactile feedback. The approach combines machine learning techniques for understanding a user's gestures with a method for the real-time auditory display of salient features of the underlying inference process. It uses a particle filter to track multiple hypotheses about a user's input as it unfolds, together with Dynamic Movement Primitives, introduced in work by Schaal et al. [1], [2], which model a user's gesture as evidence of a nonlinear dynamical system that has given rise to it. The sonification is based on presenting features derived from estimates of the time-varying probability that the user's gesture conforms to state trajectories through the ensemble of dynamical systems. We propose mapping constraints for the sonification of time-dependent sampled probability densities. The system is initially being assessed with trial tasks such as figure reproduction using a multi-degree-of-freedom wireless pointing input device, and a handwriting interface.
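The core inference loop described above — a particle filter tracking competing Dynamic Movement Primitive hypotheses, whose time-varying model posterior is the quantity handed to the sonification — can be sketched in miniature. This is not the authors' implementation: it is a minimal 1-D illustration in which the DMP forcing term is omitted (weights set to zero), so each primitive reduces to a critically damped spring toward a hypothetical goal, and the two candidate goals, noise levels, and particle counts are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# DMP transformation/canonical-system constants (Schaal et al.); the
# nonlinear forcing term is omitted here, so each primitive is simply
# a critically damped spring toward its goal g.
ALPHA_Z, BETA_Z, ALPHA_X, TAU, DT = 25.0, 6.25, 1.0, 1.0, 0.01

def dmp_step(y, z, x, g):
    """One Euler step of the DMP state (position y, velocity z, phase x)."""
    zd = ALPHA_Z * (BETA_Z * (g - y) - z) / TAU
    yd = z / TAU
    xd = -ALPHA_X * x / TAU
    return y + DT * yd, z + DT * zd, x + DT * xd

# Two hypothetical gesture models, distinguished only by their goals.
GOALS = np.array([1.0, -1.0])

# Simulate an unfolding user gesture: the user actually performs model 0.
obs, (y, z, x) = [], (0.0, 0.0, 1.0)
for _ in range(120):
    y, z, x = dmp_step(y, z, x, GOALS[0])
    obs.append(y + rng.normal(0.0, 0.01))

# Particle filter: each particle carries a model hypothesis plus DMP state.
N = 400
model = rng.integers(0, 2, size=N)
py, pz, px = np.zeros(N), np.zeros(N), np.ones(N)
w = np.full(N, 1.0 / N)
SIG_OBS, SIG_PROC = 0.05, 0.005   # assumed noise scales for the demo

for o in obs:
    # Propagate each particle through its hypothesised dynamical system.
    for m in (0, 1):
        sel = model == m
        py[sel], pz[sel], px[sel] = dmp_step(py[sel], pz[sel], px[sel], GOALS[m])
    py += rng.normal(0.0, SIG_PROC, N)
    # Reweight by the likelihood of the latest observed gesture sample.
    w *= np.exp(-0.5 * ((o - py) / SIG_OBS) ** 2)
    w /= w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=w)
        model, py, pz, px = model[idx], py[idx], pz[idx], px[idx]
        w = np.full(N, 1.0 / N)

# Per-model posterior probability at the current instant: in the full
# system, features of this density would drive the audio/vibrotactile
# feedback continuously as the gesture unfolds.
posterior = np.array([w[model == m].sum() for m in (0, 1)])
print(posterior)
```

In a real deployment the posterior would be recomputed at every input sample and mapped, under the proposed constraints, to synthesis parameters; here it simply concentrates on the model that actually generated the gesture.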