NOTE: All compositionally significant parameters within the sonification modules have built-in dynamic OSC mappers that can discover available parameters and use them to modulate their immediate state.
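The discovery-and-modulation idea can be sketched as follows. This is a minimal illustration, not the system's actual API: the class names, the attribute-based "discovery", and the OSC-style address scheme are all assumptions made for the example.

```python
class DynamicOSCMapper:
    """Hypothetical sketch: discover a module's exposed parameters and
    modulate them via OSC-style address patterns."""

    def __init__(self, module):
        self.module = module
        # "Discovery": treat every public numeric attribute as a mappable
        # parameter and assign it an OSC-style address.
        self.params = {
            name: f"/{type(module).__name__.lower()}/{name}"
            for name, value in vars(module).items()
            if not name.startswith("_") and isinstance(value, (int, float))
        }

    def modulate(self, name, value):
        # Route an incoming value to the discovered parameter, returning
        # the address it was mapped to (or None if unknown).
        if name in self.params:
            setattr(self.module, name, value)
        return self.params.get(name)


class GrainSynth:
    """Hypothetical sonification module with two compositional parameters."""
    def __init__(self):
        self.density = 0.5
        self.pitch = 440.0


synth = GrainSynth()
mapper = DynamicOSCMapper(synth)
print(sorted(mapper.params))      # the parameters the mapper discovered
mapper.modulate("pitch", 220.0)   # modulate the module's immediate state
print(synth.pitch)
```

In a real deployment the address patterns would be served over OSC (e.g., via a library such as python-osc) rather than called directly.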

Integration of structure-borne sound with classic room-effect synthesis, spherical harmonics, and other spatialization practices (e.g., in ED, where sound moved from above and traveled underneath the sand).
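The above/below trajectory maps naturally onto the height component of a spherical-harmonic encoding. As a hedged illustration of the kind of spatialization the text refers to (not necessarily the encoding used in the piece), here is standard first-order Ambisonic (B-format) panning of a mono sample:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """First-order Ambisonics (B-format, FuMa weighting) encoding of a
    mono sample at a given direction, in radians."""
    w = sample * (1.0 / math.sqrt(2.0))                    # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)   # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)   # left-right
    z = sample * math.sin(elevation)                       # up-down
    return w, x, y, z

# A source moving from overhead (elevation = +pi/2) to underneath
# (elevation = -pi/2) shifts energy from +Z to -Z while W stays constant:
print(encode_b_format(1.0, 0.0, math.pi / 2))
print(encode_b_format(1.0, 0.0, -math.pi / 2))
```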

This multidimensional platform operates across a wide spectrum of contexts with a flexible structure that can be reconfigured and authored in real time by the user. Whenever required, the system can learn, continuously re-adapt, calibrate, and entrain itself on the fly. The different control levels, the flexibility of the mapping space, and the broad interaction topography allow for intuitive as well as compositionally determinant authoring, providing a system that is at once suited to on-the-fly prototyping of interaction scenarios, improvisation, composition, research-exploration, and the creation of refined installations, performances, and works of art.

The adaptability, multidimensionality, and reconfigurability of the system make it particularly useful for working with dancers and performers, where immediate creative ideas and interaction scenarios need to be continuously and quickly prototyped, explored, expanded upon, refined, rehearsed, and performed. In user-guided machine-dancer improvisations, this chain of creative processes can overlap and evolve in real time, helping form macro-structures and maximizing artistic expressivity.

FUSION

Gestural sound control is most often based on mapping gesture parameters to sound-synthesis parameters. In a multidimensional setup, the main difficulty lies in choosing appropriate mapping strategies between low- and high-level parameters and across different temporal zones. The Gesture Bending Suite fuses these many levels.
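The low-level/high-level and temporal-zone distinction can be made concrete with a small sketch. All parameter names here are illustrative assumptions, not the Gesture Bending Suite's actual mappings: fast, frame-rate features drive synthesis directly, while a slowly accumulated "phrase energy" (a running average standing in for a high-level descriptor) modulates a global parameter:

```python
class MappingLayer:
    """Sketch of mapping across temporal zones: per-frame (fast) mappings
    plus a slow, high-level feature derived from the same gesture stream."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.phrase_energy = 0.0   # slow temporal zone, high-level parameter

    def map_frame(self, speed, pressure):
        # Fast zone: direct, one-to-one mappings at frame rate.
        grain_rate = 5.0 + 95.0 * speed   # gesture speed -> grain rate (Hz)
        amplitude = pressure              # contact pressure -> loudness

        # Slow zone: exponential moving average accumulates phrase energy.
        self.phrase_energy = (self.smoothing * self.phrase_energy
                              + (1.0 - self.smoothing) * speed)
        reverb_mix = min(1.0, self.phrase_energy)  # high-level -> room mix

        return {"grain_rate": grain_rate,
                "amplitude": amplitude,
                "reverb_mix": reverb_mix}


layer = MappingLayer()
for _ in range(50):   # a sustained fast gesture...
    out = layer.map_frame(speed=1.0, pressure=0.8)
# ...gradually opens the reverb while the fast mappings respond instantly.
print(out["grain_rate"], round(out["reverb_mix"], 2))
```

The design point is that the two zones are fused in one layer: a single gesture stream feeds both, so a performer's moment-to-moment action and the arc of a phrase shape the sound simultaneously.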

The tone, texture, and intensity of the synthesised sound are influenced by the actual sound or haptic interaction with matter as picked up by the piezo.

Additional descriptors, extracted from the audio and the cameras, further influence other synthesis parameters such as damping, pitch, and harmonicity. (For example, the same gesture performed at different places on the surface produces the sound at a different pitch.)
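A minimal sketch of this descriptor-to-parameter routing, assuming hypothetical feature names and scalings (the actual system's descriptors and ranges are not specified here): the piezo energy sets intensity, a spectral descriptor steers damping, and camera-tracked surface position selects pitch, so an identical gesture at a different spot sounds at a different pitch.

```python
def synthesis_params(piezo_rms, spectral_centroid, position):
    """Map audio/camera descriptors to synthesis parameters.

    piezo_rms         -- contact energy from the piezo (0..1)
    spectral_centroid -- brightness of the contact sound, in Hz
    position          -- camera-tracked location along the surface (0..1)
    """
    intensity = piezo_rms                                     # energy -> loudness
    damping = 1.0 - min(1.0, spectral_centroid / 8000.0)      # brighter -> less damped
    pitch_hz = 220.0 * (2.0 ** position)                      # position spans one octave
    return {"intensity": intensity, "damping": damping, "pitch": pitch_hz}


# Identical gesture (same piezo and audio features) at two surface positions:
left = synthesis_params(0.3, 2000.0, position=0.0)
right = synthesis_params(0.3, 2000.0, position=1.0)
print(left["pitch"], right["pitch"])   # same gesture, different pitch
```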