Full Description

Eavesdropping is an internet-based, interactive audio system that explores network-mediated musical performance in shared public spaces. In public environments, individuals interact by means of a variety of bodily and auditory cues and gestures. These ambient communication techniques can be directed at specific individuals or may be general expressions of mood meant for anyone who happens to notice. Visitors to public spaces, such as a café, seek the passive awareness of others to achieve a sense of connectedness born of shared experience, like the audience in a music venue. This project highlights the exhibitionism and voyeurism of the public sphere by amplifying participants' moods through music and multiplying shared experiences to encourage deeper interaction. It aims to develop an environment that increases audience interaction and connectedness in a localized, computer-controlled performance.

The system is a client-server architecture made of three components: (1) an audio preparation interface, (2) an interactive performance interface, and (3) a machine learning-based conductor. Musicians contribute audio files that represent participants' moods. During a performance, participants input their current mood, and the artificial conductor mixes an acoustic ecology based on that mood data. Participants are then encouraged to respond to whether the audio represents the mood they entered, allowing the system to learn from audience responses and represent participants' moods more accurately over time.
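The conductor's learning loop described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation: the class name, the per-mood clip weights, and the multiplicative feedback update are all assumptions made for the sake of example.

```python
import random

# Hypothetical sketch of the conductor's feedback loop: for each mood
# label it keeps a weight per contributed audio clip, mixes a selection
# of clips weighted by those scores, and nudges the weights up or down
# depending on whether participants confirm the mix matched their mood.
class MoodConductor:
    def __init__(self, moods, clips, mix_size=2, rate=0.2):
        self.mix_size = mix_size
        self.rate = rate  # how strongly one response shifts the weights
        # one weight per (mood, clip) pair, all starting equal
        self.weights = {m: {c: 1.0 for c in clips} for m in moods}

    def mix(self, mood):
        """Choose clips for a mood, sampling by learned weight."""
        w = self.weights[mood]
        clips = list(w)
        return [
            random.choices(clips, weights=[w[c] for c in clips])[0]
            for _ in range(self.mix_size)
        ]

    def feedback(self, mood, clips, matched):
        """Reinforce or penalize the clips in a mix after a response."""
        factor = 1 + self.rate if matched else 1 - self.rate
        for c in clips:
            self.weights[mood][c] *= factor

# Example round: a participant inputs "calm", hears the mix, and
# confirms that it matched their mood.
conductor = MoodConductor(["calm", "tense"], ["birds", "drone", "rain"])
chosen = conductor.mix("calm")
conductor.feedback("calm", chosen, matched=True)
```

Over many rounds such updates concentrate weight on the clips audiences associate with each mood, which is the learning behavior the description attributes to the artificial conductor.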