HYPERSENSE COMPLEX

Alistair Riddell

Somaya Langley

Simon Burton

ABOUT

The goal of the HyperSense project is to explore different ways of interacting with a computer to produce sound.

HARDWARE
At the moment we are using flex sensors from
www.imagesco.com/catalog/flex/FlexSensors.html .
Their resistance increases the more you bend them.
Each sensor feeds a voltage divider whose output goes into a microcontroller unit (MCU).
Each performer has an MCU, and each MCU has 8 analog inputs, so
we have ended up wiring up 8 fingers each.
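As a rough sketch of the voltage-divider stage, the relation between sensor resistance and ADC count works out as below. The supply voltage, fixed resistor value, and sensor resistances are illustrative assumptions, not the actual circuit values:

```python
# Voltage-divider maths for one flex sensor (component values are
# illustrative): the sensor's resistance rises as it bends, so the
# divider's output voltage falls and the ADC reading drops with it.

V_SUPPLY = 5.0        # MCU supply voltage (assumed)
R_FIXED = 10_000.0    # fixed divider resistor in ohms (assumed)

def adc_reading(r_sensor_ohms, bits=8):
    """ADC count for a sensor of the given resistance (sensor on the
    top leg of the divider, fixed resistor on the bottom)."""
    v_out = V_SUPPLY * R_FIXED / (R_FIXED + r_sensor_ohms)
    return round(v_out / V_SUPPLY * (2**bits - 1))

flat = adc_reading(10_000)   # sensor flat (assumed ~10k ohms)
bent = adc_reading(30_000)   # sensor bent (resistance roughly triples)
```

So a straight finger and a bent finger land at clearly separated counts, which is all the gesture code downstream needs.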

The MCU we use is the
Atmel 8535 ($10 US), which runs at 8MHz; we use kits from
www.dontronics.com
(they have international distribution).

Also from Dontronics we have these really neat little USB units ($30 US). They are
wired to the MCU over 12 data lines: 8 bits of data plus Tx/Rx toggle lines.

This allows each micro to send data via some lengthy USB cables and
a hub (for the three of us) to a laptop. A second laptop, connected
via ethernet, produces the sound. The system also has an 8-channel
MOTU box for audio output.

SOFTWARE
The MCUs sample each sensor 100 times per second. This raw data is then encoded
in a MIDI-like protocol and sent over the USB.
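The exact byte layout of the MCUs' protocol is not documented here, but a MIDI-like framing might look like this sketch: a status byte with the high bit set carries the sensor number, and a following 7-bit data byte carries the reading, so a receiver can resynchronise on any status byte.

```python
# Hypothetical MIDI-like framing for the sensor stream (the actual
# byte layout used by the MCUs may differ).

def encode_sample(sensor, value):
    """Pack one sensor reading into two bytes, MIDI style: a status
    byte (high bit set, low bits = sensor number 0-7) then a 7-bit
    data byte."""
    assert 0 <= sensor < 8 and 0 <= value < 128
    return bytes([0x80 | sensor, value])

def decode_stream(data):
    """Recover (sensor, value) pairs, resynchronising on status bytes
    so a dropped byte only loses one sample."""
    pairs, i = [], 0
    while i + 1 < len(data):
        if data[i] & 0x80:                  # status byte found
            pairs.append((data[i] & 0x07, data[i + 1]))
            i += 2
        else:                               # skip a stray data byte
            i += 1
    return pairs
```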

USB drivers for these devices are available for Linux, OS X and probably Windows.
We develop on Linux and OS X, where the device appears as a file in /dev.
This file is read and processed by a (mostly custom-built) program written in Python.
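The reading loop itself can be very small. A sketch, assuming a MIDI-like two-byte framing (status byte with the high bit set, then a data byte); in the real system the stream would be the open device file, whose path depends on the platform:

```python
import io

def read_samples(dev):
    """Yield (sensor, value) pairs from a binary stream carrying a
    MIDI-like protocol: a status byte with the high bit set holds the
    sensor number, the next byte holds the 7-bit reading."""
    while True:
        status = dev.read(1)
        if not status:
            break                      # stream closed / EOF
        if status[0] & 0x80:           # resynchronise on status bytes
            data = dev.read(1)
            if not data:
                break
            yield status[0] & 0x07, data[0]

# In the real system `dev` would be the USB device file, e.g.
# open("/dev/ttyUSB0", "rb") -- that path is an assumption; here we
# feed a canned stream for illustration.
stream = io.BytesIO(bytes([0x83, 100, 0x85, 7]))
samples = list(read_samples(stream))
```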

The receiving Python script is where most of the "smarts" are. It
interprets gestures, builds compositional structures and translates these into
individual sound events.
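The project's actual gesture logic is not documented here, but one simple building block might look like this sketch: flag a "pluck" event whenever a finger's reading crosses a threshold on the way up. The threshold and the event name are illustrative assumptions.

```python
# Hypothetical gesture primitive: detect upward threshold crossings in
# a stream of readings from one finger.

def detect_plucks(readings, threshold=100):
    """Return the indices where the reading rises past the threshold
    (previous sample below it, current sample at or above it)."""
    plucks = []
    for i in range(1, len(readings)):
        if readings[i - 1] < threshold <= readings[i]:
            plucks.append(i)
    return plucks
```

Events like these would then be grouped into larger compositional structures before being turned into sound commands.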

The audio engine is the freely available SuperCollider (version 3, OS X only),
which the Python script communicates with over the network.
Being network-based has the handy benefit of allowing the use of two laptops: one for the Python
processing, the other for sound generation.
The protocol connecting Python and SuperCollider is the UDP-based Open Sound Control (OSC), which
is kind of like "MIDI meets the internet" and is widely used in sound applications.
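OSC messages are simple enough to build by hand: a null-padded address string, a type-tag string, then big-endian arguments. A minimal encoder sketch; the /play address, host and port in the comment are made-up examples, not the project's actual command set:

```python
import struct

def osc_pad(b):
    """Null-terminate and pad to a 4-byte boundary, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Build a minimal OSC message with int/float/string arguments."""
    tags, payload = b",", b""
    for a in args:
        if isinstance(a, int):
            tags += b"i"; payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += b"f"; payload += struct.pack(">f", a)
        else:
            tags += b"s"; payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags) + payload

# Sending to the sound laptop would then just be a UDP send, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/play", 60), ("192.168.0.2", 57110))
# (address, port and host here are illustrative).
```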

THE OSC PROTOCOL
OSC has two types of commands: immediate and timestamped. Immediate
commands are executed straight away, while timestamped commands are
executed at a specified time in the future.

Timestamps are used for the precise
control needed to produce regular rhythms and beats. This has the limitation of
creating a lag between the gesture and the sound, because the timestamp is
always in the future. Immediate commands, by contrast, can be used for dramatic effect.
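Concretely, an OSC timetag is a 64-bit NTP-style value (seconds since 1900 in 32.32 fixed point), and the special value 1 means "execute immediately"; timestamped commands travel inside a "#bundle". A sketch of both, with the 50 ms lookahead in the example being an illustrative figure:

```python
import struct
import time

NTP_EPOCH_OFFSET = 2208988800        # seconds from 1900 (NTP) to 1970 (Unix)
IMMEDIATE = struct.pack(">Q", 1)     # the special "do it now" timetag

def osc_timetag(unix_time):
    """Encode a Unix time as an OSC/NTP 32.32 fixed-point timetag."""
    seconds = int(unix_time) + NTP_EPOCH_OFFSET
    fraction = int((unix_time % 1) * 2**32)
    return struct.pack(">II", seconds, fraction)

def osc_bundle(timetag, *messages):
    """Wrap already-encoded OSC messages in a bundle: the literal
    '#bundle' string, the timetag, then size-prefixed messages."""
    body = b"".join(struct.pack(">i", len(m)) + m for m in messages)
    return b"#bundle\x00" + timetag + body

# Scheduling 50 ms ahead (an illustrative figure) trades a little lag
# between gesture and sound for sample-accurate rhythm:
tag = osc_timetag(time.time() + 0.05)
```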

These OSC commands contain instructions to start/stop sampled sounds, change
sound effects such as reverb and echo, and make other control changes such as moving
a sound to a different channel or changing its playback rate.
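These kinds of control map onto SuperCollider 3's server commands such as /s_new, /n_set and /n_free; a few illustrative address/argument pairs (the synth name, node ID and parameter names are made up, not the project's actual patch):

```python
# Illustrative (address, arguments) pairs in SuperCollider 3's server
# command style; "sampler", node 1000 and the parameter names are
# hypothetical.
commands = [
    ("/s_new", ["sampler", 1000, 0, 0]),   # start a sampled sound
    ("/n_set", [1000, "rate", 0.5]),       # change its playback rate
    ("/n_set", [1000, "out", 2]),          # move it to another channel
    ("/n_free", [1000]),                   # stop the sound
]
```

Each pair would be encoded as an OSC message and sent, immediately or inside a timestamped bundle, to the sound laptop.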