Contents

Executive Summary

Our project communicates with an accelerometer to make music. The user can attach the sensor to everyday objects like books or a table top, and the BeagleBone Black uses the sensor's outputs to play sounds. In this way, a user can play the drums, bongos, maracas, et cetera without a drum kit or other bulky equipment.

The implementation of this project will involve the combination of four major parts.

Building node.js add-ons requires a special build manager called node-gyp. To install it, you must first install nodejs and npm.

You can then install node-gyp using npm install -g node-gyp (note: if you are not already root, this requires sudo privileges).

Build the motion_io modules.

Change directory to the /motion_io subfolder.

Use node-gyp clean configure build to fully rebuild the module.

If you wish, you can test the module using node run.js, a simple test script that prints out every motion event it receives.

Install dependencies for the server.

In the root project directory, run npm install. This will download the final-fs and socket.io libraries required by the server.

Start the server by running node server.js. This will start the server listening on port 3001.

User Instructions/Highlights

Our project allows the user to navigate to a port on the BeagleBone. The bone then provides an interface that allows the user to select which instrument he or she wishes to play.

At this point, the user can shake, strike, or otherwise agitate the ADXL345 accelerometer connected to the BeagleBone. The BeagleBone processes these movements and plays sounds corresponding to the selected instrument through the browser. In this way the user can treat the sensor like a maraca, bongo, or other percussion instrument to emulate a real percussionist.

A video of our working prototype is available at (link temporarily redacted due to spam filter).

Theory of Operation

When the program first runs it initializes the accelerometer to sample at 100 Hz with a range of +/-2g. Whenever a sample is collected, the accelerometer sends an interrupt signal to one of the BBB's GPIO pins. Our code then uses the I2C protocol to retrieve that sample and any others collected in the meantime. The accelerometer has a FIFO queue that buffers up to 32 samples in the event that the BBB does not service the interrupt before the next samples arrive.
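The setup and FIFO drain described above can be sketched as follows. The register addresses come from the ADXL345 datasheet, but the bus object (with writeByte/readByte/readBlock methods) is a hypothetical stand-in for whatever I2C binding is used on the BBB, and the exact configuration values here are illustrative rather than our project's actual code.

```javascript
// ADXL345 register map (per the datasheet).
const REG = {
  BW_RATE: 0x2c,     // output data rate
  POWER_CTL: 0x2d,   // measurement on/off
  DATA_FORMAT: 0x31, // g range
  DATAX0: 0x32,      // six data bytes start here (X0..Z1)
  FIFO_CTL: 0x38,
  FIFO_STATUS: 0x39,
};

function initAccelerometer(bus) {
  bus.writeByte(REG.BW_RATE, 0x0a);     // 100 Hz sample rate
  bus.writeByte(REG.DATA_FORMAT, 0x00); // +/-2g range
  bus.writeByte(REG.FIFO_CTL, 0x80);    // stream mode: FIFO keeps up to 32 samples
  bus.writeByte(REG.POWER_CTL, 0x08);   // enter measurement mode
}

// Sign-extend a little-endian 16-bit sample.
const toInt16 = (hi, lo) => (((hi << 8) | lo) << 16) >> 16;

// Called when the interrupt pin fires: read every sample buffered in
// the FIFO since the last service.
function drainFifo(bus) {
  const count = bus.readByte(REG.FIFO_STATUS) & 0x3f; // entries buffered
  const samples = [];
  for (let i = 0; i < count; i++) {
    const b = bus.readBlock(REG.DATAX0, 6); // one 6-byte x/y/z sample
    samples.push({
      x: toInt16(b[1], b[0]),
      y: toInt16(b[3], b[2]),
      z: toInt16(b[5], b[4]),
    });
  }
  return samples;
}
```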

The BBB then conditions the signal so it can ignore any offsets (such as those due to the force of gravity) and small changes, such as noise from the device being rotated. By changing a few parameters of our conditioning algorithm we can control how strong a strike or shake must be before the device detects it.

To condition the signal, we first considered the fact that the accelerometer is affected by the force of gravity. This adds an offset that depends on the orientation in which the user holds the accelerometer and skews our calculation. To factor it out, we take the derivative of the acceleration by subtracting the previous sample from each incoming one. By accumulating the resulting values we eliminate this offset.

However, if we reorient the device the offset comes back--we merely calibrated the offset for the accelerometer's starting position. To continuously factor out the offset as the orientation changes, we made the accumulator sum only the latest handful of samples. This essentially creates a highpass filter. By changing the number of points accumulated at once we can control how quickly a change must occur to avoid being filtered out. This makes shakes and strikes much easier to detect.

Next we accumulate our filtered acceleration values to get a velocity. When the velocity is zero (and the acceleration is non-zero) we have found a shake or a strike. This is when our program reports the strike to the browser. We also take the acceleration value at that point and use it to set the volume of the sound we make.
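The conditioning and detection steps described above can be sketched along a single axis as follows. WINDOW and THRESHOLD are illustrative tuning parameters, not the values used in our actual code.

```javascript
const WINDOW = 8;     // derivatives summed at once (sets the highpass cutoff)
const THRESHOLD = 50; // minimum |filtered accel| to count as a strike

function makeDetector(onStrike) {
  let prev = null;      // previous raw sample
  const diffs = [];     // the latest derivatives
  let accel = 0;        // windowed sum of derivatives = filtered acceleration
  let velocity = 0;
  let prevVelocity = 0;

  return function feed(sample) {
    if (prev === null) { prev = sample; return; }
    const d = sample - prev; // differentiate: cancels constant offsets
    prev = sample;           // such as the one due to gravity
    diffs.push(d);
    accel += d;
    // Keep only the latest WINDOW derivatives; this is the crude
    // highpass filter that tracks slow orientation changes.
    if (diffs.length > WINDOW) accel -= diffs.shift();

    prevVelocity = velocity;
    velocity += accel; // integrate the filtered acceleration

    // Report a strike where velocity crosses zero while the filtered
    // acceleration is still significant; its magnitude sets the volume.
    if (prevVelocity !== 0 &&
        Math.sign(velocity) !== Math.sign(prevVelocity) &&
        Math.abs(accel) > THRESHOLD) {
      onStrike(Math.abs(accel));
    }
  };
}
```

Feeding a constant stream (gravity only) produces no events; a brief spike produces exactly one strike whose intensity is the filtered acceleration at the zero crossing.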

In designing our architecture we encountered a problem: how best to transfer data to our client web application while also reading data from our accelerometer. While data collection and processing would be easiest in C or a similar language, we wanted to use WebSockets to communicate with the web browser. As a novel compromise, we opted to write our own node.js add-on in C++ and pass relevant updates back to JavaScript, which in turn integrates with the higher-level networking libraries to communicate with the browser.

The basics of writing a node.js add-on are described in the node.js API documentation, but we also drew on numerous other sources to build this module. Specifically, because of node.js's event-driven model, we had to integrate with libuv (node.js's run loop and threading library) in order to preserve the asynchronous style of node.js. This proved to be a relatively low technical hurdle, as most of the libuv API closely mirrors its synchronous counterparts in standard OS libraries (e.g., uv_poll vs. poll). However, we did encounter concurrency issues that we have not yet been able to address, involving segmentation faults somewhere between our data processing and JavaScript callbacks. Our add-on runs smoothly for 100 to 300 invocations of our callback, at which point it mysteriously crashes, seemingly due to a bad pointer address. Were we to continue the project, this would be our final remaining bug to fix.

As for the actual implementation of the module, the data we pass back to JavaScript includes only an intensity measure, which is scaled based on observed ranges of the accelerometer rather than its theoretical limits in order to produce a better-sounding result. The callback in JavaScript then uses the socket.io library to emit a play_sound event to all listening clients, along with an intensity value which the clients use to adjust the volume of the played sound.
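The server-side callback described above amounts to something like the following sketch. OBSERVED_MAX stands in for the empirically observed ceiling of the intensity values (the real figure was tuned by experiment), and io is any object with a socket.io-style emit method.

```javascript
// Hypothetical observed ceiling of raw intensity values, not the
// accelerometer's theoretical limit.
const OBSERVED_MAX = 600;

function makeMotionHandler(io) {
  return function onMotion(rawIntensity) {
    // Scale to 0..1 against the observed range so typical strikes span
    // the full volume range instead of clustering near zero.
    const intensity = Math.min(rawIntensity / OBSERVED_MAX, 1);
    io.emit('play_sound', { intensity }); // all connected clients play
  };
}
```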

On the client side we used WebAudio to play back static mp3 files which are also hosted by our node.js server. When the client opens a new websocket connection, a set_sounds event is emitted by the server to the client, enumerating all available audio files. The client uses this list to initialize a SoundList structure on its end, which holds both the list of available sounds and the logic to download and retain cached buffers of those sounds, allowing for seamless playback. Then, when a sound is selected by the client (either by explicit user action, or implicitly when a new SoundList is initialized), the relevant resources are downloaded from the server, and playback begins as soon as playback events are received from the server.
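The caching behavior of the SoundList structure can be sketched like this. The real implementation decodes audio with WebAudio in the browser; here the loader is injected (e.g., fetch plus decodeAudioData in practice) so the caching logic stands alone, and the method names are illustrative rather than the project's actual API.

```javascript
class SoundList {
  constructor(names, loadBuffer) {
    this.names = names;           // from the server's set_sounds event
    this.loadBuffer = loadBuffer; // async: name -> decoded audio buffer
    this.cache = new Map();       // name -> retained buffer
  }

  // Download and retain the buffer for a sound so that later
  // play_sound events can start playback without a network round trip.
  async select(name) {
    if (!this.cache.has(name)) {
      this.cache.set(name, await this.loadBuffer(name));
    }
    return this.cache.get(name);
  }
}
```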

Work Breakdown

Accelerometer interface -- Will Elswick - 7 hours

Signal conditioning -- Will Elswick - 9 hours

Node.js addon -- James Savage - 12 hours

Web browser interface -- James Savage - 7 hours

Inspiration/sarcasm -- James Savage - his entire life

Future Work

This project could easily be expanded to take input from multiple accelerometers. We did not have any other ADXL345 accelerometers, so we could not duplicate our inputs, but with more sensors we could easily modify our interface to recognize each accelerometer and assign it its own unique sounds. This way we could have a whole percussion section or even a whole drum kit.

Additionally, we could add some composing tools to the interface so that the user could record, play back, or loop a performance. By switching instruments, someone could create an entire rhythmic track using only the one accelerometer we have.

Conclusions

Our prototype shows that a BeagleBone Black, a single ADXL345 accelerometer, and a web browser are enough to emulate a playable percussion instrument. The pieces of the system -- I2C sampling, signal conditioning, a custom node.js add-on, and a WebAudio client -- work together end to end, with the intermittent add-on crash described above as the one outstanding bug. With additional sensors and the composing tools described under Future Work, the system could grow from a single virtual instrument into a complete percussion section.