Bipolar

Bipolar is the result of a short experimental journey into visualising sound using computer vision. The initial idea was to capture a mesh of my face in realtime and warp it using the sound buffer data coming in from the microphone as I speak. Initially I explored ofxFaceTracker, but I had trouble segmenting the mesh, so I moved to the Kinect camera. I had a rough idea of how the final result might look, but it turned out quite differently.

As this intense spiky effect began to take shape I realised this would be perfect for the chaotic and dark sound of Dubstep. Thankfully I know just the guy to help here. I met the DJ and producer Sam Pool AKA SPL at the Fractal ’11 event in Colombia. He kindly offered to contribute some music to any future projects so I checked out his offerings on SoundCloud and found the perfect track in Lootin ’92 by 12th Planet and SPL. This, of course, meant I would have to perform to the music. Apologies in advance for any offence caused by my “dancing” 🙂

This was built using openFrameworks and Theo Watson’s ofxKinect addon, which now offers excellent depth->RGB calibration. I’m building a mesh from this data and calculating all the face and vertex normals. Every second vertex is then extruded in the direction of its normal using values taken from the microphone.

The project is still at the prototype stage and needs some refactoring and optimisation. Once it is looking a little better I will release the code.

but, it seems to be beat connected, not motion controlled as much as motion captured…am i right?
now, i DO dig the work-in-progress nature. if this is as embryonic as i think it is, james, there are many venues that this could be used in.
music-wise: check a site called FWD: if you aren’t already aware of it. a whole range of electronic (dancefloor) music ”lives” there, i like to imagine.
also, re: dubstep- for this effect you are birthing it seems waaaaay too busy.
look into some minimal/minimalism etc. so your effect is highlighted in an almost mathematically hypnotic manner, not a thunder-beat “machines-they-be-comin’-to-get-me!” kind of chaos sound.
though, i liked what i heard that mr pool had to offer.

and, oh yeah, just move, bro. those who can dance to dubstep well have been practicing. the beats and the genres change so fast it seems foolish to learn to dance to music with no set rhythm at ALL. so no need to apologize! dancing is only about that beat. only.

i obviously like what i see. a couple of your flickr pics are gonna hit my circle and we will see what who likes what.
thanks for the work.

Yes, this is controlled by the sound data coming in from the microphone.

Bipolar is intended to be, as you say, the first in a series of experiments in this area of music/sound visualisation. The idea of the more minimal, ambient approach is interesting. I might just explore this idea further. I understand how you might feel that it is too busy. Once I saw this effect develop I knew that a dark and twisted Dubstep track would, in my mind, perfectly match it.

Hey James, It looks really cool!
I am working on an installation with Kinect, http://vimeo.com/22597384,
and I am looking for a way to build a mesh (triangulation?) from the vertex array that ofxKinect gives you. Do you know if any addons exist that work like this?

Check out the ofMesh class. You will need to create the mesh by adding the vertices and indices, then calculate all the face normals. You then calculate each vertex normal by averaging the normals of the surrounding faces, and add these to the ofMesh.

Thanks! I don’t use Max/MSP and haven’t used Processing properly for ages. I’m not sure Processing would be fast enough to do something like this; OF/Cinder are best for this sort of thing. I recently released an add-on that takes care of a lot of the depth processing. There is no example yet, but you can see it here – https://github.com/jamesalliban/ofxKinectDepthUtils