Processing is an open-source coding tool, built in Java, designed specifically to be versatile for artists and friendly to non-coders. Code is elegant and simple, but can take advantage of all the potential power and performance (no, really) of Java. Java really can be fast enough to use in live performance situations, though its one Achilles’ heel is that automatic memory management — the very thing that makes coding easier, via something called a garbage collector — can make sound glitchy at lower latencies. (JavaSound seems worst on Mac OS X, where Apple’s implementation of the sound API hasn’t kept pace with improvements in Java audio on other platforms. It is possible to build a real-time-ready Java implementation that performs as well as languages like C++ for audio, but there’s not yet a mainstream implementation of this type.)

That doesn’t mean Processing isn’t useful for musical applications. With experimentation, sound libraries like Minim can perform quite well, especially if extremely low latency is unnecessary. (See Processing’s libraries page for more.) And you can always use Processing as a visual front end while sound comes from elsewhere (Max, Pd, Reaktor, or even Ableton Live or a plug-in).

There’s plenty of incentive to work with the environment as an artist. People who have never coded before are able to build entire projects in Processing, not just uber-programmer-geeks. Even experienced coders can find it a fast way of experimenting with ideas — sometimes better-suited to tasks that are more difficult in patching environments. Despite all the hype around Flash/AIR/Flex and Silverlight, I find Processing easier to develop in, and you have far more robust development options, free and open source tools and libraries, and genuine OpenGL 3D capabilities.

I put out a call for people working with Processing for music, and we’ve already got a handful of interesting examples. Because of the open community around Processing, code is available for a couple of the ideas here, so you can have a peek and learn from fellow Processing coders.

nodeSeq is a fluid music-making tool that maps melodic materials to nodes, floating as particles in an interactive 2D interface. The physics-y goodness comes from my favorite Processing physics library, traer.physics. In this case, Max/MSP is providing the sound, but any application that supports OSC would work. Full source code is available; it’s worth checking out the libraries used, as they’re all essential downloads for this kind of work. ControlP5 is particularly useful — it maps variables to controls that you can use for live performance or troubleshooting (and they can be hidden, too). It’s in “early development,” so it’ll be interesting to see where Jared takes this.
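nodeSeq’s own networking code isn’t reproduced here, but the OSC wire format it relies on is simple enough to sketch by hand. Below is a minimal, hypothetical Java encoder for a single-float OSC message — the `/node/1/x` address and the method names are my own inventions, not from the project, and in practice a library like oscP5 handles all of this for you:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OscSketch {
    // Pad a string with NULs to a multiple of 4 bytes, per the OSC 1.0 spec
    // (there is always at least one terminating NUL).
    static byte[] oscString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        byte[] out = new byte[(raw.length / 4 + 1) * 4];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Encode an OSC message with a single float argument, e.g. "/node/1/x 0.5".
    static byte[] encodeFloatMessage(String address, float value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            out.write(oscString(address));
            out.write(oscString(",f")); // type-tag string: one float argument
            out.write(ByteBuffer.allocate(4).putFloat(value).array()); // big-endian
        } catch (java.io.IOException e) {
            throw new RuntimeException(e); // can't happen with a byte stream
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] msg = encodeFloatMessage("/node/1/x", 0.5f);
        System.out.println(msg.length + " bytes");
    }
}
```

Sending the resulting bytes over UDP with `java.net.DatagramSocket` to Max/MSP’s `udpreceive` object would complete the chain.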

“Jug Hero” by Shawna Hein (looking on in the background) and Kevin Linn, students at University of California Berkeley, uses jugs as a tangible interface. Blow over the jugs, in the tradition of American jug band music, or clink glasses for extra points. Processing provides the visual interface for the project. The physical interface itself comes from Arduino, which is an ideal companion to Processing, as it uses similar syntax and the same development environment.

What I particularly like about this project is that the creators worked toward more open-ended musical cooperation, rather than the rigid gameplay mechanics introduced by Guitar Hero. Nothing against Guitar Hero, mind you — but why not let loose a bit if you’re building your own game, and try something new?

Again, more versions are planned, so we’ll watch this as it develops.

genPunk is a wonderfully glitchy-sounding music tool built entirely in Processing — sounds and all. It may not be a showpiece for Java’s audio quality (well, at least they chose to make it sound this way), but it is a fantastic example of how you can use Processing to build a lightweight tool around your own personal musical ideas and style. And, being braver than me, they’ve put up all their code, warts and all, in “alpha mode.” Argentinean code madman and TOPLAP advocate Ivan Ivanoff is the scientist behind this.

If you’ve got more projects, we’d love to see them.

Later this month, too, I hope to give some background on how to really get audio and MIDI working right now on different platforms. There are some tricks to it, definitely. Stay tuned.

The fantastic cat illustration at top comes from Christer Carlsson, designer, and I think embodies the evolutionary nature of life on planet Processing.

I have wanted to use "flocking" algorithms for 3D stereo panning for over 10 years. I honestly never expected to see a binaural effect, and now I have several. Can I take their ideas and apply them to my project? I will at least try. Just this alone has made me extremely happy with you for writing about this!

http://www.createdigitalmusic.com Peter Kirn

Well, flocking implementations — and Java flocking implementations — are certainly nothing new. One easy way to go would be to transmit OSC (or even inter-app MIDI) to a music app, and have the flocking drive the panning. That also gives you a nice visual. Definitely worth trying, and of course the results are effectively unlimited, because your musical content could sound different from someone else's.
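To make that concrete, here's a minimal sketch — not from any project mentioned above, and with hypothetical class and method names — of how a flocking agent's x position could be turned into equal-power stereo gains before being sent out as OSC or MIDI controller data:

```java
public class FlockPan {
    // Map an agent's x position (0..width) to equal-power stereo gains.
    // Equal-power panning keeps perceived loudness roughly constant
    // as the agent sweeps across the field.
    static float[] panGains(float x, float width) {
        float p = Math.max(0f, Math.min(1f, x / width)); // normalize to 0..1
        double angle = p * Math.PI / 2.0;                // sweep 0..90 degrees
        return new float[] {
            (float) Math.cos(angle),  // left gain
            (float) Math.sin(angle)   // right gain
        };
    }

    public static void main(String[] args) {
        // An agent at the center of an 800-pixel-wide field:
        float[] gains = panGains(400f, 800f);
        System.out.printf("L=%.3f R=%.3f%n", gains[0], gains[1]);
    }
}
```

At the center both channels sit at the -3 dB crossover point (about 0.707), which is what keeps a boid gliding across the screen from dipping in volume mid-pan.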

Inter-app MIDI works using Java's MIDI output classes or the ProMIDI library. There are some tricks to it on the Mac, which I should post soon; on the PC it works just fine.
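For the stock-Java route, a minimal sketch using `javax.sound.midi` looks like this. The `noteOn` helper is my own, and the device-opening lines are left as comments because picking the right output device is exactly where the platform-specific tricks come in:

```java
import javax.sound.midi.InvalidMidiDataException;
import javax.sound.midi.ShortMessage;

public class MidiOut {
    // Build a note-on message for the given channel, note number, and velocity.
    static ShortMessage noteOn(int channel, int note, int velocity) {
        try {
            ShortMessage msg = new ShortMessage();
            msg.setMessage(ShortMessage.NOTE_ON, channel, note, velocity);
            return msg;
        } catch (InvalidMidiDataException e) {
            throw new IllegalArgumentException(e); // out-of-range channel/note/velocity
        }
    }

    public static void main(String[] args) {
        ShortMessage msg = noteOn(0, 60, 100); // middle C, channel 1
        // To actually send it, open a receiver on an output device:
        //   Receiver rcv = MidiSystem.getReceiver();
        //   rcv.send(msg, -1); // -1 = deliver immediately, no timestamp
        System.out.println(msg.getCommand() + " " + msg.getData1() + " " + msg.getData2());
    }
}
```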

Gavin@FAW

I've experimented with Processing in the past, and it's very interesting and great fun.

One idea would be for a synthesizer to transmit, say, LFO or other modulation data to Processing over OSC, and have it use that information to transform visuals during a performance.
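A rough sketch of the receiving end: in a real Processing sketch the LFO value would arrive through oscP5's `oscEvent` callback, but the fragment below (class and method names are mine, purely illustrative) shows the smoothing-and-mapping step that turns a raw 0–1 modulation value into a stable visual parameter:

```java
public class LfoVisual {
    float smoothed = 0f;

    // Exponentially smooth incoming LFO values (0..1) so the visuals
    // don't jitter with every packet, then map the result to a
    // hypothetical circle diameter in pixels (50..250 px).
    float update(float lfoValue, float smoothing) {
        smoothed += (lfoValue - smoothed) * smoothing;
        return 50f + smoothed * 200f;
    }

    public static void main(String[] args) {
        LfoVisual v = new LfoVisual();
        // Feed a constant LFO value of 1.0; the diameter converges toward 250 px.
        float d = 0f;
        for (int i = 0; i < 100; i++) d = v.update(1.0f, 0.2f);
        System.out.println(d);
    }
}
```

In a sketch's `draw()` loop you'd call something like `ellipse(width/2, height/2, d, d)` with the returned value each frame.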

http://www.kakofon.com Christer Carlsson

Hi Peter, thanks for the plug!

Runagate: I'm sorry to say my skeletons don't make any sound, but that's a good idea… For anyone interested, there's a short text and video from the project on my website.

Regarding Processing and sound, my class did an interactive installation project last year. We built a harp from steel bars and wire and hooked each string up to contact microphones that sent MIDI signals to Processing. Processing then generated kaleidoscopic visuals and triggered samples in Kontakt (using ProMIDI) for a nice soundscape. We tried generating sounds in Processing too, but in the end it was too difficult to get the right results with it.

http://www.creativebump.com Myles de Bastion

Love this stuff

http://myspace.com/farleyengineering emmett

"Later this month, too, I hope to give some background on how to really get audio and MIDI working right now on different platforms."

Hello. Box2D is a physics engine that has been ported to several programming languages, including ActionScript 3 and Java. I built a music sequencer using this physics engine, which can be seen at http://youtube.com/watch?v=T1Av7p08fcc. The Flash interface communicates with Ableton Live through the following chain: Flash -> OSC -> Quartz Composer (relays OSC to MIDI) -> Ableton Live.