Home

Huge thanks to Martin Fahlenbock, Shizuyo Oka, Barbara Maurer, Åsa Åkerberg, Melise Mellinger, Jaime González, Klaus Steffes-Holländer, and Christian Dierstein of ensemble recherche, to Clara Iannotta and John Pax who performed as guests, to Anthony DiBartolo & James Bean for respectively recording and amplifying the performance, to James and Seth Torres for their help with this tricky mixing job, and to everyone else at Harvard who helped bring this into the world.

Last semester I assisted Hans Tutschku with a class called Music 264 at Harvard University on improvisation with electronics, and we used Max as our main tool for students to build their own electronic “instruments.” Faced with students ranging from Max beginners to more experienced programmers, and wanting to spend as much time as possible making music, we needed a solution that would allow us to teach Max and simultaneously start exploring performance.

I wanted to write up some of our experiences, in particular focusing on a package of Max patches I built for the class called 264 Tools.

Even if one is familiar with Max, it takes time to build things that can start to be musical and responsive. The process of taking some kind of controller input, handling that data, and mapping it to audio processing takes time. Building a delay unit or even a versatile sound file player takes time. It is possible to teach this type of programming in a semester, but probably not in a class where you want much music to happen.
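None of these building blocks is conceptually difficult — the work is in the details. A feedback delay, for instance, boils down to a circular buffer. As a rough sketch (in Python rather than Max, and leaving out everything that makes a real module usable: interpolation, smoothing of parameter changes, a GUI):

```python
def delay_line(samples, delay_samples, feedback=0.5, mix=0.5):
    """Apply a simple feedback delay to a list of samples.

    delay_samples: delay time in samples. A real module would
    smooth changes to this value to avoid clicks.
    """
    buffer = [0.0] * delay_samples  # circular buffer holding the delayed signal
    write_pos = 0
    out = []
    for x in samples:
        delayed = buffer[write_pos]                  # read the oldest sample
        buffer[write_pos] = x + delayed * feedback   # write input plus feedback
        write_pos = (write_pos + 1) % delay_samples
        out.append(x * (1.0 - mix) + delayed * mix)  # dry/wet mix
    return out

# An impulse reappears delay_samples later:
print(delay_line([1.0] + [0.0] * 7, delay_samples=4, feedback=0.5, mix=1.0))
# → [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Ten minutes to sketch, but turning it into something a performer can trust — with sensible ranges, no clicks, and a controller mapped to it — is where the semester goes.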

By building 264 Tools we tried to circumvent these challenges for both new and more advanced Max users. 264 Tools is a collection of patches for Max that can be loaded as bpatcher modules, providing ready-to-use playback, processing, and analysis tools with graphical interfaces. To anyone who has played with BEAP, this will be a familiar approach — the difference being that where BEAP focuses on sound synthesis, 264 Tools modules provide ways of working with recorded and live audio input.

These modules don’t do anything particularly new on the inside. Several rely heavily on amazing existing work by members of the open-source community: Ivica Ico Bukvic, Ji-Sun Kim, Dan Trueman, R. Luke DuBois (munger~); Patrick Delges (filesys); Randy Jones (yafr2); Miller Puckette, Cort Lippe, Ted Apel, Volker Böhm (sigmund~); Jean-François Charles (spectral freezing); Rodrigo Constanzo and raja (karma~). 264 Tools builds on these by providing graphical interfaces and a consistent way of communicating between modules.1 This is also not an attempt to build a “complete” suite of tools for all contexts. There are some obvious limitations: we found it immensely practical to make everything mono, for example; and the GUI makes the modules less performant than they might be. (If you have any Max GUI performance tips, let me know!)

Learning by doing

The idea of teaching with 264 Tools was to have students spend as much time as possible inside Max itself, and have that time as musically focused as possible. They may not have mastered vexpr or figured out all the ins and outs of poly~, and certainly didn’t have to go through the pain of learning how pattrstorage works, but they had Max open, were creating and connecting objects, and started to understand how one might extend a module’s functionality by adding a few basic Max objects. Most importantly, from the first week we were working with sound. For people with backgrounds in music, the immediate feedback of hearing changes — rather than only understanding them abstractly — is very helpful.

To keep students immersed in Max we exploited the strengths of Max’s package system.2 In particular, by using the extras directory in the package we could provide weekly patches introducing new modules that would appear in Max’s ‘Extras’ menu. These overviews provided explanations of each module’s functionality alongside demonstrations. Students could play with the demos and copy-paste bits of patch to their own projects. (I’m currently in the process of converting much of this to proper help files.)

In our first week, 264 Tools consisted of a delay line, a sound file player, and a filter. We added a couple of modules each week from then on.3 As students built performance patches for class, it was astonishing to see the variety of possibilities they uncovered, even with a minimum of modules.

Beyond the laptop

The most obvious requirement for making the students’ patches “performable” in improvisatory contexts was a controller other than the laptop’s trackpad or keyboard. We ended up building lightweight performance kits, choosing Korg nanoKONTROL2 MIDI controllers for the students to interact with their patches. These provide 8 faders, 8 dials, and 35 buttons, and send MIDI messages over USB.

We built all 264 Tools modules to work seamlessly with any external MIDI controller. You can quickly map a MIDI fader, dial, or button to your patch using the 264.midi-learn submodule, which is built into many of the 264 Tools modules.
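I won’t reproduce the module’s internals here, but the usual MIDI-learn pattern — and roughly what 264.midi-learn does from the user’s point of view — is: arm “learn” mode for a parameter, bind the next incoming control change number to it, then route subsequent values through that binding. A minimal sketch of that pattern (in Python, with a hypothetical `MidiLearn` class; the actual module operates on Max messages, not Python objects):

```python
class MidiLearn:
    """Minimal MIDI-learn: bind the next incoming CC to a named parameter.

    Illustrative sketch of the general pattern, not the 264 Tools code.
    """
    def __init__(self):
        self.bindings = {}    # CC number -> parameter name
        self.learning = None  # parameter currently waiting for a CC
        self.values = {}      # parameter name -> last value (0.0-1.0)

    def learn(self, param):
        """Arm learn mode: the next CC received will map to `param`."""
        self.learning = param

    def handle_cc(self, cc, value):
        """Process an incoming control change (CC number, 0-127 value)."""
        if self.learning is not None:
            self.bindings[cc] = self.learning  # bind and disarm
            self.learning = None
        if cc in self.bindings:
            # Scale 0-127 to 0.0-1.0, as a fader or dial mapping might
            self.values[self.bindings[cc]] = value / 127.0

ml = MidiLearn()
ml.learn("delay_time")  # click 'learn' on the module...
ml.handle_cc(21, 64)    # ...then wiggle a fader: CC 21 is now bound
ml.handle_cc(21, 127)
print(ml.values["delay_time"])  # → 1.0
```

The appeal of this pattern in a classroom is that students never have to look up which CC numbers their controller sends: touching a fader is the mapping.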

Beyond the MIDI controller, we also provided students with a microphone, a single-input audio interface, and a loudspeaker. It was great to be able to keep each performer’s audio discrete using these performance kits, clarifying who was producing which sounds in group performances. This tied into a kind of “instrumental” thinking while developing 264 Tools: mono sources, processing, and output lent themselves to these multi-laptop performances. We couldn’t have done this without the amazing logistical support of our studio technical director Seth Torres. The performance kits he put together were just perfect.

Whiteboard diagram in preparation for end-of-semester performances.

Conclusions

By the end of the semester our students were able to perform improvisations, in some cases with no prior Max knowledge. While my teacher’s pride undoubtedly clouds my judgement, I was incredibly impressed by their work. I can also say that a classroom of captive beta testers, who are directly implicated in the tool you’re building, is an amazing resource to have. We finished up with 22 modules, from a MIDI-ready toggle to a pitch-tracker, and built a fairly easy-to-use preset system (I cursed pattrstorage so no-one else had to).

This Wednesday and Thursday there are chances to hear graduate musicians from Stony Brook University performing local bond — music for four musicians collaboratively playing a set-up including viola, cello, and various other tools. The concerts are in Stony Brook and at Roulette in Brooklyn, and also feature pieces by Piers Hellawell, Paula Matthusen, Erin Rogers, and James Wood. Would be lovely to see you there!

This semester I am assisting Hans Tutschku with a class at Harvard University on improvisation with electronics. I asked several improvisers to choose pieces of music that are important to them and their practice.

For the third edition of this series, Ute Wassermann has kindly agreed to share some of her favourite music. Wassermann has achieved recognition as a vocal artist, composer, and sound artist, with her personal, highly characterised, nonverbal sonic language. In addition to a richly developed range of vocal colours, she masks her voice with bird calls and resonant objects, and develops sound installations. She appears regularly as an improviser in London and Berlin, and has also premièred works by composers such as Ana Maria Rodriguez, Michael Maierhof, and Chaya Czernowin.

Microbiography

My name is Chris Swithinbank and I write music using acoustic instruments and electronic sounds. Occasionally, I also write text about music or related subjects. I am currently a student at Harvard University with Chaya Czernowin. More about me »