Cycling '74 » Articles

An Interview with Skinnerbox
December 6, 2016

For more than 10 years, Skinnerbox have been making tremendous contributions to the wide field of electronic music-making. Olaf Hilgenfeld and Iftah Gabbai, two talented musicians with different backgrounds, are amongst the very few who actually play electronic music live on stage. Their live shows are a great display of playfulness, artistic skill, and the ability to completely improvise a thrilling performance. Over the years, Skinnerbox have developed a distinctively musical and groovy style; a complex, detailed, idiomatic sound; and a bunch of unique technical setups and instruments for live shows as well as studio productions.

They started performing in the legendary party contexts of Bachstelzen and Bar25 in Berlin back in 2005. Ever since, they have played widely respected tours in the New World and the Old. Skinnerbox continuously release their studio productions on labels like BPitch Control, My Favorite Robot, Darkroom Dubs, and Turbo Recordings, including several musical collaborations with other artists.

2014 saw the premiere of “Time & Timbre” – a software drum sequencer and drum synthesizer for Max for Live. It is the result of an ongoing technical relationship with Ableton and the idea of making an instrument – originally designed by Skinnerbox for their own studio needs – available to the public. It joins a long line of self-made, adapted, or modified musical gear they use.

Olaf (left) Iftah (right) – Skinnerbox

While in Berlin for Loop, Tom visited Skinnerbox in their studio for a conversation…

How did you first learn about Max?

Iftah: Well…. (laughs!) I had a professor who asked me if I could do some programming work in Max/MSP and… well, I said yes! The thing was I had never even opened Max but I thought to myself, “How hard can it be?!” I had a few weeks and it actually came together quite cool. Of course, after that, I realized what a great thing Max was, and kept using it.

I was very young and had an idea, and it worked. It just took me a week, and it opened me up to a lot of new art work, new media and installation – it was like a portal to a lot of new stuff.

When was this?

Iftah: It was 2005.

Max is a toolbox, really. Whenever we need something, it’s just there – we go to Max and we build the solution to a problem. Everything is possible. There are no limits – just the power of the computer.

Iftah: I really like its openness – You can do anything, and we always need something….

Olaf: Why do we come back to it? Because we’re always in need – for example, we wanted to have specific control of our input on our CV gear, like an LFO with special abilities. There’s no such thing in a modular system. So if you’ve used Max for 10 years, it’s no problem to make these little helpers – or bigger projects.

Iftah: Both of us just really love to sit and program different things. We live in this individual age – it’s a lot about customization, and it’s like Max is more relevant than ever. Olaf can write code, but I cannot. Max is the only way I could have done this – I really got into Max/MSP.

Olaf: Recently, I’ve been into synthesizers – I just bought this experimental FPGA board. It’s a high-speed programmable gate array. It’s great, but you have to describe absolutely everything, and it’s tedious. Something like Max on an FPGA would be amazing….

Iftah: It’s always been my dream to have Max as a language for programmable hardware – to just dump Max onto something.

What’s your favorite object?

Iftah: Arrrrrrrrgh excellent question…..

Olaf: I do have one, it’s the accumulator (+=~).

Iftah: Oh! I’ve been waiting to answer this question all my life, and now it’s like you’ve asked me what is my all-time favorite song – I love print, it’s incredibly useful…. (laughs). Actually it’s exclamation minus, !-~. I also like %~, but Olaf and I always fight about it because it’s a little costly.

Olaf: I really like the simple objects, to be honest. The big ones are great, but they’re costly. I like just using a few little objects to be efficient and just make small functional pieces of Max code – my own phasor~, for example.

Iftah: The building blocks, they’re our favorites.

Tell us about being an active musician and a programmer….

Olaf: Sometimes I feel a bit stupid spending all of this time developing something that doesn’t exist, rather than just spending a bit of money to buy something that at least does 80% of the job. But I always want the full 105%.

Iftah: While we were in Mexico on tour recently, we were on a creative panel with Robert Henke and Electric Indigo (Female Pressure), and they asked Robert what he thought about developing his own tools. He said, “Sometimes I wish I didn’t know how to do it.” As musicians, you know, our existence is dependent on it, but sometimes I wish I didn’t know how. It would make my life – our lives – so easy. We’d just make music using instruments from the industry without thinking too much. On one hand, it’s amazing to know how to make instruments and program. But on the other hand, it has a price.

Olaf: Now we’re really strict – trying to separate technical time (programming) and musical time. But now (with the Ableton pack), we have a lot of really good tools. The utilities we built we use day-to-day now, especially CV4LIVE.

Iftah: Actually, I cut my modular synth (Eurorack) right back because I was just spending too much time messing around. Now I have a minimal configuration and I use it with the computer as an expansion, both ways – sending CV in from Max and sending CV out via CV4LIVE.

It’s really related to this age of individualism.

To be honest, the year I got into modular hardware was probably the least productive year of music making I’ve had. I love it, but it just takes too much time to get to a sound. Maybe if I didn’t have a family… Now it’s quite optimized.

How much do you record material? What is the process?

Iftah: Sometimes Olaf or I come in with a super concrete idea, conceptually. Mostly Olaf, he’ll sit down with a scale and be, like, “I want to do this.” He’ll come with such a concrete idea that I’ll just do the audio around it.

And sometimes we just sit and start together and basically…

Olaf: Sometimes we just jam together, but that takes more time – you sit for two hours and jam but, then you have to admit to each other that it’s just bullshit and just move onto the next thing. Let’s start over, reset everything. And go again…

That takes some balls, to just look at each other and say, that’s bullshit! Let’s start again. How long have you guys been working together as a duo?

Iftah: Thirteen years. Since 2003.

Olaf: We met at a birthday party, by chance. There was a jam session at the birthday party – complete improv, just silly stuff with guitar and Yamaha PSR something and singing and stuff….

Olaf: Then we met a week later at Iftah’s birthday. We had this big mixer and the Minimoog that we still use, so we set everything up and recorded 10 channels and two and a half hours of crazy music. A guy came along with his didgeridoo, along with a bunch of others during the session. They joined in for a while and left without stopping the recording. We still have that recording – we have gigabytes of this stuff. It’s essentially free jazz.

So you two have a pretty solid relationship now?

Iftah: Yeah it’s really getting on now. It’s not 50 years, but it’s a serious amount of time. You really get to know someone in such a profound way, you know, because we tour a lot together and when you travel you really find out about each other.

It’s not even just this level of being friends. We just spent 20 days up in each other’s faces, more or less (everyone laughs!), because you’re doing these big tours together once or twice a year.

With your recent trip to Mexico on tour, in your spare time you made a lot of live music videos that gained quite a lot of attention. What pushed you to do this?

Olaf: You saw the video in the tree? Ja!

Yes, it was awesome!

Iftah: You know, I think it’s maybe more important to do this than to release records nowadays. It gets a lot more interest and the videos bring the gigs, actually.

You know we never really made money out of selling records.

Olaf: Not directly.

Iftah: You know it’s more that you have to release music to stay relevant, so you get booked. Right now I find it a little more interesting – I mean, I’m very interested in making live improvised music – but at the moment I’m less interested in ‘producing’ music, like in the studio and I’m more interested in making videos.

So you feel less precious about that need to make and release albums?

Iftah: We are two people, so we kind of interlace our needs. Olaf is now… we did a lot of sessions working on our last full-length album and I know Olaf is wanting to continue this…

Olaf: Yes! Yes, yes, yes.

Iftah: Nowadays I’m less interested in making singles – like getting into a conceptual piece of music, I’m really just into videos. I have thousands of ideas… music videos, music live improvisational videos, and the like.

Olaf: On this last tour we had maybe two days of not doing too much – taking care of yourselves, you know, cleaning, shaving, and making contact with the rest of the world….

Then there was one day when we just went hiking in the rain forest, but the rest of the time (over 20 days) was occupied by just doing stuff: traveling, playing, and making videos.

Iftah: We really got into that. We had plans for getting deeper into this field of a kind of “sonic anthropology,” and we didn’t think we’d come out with five full-length live videos.

Olaf: We just had an idea to maximize our time in this part of the world – to get everything out of it and to walk away with something in our hands from the experience.

How did you power everything in these remote locations? I couldn’t work it out….

Olaf: Actually we had a power inverter and a very long extension lead to a car. We’d run everything off the car!

I’d like to make us completely portable – I think I’ll even modify the Minimoog so that it’s battery-powered. Then we can be completely portable.

Ha! a battery-powered Moog Model D?

Iftah: It’s like a Volca. (laughs)

Olaf: Moog Volca

Iftah: You know, it’s important to mention – this is how we started. Olaf had a portable PA back in 2005 run off a car battery and a speaker, and we’d go out to the park. It wasn’t common then – in fact, we might have been one of the first (in Berlin) to do this. But now you see it a lot – oh, actually less because now the Police care – back then, nobody gave a shit.

Back then we’d go to Görlitzer Park and just Rave…. This is how people first saw us, and how we got our first gigs.

Olaf: I just built a custom box with power and the Moog Model D on top – but now it’s just too much. By about 2007, it just didn’t make sense anymore. There was too much music in the park and we didn’t want to be competitive, so we stopped.

Iftah: I just wanted to mention this because now we’re kind of back to this – more exotic spots maybe – but this is where we started, you know. Getting outside and playing.

Olaf: We really had fun playing in the tree in Mexico. It took about 2-3 hours….

Iftah: We really want to get better at this, in fact we want to get more into using found sound.

Olaf: We have this microphone from Italy with about a 250 kHz sampling rate, so we take these recordings and then pitch them down. It’s still crisp, but then we have a cool bass drum or something.

Iftah: I have to hand it to Olaf – he had this vision and got this microphone.

Olaf: As an experiment, we took a 1/8” drill bit and dropped it onto a piece of metal, then pitched it down 4 octaves, and it sounded like a steel beam dropping down.

Thanks for chatting with me guys. To wrap this up, can you recommend a way for users to get into Max?

Iftah: I think the best way is first to know what you want to do with it – to have a concrete need – and not just to dive in and try to figure it all out. If you know what you want to do with it, it’s pretty much all straightforward from then on, and you learn so much on the way!

Max is graced with many filters, taking on many guises – some of which we don’t even think of as filters in the classical sense. The help patcher for the slide object says that it “smooths values logarithmically” – indeed, it does. It is, together with the slide∼ and jit.slide objects, a lowpass filter.

In the process of exploring the algorithm used in these objects, we’ll see how they relate to canonical filters with which you may be more familiar. Along the way, we’ll step through the mathematical procedures, one step at a time. In the interest of being thorough, I’ll try to minimize assumptions and introduce even basic mathematics explicitly.

Note: Readers desiring a quick brush-up to help with following the mathematical expressions are urged to review before proceeding. Khan Academy is a tremendous resource for reviewing (or learning) algebra, starting with the short videos. If you want a quicker jump-start (or something more basic than algebra), check out the pre-algebra videos here.

Conventions and Presuppositions

The following descriptions use the standard filter nomenclature where:

a0 is a coefficient to be applied to the FIR side of a canonical filter. As examples, a0 is the coefficient applied to x(n−0), a1 is applied to x(n−1), etc.

Likewise, b1 is a coefficient to be applied to the IIR side of a canonical filter – that is, b1 is the coefficient applied to y(n−1), b2 is applied to y(n−2), and so on. There generally is no b0 because the current output is unknown at the time the calculation is being performed.

The constant fs is the sampling frequency (also known as the sample rate).

Background and Applications

There are many ways to smooth a stream of numbers. Perhaps the simplest is to average them. Classic averaging is a lowpass filter with a “finite impulse response” (FIR). This is discussed in videos and provided in complementary gen∼ patchers published by Cycling ’74.

Averaging has a number of benefits, including linear phase response and the straightforwardness of the calculation. The slide objects are nowhere near as easy to describe as the average∼ object, but they are far faster to compute.
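To make the FIR-averaging idea concrete, here is a minimal sketch in Python (my own illustration, not Cycling ’74’s published gen∼ code) of an N-point moving average applied to a step input:

```python
def moving_average(xs, n):
    """N-point FIR moving average: each output is the mean of the
    current input and the previous n - 1 inputs (fewer at the start)."""
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# Smoothing a step input: the output ramps toward the new value and
# settles exactly once the window contains only post-step samples.
print(moving_average([0.0, 1.0, 1.0, 1.0, 1.0], 4))
```

Because every input contributes with equal weight for exactly n samples, the impulse response is finite – hence “finite impulse response” – and the phase response is linear.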

All three of the slide objects – slide, slide∼, and jit.slide – calculate their output identically, so the formula given in the reference pages for each of the objects is the same. For the purposes of this discussion, I’ll focus on the slide∼ object for processing audio, because it simplifies the relationship to real time.

Besides generalized smoothing for parameter control, a good use for this type of filter is to slow down the decay portion of an audio-rate envelope follower. Algorithms akin to this are often used in compressors/limiters, as well as in visualizers for music playback software.

slide∼

The slide∼ object implements a “logarithmic” filter with a slide parameter to smooth out discontinuities in an input signal, such as for performing envelope following. The slide∼ object actually contains two filters: one filter for changes to the input that increase in value (‘slideup’) and one filter for changes to the input that decrease in value (‘slidedown’). Often these are set to the same value, which simplifies some of our expressions, so for this discussion I’ll take this to be the case and simply use the term slide for both up and down.

Given a slide value of 1, the output will always equal the input. Given a slide value of 10, the output will only change 1/10th as quickly as the input – this description of the slide∼ object’s behavior can be found in the object’s help patcher and reference page. Putting this in terms of time is complicated, and describing it exactly as an average of N samples wouldn’t be precise either. Nor is it easy to describe in terms of weighting: the current incoming value is weighted by 1/N, and then the weights decay along an exponential curve as you look back toward samples from the past.
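That behavior is easy to check numerically. Below is a tiny Python sketch of the recurrence as the reference page describes it (the output moves 1/slide of the remaining distance toward the input on each sample); the function name is my own:

```python
def slide_filter(xs, slide):
    """slide~'s documented behavior: y(n) = y(n-1) + (x(n) - y(n-1)) / slide."""
    y, out = 0.0, []
    for x in xs:
        y += (x - y) / slide
        out.append(y)
    return out

step = [1.0, 1.0, 1.0, 1.0]

# slide = 1: the output always equals the input.
print(slide_filter(step, 1))                          # [1.0, 1.0, 1.0, 1.0]

# slide = 10: the first output moves only 1/10th of the way.
print([round(y, 4) for y in slide_filter(step, 10)])  # [0.1, 0.19, 0.271, 0.3439]
```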

Mathematically, the filter can be expressed with the equation

y(n) = y(n−1) + ( x(n) − y(n−1) ) / slide        (1)

This equation expresses the filter in terms that relate to the original description of the object. However, it also obscures a number of characteristics of this filter, making it difficult to relate to other filters in Max. If we rewrite the equation in the terms of a standard difference equation, we can glean some insights into the operation of the slide∼ object. We start with the equation as originally expressed:

y(n) = y(n−1) + ( x(n) − y(n−1) ) / slide        (1)

Letting a0 = 1/slide, we can continue to rewrite the equation as

y(n) = a0 · x(n) + (1 − a0) · y(n−1)        (2)

Equation 2 re-expresses equation 1 in the terms of the general difference equation. One immediate use of this is that we can plug the numbers into Max’s biquad∼ object to perform the same filtering, or use the filtergraph∼ object to plot the frequency and phase response. With the equation in this form, we can also see that there are two coefficients: one for the gain of the filter’s input and one to control the gain of a first-order pole, meaning that all we have is a simple one-pole filter. So, we could also plug the coefficients into the onepole∼ filter object to do the same thing.
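The equivalence is easy to verify numerically. This sketch (illustrative, with hypothetical function names) runs the same input through the original slide recurrence and through the difference-equation form y(n) = a0·x(n) + (1 − a0)·y(n−1) with a0 = 1/slide, and confirms the two agree:

```python
def slide_form(xs, slide):
    # Original form: y(n) = y(n-1) + (x(n) - y(n-1)) / slide
    y, out = 0.0, []
    for x in xs:
        y += (x - y) / slide
        out.append(y)
    return out

def difference_form(xs, a0):
    # Rewritten form: y(n) = a0 * x(n) + (1 - a0) * y(n-1)
    y, out = 0.0, []
    for x in xs:
        y = a0 * x + (1.0 - a0) * y
        out.append(y)
    return out

signal = [0.0, 1.0, 0.5, -0.25, 1.0, 0.0]
a = slide_form(signal, 8)
b = difference_form(signal, 1.0 / 8)    # a0 = 1/slide
print(all(abs(p - q) < 1e-12 for p, q in zip(a, b)))   # True
```

The same coefficients can be handed to biquad∼ (minding its sign convention for the b coefficients, with the remaining coefficients set to zero).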

Frequency Response

Now that we know that the slide∼ object is simply a one-pole filter, we can calculate its frequency response for any given value of the slide parameter. Remember that we have two coefficients derived from the slide value.

The onepole∼ object uses these same coefficients, but typically calculates them from a desired cutoff frequency in hertz, fhz. The first step is converting fhz, which depends upon the sample rate fs, into a sample-rate-independent form expressed in radians, frad.

One potential confusion regarding radians is the fact that, in Max, some objects have a ‘radian’ mode which is expressed in something resembling radians but does not map to frequency with a linear relationship. Examples include onepole∼ and svf∼. It is unfortunate that this special mode was named ‘radian’ since it is misleading and confusing. Their use should be considered deprecated.

As an example, we can let fhz = 1000 and fs = 44100. Then

frad = 2π · fhz / fs = 2π · 1000 / 44100 ≈ 0.1425

Given the frequency in radians, we can now calculate the coefficients

b1 = e^(−frad)
a0 = 1 − b1

To simplify things, let’s combine the conversion into radians into the coefficient calculation:

a0 = 1 − e^(−2π · fhz / fs)

The relationship between a0 and b1 is the same for the onepole∼ coefficients as it is for the slide∼ coefficients, reinforcing the fact that these are really the same filter. We now have enough information to relate onepole∼’s fhz parameter to slide∼’s slide parameter through a0.

To solve for slide in terms of hertz, we can flip the fractions, resulting in

slide = 1 / a0 = 1 / ( 1 − e^(−2π · fhz / fs) )

or, to solve for the cutoff frequency in terms of slide,

fhz = −( fs / 2π ) · ln( 1 − 1/slide )
See Table 5 for example values showing how the slide parameter relates to the cutoff frequency at a sample rate of fs = 44100.
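As a sketch of the conversion in code: the Python below assumes the common impulse-invariant one-pole convention a0 = 1 − e^(−2π·fhz/fs). If onepole∼’s actual coefficient formula differs, the constants change, but the reciprocal relationship slide = 1/a0 does not. Function names are my own:

```python
import math

def hz_to_slide(fhz, fs=44100.0):
    """Slide value for a given cutoff, assuming a0 = 1 - exp(-2*pi*fhz/fs)."""
    a0 = 1.0 - math.exp(-2.0 * math.pi * fhz / fs)
    return 1.0 / a0

def slide_to_hz(slide, fs=44100.0):
    """Cutoff frequency in Hz for a given slide value (same assumption)."""
    a0 = 1.0 / slide
    return -fs * math.log(1.0 - a0) / (2.0 * math.pi)

# The two mappings are inverses of each other (round trip recovers slide).
print(round(hz_to_slide(slide_to_hz(100.0)), 6))   # 100.0
```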

Further Exploration

Analogous smoothing algorithms exist in other environments, such as Faust. Further exploration of this class of filter is left as an exercise for the reader. If you’re interested in pursuing the topic, the Moving Average article on Wikipedia serves as a reasonable point of departure.

A few months ago, I wrote up an article about a few guitar-oriented audio interfaces, including units by IK Multimedia and Behringer. For some upcoming software reviews, I needed an interface with a bit more functionality, and I was also hoping to update my mobile monitoring situation – and maybe even replace my not-so-trustworthy Blue USB Microphone.

I finally got around to surfing for some options and ran across a unit that checked all the boxes for me: the Apogee One for Mac. It’s a small, USB-powered interface that includes both Hi-Z (guitar) and microphone inputs, a built-in ⅛” headphone jack and sports a satisfyingly large volume control on its face. It fits nicely alongside my Mac, and it features Apogee’s exceptional converters, so the sound is pretty stellar.

Using it is quite easy. There is an application (Maestro 2) you can download for detailed control, but most of the functionality is directly exposed to the Mac OS. It can be used as a system audio device, so it can be your input and output device for every application. But one of the secret tricks of the Apogee One is the built-in omnidirectional microphone. This cool add-on is a bright condenser that is super-useful for testing Max patches, quickly recording acoustic guitar or vocal lines, or even recording that birdsong in the back yard. Having a higher-quality mic on hand at all times is a great way to capture surprise location recordings, and it really adds to the value of this interface.

Alas, the included USB cable is tragically long (but easily replaced with an alternative mini-USB cable), and the body is rather light, so your headphone cable can sort of drag it around your desktop. There is also no easy way to record stereo content; the two cabled inputs are separate connections for a microphone and a Hi-Z instrument (guitar or bass), so getting a gain match between them is not easy to accomplish. I also had a bit of spurious noise when I first used the interface, but swapping out the USB cable seems to have solved the problem in every case except volume changes with the front-panel knob.

Other than those few complaints, I have nothing but praise for the Apogee One. It has become my new backpack-able interface, and it literally goes with me everywhere my computer goes. Having it available to do triple duty as my guitar interface, podcasting microphone, and high-quality listening interface makes it a home run in my book.

I went to undergrad at Mills College, where I was exposed to making music with electronic instruments, studying with Maggie Payne and taking her Moog class. From that, I learned the basics of making electronic music. Learning on the Moog gave me a good understanding of signal flow, which I could apply to Max.

When you started at Mills, I know you had already been aware of the general landscape of electronic music, but what were your first exposures as a listener?

In undergrad, when I was studying visual art at the San Francisco Art Institute, and later, Art History at Mills, I was mainly focused on new media and video art, and a lot of that intertwined with electronic music. So, artists like Steina Vasulka, early Jordan Belson, Vibeke Sorensen, and others. To some degree, contemporary film/video art and video synth experimenters introduced me to the world of electronic music.

Were you seeing a lot of live electronic music at this time?

In the early 2000’s I went to a lot of noise music shows. Artists like John Wiese, Mick Barr, 16 Bitch Pile-Up, and Matmos come to mind. I was also really into Kevin Drumm. I got into the idea of making electronic music through seeing those noise shows.

So back to Mills – you got access to the Moog, and you clearly took a liking to it. How did that lead to using Max?

After Maggie’s moog class, I took a class with John Bischoff, Intro to Computer Music. I didn’t actually know what I was signing up for, I just enrolled in the class because of the title…I really had no idea what we were going to learn! Then, during the first class, I found out it was focused on Max, which I had already known about, because some friends had used it, but I had never used it before myself.

In Bischoff’s class, this was kind of an intro to Max, the basics?

Yeah, mostly how to generate synthesized sound using oscillators. Very basics of filtering, amplitude modulation, importing and looping sounds, etc.

Personally, I didn’t have a lot of exposure to modular synths before Max, it was only after I knew Max pretty well that I was able to play with modular synthesizers. I could see how this could potentially really help, just in terms of understanding signal flow.

Yeah, I felt like I understood the basics of that through using the Moog, and I could translate that to Max. In John’s class, the sounds that I ended up using were Moog sounds, so right away I was combining the two worlds.

So, you were creating sounds with the analog synth, and then you would manipulate them further in Max? Did you find that way of working interesting?

I really liked the sound of the Moog, and I didn’t know how to make those kinds of sounds in any other way. I felt like I could further transform those sounds in Max. I remember trying to make different kinds of sounds – harsh sounds – on the Moog, and it was rather difficult. By combining Max and the Moog I could get more variation.

During this time, were there other hardware instruments that you were using?

I played around with some other synths. I used the Arp 2600 and the Buchla system at Mills. I also purchased an Oberheim 6-R (which I still have), and a Dave Smith Evolver, and used some of those sounds in Max as well.

You went to Mills for undergrad, and then had a break in-between, but then ended up back at Mills for graduate school. What was your music practice like at that time?

I actually graduated undergrad with a degree in Art History, but near the end, I was mainly just taking music classes. So, when I graduated, that’s basically what I was spending my spare time doing, making electronic music. In the 2 1/2 years between undergraduate and graduate school, I continued to make music, but very slowly. I played a few shows at this time, but was more interested in fixed media and diffusion pieces. I played at the San Francisco Tape Music Festival, and a few other odd shows here and there, but playing live wasn’t really a priority.

Fast forward to grad school, what were the reasons for going back to school?

Maggie Payne, who I kept in contact with, encouraged me to apply to the grad program at Mills. I actually wasn’t sure I wanted to study music. I was thinking about anthropology or musicology, but I decided to apply, just to see what happened. Mills ended up giving me a full ride so I took that!

How did you find going back to Mills for grad school? Was it a very different experience, as opposed to undergrad?

It was. I wasn’t really involved in the music scene in undergrad, as the music department is fairly insular. So even though I was taking music classes in undergrad, I felt separate from the music students. When I went back, I was much more directly involved in the department. I was a TA for several classes and worked at the Center for Contemporary Music. But it was also nice coming in with some familiarity with the department. I knew what I was getting into!

When I went to Mills, aside from everything that the incredible staff bring, there was a lot of interaction and inspiration taken from other students that were there. Every year is very different of course, but while you were there, you ended up finding some kindred spirits.

Yeah, there were many new friends and collaborators that I worked with. Sarah Davachi, Monisola Ghadibo, William Ryan Fritch, and others.

When I began at Mills, I was making work in much the same way as I always had, using hardware and then processing it in Max. But as I went on, I started to prefer acoustic instruments over electronic. I ended up using harmoniums and string instruments more. I found that I could get more subtle variations through acoustic instruments. It is of course possible to get incredibly subtle with electronics; I just found it easier to work with these acoustic sources to find the subtlety that I was looking for. In general, I’d say that my work at this time became more focused on the subtleties of sound.

At the same time, I got into making my own instruments through a class with the master instrument builder Daniel Schmidt. In that class, I ended up building a glass armonica, which falls in line with my interest in instruments that have a droning quality. I guess I was, in a way, trying to make instruments that made sounds similar to a synth – basically trying to make oscillators, but with more natural subtleties. I also made some experiments with installation-based sound instruments, like aeolian harps, which, again, could make those continuous, sustained sounds.

The Glass Armonica

So when you started working with acoustic instruments, was there something that was appealing to them besides the sound?

I liked the physicality of them, especially hand-making my own instruments. It wasn’t until I started doing this that I connected the dots back to when I was a teenager and built a few dundun drums, which are double-headed African drums. I really do like working with my hands.

But, I returned to Max because it lets me focus on minute details like nothing else can, and I can be very specific about things, but within that specificity, I can tell it to be more or less random. I find that it lets me alter sounds in ways that I haven’t been able to do with hardware or acoustic instruments.

The Headspace Installation

Fast-forward a handful of years, and now you find yourself working at an elementary school teaching wood shop. Can you talk about that?

I teach kindergarten and first grade kids basic wood shop skills. How to use saws, hammers, screwdrivers. How to sand things, the very basics.

So, you are probably keeping them away from the power tools!

Yeah! They do get to lower the arm of the drill press, but that is as close as they get to a power tool. We make very basic things, like boxes, name tags for their doors, but I want to move on to more musical projects, like rattles, simple string instruments, and other things.

Backing up, you ended up working at some pretty interesting places after grad school. What were they?

I was teaching at the San Francisco Art Institute – basic sound design classes and experimental sound classes, where we got into some instrument building and sound installation. I also ended up working for Don Buchla as his assistant, helping out with research and development. That meant widely different tasks every day, some of them having nothing to do with electronic music hardware – like trimming the bushes, eating bagels, or learning about 3D modeling and printing. It varied greatly, depending on his mood that day!

After that, I went on to work at Dave Smith Instruments, working as a repair and support technician. It is a very small company, and when I was there, everyone was involved in designing new instruments.

When you left Buchla and started at Dave Smith, I remember thinking that the two experiences couldn’t be any more different, just in terms of how the companies were organized.

They were very different. Dave is very organized, and knows how to run a business. He knows how to design his instruments so that they appeal to a wide variety of people. Don, by contrast, was extremely unorganized, the work space was in constant disarray, and he didn’t ever really care about what other people thought, so he made new designs just based on what he wanted to use in his next project.

Don always struck me as a dreamer, and there was really no concern for a marketplace.

I never got the feeling that he ever thought about the marketplace and what people will buy. It was always just his personal ideas, that also happened to usually be one or two steps ahead of everyone else. The companies (Dave Smith and Buchla) are different in many, many ways, but both make incredible instruments. After moving on from Dave Smith, I traveled a bit, and did some residencies, one at EMS, actually working with a Buchla system the whole time, and then one in Oregon at Sitka.

Working at EMS

I’m now also working at Intersection for the Arts, and teaching a Max programming class at SFAI, where I began my undergrad.

What have your experiences been teaching Max in that setting?

At first, I realized that I had been using Max too long, and had to take a step back to see that it really was another language to people. After I taught the first course, I realized I needed to slow down. People respond really well to the visual aspect of it. These students are mainly young visual artists, some of them never having used computers for creative projects before. It is challenging to teach, and challenging to learn, but students get really excited when they complete their first patch. There is an instant gratification when something obvious happens, people love that. And, as they developed a bit more as Max programmers, they could get excited about the little things. It was gratifying for me as a teacher to see that development. Also, with Max 7, I find it’s easier for students to find the objects that they are looking for. You know, when an expert isn’t sitting next to you, you may have no idea how to find something. But, now it’s so much easier to find what you are looking for, and explore other options that are available to you.

Getting back to your solo work, this past month saw the release of your first solo album called Ballads on Drawing Room records. I know that this release was a long time coming. The album itself consists of two side-long pieces, and is on LP and digital. When did you start working on the album?

Yeah, actually the track Hummen was started at Mills in 2012. A long time ago! Hummen is the B side, and the A side is Bourdon. The first iteration of Bourdon was made for a performance I gave at the San Francisco Electronic Music Festival in 2013. That version was a bit different than what ended up on the record.

Did “Hummen” change that much in the years between the initial composition and what ended up on the record?

Compositionally, it didn’t change that much. I think that my ears have gotten a lot better since then, as far as being able to mix the piece. The track instrumentation is two harmoniums, two guitars played with e-bows, a glass armonica which I built, aluminum rods, and Max. Max conceptually ties all of the different sounds together. Each harmonium note has two reeds, and since those two reeds are out of tune with each other on my instrument, it produces beating patterns. The piece came about because of these imperfections. I used Max to get to these rhythms, and isolate these patterns, record them, and play them back. With the glass armonica, it was taking the sound of the instrument, changing its pitch very slightly, then playing it back with the original sound. So, these minor differences in pitch would also create these “beating” patterns. So, I was using Max to basically bring rhythms out of seemingly static sounds.
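The beating described here is a general acoustic phenomenon, and easy to see in code. Here’s a minimal Python sketch of the principle only (it has nothing to do with the artist’s actual patch; the function name and defaults are mine): summing two sine waves a fraction of a hertz apart makes the combined amplitude swell and fade at exactly the difference frequency.

```python
import math

def beating(f=220.0, detune=1.5, sr=44100, dur=1.0):
    """Sum two sines detuned by `detune` Hz; their interference
    makes the combined amplitude swell and fade `detune` times
    per second -- like a pair of mistuned harmonium reeds."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * f * i / sr)
            + math.sin(2 * math.pi * (f + detune) * i / sr)
            for i in range(n)]

sig = beating()
# the signal peaks near 2 when the waves align and passes
# through nulls when they cancel, 1.5 times per second
```

Isolating that slow amplitude envelope (rather than the audio itself) is one way to turn a static drone into a rhythm source.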

How is Max used on these tracks?

The patches themselves, for both Hummen and Bourdon, are somewhat similar. But, the inner workings are slightly different. The patch used with Bourdon focuses more on low tones, so the beating patterns captured are in the lower register. In the Bourdon patch, there is an 8-voice synthesized rhythmic component that creates short phasing patterns. The envelopes on the pulses are dynamically changing, so their textures also change somewhat. Bourdon has a more intentional rhythmic component.

Here’s a small portion of the Bourdon Max patch:

When you first started making electronic music, it sounds like it was fairly solitary. Since then, you’ve been involved in more collaborative projects, or working on pieces that require instrumentalists. What changed for you, and how do you find working with people as opposed to working alone?

I think I got a little bored with knowing what to expect from myself. I kind of know what I’m going to do, and I wanted to be surprised by things. So, when working with other people, even if they are just playing what I’ve asked them to do, there is an uncertainty that can be inspiring. I remember when rehearsing Hummen for the first time, I had a friend, Noah Phillips, play guitar on it. I was having a lot of trouble getting the right sound from his guitar. It wasn’t smooth enough. But, then there was this one sound he made that ended up inspiring the whole ending of the song. It wasn’t a sound that I could have told him how to play. It is moments like this that make me want to work with other people. There is something amazing about the certainty of software and hardware, it’s very reliable. But, in the composing process, I prefer to be surprised.

For the album, you had Teddy Rankin-Parker play cello on the album. What was it like working with him?

At first I felt embarrassed that I was just asking him to play seemingly simple gestures. But he was so sensitive to sound, and a very good musician. He understood what I was trying to go for, and that I was going for the subtleties in his bow strokes. He got really into it, and it was surprisingly easy to communicate with him.

As a lapsed instrumentalist myself, it is sometimes the most difficult thing to do: play something very simply.

Yeah, I think that is why I was a little nervous, because I wasn’t sure if he would brush it off as “easy” and not approach it the way I was hoping, but he did. It is hard to find people that will be focused on something that seems to be simple initially, only to realize that it’s not that simple. He went into it knowing that it wasn’t going to be simple, and looked at it as a challenge.

The new album is titled, “Ballads.” We probably all have a musical association with the word, but what does it mean to you?

To me, the word “ballads” means a story, or a journey, and I’ve always looked at the music I make like that. I also enjoy using terms that people might not associate with the type of music it is, re-appropriating a word for a different purpose. To me, the pieces are ballads, just not with words.

There’s almost a romantic connotation to the word.

I think there is emotion in the tracks on the album. There is a lot of tension and release, and I can see how they can be viewed as having romantic connotations, but to me, they are just stories told with sound that unfold over time.

The question of inspiration and its sources isn’t necessarily something that comes up often in the Max Forum in any but the most oblique of ways – it’s usually more latent than blatant. Some reasons for that aren’t all that surprising, when you think about it: in some sense, every Package Manager download or Projects page posting or even patch grovel can be seen as an inspiration or a response to inspiration. In addition, we’re all using a shared programming environment that encourages us to be the person making the connections, and – in the process – owning our “version” of what inspired us. That exchange is a great part of the Max community, really.

Our shared sources of inspiration also tend to group together – in marketing terms, think of those fancy recommendation engines that tell us what books or recordings or products we should acquire next based on what “other people who bought this” do. But what place is there for sources of inspiration that “leap the gap” – those objects or collections of ideas that won’t register in terms of patterns of acquisition?

I’ve been thinking about this a lot in the last month – I was honestly surprised at the response to my recent review of Christopher Alexander’s A Pattern Language – my email inbox filled with personal notes from people in Maxland who were enthusiastic about discovering the book or connecting it to their love of object-oriented programming.

My initial plan was to return this time to reviews and pointers for what I guess we could call “my basic reference shelf” (books like this and this and this and this), since it’s been extremely popular. I promise to do that next time out, but I’m going to do a shout-out to one more “sideways” source of inspiration and pleasure and say a few things about what it has to teach us before I go.

It’s a book. Or, rather, it once was a book – before it became something else entirely.

You probably know the work of the British artist Tom Phillips even if you don’t know his name – his paintings adorn the covers of Brian Eno’s Another Green World and Thursday Afternoon, King Crimson’s Starless and Bible Black (the original of whose inside gatefold would be something hanging on my wall, were I a rich man) as well as any number of Iris Murdoch’s novels if you’re a UK reader. Allow me to introduce you to what is arguably his best-known work: A Humument.

One day in the 1960s, Phillips went to a used bookseller’s with the idea (hatched with his painter pal R. B. Kitaj) that he’d buy an inexpensive used book and use it as the source material for a piece of art. He wound up with a copy of a once-popular Victorian novel by W. H. Mallock – A Human Document. Once he had the book, he began to draw, paint, or collage over the original text, leaving some of the original words peeking through as little bubbles connected by serpentine paths – the Human Document thus became the Humument.

In doing so, he created a “new” text – a story about someone named Bill Toge (whose name appears any time the word “together” or “altogether” appeared in the original) and his pursuit of the mysterious IRMA – the elusive object of his desire. (This pursuit, in the form of the opera IRMA, with music and stage instructions and a libretto generated from pages of the Humument, is available as a score, and in several recorded versions – one realized by Gavin Bryars for the Obscure Music label, and one that includes Phillips himself, along with members of the experimental music ensemble AMM.)

You can hear Tom himself reading the resulting work aloud – one page at a time. It reminds me in places of John Cage’s Norton Lectures:

It’s a singular and amazing idea – the only book I can think of that comes anywhere close to the project is Jonathan Safran Foer’s “Tree of Codes,” which takes the Polish writer Bruno Schulz’s “Street of Crocodiles” and makes a new story by actually cutting away words and spaces on the original page (which made producing the book itself into an interesting technical challenge).

You can look at an interesting presentation of the entire Humument project here, which will show you the original page of the novel together with the works that were created from it. You can also try your hand at doing a page yourself, thanks to a contest that the Venus Fabriculosa website ran earlier this year that includes an actual page of the novel, should you desire. Finally, if you’re not inclined to acquire a copy of this wondrous object for yourself, you can always opt for the iPad version (which features a cool I-Ching-like oracle function that I quite like).

My reason for telling you about this now isn’t because the holiday season is coming and the book makes an amazing present for your unsuspecting cool close friends (although that’s certainly true), but that something amazing is about to happen. The project is now officially 50 years old (Yes, you read that right. Fifty years. No kidding), and it’s finished. In the years following the publication of the first edition of the book in 1983, Phillips has quietly been going through the book again, making new versions of every single page. Subsequent printings of the book have each included new versions of some of the initial pages spread throughout the book, and the project was to end once there were two versions of each and every one of the 360+ pages in the original novel. That’s now done – and Thames & Hudson are releasing the final edition of the book, having exhibited the final version in its entirety in the United States and Great Britain.

In addition to being an amazing and beautiful piece of work worthy of your attention, this “treated Victorian novel” got me thinking about the idea of work – the kinds of work we do, patch to patch, performance to performance, problem to problem.

It is so easy to think of what we do as a sequence of things in time, ephemeral or otherwise. Seeing a single kind of “set me a task” problem expand into the work of a lifetime (or perhaps a shorter span, if that frightens you a little) turns the mind away from the idea that the next thing we do is The Big One, and also from the idea that the skills we acquire (say, patching) are merely the dismissal of a set of unrelated obstacles on our way to that Next Big Thing.

Can you imagine a kind of challenge for yourself that you could work at for an extended period of time? R. Luke DuBois’ A Year in MP3s project comes to mind here, or Joost Rekveld‘s investigatory sequences of film/video. Perhaps the act of naming itself can frame the undertaking – Carl Stone‘s works, though they vary in technique, all take their names from a decidedly quotidian source – the restaurants in which he has eaten over the years since he began creating astounding electronic music in the 1970s and 1980s.

When you think of your own work, what insights could you imagine emerging from such a practice? In the case of Luke DuBois’ year in MP3s, he has some particularly interesting things to say about what happens when you work on a project like that. In the case of the Humument, it’s interesting to speculate on whether or not Tom saw his humble project becoming a life’s work. He talks about working on it as a little task at the end of his studio day here, in fact – as a kind of regular and patient labor.

If you’re not the sort of person who thinks of setting such tasks for yourself, how do you think you might recognize an idea that could contain within itself the seed of a life’s work? In the midst of finding the solution for the task immediately before you, how might you imagine or sense a curve or trajectory or informing kind of interest for your own humble projects? In short, what might an audio (or visual) equivalent of a project like this be – for you?

My Favorite Object: bucket
https://cycling74.com/2016/11/15/my-favorite-object-bucket/ – Tue, 15 Nov 2016

Several years ago, Darwin Grosse and I worked on a project that used optical flow to track people running around in circles to simulate the jog wheel on old analog video tape decks. As is often the case with many sensor projects, the raw data was really jumpy and jittery and required some serious sweetening and massaging. Darwin’s solution, much to my surprise, was the bucket object. It’s been a valuable tool in my Max Swiss Army knife ever since.

Running Averages

The bucket patch I use the most allows you to get a running average over some finite set of events (if you want a running average over a stream or unspecified set of values, you’d use the mean object). Here’s an example that calculates the average of the last 8 floating-point input values:

This technique is really useful, but can be a lot to patch when averaging a larger value set. The bucket.avg abstraction included with the downloadable patches lets you type in the number of values you want to average and scripts the connections for you. Keep in mind that the zl sum object defaults to a maximum list length of 256, so if you want to average more values than that, you’ll have to change the zl object’s maximum list size.
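For readers who think better in text than in patch cords, here’s a minimal Python sketch of the same idea (the function name and the divide-by-count-so-far startup behavior are my own choices, not part of the abstraction): the bucket outlets become a fixed-length window, and the zl sum plus division becomes an average.

```python
from collections import deque

def make_running_average(n):
    """Average the last n values, mimicking the
    bucket -> zl sum -> divide patching."""
    window = deque(maxlen=n)       # plays the role of bucket's n outlets
    def feed(value):
        window.append(value)       # the oldest value falls off the end
        return sum(window) / len(window)
    return feed

avg8 = make_running_average(8)
avg8(4.0)
smoothed = avg8(8.0)   # (4.0 + 8.0) / 2 == 6.0 so far
```

Unlike the Max patch (which always divides by 8), this version divides by the number of values received until the window fills, so the first few outputs aren’t dragged down by empty slots.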

Here’s the patch in action (make sure you have the bucket.avg abstraction in your search path):

Velocity and Acceleration

The bucket object is also computationally useful for calculating first and second derivatives for streams of data. The terms “first and second derivative” are math-speak for keeping track of:

the rate that things are changing (that’s what velocity is – the rate of change in a value over time).

the rate at which that rate of change is itself changing – that’s what acceleration (the second derivative) is.
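In text form, the bucket-as-delay trick amounts to taking successive differences. Here’s a hedged Python sketch (the function name is mine) of computing velocity and acceleration from a stream of values:

```python
def derivatives(stream):
    """First differences (velocity) and second differences
    (acceleration), as a bucket one-event delay plus a
    subtraction computes them in the patch."""
    prev_x = prev_v = None
    out = []
    for x in stream:
        v = 0 if prev_x is None else x - prev_x   # rate of change
        a = 0 if prev_v is None else v - prev_v   # change of that rate
        out.append((v, a))
        prev_x, prev_v = x, v
    return out

# a constant-slope ramp: velocity settles at 2, acceleration back at 0
ramp = derivatives([0, 2, 4, 6])
```

The same two-stage differencing is exactly what smoothed sensor data needs before you can ask “how fast is this moving?” or “is it speeding up?”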

Here’s how the patch looks:

A bucket for Symbols

The bucket object is a little unusual in that you don’t need to worry about using arguments to the object in order to have it work with floating point values. In fact, bucket is so useful that people often ask whether or not there’s a Max object that is similar but will work with symbols or lists. While there isn’t a specific object that works with symbols the way bucket does with numbers, here’s a Max patch that does exactly that (and it helpfully outputs stuff in standard right-to-left order, too):

As with the first example, this can mean a lot of patching when you start making one for larger lists. The bucket.sym abstraction makes easy work of this – just type in the number of outlets you want as an argument (i.e., bucket.sym 4).
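As a sketch of what such a symbol-shifting patch does, here’s a hypothetical Python equivalent (the class name is mine): a fixed set of slots where each new symbol pushes the older ones along, reported oldest-first to mirror the patch’s right-to-left output order.

```python
from collections import deque

class SymbolBucket:
    """A bucket-style shift register for symbols: each new
    input pushes the previous entries down the line."""
    def __init__(self, n):
        self.slots = deque(maxlen=n)
    def feed(self, sym):
        self.slots.appendleft(sym)        # newest symbol into slot 0
        return list(self.slots)[::-1]     # report oldest first

b = SymbolBucket(3)
b.feed("kick")
b.feed("snare")
history = b.feed("hat")   # -> ["kick", "snare", "hat"]
```

Once the slots are full, each new symbol silently evicts the oldest one, just as the last outlet of a bucket chain drops values off the end.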

I hope you can see how I’ve come to rely on the bucket object, and how it has influenced me to make similar tools for other message types. If you have other uses for the bucket object, please jump onto the forum and share!

Advanced Max: Learning About Threading
https://cycling74.com/2016/11/08/advanced-max-learning-about-threading/ – Tue, 08 Nov 2016

Understanding how the threading model in Max works will help you patch more efficiently, and also be on the lookout for potential bottlenecks and trouble spots. In this 20-minute video, I’ll briefly describe the threading model that Max uses, and show you some Max externals you can use to optimize your patches.

Hardware Review: The Expert Sleepers Disting (updated)
https://cycling74.com/2016/11/01/hardware-review-the-expert-sleeper-disting-updated/ – Tue, 01 Nov 2016

The Disting is a Eurorack module from Expert Sleepers that I see in almost everyone’s system. From the beginning it’s been a real all-purpose tool in terms of functionality (just take a look at the manual if you don’t believe me) that’s been continuously improved via an impressive sequence of firmware updates. With the recent addition of SD-card file (and MIDI) playback, and at just 4HP, there’s basically no reason why you wouldn’t run it.

I’ve been rocking some Expert Sleepers hardware in my system for a while and there are a few things I love about it. First, Os (the English guy who runs the show) is one of those designer/programmers whose care for his products continues long after he’s sold the product to you. Second, he listens to user requests and implements them where he can (see the huge feature request list for the Disting on the MW forum), many of which he’s tackled.

But the recent firmware update provided something that no one could have seen coming (okay – I sure didn’t see it coming): the addition of the functionality of the discontinued Expert Sleepers ES1 and ES2 modules as modes of operation in the Disting – and this is where Max comes into play!

You can now use your Disting as an interface between your computer and your modular, CV In/Out! Rather than doing a lengthy textplanation of how to make this work, Os from Expert Sleepers has helpfully made a couple of videos to explain things (Note: as you watch and listen, assume that Max could be your software substitute).

In short, the Expert Sleepers Swiss Army Knife just grew another blade!

I’m not sure why, but it seems like lots of beginning Max users think that the only way to do anything cool with Max includes hours of meditation and days of careful patching. While that’s sometimes true, there are a lot of useful and entertaining things you can come up with on a grey and rainy afternoon. Often, it starts with just playing around and having fun, combined with the kind of basic background understanding of how Max works that a little quality tutorial time will give you.

Here’s a rainy-day afternoon hack that, frankly, sounds a lot more complicated than it was to do. It all began with playing around with an old favorite Max patch of mine – Forbidden Planet.

The patch has been around a long, long time as an example of how to do convolution filtering in MSP using the pfft~ object – it was even turned into a Pluggo plug-in back in the day. The current version of the original Convolution Brothers patch has been further hacked into the pfft~ form we have today by Richard Dudas (or xoax, as he is known among Max aficionados. You owe him a drink, at the very least).

The idea behind the original patch is simple and elegant: using a multislider object, you can draw a contour for a 512-step convolution filter which gets applied to your input. It’s a lot of fun (I’ve included a copy of the patch and the abstraction the pfft~ object needs – fp_fft.maxpat in the download package).

I was drawing filter curves to compete with the sound of the rain outside my window when I suddenly thought to myself “There’s no real reason that I have to be the one drawing those filter curves. What if…”

And that’s how it began.

In these moments, I tend to follow the “What if…” question by running through my usual approaches to generating and organizing variety – stuff like using random values or a drunkard’s walk to generate the list of values, using LFO outputs to write the list of input values (as I tend to love to do), and that sort of thing. In this case, I started to wonder about using Jitter as a source.

Jitter is great at creating and processing images, but it’s also useful because it works with matrices – large batches of data that represent the frames of the image being displayed, the OpenGL object you’re working with, and so on. In the case of visual images, each pixel in the movie contains 4 planes of data in the range 0-255 and is part of one big array – an array you can slice up in various ways to generate lists of data you can do other things with.

So I thought, “I wonder what I’ll get if I slice up a video image and grab a scanline and use it to fill the multislider? If I can do that, I don’t need to modify the Forbidden Planet patch’s innards at all. It should just work and maybe do something cool with little effort on my part.”

(Author’s note: since I’m writing a more-or-less tutorial, what you read next not only condenses the time I spent playing around and checking the refpages – it also leaves out some other great ideas that came from just playing around and having fun. I won’t go into them here, but I just wanted to remind you how important fooling around and having fun can be.)

Slice and Dice

I added a playlist object to my Forbidden Planet patch, and then clicked and dragged to load a movie (I went with dishes.mov), and clicked on the circular button to enable looping of my movie. A jit.pwindow let me keep track of what I had as I worked. I was working with a video, so I knew I had a 4-plane matrix full of char data. What I needed was what the Forbidden Planet patch’s multislider contained: a single list of 512 floating-point numbers in the range 0. – 1.0.

Broken down into pieces, here’s what the problem looked like:

Create a horizontal slice from the image

Take that 4-plane char matrix “slice” and convert it into a single-plane matrix that contains 512 items, each of which is a floating-point number in the range the multislider object in the Forbidden Planet patch uses.

I’m breaking this down because lots of folks out there who are interested in converting images to data (that conversion process goes by the fancy term transcoding) do this a lot, which means that this is a good thing to learn to do, and a useful skill to have. As is often the case in Max, there are a number of ways to do this; I’ll show you what I think is a quick and easy way:

To create the horizontal slice from the original image, I’ll use the jit.submatrix object, created to do exactly that. The object uses attributes (@dim and @offset) to define how you want to slice things up (the example below shows you what the output looks like)

I only need a single plane’s worth of information for my list. Instead of using jit.unpack and deciding which color plane to use, I’ll keep it simple: the jit.rgb2luma object will collapse the color planes into a luminance signal – a monochrome image.

What might seem to a beginner to be the trickiest part – changing the number of entries in the matrix and converting the data from the char range (0-255) to floating-point numbers (float32) – is made simple by a feature of Jitter matrices: they adapt. If you create a jit.matrix object with specific attributes and send it a matrix whose attributes are different, the receiving jit.matrix object will automatically convert the input – resizing the dimensions, changing the number of planes, and scaling the values as it goes.

After that, all that’s left to do is use a jit.spill object to convert the matrix to a list of values to send to the Forbidden Planet’s multislider object.
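If you’d like to see the whole chain as plain data flow, here’s a hedged NumPy sketch of the equivalent steps (the function name, luma weights, and nearest-neighbor resampling are my choices; the Jitter objects handle these details internally): grab one scanline of an ARGB frame, collapse it to luminance, resample it to 512 entries, and scale chars (0-255) into floats (0.-1.).

```python
import numpy as np

def scanline_to_filter(frame, row=0, n_bands=512):
    """Mimic jit.submatrix -> jit.rgb2luma -> an adapting
    jit.matrix -> jit.spill for one video frame."""
    line = frame[row]                          # one scanline: width x 4
    a, r, g, b = line.T.astype(np.float32)     # the four ARGB planes
    luma = 0.299 * r + 0.587 * g + 0.114 * b   # collapse to monochrome
    # resample to n_bands entries, then scale 0-255 down to 0.-1.
    idx = np.linspace(0, luma.size - 1, n_bands).round().astype(int)
    return (luma[idx] / 255.0).tolist()

# a frame whose top scanline is pure white yields a flat filter at 1.0
frame = np.zeros((240, 320, 4), dtype=np.uint8)
frame[0, :, 1:] = 255
curve = scanline_to_filter(frame, row=0)
```

The resulting 512-element list is exactly the shape the multislider (and hence the convolution filter) expects.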

Here’s what the process looks like:

To keep things compact, I removed the jit.pwindow objects and connected the jit.spill object to the multislider in my original patch, which suddenly started making much more interesting noises.

Putting it all together (and taking it apart again).

My rainy day patch sounded kind of cool as it was. One of the great things about being able to reuse patches such as the Forbidden Planet is that you’re done quickly, and have more time to think about improvements.

While my patch sounded great, I was thinking that it would be awesome to have a stereo version. The lazy (or efficient? You decide!) way would be to just duplicate the pfft~ object and hook everything up again….

But I was thinking that I’d like to look inside and examine the patch loaded by the pfft~ object. How hard would it actually be to create a stereo version? Here’s what I found:

The inside of the patch wasn’t scary at all! I had a single-channel buffer~ object (EQFun~) that had list values unpacked by a listfunnel object and written into the buffer using a peek~ object. The EQFun~ object was connected to the third outlet of the fftin~ object, which, a quick look at the refpage explained, outputs the frame index for the FFT.

So – all I’d really need to do would be to have a second fftin~/fftout~ object pair for the right channel of my pfft~, together with a second buffer~ that I could use for getting index values.

Wait… the EQFun~ buffer has only one channel. Why couldn’t I replace it with a stereo EQFun~ buffer, write the incoming list data to one of the channels, and then have each peek~ object use one of the two channels to do the indexing? That ought to work fine.

And it did. Here’s what the new abstraction (called stereo_fp_fft.maxpat) looks like now:

I had two more ideas from my Jitter life that I thought I might try, too.

The horizontal slice I created was the very top of the video image (@offset 0 0). Since attributes work as messages to Jitter objects, I could send the message offset (number) 0 and grab any horizontal line in the video – all I’d need to do was add a message box containing offset $1 0, connect a number box to it, and send that to my jit.submatrix object!

I love smearing video images using the jit.slide object (okay, I’m actually doing cellwise temporal envelope following, but I think it looks smeary) and using the messages slide_up and slide_down for different effects. There’s no reason I can’t do that with the matrix that contains my list of values before I unroll it and send it to the multislider….

So I spent a few minutes prototyping it and it worked like a charm – I could scan back and forth through the movie, and produce subtle, almost vocoded effects. And the fact that I was now working in stereo meant that separate channels could have separate controls, as well – setting the offset for each channel to near neighbors produced really subtle effects. After my proof-of-concept messing around, I rolled both of these things up in a nice subpatcher that kept things neat and tidy:

The result? Not at all bad for a rainy afternoon. I hope you enjoy these patches, and feel free to make your own improvements. Happy patching!

While it’s not something that’s immediately obvious — and, to my knowledge, not something Stretta (the designer of BEAP) has ever mentioned — there are a lot of similarities between BEAP and the old Nord Modular software.

The Nord Modular software platform was used to patch together modules to make and process sound; you would then save that patch to your hardware and take it off to play somewhere without needing the original patch or computer — revolutionary for its time.

The great thing about the Nord Modular Community was its dedication and continued collaborative nature in pushing the scene to make bigger and better patches. It was also a great platform for education which meant that these patch developments were documented very well and in an easy-to-learn format.

The way in which both the Nord Modular and the BEAP platforms were designed, paying homage to electronic synthesis, means that anyone can follow along with this documentation, learn about electronic music and apply those skills to almost any form of programming or synthesis in both hardware and software. It’s fantastic!

One such document that is still floating around is the Advanced Programming Techniques for Modular Synthesizers by James J. Clark, a phenomenal resource and just one of many that surround the Nord Modular community. I remember when I first found this document years ago whilst digging through old forum threads; I felt like I’d hit the jackpot. I’ve learned a lot from the resource over the years.

A favorite section of mine is the Frequency Modulation Technique. While I knew about FM and was already using it, this section helped me understand the fundamentals better and start to apply those methods in a more advanced way in my software and hardware programming. FM is a truly powerful technique for electronic synthesis.

In the Frequency Modulation Technique section of the documentation, there are 7 Nord Modular patches – I’ve taken the time to go through these and recreate each of them in BEAP. When looking at each patch, be sure to cross-reference it with the corresponding text in the document to gain a complete understanding of how these patches work and what type of FM sound we are trying to obtain.
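If you’d like to see the core of the technique as text before opening the patches, here’s a minimal two-operator FM sketch in Python (all names and default values are mine, not from Clark’s document): a modulator at some ratio of the carrier frequency drives the carrier’s phase, and the modulation index controls how strong the sidebands are.

```python
import math

def fm_samples(carrier=220.0, ratio=2.0, index=3.0, sr=44100, n=64):
    """Two-operator FM:
    out = sin(2*pi*fc*t + index * sin(2*pi*fc*ratio*t))"""
    out = []
    for i in range(n):
        t = i / sr
        mod = math.sin(2 * math.pi * carrier * ratio * t)     # modulator
        out.append(math.sin(2 * math.pi * carrier * t + index * mod))
    return out

samples = fm_samples()   # with index=0 this collapses to a plain sine
```

Integer ratios give harmonic spectra and non-integer ratios give bell-like inharmonic ones, which is why the ratio and index knobs carry most of the musical weight in these patches.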

I’d like to begin this month’s review of books you might want to have in your library by telling you a story.

Once upon a time, I shared an apartment with a City Planner. It was a neat and orderly place, with one unusual feature: he kept a copy of this book in his bathroom – little thing, red cover, about the size of a breviary or one of those Classical Library hardbound copies of Xenophon or Cicero’s orations. It was about how people designed and modified and imagined the spaces they lived in.

It was an odd little volume. You could open it at random to any place in the book. There’d be a title, a little photograph, and a description of the idea, along with a listing of other more or less abstract ideas elsewhere in the book. You could sit there and read only a piece, or start racing back and forth in the text when something caught your fancy. It didn’t take long to see why it was a perfect bathroom book.

Little ideas that you’d carry around with you – patterns you’d notice hours or weeks later, detonated remotely by the act of attention. The darned thing was like no other book I’d ever encountered – bit by bit, it changed how I thought of architecture and design – I’d like to say that it became the stuff I thought with rather than thought about. When hyperlinking came along, I was ready for it, having used the patterns in Alexander’s book to carom about in an idea space.

So, of course, I bought a copy of my own (and still have it). I foisted the thing on my friends for ages, prefacing my enthusiasms with the idea that this was some kind of idiosyncratic passion of mine I hoped was worth sharing. And then, one day, I discovered that I wasn’t the only person who’d stumbled upon it (actually, the book reportedly sells in the tens of thousands every year, and has done so since its appearance in the 1970s). From time to time, I would see it (it’s not a hard book to spot, with or without the dust jacket) here and there – on the bookshelf of a man at Bell Laboratories responsible for a good bit of the C programming language. In Carl Sagan’s office at Cornell University. I still run across it here and there. Brian Crabtree owns a copy. So does David Zicarelli (I honestly don’t remember whether I was the zealot who foisted the thing on him a squillion years ago or not. I’d be honored, but I doubt it).

It was a wonderful surprise to discover that my favorite little red book had somehow tunneled its way into the world of software – most elegantly with the arrival of the now-famous “Gang of Four’s” book “Design Patterns: Elements of Reusable Object-Oriented Software.”

Here’s what I think is a reasonable summary of the ideas behind the book, and how it relates to Christopher Alexander’s work. If you’re a Max person, I’m sure you’ll recognize these ideas pretty quickly (SFX:cough cough snippet cough cough package cough cough): a design pattern is a form of a solution to a design problem that’s re-usable. When you’ve got an organized group of those patterns that work in the particular thing you want to work on, you’ve got the beginnings of a pattern language. That language provides you with a way of discussing and sharing work with others.

Sound familiar?

(By the way – in case you’re wondering exactly how it is that a bunch of programmers heard of Christopher Alexander’s work in the first place, I stumbled on a discussion of exactly that here – by members of the Gang of Four themselves….)

The book and its ideas remain controversial in both the architectural and the coding communities. Sadly, the single most bracing bit of diss directed at Alexander won’t be easily available to you if you don’t have access to online University libraries – the full range of invective of William S. Saunders’ May 2002 piece in the Harvard Design Magazine is only tantalizingly quoted here (it’s a doozy). Critiques of the design pattern movement in software are more easily accessible – here, for example. Interestingly, Alexander himself gave an IEEE lecture in which he describes and comments on his work with specific regard to software practice. You can find a transcript of it here. It’s a fascinating read.

I hope I’ve suggested that this little book might find a place on your shelf, not only for the movements it created, but for its ability to do what any seminal book does: rewire your brain and redirect your thinking.

P.S. If you’d like to peruse the thing without the actual pleasure of it as a book, you can peruse a PDF of it here.

P.P.S. There are two directions you can go after reading and loving this book: toward Christopher Alexander’s Ph.D. dissertation “Notes on the Synthesis of Form,” which remains the more general formulation of the Pattern Language, or to Alexander’s ambitious and contentious 4-volume masterwork “The Nature of Order”. It’s a sprawling work that collects and follows on from the Pattern Language, but doesn’t really reduce easily (you might try this review as a basic overview of the big ideas). I’d suggest starting with Volume 1, and then deciding whether to continue on from there. For the CliffsNotes version, you can find a summary of its basic ideas here.

Review: Novation Launch Control XL
https://cycling74.com/2016/10/04/review-novation-launch-control-xl/
Tue, 04 Oct 2016
MIDI controllers are an obsession for a lot of us. As sometimes the sole physical component of a performance setup, it’s hard to ever feel totally settled on a single device. For many years I was a die-hard fan of the Korg NanoKontrol as my go-to MIDI device. It hit a perfect combination of “toss-it-in-your-backpack” size, usefulness, no musical affiliations, and affordability – cheap enough to mistreat it and buy a new one next year. I still keep one around, but over the past year I had been looking for something a little more substantial to travel with. I have a hard time giving up on a familiar solution, but I get a little tired of showing up to a visuals gig and having the LD snicker at my tiny sliders.

I first spotted the Launch Control XL when I was doing a visuals night with my co-worker Tom Hall, who was using one to run his DJ set. It struck me immediately as a good solution for what I was after. With 8 nicely sized sliders and 24 knobs above them, it was clear that it could take over all the controls I needed from my NanoKontrol while only requiring me to reprogram a couple of things in my patch. The way the knobs are laid out in triple rows recalls my long-gone days with my old UC-33e, and lends itself nicely to things like RGB or HSL controls, leaving the sliders free for more expressive gestures. The editor app is pretty simple to work with, and it took only a short time to build a completely custom template that matched my previous controller mappings. The backlit, multicolored buttons along the bottom of the Launch Control immediately found use as indicators for different sections of a performance, to cue new presets, and to activate different features of my patch. Lighting up the buttons is just a matter of sending MIDI note messages back to the Launch Control, using special velocity values to set the color. You’ll want to download the Programmer’s Reference from Novation for the details.
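As a rough sketch of the “special values” idea: Novation’s grid controllers commonly encode button color in the note-on velocity as a mix of red and green brightness levels. The formula and the button note number below are assumptions for illustration – confirm the actual values against the Programmer’s Reference for your firmware.

```python
# Sketch: lighting Launch Control XL buttons over MIDI.
# ASSUMPTION: the Launchpad-style velocity scheme (16*green + red + 12);
# verify the exact constants in Novation's Programmer's Reference.

def color_velocity(red, green):
    """Encode a button color as a note-on velocity.

    red/green are brightness levels 0-3; the +12 sets the 'normal' LED flags.
    """
    if not (0 <= red <= 3 and 0 <= green <= 3):
        raise ValueError("red and green must be 0-3")
    return 16 * green + red + 12

def note_on(channel, note, velocity):
    """Build the raw 3-byte MIDI note-on message a [midiout] object would send."""
    return [0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F]

# Light a (hypothetical) button at note 41, bright amber, on channel 0:
msg = note_on(0, 41, color_velocity(3, 3))
```

In Max this amounts to formatting the same three bytes and sending them out the port the Launch Control is connected to.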

Besides all the available controls, this thing looks pretty good lit up and sitting next to my laptop at the show. The all-rubber bottom keeps the thing sitting steady on the table (a real problem with the NanoKontrol). It also pairs nicely with the Novation Launchpad for jobs that require more light-up buttons. The low-profile design also makes it fit nicely into my backpack – which is ultimately the deciding factor on whether a piece of gear will get regular use in my life.

A Few Minutes with BEAP, Part 12
https://cycling74.com/2016/09/27/a-few-minutes-with-beap-part-12-2/
Tue, 27 Sep 2016

In Part 12 of the “A Few Minutes with BEAP” tutorial series, we look at an unusual source for generating CV for your favorite BEAP modules – video. It’s a quick way of adding UI controls to run multiple modules, too.

The Wearable Interface: An Interview with Bob Pritchard and Kiran Bhumber

It’s all too common these days to encounter instruments and approaches to performance that are finished solutions rather than works in progress. That isn’t necessarily surprising. After all, all instruments are – to some extent – current solutions to one kind of problem or another. It’s just that updates to, say, the piano are few and far between. I always love the opportunity to talk to my friends and acquaintances about their works in progress. Not only do you get a sense of the challenges and solutions while they’re in the thick of it, but it’s also a great look at how your friends think. I love watching the string of prototypes, the occasional humorous glitch, the joy of the serendipitous solution.

So when I had a chance to sit down and chat with my friends Bob Pritchard and Kiran Bhumber about their collaborative work in progress – a wearable instrument and interface for dancers – I jumped at the chance, delighted to have articulate and generous people to talk to about work as it happens (to quote the Canadian Broadcasting Corporation)….

The design of “new instruments” is always a tricky business, for all kinds of reasons: We’ve got this huge history of physical instruments we already know as ways to transduce small motor movements to actuate strings or redirect the flow of air, and so on. Questions about virtuosity come as part of the freight charges, too. Increasingly, we talk about instruments as “interfaces” of one sort or another, as well. So I’m always curious about how new instruments or interfaces get started – how’d you get here?

Bob: The UBC Digital Performance Ensemble SUBCLASS (it’s kind of like a laptop orchestra, but not really) is made up of students from across faculties, and all but the performance specialists have taken Max/MSP/Jitter programming. We focus on tracking aural and physical performance gestures, using the resulting data to control synthesis and real-time manipulation of live audio and video. Over the past several years we’ve used such things as microphones, tablets, smartphones, Wiis, webcams, Kinects, and custom-built hardware. The ensemble members appreciate the very strong performance skills of the dance and music specialists, and usually augment those skills with software, rather than asking the performers to learn completely new ways of performing. So, there is an inclination to develop interfaces that take advantage of what performers are already good at doing.

Kiran: There has been a great deal of research on how dancers create audio/visual media through movement in space using non-contact sensors such as webcams and Kinects. However, in these instances, the dancer is restricted to the point of view of the sensor, with their movement being a function of the space. In expanding this approach, I was thinking about how to create a system where dancers could freely move in space and embody audio/visual media through physical contact with their bodies, in a way that is analogous to how an instrumentalist would perform (e.g. sliding fingers up the neck of a string instrument). This led to the idea of the bodysuit, which would consist of sensors that a performer would make contact with – essentially “playing themselves”. About six months ago Bob and I were talking after a new music concert and I asked how one would create a performance suit with sensors on it. Over the next few weeks we chatted and emailed about the topic, and then decided to move forward. Bob explained how different sensors might work and I gradually refined my idea of the suit: it would have multiple, reconfigurable sensors that were more than buttons, it would be used by musicians and/or dancers, it would be aesthetically interesting, and performances would celebrate the human form and movement.

Can you talk a little bit about what’s “under the hood” of the current body suit (if it’s not a huge secret, of course)? Were your design choices informed at all by what similar approaches *weren’t* doing [a chance to expand on your comments on reactive fabric]?

Bob: The current version of the suit uses parallel tracks of resistive and conductive fabric for each sensor. Simultaneously touching the two tracks (with a metal thimble or a highly conductive finger) completes the circuit, and the resulting voltage depends upon how far along the resistive fabric you touch.

Essentially, the voltage is read by the Arduino, the data is massaged a bit in Max/MSP to handle noise and do some averaging, and is then used to control scrubbing through samples or video. There are a lot of web demos of conductive fabrics such as KITT’s Zebra Conductive Fabric or on the Instructables website, but they tend to show turning lights on and off, controlling TVs, etc. We don’t see a lot of examples of using fabric for instrument interfaces in performance: we tried using other types of conductive and resistive materials (including analog cassette tape) but they weren’t a good fit for this project: soft membrane sensors shorted out over body curves, and other materials were either too stiff or not durable enough. We ended up substituting conductive fabric for the conductive thread because it gave us better continuous contact, and less noise. Working with the fabrics did have its problems: for the final prototype I hand tacked the fabric onto the suit, so it’s a kludgey-looking bit of sewing! We now have professionals machine sewing the materials for the next set of suits. Originally we had a LilyPad snapped to the back of the suit, and it handled all the data. However, an interesting experience with conductive thread, electrical shorts, smoke, and sparks made us rethink that!
The prototype uses an Ethernet cable to communicate with the laptop. Obviously wireless is a very attractive idea, but – as Perry Cook likes to say – “The only thing worse than wired is wireless.” I’ve experienced data loss on wireless networks in performance, and it’s not fun.
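The “massaging” Bob describes – handling noise and doing some averaging on the Arduino’s 0–1023 readings before they drive scrubbing – can be sketched outside Max as a moving average plus a change gate. The window size and threshold here are illustrative assumptions, not their actual settings:

```python
from collections import deque

class SensorSmoother:
    """Moving average plus a change gate for noisy 0-1023 ADC readings.

    ASSUMPTION: window and threshold values are illustrative, not the
    settings used in the actual suit patch.
    """

    def __init__(self, window=8, threshold=4):
        self.buf = deque(maxlen=window)  # most recent raw readings
        self.last = None                 # last value we reported downstream
        self.threshold = threshold       # ignore changes smaller than this

    def update(self, raw):
        self.buf.append(raw)
        avg = sum(self.buf) / len(self.buf)
        # Only report when the averaged value has moved enough to matter;
        # this suppresses the jitter that would otherwise wobble the scrub.
        if self.last is None or abs(avg - self.last) >= self.threshold:
            self.last = avg
        return self.last
```

Feeding it readings that wobble by a count or two produces a rock-steady output, while a real gesture (a big jump in the reading) passes through immediately.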

It’s hard to watch the video [see below] and not think of the body suit as a general interface, rather than something created for a specific dance piece. I also expect that having a working prototype means that the kinds of media you’re connecting to it are, in turn, being modified by the person using it, or by the interplay of the technical features and behaviors that work well in the environment. Have there been any surprises in terms of how having the prototype changes or redirects what you’re doing with it, or how?

Kiran: We started off thinking of the suit as an interface for controlling sound, much like you might think of joysticks, or faders, or such, and all the testing concentrated on controlling the suit as a musician. I think we were both surprised at how different the results are with a dancer as compared to a musician. The dancer has much more “performative” movements when shifting from one part of the suit to the next – we weren’t expecting the difference to be so great but it should have been obvious – and the result is very elegant and engaging. The musician was much more involved with playing with micro aspects of the sound – scrubbing, retriggering, layering, and so forth, and played the suit more like known interfaces.

What does the “development cycle” of the body suit look like? How do you design or refine its behaviors in practice? Tuning general behaviors? Tailoring the interface to a specific user? Building a library of states or behaviors?

Bob: We began by exploring different types of resistive and conductive materials such as various threads, cassette tapes, and foils by laying them out on a desk and measuring changing voltages. And laying them out again. And combining them. And changing them, etc.

We were interested in being able to have bare fingers complete the circuits, and that worked for some materials, but those materials weren’t practical for attaching to a body suit. We settled on using conductive fabric, and then worked on eliminating the noise and jitter in the data coming off the circuits. We found that problematic and decided to try using membrane slide sensors in fabric pockets. However, the courier company lost our order (!), and Infusion Systems didn’t have any more sensors of the length we needed, so we went back to refining the fabric sensors. We decided to have the users wear metal thimbles to eliminate most of the noise generated with finger conduction, and that complemented the idea of a sewn fabric-based interface.

Kiran: Each of the suits is tailored to a specific user, since they need to be able to easily reach the entire length of each sensor in performance. Part of constructing a suit is selecting the right size of bodysuit for each performer and then having a fitting session to tack on the fabric. The basic circuit is the same for each suit (sending continuous data ranging from 0 to 1023) but the tuning of the circuit and the default settings differs depending upon the piece – what samples or synthesis methods are being controlled, what happens in different sections, and so on. Since we are early on in the development of things the Max/MSP patches are fairly basic. It will be interesting to see how they develop as we work on pieces with video control, or on pieces where two or more dancers play each other. I expect that the design and placement of the sensor strips will also change as we create pieces and critique the results, and as we source different materials.
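Tuning per piece, as Kiran describes it, mostly means mapping the raw 0–1023 range onto whatever a given section needs – a scrub position, a playback rate, and so on. A minimal sketch of that mapping (the target ranges are hypothetical examples, not values from their patches):

```python
def scale(raw, out_lo, out_hi, in_lo=0, in_hi=1023):
    """Linearly map a raw sensor reading into a target parameter range."""
    raw = max(in_lo, min(in_hi, raw))  # clamp out-of-range readings
    t = (raw - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mappings for one section of a piece:
scrub = scale(511.5, 0.0, 1.0)   # sample scrub position, 0..1
rate  = scale(511.5, 0.5, 2.0)   # playback rate, half to double speed
```

This is exactly what a [scale] object does in Max; swapping the output range per section is the “tuning” step.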

What’s next for the work? What’s next for its repertoire? What refinements suggest themselves to you as you work?

Kiran: In early November Marguerite Witvoet (new music performer and vocalist extraordinaire) will be performing with the suit at a conference at the Western Front, doing a piece by me where she controls samples and real time vocal manipulation. That same month two dancers will perform a piece by Bob at the UBC Museum of Anthropology as part of SUBCLASS’s concert opening an exhibition of textiles from around the world. I’m also working with a dancer at the U. of Michigan to create a work where I wear the suit and play clarinet while the dancer controls sample triggering and manipulation of my sound.

Bob: We’d like to develop multiple-performer pieces, with performers (dancers and/or musicians) interacting to control their own and their partner’s suit, but like Kiran I’m interested in combining the suit with live processing of acoustic instruments. We also need to explore the whole issue of wired vs. wireless.

Kiran: We still have two analog channels to work with, and lots of digital. We could use the analog channels to allow the performer to switch between different modes, to control which samples or video are being used, what types of processing, and so on. Infusion Systems has a Pi shield that opens up 8 channels of analog input on Raspberry Pis, so that might be worth exploring as well.

The suit raises the issue of body touch in performance, from functional and artistic/expressionist viewpoints. How do you respond to that?

Kiran: Performance with the suit is meant to be sensuous, in the same way that dance performance is sensuous through the celebration of body motion and pose. The RUBS interface can make the audience more aware of the male or female body in performance, but it also gives the performer a strong self-reference, since there is a haptic component in – and self awareness of – the location and pressure involved in controlling the sounds and audio/video processing. Like much of Canada, Vancouver has an active contact dance and improvisation scene, so close body contact in performance is not unusual for audiences.

Bob: In a sense, all instruments except for the voice are interfaces acting as extensions of our human form, and we use those extensions to express non-verbal emotions and ideas. The suit is still an extension, but it pulls the interface back to the surface of the body, so in some cases there might be a closer identification of body gesture with the resulting sound or video. The control interface is far more elegant than the textile keyboards, drum pads, and circuits that are found on ties and tee-shirts. Body touch is required to perform on those interfaces, but those are simply transpositions of a more “remote” interface onto the body, where they were never intended to be. We think the use of fabric sensors fitted to the body, rather than hard or semi-rigid control surfaces, results in a more artistic presentation and enhances the production and interpretation of audio or visuals.

Managing Multiples Made Easy
https://cycling74.com/2016/09/20/managing-multiples-made-easy/
Tue, 20 Sep 2016

One of the most powerful objects in the Jitter library is jit.gl.multiple. It allows you to quickly create and manage arrays of OpenGL objects by manipulating matrix data to control the GL parameters, with results that would be incredibly tedious and difficult to achieve by hand. To an expert in jit.expr it seems like anything is possible, but for beginning users the barrier to entry can feel really high. Over the years I’ve seen a lot of people get hung up trying to get started with it, so I wanted to provide an example of how easy it can be to use.

In this patch we leverage the matrixoutput attribute of jit.gl.gridshape to generate interesting and organized position data to feed our jit.gl.multiple. In this case, we use ‘@matrixoutput 2’, which not only gives us access to the basic shapes that jit.gl.gridshape provides in the desired float32 type, but also includes transforms (scaling, rotation, positioning) performed on the object in the output matrix data. This makes it easy to manipulate the entire group all at once.

Furthermore, now that the geometry is a standard matrix, we can use any number of jitter objects to tweak it. A jit.xfade allows us to combine the outputs of two separate jit.gl.gridshape objects, opening up a world of possibilities.

For the rotation data, we use jit.expr to create a normalized range of values across each axis of rotation, then multiply that by a custom value in degrees. If we set the value to 360, we will see the forms make one full rotation over the length of the array, regardless of how many we are drawing. Adjusting the offset determines the initial angle of rotation.
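The jit.expr recipe – a normalized ramp across the cells, scaled by a rotation amount in degrees, plus an offset – reduces to one line of arithmetic. A plain-Python sketch of the same idea (the function name is mine, not part of the patch):

```python
def rotation_cells(n, degrees=360.0, offset=0.0):
    """Normalized 0..1 ramp across n cells, scaled to degrees, plus offset.

    With degrees=360 the forms make one full rotation over the length
    of the array, regardless of how many we draw; offset sets the
    initial angle of rotation.
    """
    if n < 2:
        return [float(offset)] * n
    return [offset + degrees * i / (n - 1) for i in range(n)]
```

Changing `degrees` to 720 would give two full turns across the array, and `offset` simply shifts every cell by the same starting angle – exactly the behavior described above.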

The scale is being set globally using the setall message to a jit.matrix, but could also be controlled by any matrix data we choose.

Artist Focus: Marielle Jakobsons

Anyone who’s paid attention to your earlier electronic/violin work may find your debut as an instrument designer to be a real surprise. Of course, since we know you through your work more than through spectating on your life, I expect that the change seems quite different to you. How did you become an instrument maker?

Well, we can start waaaay back when I was 3 years old and had my first musical instrument obsession. I really liked the violin, so I grabbed a shoebox, put rubber bands over it, and tromped around the house plucking it under my neck. Flash forward a couple of decades and many violin lessons later: in graduate school I decided to deconstruct the instrument once again when I created the “Self-Oscillating Violin” (2005). Here’s a little bit about it:

A second-hand violin is transformed into a self-oscillating system by the use of electromagnetism and a computer running Max. Motion detectors excite long tones of oscillation. The audio vibrations of each string are transduced into surface waves in a thin layer of water on a concave mirror. The shadows of the waves are reflected onto the ceiling by a suspended blue flashlight.

Then I decided to lose the violin form altogether and created another instrument, “String TV” (2009):

A long string “self-oscillates” – it has no visible performer. The oscillations of the long string are visually manifested on a TV monitor, which hangs from above, shining down on the string.

Each of my instrument iterations has focused on visual elements and the transduction of sound into light in some way. And, having come from a classical music background, I’m also still deconstructing my assumptions about what an ‘instrument’ might be. There have been two versions of the Macro-Cymatic Instrument: the first was made of wood and had a string on top, so it resembled a more classical instrument concept. The newest version is more sculptural, made of epoxy resin and acrylic.

Can you talk a little bit about the act of using your instrument? What would we find it plugged into, were we to go see you live?

When I’m using the instrument, the first question to answer is: am I designing the sound for a specific visual effect, or am I shaping a pre-existing sound idea to create a visual representation of it? I’ve worked both ways – for example, with my piece “Recognition: for Sine Waves, Prepared Piano, and Macro-Cymatic Instrument” (2015). I consider this the first piece where the sounds were composed specifically for the fluid motion they create at the macro-photographic scale. I had the instrument set up with a Max patch where I could create different overtone scales and modulations of sine waves. These sine waves, mixed together with the prepared piano, were fed into the Macro-Cymatic Instrument’s audio driver, creating the fluid motion. This liquid is captured at the macro-photographic scale, and the live video is projected onto a screen.

In the newest version of the instrument, there is also data running from the Arduino to the LED light arrays. Right now this is set up as one program per piece: I program the lighting to fit the tonal hue of the music and synchronize with a BPM. The result is an immersive audiovisual experience, where the light and patterns are derived from the music, but in an abstracted way.
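Synchronizing an LED program to a BPM comes down to one line of arithmetic – milliseconds per beat – which a microcontroller loop can use as its update interval. A hedged sketch of that timing (the subdivision parameter is my own illustration, not a feature of her program):

```python
def beat_interval_ms(bpm, subdivision=1):
    """Milliseconds between LED updates for a given tempo.

    subdivision=1 updates once per quarter note, 4 once per sixteenth.
    ASSUMPTION: subdivision is an illustrative extra, not part of the
    actual Arduino program described above.
    """
    if bpm <= 0:
        raise ValueError("bpm must be positive")
    return 60000.0 / (bpm * subdivision)
```

At 120 BPM that is 500 ms per beat, which is the kind of clock an Arduino sketch would check against millis() before advancing the light pattern.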

More info, a demonstration video, and a music video from my new record Star Core (via the Creators Project) are here.

Instrument building is always some kind of dialogue between intention and materials. Common instruments tend to represent a kind of consensus over a long amount of time about the most efficient ways to transduce small motor behavior or breath (or both) to make sound. Inventing an instrument can circumvent that entirely – you’re finding new ways to actuate things, wondering about the best materials without the benefit of history, and so on. Can you talk a little about the process of developing your instrument?

At first, I developed the instrument in the tradition of classical musical instruments, with the classic instrument material: wood. I carved it by hand and studied its resonances. I created a soundboard with a sounding body, and iteration was based on achieving a balanced frequency response as well as the best transduction of vibration to the water. One of the most difficult elements is suspending the soundboard without dampening the transduction of sound waves to the water. Working with patterns of resonance – nodes and antinodes – I explored various methods of suspension. This instrument also had a string mounted on top that could be bowed. The sound is good, as in a traditional wooden instrument, but because the board needs to vibrate freely, the instrument wiggles when the string is bowed, resulting in wobbling water. So the string was one of the ideas that proved less practical when translated to this new instrument.

This year, while at a residency at Djerassi, I was filming a lot with this first instrument and realized that the wood was holding me back. I kept framing my shots and creating very high-contrast visuals in order to avoid seeing the wood grain. So I started building new instruments made of epoxy resin cast in molds made of ceramic. The shape is similar to the original instrument, and while the resonant characteristics aren’t the same as wood, the visual and sculptural characteristics have offered me a whole new visual terrain to explore!

I guess that one of the things that building instruments has in common with writing code has to do with prototyping, and the process by which what you make suggests something that you didn’t see or think of when you started. Obviously, the situation is different when comparing code to material objects in the world, but I’m wondering whether there was any part of the process where the instrument itself suggested some change to you….

Since there is both code and material involved in the instrument, I can think of aspects of both that suggest changes or new considerations. For example, the physics of light propagation and features of the camera hardware favor certain ways of programming the lights, but also create unimaginable results. When I started using an 8×8 LED matrix, I had no idea what it would really look like reflecting off the water at a macro-photographic scale…. And as I started playing around with more and more subtle shifts of color, I was amazed by the results. Very small shifts in hue became a whole new frontier to explore, like the shifting hues of a horizon.
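Those “very small shifts in hue” can be generated for an 8×8 matrix with a standard HSV-to-RGB conversion – here a sketch using Python’s stdlib colorsys, sweeping a narrow hue band across the 64 LEDs (the band width and function name are my own assumptions for illustration):

```python
import colorsys

def hue_gradient(base_hue, width=0.05, n=64):
    """n RGB triples sweeping a narrow hue band centered on base_hue.

    Hues are 0..1 (wrapping); a small width gives subtle, horizon-like
    shifts rather than a full rainbow.
    ASSUMPTION: width=0.05 is an illustrative default.
    """
    frames = []
    for i in range(n):
        h = (base_hue + width * (i / (n - 1) - 0.5)) % 1.0
        frames.append(colorsys.hsv_to_rgb(h, 1.0, 1.0))
    return frames
```

Scaling each triple to 0–255 and writing it row by row is all an Arduino-driven matrix would need to display the gradient.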

Since you’re an instrument maker, it logically follows that you are the sole virtuoso on your instrument in the entire universe. What’s it like to *play* the instrument? Does the act of interacting with it drive the development of a given performance?

The instrument definitely has a mind of its own, and is very picky about the type of sounds, frequency and volume it reacts to. In assessing these, my profession as an audio engineer and designer has helped me to more rapidly adjust sounds to the visual behavior I’m seeing. Playing the instrument involves live manipulation of the audio, sometimes including the addition of pure tones via Max to amplify the lower frequency range of a composition. Then there’s the performance of the camera position, exposure, panning, zoom, shutter speed, and focus.

Being the First Virtuoso also means that you’re intimately involved with developing a repertoire, and with letting the instrument “tell you” what kind of music it delights in making. What does the process of thinking about performing with your instrument look like to you? How do you imagine the instrument as part of an ensemble, for example?

The Macro-Cymatic Instrument definitely shines with music that focuses on minutiae and delicately crafted details. It invites you to dive into the tiny details of a sound by magnifying them. The only other music the instrument has performed that isn’t composed by me is Chuck Johnson’s new work for pedal steel guitar and synthesizer (from a record forthcoming in 2017 on VDSQ). Chuck performed the music live while I performed the visuals live at Gray Area (SF) in June.

Crossover Filter Design Video Tutorial
https://cycling74.com/2016/09/13/crossover-filter-design-video-tutorial/
Tue, 13 Sep 2016

Building on my previous filter design videos (see below), I use the filterdesign, filterdetail and gen~ objects to make a crossover filter that is perfect for use in multi-band EQs, compressor/limiters, or sound design applications. Join me for a 20-minute trip into examining, testing, and ultimately building a Linkwitz-Riley filter system – a filter that doesn’t otherwise exist in Max.
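The Linkwitz-Riley idea is easy to verify numerically: a 4th-order LR crossover is two cascaded 2nd-order Butterworth sections per band, each band sits at −6 dB at the crossover frequency, and the two bands sum to an allpass (flat magnitude). A stdlib-only sketch using the familiar RBJ “Audio EQ Cookbook” biquad formulas – a check of the math, not the gen~ patch from the video:

```python
import cmath
import math

def butterworth_biquad(kind, f0, fs):
    """Normalized biquad coefficients (b0, b1, b2, a1, a2) from the
    RBJ Audio EQ Cookbook, with Butterworth Q = 1/sqrt(2)."""
    w0 = 2 * math.pi * f0 / fs
    cw, sw = math.cos(w0), math.sin(w0)
    alpha = sw / (2 * (1 / math.sqrt(2)))
    if kind == "lowpass":
        b = ((1 - cw) / 2, 1 - cw, (1 - cw) / 2)
    elif kind == "highpass":
        b = ((1 + cw) / 2, -(1 + cw), (1 + cw) / 2)
    else:
        raise ValueError(kind)
    a0 = 1 + alpha
    return (b[0] / a0, b[1] / a0, b[2] / a0, -2 * cw / a0, (1 - alpha) / a0)

def biquad_response(c, f, fs):
    """Complex frequency response of one biquad section at frequency f."""
    b0, b1, b2, a1, a2 = c
    z = cmath.exp(-2j * math.pi * f / fs)  # z^-1 evaluated on the unit circle
    return (b0 + b1 * z + b2 * z * z) / (1 + a1 * z + a2 * z * z)

def lr4_bands(f, f0=1000.0, fs=48000.0):
    """Lowpass/highpass responses of a 4th-order Linkwitz-Riley crossover:
    each band is one Butterworth section squared (i.e. cascaded twice)."""
    lp = butterworth_biquad("lowpass", f0, fs)
    hp = butterworth_biquad("highpass", f0, fs)
    return biquad_response(lp, f, fs) ** 2, biquad_response(hp, f, fs) ** 2
```

At 1 kHz each band comes out at exactly 0.5 (−6 dB), and |lowpass + highpass| stays at 1.0 across the spectrum – the flat-summing property that makes LR4 the standard choice for multi-band EQs and compressor/limiters.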

Federico Foderaro’s Amazing Max Stuff
https://cycling74.com/2016/09/06/federico-foderaros-amazing-max-stuff/
Wed, 07 Sep 2016

Those of you who keep track will recall me mentioning Federico Foderaro in the past. He’s responsible for driving much of the content development and tutorials in a few of the key Max and Jitter Facebook groups.

He’s recently shifted gears to a new series – one I’m sure many of you will appreciate – on JavaScript for Jitter, a topic not often covered in video tutorials.

The Point Clouds tutorial below is an excellent start. Be sure to go back and check out the others in the series, and bookmark his Amazing Max Stuff Facebook page to keep an eye out for regular updates.

Review: Valhalla VintageVerb
https://cycling74.com/2016/09/06/review-valhalla-vintageverb/
Tue, 06 Sep 2016

One of my friends made an off-the-cuff statement that has stuck with me: “When I die, bury me in reverb…”. I can understand where he is coming from – especially since I found a beautiful-sounding, simple-to-edit reverb that I literally use in every patch and Live project. That reverb is Valhalla VintageVerb, and it represents the best $50 I’ve ever spent in my musical life.

What makes it so special? Well, first of all, the reverb algorithm is spectacular, avoiding most of the plastic/hollow sound of similar algo-verbs. Secondly, its interface makes it easy to zero in on (and change) the most important controls – especially with the huge Decay control, which can change the reverb decay time from 0.20 seconds to a whopping 70 seconds. Round this out with some tweaky options for damping, modulation and EQ, and you have a fully-spec’d reverb that can be turned into almost any classic reverb sound with ease.

And in the end, it’s the sound that counts. I find the Valhalla VintageVerb to be – hands down – the best sounding reverb, either hardware or software, that I’ve ever experienced. Given that it is a simple VST effect, it couldn’t be much easier to include within my favorite patches.

What connects them all is that it gets really hard for me to make a separation between designing the hardware/instrument/installation and programming it. It could be a dedicated hardware for a dedicated program, like in the Alle Komponerer piece or something more reusable – like the effects and instruments that I use with the guitar (which are mostly Max for Live devices).

At the same time, I have a hard time distinguishing between instrument design and composition. The instruments are mostly custom made/programmed for one special piece anyway, so they are already crucial to the sound of the finished piece. So in my creative process there is no real separation between making art and concepts and programming and composing, and Max is the tool that keeps these borders fluid and permeable, which is crucial to me.

It seems like you enjoy working in collaboration with others. What are some of the difficulties you face in collaborating – especially if you are trying to inject or interface your tech with others' work?

For me there are mostly two different things to consider: technical collaboration (when you are developing the piece, installation etc.) and artistic collaboration.

Let me start with the technical part. I often work together with Timm Ringewaldt in the duo Audiokolor, and there we use Max projects a lot – they keep everything neatly organized, and if you share them on Dropbox and stay a little disciplined, you can go a long way. The next level is putting the whole thing under source control like Git.

As far as connecting to different tech – sensors, lights, data input, etc. – in Max, there is an object for almost everything, and if there isn't, you just make one. It's amazing what you can often achieve with some MIDI and a little bit of tinkering.

As for artistic collaboration, it is different for different projects, of course – working together on concepts and design and that kind of thing. When actually playing with people in a band-like situation, I’ve found that I reduce the dependency on tech as much as possible, meaning no sync, no data exchange, and so on – just people with instruments (self-programmed, sometimes) that play together and react to each other. Very old school.

I know that you’ve done some travel related to your artwork. Where are some of the places you’ve gone, and where did you perform/install the work?

Various places: the Angelica Festival in Bologna, Signal and Noise in Vancouver, Portikus in Frankfurt, HBC Berlin, Coded Cultures in Vienna, the Austrian EXPO Pavilion in Shanghai, the Neue Nationalgalerie in Berlin, the Oslo Museum of Contemporary Art, Go with the Flø in Norway, and so on. But whatever the place, the people you meet are the real deal.

There is this modestly sized but very dedicated community of people that are into this kind of art-technology crossover and you keep running into them all over the place, which is just awesome :)

Impressive list of locations! Do you have any problems moving between countries? Are there limits to the technology you can use, or do you find other limits more widespread?

Nope – as long as they let me into the country and can provide power and (sometimes) Internet, everything is fine. On the contrary, I find different environments very inspiring.

If you were starting new today, what would you do to prepare for the future in the media art world?

The funny thing is that I never prepared to be part of anything. I studied Jazz guitar and Computer Science out of interest, played in bands, and did all kinds of weird shit. So, I guess I wouldn’t do anything different. Just do stuff, because then you learn stuff.

A Few Minutes with Nathan Wolek

Nathan Wolek is an audio artist and researcher as well as Associate Professor of Digital Arts and Chair of the Creative Arts Department at Stetson University in DeLand, FL. His technical and research work has recently been featured in the Max Package Manager. I caught up with him to ask a few questions about his artistic and teaching life…

What got you started?

My creative life in music technology started because there was a new course offering during my undergraduate degree called “Computer Music”. Prior to that, I had really never considered that these two words would go together as a field of study. I was immediately intrigued and talked my way into the course even though it was at the 400-level and I was only a sophomore at the time. We worked our way through the Curtis Roads tutorial and used Csound in that class running on an SGI workstation. I was hooked after that. Being able to craft my own sounds at that level of detail was amazing to me.

The next semester (spring 1997), there was a course called “Advanced MIDI Techniques” with the same professor, so I signed up for that one too. We used Max in all its pre-MSP glory. I still have a folder on my hard drive with those projects and amazingly, I can still open those patches in Max 7! I just checked by opening a project from that term with the filename “Algorithmic Composer”.

Can you tell us more about the Mobile Performance Group? How did it get started? Are there any particularly memorable or crazy experiences performing with that collective?

It was the concept of my colleague, Matt Roberts. He had this idea to do audio-visual street performance using our laptops and other technologies. I signed on immediately because I thought it was a good way to give our students experience performing with the tools we were teaching them. Something about having a gig helps to raise the expectation level. Together we were trying to push the limits by capturing sounds and video from a place, then re-working that into material for performance. In 2005, the gear was just starting to allow us to record directly onto digital formats like flash cards. You can lose a lot of time moving a video or audio tape to a digital file on your computer.

So many memories and a huge collection of audio files that I continue to mine occasionally. One thing I will never forget is performing on the streets of San Jose during ISEA 2006. Up walks a guy that offers us some cash like we were busking, which we hadn’t really considered. I mean, who busks with laptops, LCD projectors, and MIDI controllers, right? Then he introduced himself as Scot Gresham-Lancaster! That was an amazing moment to be performing with MPG and have a member of The Hub come up and compliment us on our work, not just with words, but with cash!

I am super proud of the work Matt and I did with our students during that time period. Many of those students have gone on to do some awesome things. Since 2013, Matt and I have gone in different directions creatively. He is doing some great work in augmented reality. I've been focusing more on DSP programming and sound art projects.

How does your role as teacher/mentor/educator inform your art? How does your artistic practice inform your role as teacher/mentor/educator?

I think it is important for educators at the college level to be active themselves, practicing whatever it is they teach. You need to model for your students what it looks like to be actively engaged in your topic of expertise. Telling a student that you can’t meet with them that afternoon because you are working on a creative or research project is a healthy thing for them to hear. They need to know that I am being intentional about setting aside time for my work that is separate from the time I spend teaching, because I expect the same from them. Where I teach now really supports a healthy kind of balance between mentoring and research. Stetson calls it the “teacher-scholar” model.

Sometimes it is hard to separate the two, because when teaching is done well, there is an element of performance and artistry to it. And being a musician by training makes me very conscious of timing and duration. The class meeting and the semester are just two kinds of duration, so I often find myself organizing them in much the same way I would a composition. Some sections are fixed, some are aleatoric, then I want the class to end a certain way with a culminating project.

How do you know when something you are working on is finished?

Usually because the deadline is near. But seriously, I have always been interested in algorithmic processes that drive the work. So there is not always a need to have a definitive, final version of my work. For that reason, I do a lot of listening. Tweak the patch – listen – tweak the patch – listen – repeat. And sometimes I sit in my office working on other things with a patch on in the background because I want to hear how the results evolve (or don’t) over longer periods of time. When it starts to hold my attention for more than an hour, then I know I am nearing completion.

What’s next?

On the teaching side, I have been revamping my Computer Music course this summer. I should point out that this is the same course that I mentioned in response to the first question, because Stetson is also where I did my undergrad. It’s pretty cool to be charged with teaching the course that got me into music technology. We have long since left Csound and started using Max exclusively in that course, by the way. But this year, I wanted to mix things up again, so I am adding Ableton Live, Max for Live and some analogue synth modules into the course. I am trying a comparative approach, where we will look at something like frequency modulation and then see how to approach that technique with each software and hardware tool. It’s been a lot of work and classes start this week, so I am pretty excited to roll that out to my students. To get ready for that this summer, I have been spending time working on my Ableton Live chops. I have about 20 different projects started and in various stages. My goal was to try out a lot of things and just make sure I am comfortable. I imagine some of those will eventually turn into finished pieces.
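As a side note for readers following along: the frequency modulation technique mentioned above is compact enough to state directly in code. Here is a minimal two-operator FM sketch in Python with NumPy – an illustration of the general technique, not of any of the specific tools named in the interview:

```python
import numpy as np

def fm_tone(carrier_hz, mod_hz, index, dur=1.0, sr=44100):
    """Two-operator FM: a sine carrier whose phase is modulated
    by a second sine, scaled by the modulation index."""
    t = np.arange(int(dur * sr)) / sr
    modulator = index * np.sin(2 * np.pi * mod_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + modulator)

# An index of 0 collapses to a plain sine; raising it adds sidebands
# spaced at multiples of the modulator frequency.
tone = fm_tone(440.0, 220.0, index=3.0, dur=0.5)
```

The same comparison works in any of the tools: the carrier, modulator, and index map directly onto oscillator patching in Max or Live, or onto a VCO's FM input on an analogue module.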

On the creative side, I have a few projects slated for this coming year. I will be working with Virgil Moorefield on his next piece for a premiere in late October, then I am doing a sound design for a theatre project here at Stetson early in 2017. I spent a lot of time in 2015 working on “every tree”, a large sound installation project where I used Max to algorithmically edit almost 6 hours of binaural audio recordings. I still think there is more to be done with that combination of rapid edits and binaural recordings, so I have some ideas for subjects to tackle with them.

Then of course, there is always work to be done for Jamoma and the LowkeyNW package. Overall, there is never a shortage of interesting work to be done, which is what I love about this stage of my career! But the trick is not letting the necessary work crowd out the interesting stuff.

Demystifying Filters Video Tutorial (Tue, 16 Aug 2016)

In previous tutorials, I provided a tour of filtering tools for Max users, and also discussed using Javascript for buffer access. In this 28-minute video, I'll build on those skills to create some filters from scratch in MSP and Gen, and examine their characteristics.
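As a taste of the filters-from-scratch approach, here is a one-pole lowpass – one of the simplest filters you can build – sketched in Python. This is illustrative only; the video itself works in MSP and Gen, where the same recurrence maps onto a single history/feedback loop:

```python
def one_pole_lowpass(x, coeff):
    """y[n] = y[n-1] + coeff * (x[n] - y[n-1]).
    coeff in (0, 1]: a value of 1.0 passes the input through
    unchanged, while smaller values smooth (low-pass) it more."""
    y, out = 0.0, []
    for sample in x:
        y += coeff * (sample - y)
        out.append(y)
    return out

# A unit step settles toward 1.0; the smaller the coeff, the slower.
smoothed = one_pole_lowpass([1.0] * 10, coeff=0.5)
```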

A Few Minutes with BEAP, Part 11 (Tue, 16 Aug 2016)

In Part 11 of the "A Few Minutes with BEAP" tutorial series, we continue creating systems using BEAP modules. This time out, we'll build something that begins with the Karplus – a BEAP oscillator based on the Karplus-Strong plucked string algorithm. Please join me for a restful 15-minute patch-along – I won't string you along!
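For readers curious about what's under the hood of a plucked-string model, the core of the Karplus-Strong algorithm fits in a few lines. Here is a minimal sketch in Python – purely illustrative, and my own simplification rather than the BEAP module itself, which is built in Max:

```python
import random

def karplus_strong(freq, dur, sr=44100):
    """Karplus-Strong: fill a delay line with noise, then repeatedly
    emit the front sample and feed back the average of the first two.
    The averaging acts as a lowpass, so the 'string' decays naturally."""
    n = int(sr / freq)                      # delay length sets the pitch
    delay = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(dur * sr)):
        first = delay.pop(0)
        out.append(first)
        delay.append(0.5 * (first + delay[0]))
    return out

pluck = karplus_strong(220.0, dur=0.5)      # half a second of a 220 Hz pluck
```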

What You Hear Is What You See: An Interview with Andrew Blanton (Tue, 09 Aug 2016)

Andrew Blanton is a media artist and percussionist. He received his BM in Music Performance from the University of Denver (2008) and a Master of Fine Arts in New Media Art from the University of North Texas (2013). He is currently an Assistant Professor of Digital Media Art at San Jose State University in San Jose, California, teaching data visualization, and a Research Fellow in the UT Dallas ArtSciLab in Dallas, Texas. His current work focuses on the emergent potential between cross-disciplinary arts and technology, building sound and visual environments through software development, and building scientifically accurate representations of complex data sets as visual and sound compositions. Andrew has advanced expertise in percussion, creative software development, and developing projects at the confluence of art and science.

Your early background was in percussion performance. How did you get involved in new media? Was it a natural progression for you or a departure?

I started playing the drums in 5th grade and have not stopped since. I see the use of digital tools as a very natural extension of my percussion practice. At the University of Denver, where I studied classical percussion, I focused explicitly on acoustic instruments. By the time I finished my undergrad, I really wanted to expand the sounds I could produce with percussion, and I was also getting very interested in the physical properties of sound. By augmenting physical instruments with software-based signal processing in Max, I was able to begin constructing reverberant structures not possible in physical space.

Having the opportunity to study at North Texas in the iArta cluster really allowed me to hone my practice in transdisciplinary ways. There I focused explicitly on cross-modal representation of the human sensorium (representing visuals sonically, the haptic representation of sound, etc.). At the same time, I had been studying phenomenology (principally Heidegger and Merleau-Ponty) and thinking a lot about representing these ideas in both New Media Art and classical music. This continues to be a major trajectory of my research. I'm really interested in how we as humans interpret sensorium, whether expressed visually, sonically, or otherwise. So for me, the two are deeply intertwined.

You work with a wide range of hardware and software instruments and tools both as an educator and visual and sound artist with Max often playing an important role. How does Max fit into your practice?

At this point, Max is important to my practice in three primary ways. First, it feels like a very fluid environment for sketching out ideas rapidly; second, it acts as a great glue – really, a central core – for connecting multiple environments; and finally, I have been trying to add more organization and architecture to my patches, building robust standalone applications for performance. My ideal scenario is software that I can have open and ready to play in one click, with reusable smaller components throughout. Because Max is an amalgam of different types of data processing (numbers, signals, and matrices), it works really well as a platform for making connections. For instance, if I'm using custom-built drums and microphones with Max, I can easily connect to other environments such as Processing, Unity, Maya, openFrameworks, node.js, etc.

Your recent work seems to deal a lot with visualization and sonification, sometimes by directly mapping data sets but also through modeling networked behaviors. Can you talk a little bit about your approach and some of the work that's come out of it?

Networks really interest me lately. I have been sonifying and visualizing node-edge graphs, including human connectome data, as part of a collaborative team of artists and scientists, as well as using the physicality of the internet as a resonant chamber. In particular, I have been sending impulses through the networks and generating responses based on the edge weights of the graphs. I guess it's something like clapping your hands in a big concrete room and hearing the sound reflect off all the surfaces, but in this case I'm listening to the connections of the network. Because these environments are so flexible and virtual, each component of how that impulse reflects through the network is controllable. Technically, I achieve this sonically by using gen~ with custom multi-tap delay lines, feeding the data into the delay lines, or by using a general multiband effect and affecting each band with data, among other processes.
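One way to picture the multi-tap-delay idea Andrew describes: treat each graph edge as a delay tap whose time and gain are derived from the edge weight. Here is a toy sketch in Python – my own simplification for illustration only; the actual work uses gen~, and the particular mapping from weights to taps here is hypothetical:

```python
def network_echo(impulse, edges, length):
    """Reflect an impulse off a weighted graph: each edge becomes a
    delay tap, with the edge weight setting delay time and gain.
    `edges` is a list of (delay_samples, gain) pairs."""
    out = [0.0] * length
    for i, x in enumerate(impulse):
        out[i] += x                         # the direct signal
        for delay, gain in edges:
            if i + delay < length:
                out[i + delay] += gain * x  # one 'reflection' per edge
    return out

# A single clap (unit impulse) heard through three weighted connections
response = network_echo([1.0], edges=[(3, 0.5), (5, 0.25), (8, 0.1)], length=10)
```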

We talked recently about your new work Waveguide, which seems like a really good example of this investigation. Can you tell us about it?

As an extension of the idea of network resonance, I was working on sending data from Max to a node server to be able to play the audience's cell phones in real time. I had been in conversation with artist and theorist Yvette Granata (who wrote the text for the piece) about the conceptual framing of the work. We had talked a lot about the interesting new challenge of our constantly divided attention between reality and our digital devices, both inside and outside of the concert hall. This led to the idea of taking over people's devices and extending control over that space within the concert hall. We wanted to embed a critical discourse within the technology for the performance. Interestingly, this opens up many possibilities when all of a sudden you have access to a mass array of tiny cell phone speakers and screens in the performance space. Listening to the resonance of the network through a large participatory installation/text/performance is, for me, a way to appropriate the audience's smartphones for a bigger communal experience, by somehow exposing the interconnectivity and networked nature of these devices. This will all be presented as a new work, Waveguide, at Gray Area Theater in San Francisco on September 3rd as part of the Soundwave Biennial.
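At bottom, the Max-to-server link Andrew describes is a message over the network. As a stand-in illustration, here is a minimal UDP trigger in Python – purely hypothetical, since the piece itself uses Max talking to a node server, which in turn drives the phones over the web; the `/phones/play` address below is invented for the example:

```python
import socket

def send_trigger(message, host="127.0.0.1", port=9000):
    """Fire a one-shot UDP datagram, standing in for the kind of
    Max -> server message that cues sound on connected devices."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (host, port))

# e.g. send_trigger("/phones/play 440")  # hypothetical address and argument
```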

This has been an ongoing body of research for me, some of the works have included the visualizations and sonifications of a neural spiking network, visualizations and sonifications of human connectome data, and sonifications of star data to name a few.

As an assistant professor at San Jose State University, you have been teaching Max in the Digital Media Art program for a little while now. What do your classes focus on, and do you think teaching Max has changed the way that you work with it?

Teaching Max has been such an interesting experience. I personally started learning Max in a very unstructured way, and it was not until I had been using Max for about three years that I got the opportunity to study with Darwin Grosse. That class helped me frame a basic set of objects and learn how to solve problems with those objects. I try to approach my classes with the same pedagogical method. It's always fun and surprising to see the creativity expressed when students bring fresh eyes to assignments. I think a lot of times students can be overwhelmed by the possibilities of Max. By limiting each week to the formal introduction of a few objects while presenting conceptual and artistic problems for the students to solve, two paths can be undertaken simultaneously: first, learning the technology; and second, positioning their work not just as technical demonstrations, but as working toward conceptual ends as well. Through this process we have led large-scale collaborations with the School of Dance at San Jose as well as the School of Music. For instance, last semester the students from my data visualization class, along with our interactivity class led by my colleague Craig Hobbs, were able to work with the School of Dance to create real-time animation for performance. My class also collaborated with Pablo Ferman's composition class to create real-time audio-visual works.

What’s next for you and where do you see your work going?

I'm really interested in the politics of performance venues – the top-down nature of the performer on stage disseminating experience to a passive audience. I'm also focused on creating work that highlights our experiences, our perceptions, as humans. This, for me, has been a major point of exploration in digital space. For instance, if I'm having a conversation with someone on Facebook, that interaction is limited to just the text. We miss out on all of the other social cues that we interpret as part of a conversation. In that way, technology limits our understanding of empathy and interpersonal communication. I'm really interested in building art and music that brings people together and furthers understanding of our lived experience, not diminishing it or refining it for data and analytic consumption. Building software that helps us understand what it means to be human is a primary goal of mine, and doing so in collaboration with the audience is ideal. The underlying question is: can we as artists form a discourse about what all of this technology in our lives means? I'm really interested in the ways that classical music (an ancient art form) is evolving with technology, and the concert hall is the perfect place to explore that territory.