The idea and approach themselves go back a few years. One of the groups I was performing in at the time, the recently retired Takahashi’s Shellfish Concern, started moving away from the large paintings we were creating during performances. At the time we (Angela Guyton, Anton Hunter, and myself) would perform with contact mics attached to the back of a large canvas. Angie would paint, and Anton and I would process the audio from the contact mics.

(click to view performance)

This was a very exciting way to perform, in no small part because of its transience. Angie would add and remove paint, markings, and material, creating a dynamic, living performance. One thing became an issue, though: artifacts. Unfortunately, an aesthetically pleasing painting was left at the end of each performance. It is beautiful not only for its forms, shapes, and gestures, but for the snapshot of the process it represents. It is the very nicely printed receipt for 1 x genuine aesthetic experience.

We wanted to move away from this. From the object-oriented nature of a physical piece of art. The art should be, like the music, ephemeral. It should dissolve. This is something that Angie and I discussed a lot, and you can read some of her thoughts about this here.

We moved towards light. At first this came through experiments using a projector, as we had one readily available. The initial ideas were simple: big bursts of color that would fade over time, combined with long patches of darkness. Here is an unusual transitional performance which used a canvas as well as a projector behind the painting, backlighting it:

We eventually purchased some DMX lights, which we started using in a similar way, knowing that the goal was to eventually be left only with light. Using just light as a performance medium. The big shift came when we got the idea to smear an image in time, one color at a time, approaching light like a visual granular synth. I built a Max patch that lets you load any image, select a subsection of that image, and use that as a pool of colors that would be sent to the DMX lights.
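The core of that idea can be sketched outside of Max. This is a minimal Python stand-in (my own names and structure, not the actual patch): treat an image as a grid of RGB pixels, carve out a rectangular subsection, and draw one color at a time from that pool as the value sent to an RGB DMX fixture.

```python
import random

def color_pool(image, x0, y0, x1, y1):
    """Collect every pixel in the rectangle [x0, x1) x [y0, y1)."""
    return [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]

def next_dmx_frame(pool, rng=random):
    """Pick one color from the pool; channels are already 0-255."""
    return rng.choice(pool)  # (r, g, b) for one RGB fixture

# A tiny 2x2 "image": each entry is an (r, g, b) pixel.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 0)],
]
pool = color_pool(image, 0, 0, 2, 2)
frame = next_dmx_frame(pool)
```

In a real setup the image would come from a file and each frame would be written to the fixture's red/green/blue channels over a DMX interface; the sketch only shows the color-pool logic.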

(click to download the (messy) patch)

When the colors are triggered at a high speed (80-150ms) you get an impression of the image, but it’s difficult to pick out individual colors. It just feels like the image. There are also really crazy things that seem to happen with your eyes. When you turn the lights on and off quickly your pupils attempt to dilate to compensate, but are unable to do so, adding a physical sensation to the experience. This is somewhat similar to the feeling of looking at a strobe light, but when you use random variations in timing you get a more unpredictable sensation, which counteracts the strobe-ness.
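The timing idea above can be sketched as follows, using the 80-150ms range from the post (function names are my own): rather than a fixed strobe rate, each trigger interval is drawn at random from that range, so no steady pulse emerges.

```python
import random

def trigger_times(n, lo_ms=80, hi_ms=150, rng=random):
    """Return n cumulative trigger times in milliseconds,
    with each inter-trigger interval randomized."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.uniform(lo_ms, hi_ms)  # jittered interval
        times.append(t)
    return times

times = trigger_times(10)
```

A scheduler would then fire a new color at each of these times instead of on a metronome tick.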

Here is a video we shot around that time, testing some of the light behaviors, as well as some of Angie’s biometric control of the system: (It should be noted that the PWM used to dim LED-based DMX lights conflicts with the shutter speed of the camera, creating really drastic light banding. In person, the lights do not band at all.)

Here is a video showing more examples of the artwork Angie and I (as well as other collaborators) have made over the years:

The core ideas from this patch are used in the Light Vomit performance video at the start of this blog post, but in a different manner. For this performance I wanted the lights to be reactive to the sound. Violently so. This is something I’ve been experimenting with in another project (which is still in development (but it’s…..fucking crazy)). So I made a patch that combined aspects of the Color Picker patch with some of the audio analysis-based processing of The Party Van.

The patch looks like this:

(click to download the (VERY messy) patch)

In the screenshot above you can see the presets which are assigned to the 10 buttons of my SoftStep. On the left is the audio processing, and on the right is the corresponding DMX behavior. The audio processing is quite simple in this patch: Onset is simple attack detection, AutoSample records and plays back audio (again based on audio analysis), and Random Stutter turns the Stutter effect in The Party Van on and off.
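For a rough idea of what "simple attack detection" can mean, here is a minimal sketch (my own stand-in, not the actual Onset module): a frame counts as an attack when its RMS energy jumps past the previous frame's energy by a fixed ratio, above a noise floor.

```python
def rms(frame):
    """Root-mean-square energy of one frame of samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def detect_onsets(frames, ratio=2.0, floor=0.01):
    """Return indices of frames whose energy jumps past
    the previous frame's energy by `ratio`."""
    onsets = []
    prev = floor
    for i, frame in enumerate(frames):
        e = rms(frame)
        if e > floor and e > prev * ratio:
            onsets.append(i)
        prev = max(e, floor)
    return onsets

# Quiet, quiet, loud, loud: expect a single onset at the jump.
frames = [[0.0] * 4, [0.001] * 4, [0.5] * 4, [0.5] * 4]
onsets = detect_onsets(frames)  # [2]
```

Each detected onset would then trigger a light event or an AutoSample capture.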

For the DMX behaviors, I first had Angie create a small library of color palettes and pre-recorded light patterns (using the Color Picker patch). I had used random color interaction in the Grassi Box videos before, but really wanted to capture the feel of the Color Picker patch, and Angie’s manual and biometric control of it. That overwhelming Light Vomit vibe. These pre-recorded patterns are used in conjunction with the audio analysis to create an intertwined audio/visual experience that I am very much immersed in, as one of the DMX lights is pointing directly at the snare.
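One simple way to combine the two layers (a sketch with my own names, not the actual patch logic) is to loop through a pre-recorded color pattern and let the audio analysis override it: whenever an attack is flagged, the looping frame is replaced with a flash.

```python
def light_frames(pattern, attack_flags, flash=(255, 255, 255)):
    """Step through a looping pre-recorded pattern, but override
    the frame with a flash whenever an attack is flagged."""
    frames = []
    for i, flagged in enumerate(attack_flags):
        base = pattern[i % len(pattern)]  # loop the recorded pattern
        frames.append(flash if flagged else base)
    return frames

pattern = [(200, 0, 0), (0, 0, 200)]       # red / blue alternation
attacks = [False, True, False, False, True]
frames = light_frames(pattern, attacks)
```

The real behavior set is richer than a white flash, but the structure, pre-recorded material interrupted by analysis-driven events, is the same.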

The less-than-dramatic (although festive!) setup looks like this:

One DMX light directly under the snare, and a second one just behind the chair pointing towards the ceiling. In the photo you can also see the SoftStep, and the assorted crotales, pot lids, and cymbals used in the performance.

I plan on developing the ideas I’ve tested in the Light Vomit patch further in a couple of ways. First, I plan on expanding the general performance setup based around this approach to light, which I will use again in another performance in early 2016, and later in the 3rd and final .com piece, which will use DMX lights as a kind of encoded notation. I plan on using the time-smeared, overwhelming sensation (where it’s difficult to pick out individual colors) to communicate elements of form and behavior in the piece.

Here are some of my early sketches for the piece (involving a kind of short-scale time-travel!):