Friday, January 16, 2009

This post is (belated) documentation of a project I worked on in 2007-8, creating an audio-responsive generative system for a permanent installation at the Jackie Chan Science Centre (yes, that Jackie Chan) at the John Curtin School of Medical Research, on the ANU campus. Along with some Processing-related nitty gritty, you'll find some broader reflections on generative systems and the design process. For less process and more product, skip straight to the generative applets (and turn on your sound input).

In mid 2007 my colleague Stephen Barrass and I were approached by Thylacine, a Canberra company specialising in urban art, industrial and exhibition design. Caolan Mitchell and Alexandra Gillespie were designing a new permanent exhibition, the first stage of the new Jackie Chan Science Centre, housed in a new building - a razor-sharp piece of contemporary architecture (below) by Melbourne firm Lyons. Instead of just bolting a display case and a few plaques to the wall, Mitchell and Gillespie (wonderfully) proposed a design that hinged on a dynamic generative motif - a system that would ebb and flow with its own life cycles, and echo the spiral / helix DNA structures central to the School's work, and already embedded in the building's architecture.

My initial sketches (below) took the spiral motif fairly literally, drawing vertical helices and varying their width with a combination of mouse movement and a simple sin function - the results reminded me of the beautiful spiral egg cases of the Port Jackson Shark. At that stage we were talking about the possibility of projecting back onto the facade of the building, which has big vertical glass panels; this structure informed the vertical format. I made a quick video mockup of the form on the facade - which was incredibly easy, thanks to the robust, adaptable, extendable goodness of Processing (a recurring theme in the process to come).
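The core of a sketch like that is tiny. Here's a rough Java rendition of the idea - names and constants are my own illustration, not the original code - computing the x-offset of a vertical helix whose radius "breathes" via a second, slower sine:

```java
public class HelixSketch {
    // Hypothetical reconstruction: x-offset of a vertical helix at normalised
    // height y (0..1), with the radius itself modulated by a slower sine so
    // the form swells in the middle, like the shark egg cases mentioned above.
    static double helixX(double y, double turns, double baseRadius, double widthMod) {
        double angle = y * turns * 2 * Math.PI;                               // position around the helix
        double radius = baseRadius * (1 + widthMod * Math.sin(y * Math.PI));  // width swells mid-height
        return radius * Math.sin(angle);
    }
}
```

In the actual sketches the width modulation was also driven by mouse movement; here it's just the fixed `widthMod` factor.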

These sketches meet the simplest criteria of the brief (spiral forms) but do nothing about the more interesting (and difficult) ones: cycles of birth, growth and death, and dynamics over multiple time scales. Over the next couple of months I developed two or three different approaches to this goal.

The phyllotaxis model blogged earlier was one attempt. Spurred on by the hardcore a-life skills of Jon McCormack and co. at CEMA, I built a system in which phyllotactic spirals self-organised spontaneously. As Jon put it: anyone can draw a spiral; what you really want is a system out of which spirals emerge! The model worked, but I had trouble figuring out how phyllotactic spiral forms might meaningfully die or reproduce. Also, by that stage I had two other systems that seemed more promising.

From the early stages I wanted to make the system respond to environmental audio. The installation would be in a public foyer with plenty of pedestrian traffic, so audio promised a way to tap in to the building's rhythms of activity at long time scales, as well as convey an instantaneous sense of live interaction. In the two most developed sketches audio plays a key role in the life cycle of the system.

One sketch moved into 2d, and started with a pre-existing model for growth, by way of the Eden growth algorithm (this system would later be adapted again into Limits to Growth). I had already been playing with an "off-lattice" Eden-like system where circular cells could grow at any angle to their parent (rather than the square grid of the original Eden model). This system also made it easy to vary the radius of those cells individually. The next step was to couple live audio to the system; following a physical metaphor, frequency is mapped to cell size, so that larger cells respond to low frequency bands, and smaller cells to high frequencies. Incoming sound adds to the cell's energy parameter; this energy gradually decays over time in the absence of sound. Cell reproduction, logically enough, is conditional on energy.
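In rough Java form, the energy rule looks something like this - a reconstruction of the mechanism described, with illustrative constants, not the original code:

```java
public class AudioCell {
    // Hypothetical reconstruction of the energy rule: each cell is "tuned" to a
    // spectrum band by its radius (big cell -> low band), incoming band energy
    // tops up the cell, and energy decays every step in the absence of sound.
    double radius;
    double energy = 0;
    static final double DECAY = 0.95;        // assumed decay factor per step
    static final double REPRODUCE_AT = 1.0;  // assumed reproduction threshold

    AudioCell(double radius) { this.radius = radius; }

    // Map radius to a band index: the largest cells listen to band 0 (lowest frequencies).
    int band(int numBands, double minR, double maxR) {
        double t = (maxR - radius) / (maxR - minR); // 0 for biggest, 1 for smallest
        return Math.min(numBands - 1, (int) (t * numBands));
    }

    void step(double[] spectrum, double minR, double maxR) {
        energy += spectrum[band(spectrum.length, minR, maxR)];
        energy *= DECAY;                     // energy fades without sound
    }

    boolean canReproduce() { return energy > REPRODUCE_AT; }
}
```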

The result is that cells which are best "tuned" for the current audio spectrum will accumulate more energy, and so are more likely to reproduce, spawning a neighbour whose size (and thus tuning) is similar to, but not the same as, their own; so over time the system generates a range of different cell sizes, but only the well-tuned survive. The rest die, which in the best artificial life tradition, means they just go away - no mess, no fuss. In the image below cells are rendered with stroke thickness mapped to energy level. The curves and branches pop out of rules sprinkled lightly with random(), resulting in a loose take on the spiral motif, which is probably the weak point in this sketch. I still think it has potential - nightclub videowall, anyone? Try the live applet over here (adjust your audio input levels to control the growth / death balance).
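The "similar to, but not the same as" spawning rule is just a small random nudge to the parent's size - something like this sketch (the mutation range is my guess, not the original value):

```java
import java.util.Random;

public class CellMutation {
    // Hypothetical sketch of the spawning rule: a child's radius is the parent's,
    // scaled by a small random factor, so the population drifts toward whatever
    // sizes (i.e. frequency tunings) the current audio environment rewards.
    static double childRadius(double parentRadius, double mutation, Random rng) {
        double factor = 1 + (rng.nextDouble() * 2 - 1) * mutation; // e.g. +/- 10%
        return parentRadius * factor;
    }
}
```

Selection then comes for free: badly tuned children never accumulate the energy to reproduce, and simply die off.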

The third model takes this approach to energy and reproduction - about the simplest possible a-life simulation - and folds it back into the helical structures of the first sketches. In this world an individual is a 3d helix, built from simple line segments. Again each individual is tuned to a frequency band, which supplies energy for growth; but here "growth" means adding segments to the helix, extending its length. Individuals can "reproduce", given enough energy, but here reproducing means spawning a whole new helix, with a slightly mutated frequency band. All the helices grow from the same origin point - they form a colony, something like a clump of grass.

This sketch went through many variants and iterations over the next month or so; in retrospect the process of working to a brief, within a design team, pushed this system further than I ever would have taken it myself. At the same time I was testing the system against my own critical position; I've argued earlier that the generative model matters, not just for its generativity but the entities and relations it involves.

From that perspective this system was full of holes. Death was arbitrary: just a timer measuring a fixed life-span. "Growth" was a misnomer: the number of segments was simply a rolling average of the energy in the curl's frequency band, so the curls were really no more than slow-motion level meters. Taking the organic / metabolic analogy more seriously, I worked out a better solution. An organism needs a certain amount of energy just to function; and the bigger the organism, the more energy it needs. If it gets more than it needs, then it can grow; if it gets less than it needs, for long enough, it will die. So this is a simple metabolic logic that can link growth, energy and death. Translated into the world of the curls: for each time step, every curl has an energy threshold, which is proportional to its size (in line segments); if the spectral energy in its band is far enough over that threshold, it adds a segment - like adding a new cell to its body; if the energy is under that threshold, it doesn't grow; and if it remains in stasis for too long, it dies. Funnily enough, the behaviour that results is only subtly different to the simple windowed average. Does the model really matter, in that case? It does for me at least; if and how it matters for others is another question.
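That metabolic rule condenses into a few lines. Here's a reconstruction in plain Java - the structure follows the description above, but the constants are illustrative, not the installation's actual tuning:

```java
public class Curl {
    // Hypothetical rendition of the metabolic rule: upkeep cost grows with body
    // size; surplus energy adds a segment; too long in stasis kills the curl.
    int segments = 1;
    int stasisSteps = 0;
    boolean alive = true;
    static final double COST_PER_SEGMENT = 0.1; // assumed upkeep per segment
    static final double GROWTH_MARGIN = 0.05;   // assumed surplus needed to grow
    static final int MAX_STASIS = 50;           // assumed stasis steps before death

    void step(double bandEnergy) {
        if (!alive) return;
        double threshold = segments * COST_PER_SEGMENT; // bigger body, bigger appetite
        if (bandEnergy > threshold + GROWTH_MARGIN) {
            segments++;                // surplus energy: grow by one segment
            stasisSteps = 0;
        } else {
            stasisSteps++;             // just surviving (or starving)
            if (stasisSteps > MAX_STASIS) alive = false;
        }
    }
}
```

The nice property is the built-in limit: a constant energy supply can only sustain a body up to a certain size, after which the curl stalls and, eventually, dies.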

Next, the curls developed a more complex life-cycle - credit to Alex Gillespie for urging me in this direction. In line with the grass analogy, curls grow a "seed" at their tip when they are in stasis; when they die, that seed is released into the world. Like real seeds, these can lie dormant indefinitely before being revived - here, by a burst of energy in their specific frequency band. After several iterations, the seed form settled on a circle that gradually grows spikes, all the while being blown back "down" the world (against the direction of growth) by audio energy (below). As well as adding graphic variety, seeds change the system's overall dynamics. Unlike spawned curls, seeds are genetically identical to their "parent" - attributes such as frequency band are passed on unaltered. Because each individual can make only one seed, that seed is a way for the curl to go dormant in lean times; if it gets another burst of energy, it can be reborn. The curls demo applet demonstrates this best (again, adjust your audio input and make some noise).
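The seed stage is a simple bit of state: dormant until a burst of energy arrives in the inherited band. A minimal sketch, with the revival threshold as an assumed value:

```java
public class Seed {
    // Hypothetical sketch of the seed stage: genetically identical to its parent
    // (the frequency band is passed on unaltered), and dormant indefinitely
    // until a burst of energy in that specific band revives it.
    final int band;                          // inherited unchanged from the parent curl
    boolean dormant = true;
    static final double REVIVAL_BURST = 0.8; // assumed energy needed to germinate

    Seed(int parentBand) { this.band = parentBand; }

    // Returns true once this seed has been woken into a new curl.
    boolean step(double[] spectrum) {
        if (dormant && spectrum[band] > REVIVAL_BURST) dormant = false;
        return !dormant;
    }
}
```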

A few technical notes. One big lesson here was the power of transform-based geometry. Each curl is a sequence of line segments whose length relates to frequency band (lower tuned curls have longer segments); each segment is tilted (rotateZ), then translated along the x axis to the correct spot. A sine function is used to modulate the radius of each curl along its length; frequency band factors in here too; this radius is expressed as a y axis translation. Then the segment is rotated around the x axis, to give depth. I iterate this a few hundred times to get one curl, and repeat this process up to twenty times to draw the whole world - each curl has its own parameters for tilt, x rotation increment, and frequency band.
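In the sketch itself this all happens on Processing's transform stack (rotateZ / translate / rotateX); as a plain-Java paraphrase, with the rotations applied by hand and illustrative constants, one segment endpoint works out roughly like this:

```java
public class CurlGeometry {
    // Plain-Java paraphrase of the transform sequence described above.
    // In Processing this is the accumulated matrix stack; here the same idea is
    // flattened into explicit maths. Constants and ordering are illustrative.
    // Returns {x, y, z} for segment i of a curl.
    static double[] segmentPoint(int i, double segLen, double tilt,
                                 double radiusAmp, double xRotStep) {
        double x = i * segLen;                    // translate along x, segment by segment
        double y = radiusAmp * Math.sin(i * 0.1); // sine-modulated radius as a y offset
        double z = 0;
        // rotate around the x axis to give the helix depth
        double a = i * xRotStep;
        double y2 = y * Math.cos(a) - z * Math.sin(a);
        double z2 = y * Math.sin(a) + z * Math.cos(a);
        // finally tilt the whole curl around z
        double x3 = x * Math.cos(tilt) - y2 * Math.sin(tilt);
        double y3 = x * Math.sin(tilt) + y2 * Math.cos(tilt);
        return new double[]{x3, y3, z2};
    }
}
```

Iterating `i` a few hundred times traces one curl; each curl in the colony gets its own tilt, x rotation increment and frequency band.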

In the live applet audio energy ripples up the curls, from base to tip. This was added to reinforce the liveness of the system and add some rapid, moment-by-moment change. It was implemented very simply. I used a (Java) ArrayList to create a stack of audio level values; at each time step, the current audio level value is added at the head of the list, and the ArrayList politely shuffles all the other values along. So each segment's length is a combination of three values: the base segment length, a function to taper the curl towards the tip, and the buffered audio level.
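In code that buffering scheme is about as simple as it sounds - a sketch of the idea (the taper function and capacity are illustrative):

```java
import java.util.ArrayList;

public class AudioRipple {
    // The buffering scheme described above: the newest level goes in at the
    // head of the list (the base of the curl) and older values are shuffled
    // toward the tip, so a loud moment visibly travels up the curl over time.
    final ArrayList<Double> levels = new ArrayList<>();
    final int capacity;

    AudioRipple(int capacity) { this.capacity = capacity; }

    void push(double level) {
        levels.add(0, level);                      // head of the list = base of the curl
        if (levels.size() > capacity) levels.remove(levels.size() - 1);
    }

    // Segment length = base length * taper toward the tip * buffered audio boost.
    double segmentLength(int i, double baseLen) {
        double taper = 1.0 - (double) i / capacity;     // thinner toward the tip
        double audio = i < levels.size() ? levels.get(i) : 0;
        return baseLen * taper * (1 + audio);
    }
}
```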

The graphics are all drawn with OpenGL - following flight404 I dabbled with GL blend modes, specifically additive blending, to get that luminous quality. The other key visual device here is the smearing caused by redrawing with a translucent rect(); instead of erasing the previous frame completely this fades it before overlaying the new frame. It's an easy trick that I've used before. But as Tom Carden explains, in OpenGL it leaves traces of previous frames. I discovered this firsthand when Alex and Caolan asked whether we could lose the "ghosts." I was baffled: on my dim old Powerbook screen, I simply hadn't seen them. Eventually, by juggling alpha values, I could reduce the "ghosts" to almost black (1) against the completely black (0) background - but no lower. Finally I just set the initial background to (1) instead of (0), and the ghosts were gone.
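The reason the fade can never quite reach zero comes down to 8-bit rounding: fading a frame multiplies each pixel by (1 - alpha), but the framebuffer rounds the result back to an integer, so dark values stall just above the background. A toy model of the effect (the rounding rule here is an assumption; real GL blending details vary by hardware):

```java
public class FadeFloor {
    // Toy model of the "ghost" problem: fading by a translucent black rect
    // multiplies a pixel by (255 - alpha) / 255, but the framebuffer rounds the
    // result back to an 8-bit integer, so the fade can stall above background.
    static int fadeOnce(int value, int alpha) {
        return (int) Math.round(value * (255 - alpha) / 255.0);
    }

    // Fade repeatedly and report where the value stops changing - the ghost level.
    static int fadeFloor(int start, int alpha) {
        int v = start;
        for (int i = 0; i < 1000; i++) {
            int next = fadeOnce(v, alpha);
            if (next == v) return v;   // stalled: this value never fades further
            v = next;
        }
        return v;
    }
}
```

With a low fade alpha the floor sits a few values above true black - which is exactly why matching the initial background to the floor, rather than fighting the alpha, made the ghosts disappear.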

The adaptability of Processing came through again when it came to realising the installation. The final spec was a single long custom-made display case, with three small, inset LCD panels. These screens would run slide shows expanding on the exhibition content, but also feature the generative graphics when idle; the case itself would also integrate the curls as a graphic motif. For the case graphics, I sent Thylacine an applet that output a PDF snapshot on a key press; they could generate the graphics as required, then import the files directly into their layout.

The screens posed some extra challenges. The initial idea was to have the screens switch between a PowerPoint slideshow and the curls applet, but making this happen without window frames and other visual clutter proved impossible. In the end it was easier to build a simple slide player into the applet: it reads in images from an external folder, allowing JCSMR to author and update the slideshow content independently.
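The loading step is the whole trick: scan an external folder for image files, so the slideshow can be updated by just swapping files. A sketch of that step in plain Java (not the original code; the accepted extensions are assumptions):

```java
import java.io.File;
import java.util.Arrays;

public class SlideFolder {
    // Hypothetical sketch of the slide player's loading step: list image files
    // in an external folder, sorted, so slides can be swapped without touching code.
    static String[] listSlides(File folder) {
        String[] names = folder.list((dir, name) -> {
            String n = name.toLowerCase();
            return n.endsWith(".jpg") || n.endsWith(".png") || n.endsWith(".gif");
        });
        if (names == null) return new String[0]; // missing folder: empty slideshow
        Arrays.sort(names);                      // stable slide order by filename
        return names;
    }
}
```

In Processing the filenames would then feed straight into loadImage(), cycled on a timer.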

So to wrap up the Processing rave: it provided a single integrated development and delivery tool for a project spanning print, screen, audio, interaction, animation and even content management. Being able to burrow straight through to Java is powerful. Development was seamlessly cross-platform; the whole thing was developed on a Mac, and now runs happily on a single Windows PC with three (modest) OpenGL video cards. The installation has run daily for over six months, without a hitch (touch wood).

Some installation shots below, though it's hard to photograph, being a glass fronted cabinet in a bright foyer - reflection city. I'll add some better shots when I can get them. If you're in Canberra, drop in to the JCSMR - worth it for the building alone - and see it in person.

And very finally, photographic proof of the Jackie Chan connection - image from The Age.