Fully algorithmic music video, partly inspired by a quote from Alex Rutterford, who said in 2002 that it was impossible for a piece of software to make intelligent decisions about pace and animation. I had approximately four days to make this video, which wasn't enough time to hand-animate four minutes of footage, so instead I wrote a system that would creatively interpret its imagery from the audio. In doing so, I hope I proved Rutterford wrong.
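
The gist of the approach, as a minimal Processing sketch rather than the actual system (the Minim library, the file name and the thresholds are all stand-ins):

```processing
// Hypothetical sketch: let the audio decide when to "cut" to a new scene.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer track;
FFT fft;
float smoothedEnergy = 0;
int scene = 0;  // which generative "shot" is on screen

void setup() {
  size(640, 360);
  minim = new Minim(this);
  track = minim.loadFile("track.mp3");  // placeholder file name
  track.play();
  fft = new FFT(track.bufferSize(), track.sampleRate());
}

void draw() {
  fft.forward(track.mix);
  // Sum the low bands as a crude loudness measure.
  float energy = 0;
  for (int i = 0; i < 16; i++) energy += fft.getBand(i);
  // A sudden jump over the running average reads as a hit: cut.
  if (energy > smoothedEnergy * 1.8 && energy > 1.0) scene++;
  smoothedEnergy = lerp(smoothedEnergy, energy, 0.05);
  background(0);
  drawScene(scene, energy);
}

void drawScene(int id, float energy) {
  // Placeholder visuals: each "scene" gets its own rotation,
  // the audio level sets the scale.
  stroke(255);
  noFill();
  translate(width / 2, height / 2);
  rotate(id * QUARTER_PI);
  float r = 50 + energy * 10;
  rect(-r / 2, -r / 2, r, r);
}
```

A jump over a running average is the crudest possible onset detector, but it shows how a pacing decision, when to cut, can be delegated to the audio.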

WonderStuff Studios in Newcastle commissioned me to create a generative identity for the 2013 Northern Design Festival. I built them a tool capable of turning short text strings into 3D forms, which they used to visualise the Twitter handles of the festival's followers. The tool was also exhibited as an installation at the Globe Gallery in October 2013.
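
The core trick, sketched in Processing with invented construction rules (the shipped tool's geometry was rather different): hash the text, seed the random number generator, and the same handle always grows the same form.

```processing
// Hypothetical string-to-form sketch: the handle's hash seeds the RNG,
// so the resulting 3D structure is deterministic per string.
String handle = "@NDFestival";  // placeholder input

void setup() {
  size(600, 600, P3D);
  stroke(255);
  noFill();
}

void draw() {
  background(0);
  translate(width / 2, height / 2, -200);
  rotateY(frameCount * 0.01);
  // Re-seed every frame so the form stays fixed while the camera moves.
  randomSeed(handle.hashCode());
  for (int i = 0; i < handle.length() * 8; i++) {
    pushMatrix();
    rotateY(random(TWO_PI));
    rotateX(random(TWO_PI));
    translate(random(50, 200), 0, 0);
    box(random(5, 30));
    popMatrix();
  }
}
```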

When a client doesn't like any of the 10 prototypes you've produced, how about giving them 10,000 to choose from with the next delivery? Futuredeluxe were in this situation when creating imaginary alien technological artefacts for the film Earth To Echo. I built them a tool to algorithmically construct 3D forms, which could be output to Cinema4D for rendering.
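
In miniature, with placeholder geometry, the batch half of such a tool can look like this: seed each variant, build it, and write it out as a Wavefront .obj, a format Cinema4D imports.

```processing
// Illustrative only: generate N seeded variants and save each as an .obj.
void setup() {
  int variants = 10;  // scale up for a 10,000-piece delivery
  for (int seed = 0; seed < variants; seed++) {
    writeArtefact(seed, "artefact_" + seed + ".obj");
  }
  exit();
}

void writeArtefact(int seed, String filename) {
  randomSeed(seed);  // the seed *is* the design
  PrintWriter obj = createWriter(filename);
  int v = 0;
  // A stack of randomly proportioned quads stands in for the real
  // construction rules.
  for (int i = 0; i < 20; i++) {
    float y = i * random(2, 10);
    float dx = random(5, 50);
    float dz = random(5, 50);
    obj.println("v " + (-dx) + " " + y + " " + (-dz));
    obj.println("v " +  dx  + " " + y + " " + (-dz));
    obj.println("v " +  dx  + " " + y + " " +  dz);
    obj.println("v " + (-dx) + " " + y + " " +  dz);
    obj.println("f " + (v + 1) + " " + (v + 2) + " " + (v + 3) + " " + (v + 4));
    v += 4;
  }
  obj.flush();
  obj.close();
}
```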

An exercise in intelligent theft. I was pondering why no-one had ever made use of Aaron Koblin's House of Cards data dump, so I decided to conduct my own experiments with this 'data that time forgot'. The Android version is on the Play Store, and the (simplified) HTML5 version is here. The philosophy behind the project (which is probably more interesting than the app itself) was discussed in an essay for The Creators Project.
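
For anyone conducting their own experiments, getting the dump on screen is simple. A minimal Processing viewer, assuming the per-frame CSV layout of x, y, z, intensity:

```processing
// Load one frame of the House of Cards scan as a point cloud.
ArrayList<PVector> points = new ArrayList<PVector>();
ArrayList<Float> intensities = new ArrayList<Float>();

void setup() {
  size(800, 600, P3D);
  String[] rows = loadStrings("1.csv");  // one frame of the scan
  for (String row : rows) {
    float[] f = float(split(row, ','));
    if (f.length < 4) continue;
    points.add(new PVector(f[0], f[1], f[2]));
    intensities.add(f[3]);
  }
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 100);
  rotateY(frameCount * 0.005);
  for (int i = 0; i < points.size(); i++) {
    PVector p = points.get(i);
    stroke(intensities.get(i));  // intensity as greyscale
    point(p.x, -p.y, p.z);       // flip y for screen space
  }
}
```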

An experimental workflow. Generative 3D shapes (in these images created from audio data) which can be exported to Cinema4D. Theoretically this animates, but render times are long enough that I haven't yet managed to get more than a couple of seconds out of it at the quality I want. Expect to see this revisited in future projects.
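
The handoff itself is easy to sketch: one seeded mesh per frame, written as a numbered .obj sequence for Cinema4D to render offline. The wobbling ring below is a placeholder for the audio-driven shapes.

```processing
// Illustrative frame-sequence export; the geometry is a stand-in.
void setup() {
  int totalFrames = 50;  // two seconds at 25 fps
  for (int t = 0; t < totalFrames; t++) {
    writeMesh(t, "mesh_" + nf(t, 4) + ".obj");
  }
  exit();
}

void writeMesh(int t, String filename) {
  PrintWriter obj = createWriter(filename);
  int n = 64;
  // A ring whose radius wobbles over time; in the real workflow this is
  // where the audio data shapes the mesh.
  for (int i = 0; i < n; i++) {
    float a = TWO_PI * i / n;
    float r = 100 + 20 * sin(a * 5 + t * 0.2);
    obj.println("v " + (cos(a) * r) + " 0 " + (sin(a) * r));
  }
  String face = "f";
  for (int i = 1; i <= n; i++) face += " " + i;
  obj.println(face);  // one n-gon face; .obj allows it
  obj.flush();
  obj.close();
}
```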

Ambitious projection-mapping project in collaboration with Keiichi Matsuda for the London Design Festival. The installation was housed in the cupola of the V&A, an area never before opened to the public. Realising our "data view" of London took a team of 30+ coders. My time was mostly spent coordinating the flood of code, and working out a way to map and mask 48 live panels across five machines while still animating at a decent clip. It worked, somehow.
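
The per-panel mapping problem, reduced to a toy Processing sketch with made-up corner positions: render the panel's animation offscreen, then pin its texture to a hand-calibrated quad.

```processing
// One panel of many: offscreen content, corner-pinned onto the surface.
PGraphics panel;
// Calibrated screen positions for this panel's four corners (invented).
PVector[] corners = {
  new PVector(80, 60),   new PVector(540, 90),
  new PVector(560, 420), new PVector(60, 400)
};

void setup() {
  size(640, 480, P3D);
  panel = createGraphics(256, 256, P3D);
}

void draw() {
  // Whatever animation this panel should show.
  panel.beginDraw();
  panel.background(0);
  panel.stroke(255);
  panel.noFill();
  panel.ellipse(128, 128, 50 + 50 * sin(frameCount * 0.05), 200);
  panel.endDraw();

  background(0);
  noStroke();
  beginShape(QUADS);
  texture(panel);
  vertex(corners[0].x, corners[0].y, 0, 0);
  vertex(corners[1].x, corners[1].y, panel.width, 0);
  vertex(corners[2].x, corners[2].y, panel.width, panel.height);
  vertex(corners[3].x, corners[3].y, 0, panel.height);
  endShape();
}
```

A textured quad in P3D is split into triangles, so this isn't a true perspective corner-pin, but it shows the shape of the problem; multiply by 48 panels and five machines and the coordination headache becomes clear.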

A neuro-/bio-responsive food installation. A collaboration between me, Jotta, Bompas & Parr and neuroscientist Ben Seymour. They built the pod, I built the software. The system used facial EMG, brain EEG and heart-rate signals as data sources, which it converted into a unique visual. To sell ice cream.
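
Stripped of the hardware, the mapping had this shape; here noise() stands in for the real sensor feeds, and the parameter assignments are illustrative.

```processing
// Hypothetical biosignal-to-visual mapping: heart rate sets the pulse,
// EEG the hue, EMG the turbulence.
float emg, eeg, bpm;

void setup() {
  size(600, 600);
  noStroke();
}

void draw() {
  // Fake the feeds; the installation read live sensor data here.
  emg = noise(frameCount * 0.01);          // facial muscle activity, 0..1
  eeg = noise(100 + frameCount * 0.005);   // a crude "calmness" measure, 0..1
  bpm = 60 + 40 * noise(200 + frameCount * 0.002);

  background(0);
  colorMode(HSB, 1.0);
  float pulse = 0.5 + 0.5 * sin(TWO_PI * (bpm / 60.0) * frameCount / 60.0);
  fill(eeg, 0.8, 1.0);
  for (int i = 0; i < 200; i++) {
    float a = TWO_PI * i / 200.0;
    float r = 150 * pulse + 80 * emg * noise(i * 0.1, frameCount * 0.02);
    ellipse(width / 2 + cos(a) * r, height / 2 + sin(a) * r, 6, 6);
  }
}
```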

Interactive installation built around notions of digital fragility, for a long-running 2011 exhibition in Stoke-on-Trent. The generative audio/visual animation existed only while a viewer was present, and was created by their movement within the space. In 2012 I resurrected the system for the Final Light Exhibition in Brighton, where the work was projected onto life models. Sound design was by Tim Diagram.
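
The presence logic can be sketched with the Processing video library: frame-difference the camera feed, and draw only while the difference says someone is moving. The installation's actual tracking and visuals were more elaborate than this.

```processing
// Motion-gated visuals via frame differencing.
import processing.video.*;

Capture cam;
PImage prev;
float activity = 0;

void setup() {
  size(640, 480);
  cam = new Capture(this, 320, 240);
  cam.start();
  prev = createImage(320, 240, RGB);
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    prev.loadPixels();
    float diff = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      diff += abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i]));
    }
    prev.copy(cam, 0, 0, 320, 240, 0, 0, 320, 240);
    activity = lerp(activity, diff / cam.pixels.length, 0.1);
  }
  background(0);
  // The piece only exists while someone moves in front of it.
  if (activity > 2) {
    stroke(255, map(activity, 2, 30, 0, 255));
    for (int i = 0; i < activity * 10; i++) {
      line(random(width), random(height), random(width), random(height));
    }
  }
}
```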

A unique generative print experiment of which I was proud to be a part. A run of 200 coffee-table art books, no two of which were alike. Each contributor submitted their work as an applet, a chunk of code, rather than a finished piece, so even the artists themselves had never before seen the work that was attributed to them.
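
The mechanism that makes a one-per-copy edition possible is small. With Processing's bundled PDF library, the copy number can seed the random number generator, so each book renders its own drawing; the drawing itself here is a trivial stand-in.

```processing
// One unique page per copy: the copy number is the seed.
import processing.pdf.*;

void setup() {
  size(595, 842);        // A4 at 72 dpi
  int copyNumber = 137;  // of the run of 200; placeholder value
  randomSeed(copyNumber);
  beginRecord(PDF, "copy_" + copyNumber + ".pdf");
  background(255);
  stroke(0);
  noFill();
  for (int i = 0; i < 400; i++) {
    ellipse(random(width), random(height), random(4, 60), random(4, 60));
  }
  endRecord();
  exit();
}
```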

During the 2011 Perth Arts Festival, every visitor to the website inadvertently created their own unique "totem" upon selecting the events they were attending. I built the system from a series of generative animations, prototyped in Processing and then converted to ActionScript. I also built tools for exporting the web totems in a high-quality print format. Won a bunch of awards.
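
The totem logic, boiled down and with invented details: fold the selected event IDs into a seed, and let the seed grow a stacked form.

```processing
// Hypothetical totem sketch: the same selection always yields the same form.
int[] chosenEvents = {3, 17, 42};  // placeholder event IDs

void setup() {
  size(400, 700, P3D);
  stroke(255);
  noFill();
}

void draw() {
  background(20);
  // Derive a stable seed from the selection.
  int seed = 0;
  for (int id : chosenEvents) seed = seed * 31 + id;
  randomSeed(seed);
  translate(width / 2, height - 50);
  rotateY(frameCount * 0.01);
  // A few stacked segments per chosen event.
  for (int i = 0; i < chosenEvents.length * 4; i++) {
    float h = random(20, 60);
    translate(0, -h, 0);
    rotateY(random(TWO_PI));
    box(random(20, 120), h, random(20, 120));
  }
}
```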

For Brighton's 2nd White Night festival, the BANG! group projection-mapped the Unitarian Church opposite The Dome. I mashed together some generative Processing pieces, set to an Autechre track, for the occasion.