Archive for the 'installation' Category

I recently travelled to China to install Cell at a new media arts exhibition held at Audi City Beijing. Cell is an installation I made in collaboration with Keiichi Matsuda in 2011 that is a provocation, a comment on the commodification of identity and a vision of how we might present ourselves in the coming years (more here).

It was always our intention to change this installation over time to implement new technologies and adapt to different contexts. In this instance we decided to give the visitors the opportunity to contribute to the piece by submitting the tags. This was achieved via a web app that would present the user with one of twenty questions, such as “Where did you meet your first love?” or “What is something you couldn’t live without?”. The answer is submitted and added to the collection of tags. Whereas the original piece would allow users to adopt the roles of fictional characters, the result of this version was a crowd-sourced cloud of words and phrases that formed a collective identity over the course of the week-long exhibition.

There were several challenges this time round. The display consisted of two “PowerWalls” that, when combined, formed an 8×4 grid of plasma screens – an overall size of 11×3m. We went with a very powerful custom PC (made by Gareth Griffiths of Uberact) as we needed to significantly increase the tag count and split the image over the two walls (using a DualHead2Go). We also needed the extra power as there were five Kinects (all running from separate PCs). This allowed for up to 10 simultaneous users and meant more calculations than usual. Cell is an open source project and the code for the new iteration is available here. The piece requires openFrameworks v0.8.0 and Visual Studio Express 2012.

I was pleasantly surprised to discover that my app Konstruct (made with Juliet Alliban) was also exhibited at the event. This section was part of the AppArtAwards exhibition and was organised by the Goethe-Institut China and ZKM.

Finally, huge thanks to Audi for holding the exhibition, to ADP Projects for helping to curate the event and acting as producers in Beijing, to Keith Watson for providing some space at Level39 for testing, to Juliet Alliban for helping with the setup and to Gareth Griffiths for building the PC.

Bipolar is an experiment in using the human form as a medium for sound visualisation. It is an audiovisual virtual mirror that warps the participant’s body as they wander through the space. A soundscape designed by Liam Paton is generated from the presence and motion of the participants. The data from this (in addition to sounds from the user and environment) is used to transform the body into a distorted portrait that fluctuates between states of chaos and order.

This piece has evolved from an experiment I made 18 months ago when exploring the possibilities for using the body as a canvas for visualising sound – have a look here for more information on the technology. Since then it has been exhibited at a number of events including Digital Shoreditch, The Wired Popup Store in Regent St, The New Sublime exhibition at Brighton Digital festival and The BIMA awards. There are plans to install it at several more spaces in the coming months.

Bipolar at Wired Popup Store

Bipolar at Digital Shoreditch

In the time since the original experiment, Bipolar has gone through several changes and optimisations. The biggest addition is the interactive sound aspect, which was designed by Liam Paton, composer and co-founder of Silent Studios. The idea was to build a dark, abstract soundscape to complement the visuals and react to motion, location and distance. He built the software using Max/MSP and I was able to communicate with it from my openFrameworks app via OSC.
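Under the hood, OSC messages are simple binary packets, usually sent over UDP. The installation itself used the ofxOsc addon to do this, but as a minimal sketch of what a single float-carrying message looks like on the wire (the address /bipolar/motion is a hypothetical example, not taken from the project):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Minimal OSC message encoder: an address pattern followed by a type tag
// string and a big-endian float argument, each chunk padded to a 4-byte
// boundary as the OSC 1.0 spec requires.
static void padTo4(std::vector<uint8_t>& buf) {
    while (buf.size() % 4 != 0) buf.push_back(0);
}

std::vector<uint8_t> encodeOscFloat(const std::string& address, float value) {
    std::vector<uint8_t> buf;
    // Address pattern, null-terminated then padded.
    buf.insert(buf.end(), address.begin(), address.end());
    buf.push_back(0);
    padTo4(buf);
    // Type tag string: "," followed by one tag character per argument.
    const char tags[] = ",f";
    buf.insert(buf.end(), tags, tags + 2);
    buf.push_back(0);
    padTo4(buf);
    // Float argument in big-endian byte order.
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof(bits));
    for (int shift = 24; shift >= 0; shift -= 8)
        buf.push_back(static_cast<uint8_t>((bits >> shift) & 0xFF));
    return buf;
}
```

Handing the resulting bytes to a UDP socket aimed at the Max/MSP patch's listening port is all that remains; ofxOsc wraps exactly this kind of packing and sending.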

Visually, I wanted to retain the chaotic nature of the original but with a few refinements and optimisations. The main issue with the original version was that the extrusions appeared to be fairly random. Each spike is achieved by extruding a vertex in the direction of its normal, but the normals weren’t very smooth. This was down to the way in which the depth data from the Kinect is presented. To get round this I implemented a custom smoothing algorithm that took place on the GPU (the vertex normals were also calculated by making a normal map on the GPU), which allowed me to create a far more pleasing, highly optimised organised chaos.
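The smoothing itself is essentially a neighbourhood average over the normal map. The piece runs this on the GPU, but the same idea can be sketched on the CPU – the 3×3 kernel and grid layout here are illustrative, not the shader used in Bipolar:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Box-filter smoothing of a grid of normals: each output normal is the
// normalised average of its 3x3 neighbourhood (edges clamped).
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& normals, int w, int h) {
    std::vector<Vec3> out(normals.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            Vec3 sum = {0, 0, 0};
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::min(std::max(x + dx, 0), w - 1);
                    int sy = std::min(std::max(y + dy, 0), h - 1);
                    const Vec3& n = normals[sy * w + sx];
                    sum.x += n.x; sum.y += n.y; sum.z += n.z;
                }
            }
            out[y * w + x] = normalize(sum);
        }
    }
    return out;
}

// Each spike then displaces a vertex along its (smoothed) normal.
Vec3 extrude(Vec3 v, Vec3 n, float amount) {
    return {v.x + n.x * amount, v.y + n.y * amount, v.z + n.z * amount};
}
```

With noisy Kinect normals, averaging before extruding is what turns scattered jitter into coherent spikes.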

Another addition was some fake ambient occlusion. The original piece could seem a little flat in places, so this effect was added to create what look like shadows surrounding the spikes. I achieved this by darkening the colour of certain vertices surrounding the extruded vertex. The results should be visible in the image below.
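As a rough sketch of that darkening step, the occlusion can be modelled as a linear falloff from each spike – strongest right at the extruded vertex, fading to nothing at some radius. The radius and strength values here are invented for illustration:

```cpp
#include <cmath>

// Fake ambient occlusion: vertices near an extruded spike have their
// brightness scaled down, strongest at the spike and falling off
// linearly. `dist` is the distance (in grid cells) from a vertex to the
// nearest extruded vertex.
float occlusionFactor(float dist, float radius, float strength) {
    if (dist >= radius) return 1.0f;        // outside the falloff: unaffected
    float t = 1.0f - dist / radius;         // 1 at the spike, 0 at the edge
    return 1.0f - strength * t;             // darken toward the spike
}
```

Multiplying each vertex colour by this factor produces the soft shadow-like rings around the spikes without any actual occlusion testing.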

At the moment all of the mesh processing is tightly interweaved into the application. I intend to release an addon in the coming weeks that will include most of this functionality along with some simple hole filling.

Arcade were commissioned to make a visual accompaniment to Stravinsky’s masterpiece. The project was produced by the Groninger Forum for the Timeshift festival in Holland, to celebrate the 100-year anniversary of the controversial first performance of The Rite of Spring. Our response was to construct a virtual architecture from laser beams, transforming the music into a dynamic forest of sound and light.

50 lasers were installed in the auditorium, each one connected to an individual instrument. Custom-built electronics allowed them to react to the musicians’ performances; the louder the musician played, the brighter the beam. At certain times mirrors would be moved or unveiled to direct the beams to different areas of the auditorium, creating new abstract forms in space to complement the different movements of the piece.
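A common way to map loudness to brightness in software is an envelope follower: track the RMS level of an instrument's signal, rising quickly on attacks and decaying slowly so the beam doesn't flicker. The actual electronics in the show were custom-built; this sketch and its coefficients are purely illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Envelope follower: beam brightness tracks the loudness of one
// instrument, with separate attack and release smoothing coefficients.
class BeamBrightness {
public:
    BeamBrightness(float attack = 0.5f, float release = 0.05f)
        : attack_(attack), release_(release) {}

    // Feed one block of audio samples; returns brightness in [0, 1].
    float update(const std::vector<float>& samples) {
        float sum = 0.0f;
        for (float s : samples) sum += s * s;
        float rms = samples.empty() ? 0.0f : std::sqrt(sum / samples.size());
        // Rise fast toward louder levels, fall slowly toward quieter ones.
        float coeff = (rms > level_) ? attack_ : release_;
        level_ += coeff * (rms - level_);
        return std::min(1.0f, level_);
    }

private:
    float attack_, release_;
    float level_ = 0.0f;
};
```

The asymmetric attack/release is what makes a beam snap on with a stab of brass but linger through a decaying note.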

The resulting walls of light emanating from behind the orchestra and extending through the audience formed a direct spatial visualisation of the music.

I was recently invited to Shanghai for a week to set up Cell with Keiichi Matsuda. It was for an art/music/film/food event organised by Emotion China. They had a 5×3 LCD wall erected specifically to display the piece. This made a huge difference to the usual rear projection configuration.

Traces is an interactive installation that was commissioned by The Public in West Bromwich for their Art of Motion exhibition and produced by Nexus Interactive Arts. The piece encourages the user to adopt the roles of both performer and composer. It is an immersive, abstract mirror that offers an alternative approach to using the body to communicate emotion. Kinect cameras and custom software are used to capture and process movement and form. This data is translated into abstract generative graphics that flow and fill the space offering transient glimpses of the body. An immersive generative soundscape by David Kamp reacts to the user’s presence and motion. Traces relies on continuous movement in space in order to create shadows of activity. Once the performer stops moving, the piece disintegrates over time, and returns to darkness.

The Public is an incredible five-storey gallery, community centre and creative hub in West Bromwich that is concerned with showcasing the work of interactive artists and local people. I’ve been in talks with the curator Graham Peet for the last year with a view to potentially contributing. A few months ago, he commissioned me to build a new piece for the “Art of Motion” exhibition. I thought this would be an ideal opportunity to work with Nexus Interactive Arts. They were interested in the piece and agreed to produce it. The producer, Beccy McCray, introduced me to the Berlin-based sound designer and composer David Kamp, who did an excellent job with the generative soundscape.

My aim was to build an installation that suited not only the theme of the show but the themes of play, discovery and creativity that already permeate the gallery spaces of The Public.

The show runs from 30th May until 9th September. I would highly recommend anyone with an interest in interactive art to take a trip to West Bromwich to visit The Public. In addition to the exhibition there are many other excellent pieces.

Bipolar is the result of a short experimental journey into visualising sound using computer vision. The initial idea was to capture a mesh of my face in realtime, and warp it using the sound buffer data coming in from the microphone as I speak. Initially I explored ofxFaceTracker but had trouble segmenting the mesh so moved to the Kinect camera. I had a rough idea of how the final result might look but it turned out quite differently.

As this intense spiky effect began to take shape I realised this would be perfect for the chaotic and dark sound of Dubstep. Thankfully I know just the guy to help here. I met the DJ and producer Sam Pool AKA SPL at the Fractal ’11 event in Colombia. He kindly offered to contribute some music to any future projects so I checked out his offerings on SoundCloud and found the perfect track in Lootin ’92 by 12th Planet and SPL. This, of course, meant I would have to perform to the music. Apologies in advance for any offence caused by my “dancing” 🙂

This was built using openFrameworks and Theo Watson’s ofxKinect addon, which now offers excellent depth->RGB calibration. I’m building a mesh from this data and calculating all the face and vertex normals. Every second vertex is then extruded in the direction of its normal using values taken from the microphone.
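For anyone curious about the mesh-building step: depth pixels can be back-projected into 3D points with the standard pinhole camera model, and the spikes are then a per-vertex displacement driven by the microphone level. This is a simplified sketch – the intrinsics and scaling values are illustrative, not the ones used in the piece:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Back-project a depth image into 3D points with the pinhole model:
// X = (x - cx) * z / fx, Y = (y - cy) * z / fy, Z = z.
std::vector<Vec3> depthToPoints(const std::vector<float>& depth, int w, int h,
                                float fx, float fy, float cx, float cy) {
    std::vector<Vec3> pts;
    pts.reserve(depth.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float z = depth[y * w + x];
            pts.push_back({(x - cx) * z / fx, (y - cy) * z / fy, z});
        }
    return pts;
}

// Extrude every second vertex along its normal, scaled by the current
// microphone amplitude (the alternating pattern mirrors the post's
// "every second vertex" description).
void extrudeAlternate(std::vector<Vec3>& pts, const std::vector<Vec3>& normals,
                      float micLevel, float maxSpike) {
    for (std::size_t i = 0; i < pts.size(); i += 2) {
        float amount = micLevel * maxSpike;
        pts[i].x += normals[i].x * amount;
        pts[i].y += normals[i].y * amount;
        pts[i].z += normals[i].z * amount;
    }
}
```

Leaving the in-between vertices untouched is what gives the surface its jagged, spiky silhouette rather than a uniform inflation.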

The project is still at the prototype stage and needs some refactoring and optimisation. Once it is looking a little better I will release the code.

Cell is an interactive installation commissioned for the Alpha-Ville festival, a collaboration between myself and Keiichi Matsuda. It plays with the notion of the commodification of identity by mirroring the visitors in the form of randomly assigned personalities mined from online profiles. It aims to get the visitors thinking about the way in which we use social media to fabricate our second selves, and how these constructed personae define and enmesh us. As users enter the space they are assigned a random identity. Over time, tags floating in the cloud begin to move towards and stick to the users until they are represented entirely as a tangled web of data seemingly bringing together our physical and digital selves.
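The tag behaviour described above – drifting, steering towards a user and finally sticking – can be sketched as a simple steering agent. All of the constants here are illustrative, not the values used in Cell:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// One floating tag: steers toward an assigned user's position and sticks
// once it gets close enough, after which it simply follows the target.
struct Tag {
    Vec2 pos, vel;
    bool stuck = false;

    void update(Vec2 target, float dt) {
        if (stuck) { pos = target; return; }
        Vec2 d = {target.x - pos.x, target.y - pos.y};
        float dist = std::sqrt(d.x * d.x + d.y * d.y);
        if (dist < 0.05f) { stuck = true; pos = target; return; }
        const float pull = 0.8f;                 // attraction strength
        vel.x += (d.x / dist) * pull * dt;       // accelerate toward target
        vel.y += (d.y / dist) * pull * dt;
        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
    }
};
```

Run over hundreds of tags with per-user targets taken from the skeleton data, this kind of update loop gradually wraps each visitor in their assigned web of words.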

I first got in touch with the organisers of the festival, Estella Olivia and Carmen Salas, around May with a view to contributing. They asked if I knew of Keiichi Matsuda, and whether I would be interested in a collaboration. Coincidentally we had met up a month before and had discussed the idea of joining forces, as our areas of research are very similar. We come from different fields – he architecture and film-making, me new media art and interaction design – and this turned out to be a perfect combination. We shared the concept and design: Keiichi focussed on the fabrication, planning the space and putting together the documentary, while I happily wrote the software. Even with these distributed roles we found we were often offering suggestions and help to each other throughout the course of the project.

The concept wall

Microsoft have supported the project from the early stages. Keiichi and I were both speaking at an event in June when we met Paul Foster who was promoting the MS Kinect for Windows SDK. We discussed our project which would be utilising the Kinect camera and he was interested in helping out. He introduced us to William Coleman and since then they have supplied all the equipment and funded the studio space (thanks to Tim Williams and Tom Hogan at Lumacuostics for putting us up and all the advice).

In addition to this, Microsoft also introduced us to Simon Hamilton Ritchie, who runs the Brighton-based agency Matchbox Mobile. These guys contributed a great deal to the project – most importantly ofxMSKinect, an openFrameworks addon for the official Kinect SDK. One of the main advantages of using this over the hacked drivers is the automatic user recognition: we no longer need to pull that annoying calibration stance, which can be a big barrier in a piece such as Cell. In addition to depth/skeleton tracking, the potential for utilising the voice recognition capabilities is an exciting prospect for the interactive arts community. This will be integrated into ofxMSKinect in the coming months.

Skeletal data from 4 Kinect cameras

So on to the setup. Halfway through the project we realised that we would only be able to track two skeletons using a single Kinect camera. While this is fine for gaming, for a large-scale interactive experience this would not be enough, so instead of one camera we decided to go with four! We organised four Dell XPS 15 laptops, each connected to a Kinect camera. The skeletal data from each client is fed to an Alienware M17x laptop over a local area network (with help from Matchbox), giving us the potential to track the skeletal data of up to 8 users in a space of around 5m x 4m. The software on the Alienware server then calculates and renders the scene, which is rear projected onto a large screen using a BenQ SP840 projector.
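Combining skeletons from several cameras involves two steps: transforming each camera's joints into a shared world space, and merging users seen by more than one camera. A minimal sketch of both – the single-axis calibration model and the merge threshold are simplified assumptions, not the production setup:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Pose of one camera in the shared world space: a yaw rotation around
// the vertical axis plus a translation. (A full calibration would use a
// complete 6-DoF transform.)
struct CameraPose {
    float yaw;      // radians
    Vec3 offset;    // camera position in world space
};

// Transform a joint from camera space into world space.
Vec3 toWorld(const CameraPose& cam, Vec3 p) {
    float c = std::cos(cam.yaw), s = std::sin(cam.yaw);
    return {c * p.x + s * p.z + cam.offset.x,
            p.y + cam.offset.y,
            -s * p.x + c * p.z + cam.offset.z};
}

// Treat two skeletons from different cameras as the same person if their
// head joints land within `mergeDist` metres of each other in world space.
bool samePerson(Vec3 headA, Vec3 headB, float mergeDist) {
    float dx = headA.x - headB.x;
    float dy = headA.y - headB.y;
    float dz = headA.z - headB.z;
    return dx * dx + dy * dy + dz * dz < mergeDist * mergeDist;
}
```

With the four cameras' poses measured once during setup, the server can fold every incoming skeleton into one consistent scene before the tags are assigned.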

The screen posed a bit of a challenge. We could either rent one for a ridiculous price or build our own and have complete freedom over the design. This was important to us so Keiichi put his woodwork skills to the test and made a 4.2m x 1.8m screen that can be reduced to 1.5m. Quite an achievement for a rear projection screen with no supporting beams! We used ROSCO grey screen material which was perfect for our requirements.

Keiichi and Iannish preparing the screen at the Alpha-Ville festival

We were very pleased with the reaction to Cell. The feedback from the festival goers was really positive. It was important to us that the participants were both interested in the concept and taken by the experience. Many that we spoke to seemed to engage with the piece on both levels.

If you would like any more information please visit the Cell website. If you would like to contact us regarding this piece, please email – info [at] installcell.com

I’d like to thank the following for their help in realising this piece (in order of appearance):

This was my first dip into the fantastic arts-based C++ toolkit openFrameworks. I actually made this a few months ago but haven’t had time to put the video footage together.

So here’s what’s happening:

This installation attempts to reconstruct the camera images from a collection of 3000 square sections taken from previous frames. The result is a chaotic animated grid that continually attempts to achieve order.

Each visible grid item compares its corresponding region of the camera frame with 10 randomly selected squares from the collection. The closest match is compared with the current visible square, and the closest of these two is displayed.
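That matching step can be sketched as a sum-of-squared-differences comparison over greyscale patches. The function names and candidate handling here are illustrative, not lifted from the installation's code:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Sum of squared differences between two greyscale patches of equal size:
// lower means more similar.
long ssd(const std::vector<int>& a, const std::vector<int>& b) {
    long total = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        long d = a[i] - b[i];
        total += d * d;
    }
    return total;
}

// Pick the replacement for one grid cell: compare the live camera region
// against `numCandidates` randomly chosen squares from the collection plus
// the square currently on screen, and keep whichever scores lowest.
std::size_t bestMatch(const std::vector<int>& target,
                      const std::vector<std::vector<int>>& collection,
                      std::size_t currentIndex, int numCandidates) {
    std::size_t best = currentIndex;
    long bestScore = ssd(target, collection[currentIndex]);
    for (int i = 0; i < numCandidates; ++i) {
        std::size_t candidate = std::rand() % collection.size();
        long score = ssd(target, collection[candidate]);
        if (score < bestScore) {
            bestScore = score;
            best = candidate;
        }
    }
    return best;
}
```

Sampling only a handful of random candidates per frame keeps the cost constant regardless of collection size, which is why the grid converges gradually rather than snapping into place – and why the image "attempts to achieve order" over time.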

If you stay still for long enough, the camera image will appear to completely rebuild itself.