Toby Harris

being asked to revive rbn_esc raised the question: would the software still run, and could i even remember how to handle the complexity? it was an amazing milestone back in 2006, going from two laptops linked by midi to running the complete performance off one laptop. but there was a lot to that integration, and my license of ableton live had long expired. i wondered whether there might be a simpler way, now that resolume avenue effectively had ableton’s session view – in which rbn_esc’s basic structure and audio-visual links are laid out – and was built as audio-visual software from the ground up.

james sheridan saw SPK-RectPack, and invited me to see the dome setup he develops the tech for: igloo, the company, could do with something like screenrunner running in the dome, and more generally james is a creative-coding guy keen to meet like minds. well, i was impressed; it wasn’t long before screenrunner was running on the dome’s mac pro¹, and not long after that james was organising a creative coding night in their mini demo dome.

¹ happily rendering 9600×1080 and handing this over to james’s openframeworks edge-blending dome warper via syphon. the ease and painless 60fps of this blew my mind.

a screenrunner client wanted an animating tiled layout. it’s surprisingly non-trivial to code, at least if you want to go beyond hard-coding a few grid layouts. thankfully, the problem is academically interesting too, and lo! there’s a paper on spatial packing of rectangles, complete with MaxRectsBinPack algorithm and c++ implementation. respect to jukka jylänki.
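the core of maxrects fits on a page. here’s a stripped-down c++ toy of the idea – names my own, and nothing like as careful as jukka’s real implementation – keep a list of maximal free rectangles, place into the best-fitting one, then re-derive the free list around the placement:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Rect { int x, y, w, h; };

static bool contains(const Rect& a, const Rect& b) {
    return b.x >= a.x && b.y >= a.y &&
           b.x + b.w <= a.x + a.w && b.y + b.h <= a.y + a.h;
}
static bool overlaps(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

struct MaxRectsBin {
    std::vector<Rect> free_;
    MaxRectsBin(int w, int h) { free_.push_back({0, 0, w, h}); }

    // best-short-side-fit placement; returns x = -1 when nothing fits
    Rect insert(int w, int h) {
        int best = -1, bestFit = 1 << 30;
        for (size_t i = 0; i < free_.size(); ++i) {
            if (free_[i].w < w || free_[i].h < h) continue;
            int fit = std::min(free_[i].w - w, free_[i].h - h);
            if (fit < bestFit) { bestFit = fit; best = int(i); }
        }
        if (best < 0) return {-1, -1, 0, 0};
        Rect placed = { free_[best].x, free_[best].y, w, h };

        // split every overlapped free rect into up to four maximal leftovers
        std::vector<Rect> next;
        for (const Rect& f : free_) {
            if (!overlaps(f, placed)) { next.push_back(f); continue; }
            if (placed.x > f.x)
                next.push_back({f.x, f.y, placed.x - f.x, f.h});
            if (placed.x + placed.w < f.x + f.w)
                next.push_back({placed.x + placed.w, f.y,
                                f.x + f.w - placed.x - placed.w, f.h});
            if (placed.y > f.y)
                next.push_back({f.x, f.y, f.w, placed.y - f.y});
            if (placed.y + placed.h < f.y + f.h)
                next.push_back({f.x, placed.y + placed.h,
                                f.w, f.y + f.h - placed.y - placed.h});
        }
        // prune any free rect contained in another (keep one of duplicates)
        free_.clear();
        for (size_t i = 0; i < next.size(); ++i) {
            bool redundant = false;
            for (size_t j = 0; j < next.size() && !redundant; ++j) {
                if (i == j) continue;
                redundant = contains(next[j], next[i]) &&
                            (!contains(next[i], next[j]) || j < i);
            }
            if (!redundant) free_.push_back(next[i]);
        }
        return placed;
    }
};

int main() {
    MaxRectsBin bin(1920, 1080);
    Rect r = bin.insert(640, 360);
    std::printf("placed at %d,%d\n", r.x, r.y);
}
```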

after a bout of yet more *spark screenrunner back-end building – libPusher, ‘event’ document packages that encapsulate the media and graphic template, the all-important ‘enqueue new’ – it was onto what really set this gig apart.

at the heart of the brain was the increasingly inappropriately named *spark titler, collating all the media and channelling it to the screen. it runs the screen, and gives just what you need to be responsive to the moment without breaking the visual illusion. so… *spark screenrunner?

sane control of the media and scenography needs to be partnered with the animation mechanics to handle it all gracefully. luckily, that’s what i do – and what tools like quartz composer enable – and i had the best materials to work with in the form of made-by’s brand video. it’s great. watch it, and you’ll also see how perfect it was to be remade into a never-ending animation, with dynamic content interspersed with the hand-animated elements.

how did joanna run the screen? with *spark titler v3: no longer a now-and-next titler, more the means for a live brand video. into an animation template go tweets, titles and all sorts of media, and the user is presented with a sane way of wrangling that media and controlling the output.

and how did those live graphics make it to the screen? i sat down and took the idea of *spark titler from sheep music and remade it as a fully fledged cocoa+quartz composer application. the idea being it can’t muck up: animation designed to gracefully transfer from state to state, participant names pre-filled in a drop-down menu, no mouse cursors on the output, text fields that commit their edits on pressing take… the little details that make or break a live application. oh – and it exports its title animations as quicktimes for integration with playback pro.

as part of the rhythms and visions event, a day of workshops was organised. los angeles visual artists – lava – had the morning, and covered the past and present of ‘visual music’ works. mike, matthias and i had the afternoon, and we nailed the four hours precisely with a tour through the d-fuse oeuvre and a journey through our particle production process. the latter was my main contribution, and it’s a tricky balance to strike: lots of really cool stuff – shooting, taking crops, building abstracting effects – conveyed through what can reduce down to a sea of noodles and buttons. pretty happy though, with good feedback that the thread was there and it all tied up: people got it.

there is now a mac pro in china running DFD, something that has been consuming my time for a while now. the roadshow d-fuse have been developing is our first big foray into automated dynamic content, lighting and audience interaction, and since we can’t be there for every gig holding it down with a hacked-up vj setup, we needed something that you could just power on and the show would start. and so d-fuse/dynamic was hatched: a quicktime and quartz composer sequencer which reads its presets and input/output functionality from a folder we can remotely update, and essentially just presents a “next” button to the on-site crew.
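in spirit, the sequencer side boils down to something like this – a hypothetical c++ sketch, with made-up names and a made-up ‘.preset’ extension; the real thing is quicktime and quartz composer:

```cpp
#include <algorithm>
#include <filesystem>
#include <iostream>
#include <vector>

namespace fs = std::filesystem;

class PresetSequencer {
    std::vector<fs::path> presets_;
    size_t index_ = 0;
public:
    // re-scan the folder so remote updates get picked up between shows
    void load(const fs::path& folder) {
        presets_.clear();
        for (const auto& entry : fs::directory_iterator(folder))
            if (entry.path().extension() == ".preset")   // hypothetical extension
                presets_.push_back(entry.path());
        std::sort(presets_.begin(), presets_.end());     // filename order = show order
        index_ = 0;
    }
    // the one button the on-site crew gets
    const fs::path* next() {
        if (presets_.empty()) return nullptr;
        const fs::path* p = &presets_[index_];
        index_ = (index_ + 1) % presets_.size();         // loop the show
        return p;
    }
};

int main() {
    PresetSequencer show;
    if (fs::exists("presets")) show.load("presets");     // hypothetical folder
    if (const fs::path* p = show.next())
        std::cout << "playing " << *p << "\n";
}
```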

happiness is twelve hex bytes: generated by a pocketable custom LED fixture on detecting a bounce, transmitted via xBee, received into the computer via RS232, parsed correctly, output into a QC comp, doing a dance, and commanded back to the fixtures via Artnet via DMX via xBee.
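the receive side of that chain, sketched in c++. the sync byte and packet layout here are inventions of mine for illustration – the real protocol lived in the fixture firmware:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

const uint8_t kSync = 0x7E;      // hypothetical start-of-packet marker
const size_t  kPacketLen = 12;   // "happiness is twelve hex bytes"

struct PacketParser {
    std::vector<uint8_t> buf;
    // feed each byte as it arrives from RS232; returns true when a
    // full packet has been framed and copied into 'out'
    bool feed(uint8_t byte, uint8_t out[kPacketLen]) {
        if (buf.empty() && byte != kSync) return false;  // wait for sync
        buf.push_back(byte);
        if (buf.size() < kPacketLen) return false;
        for (size_t i = 0; i < kPacketLen; ++i) out[i] = buf[i];
        buf.clear();
        return true;
    }
};

int main() {
    // stand-in for bytes arriving off the xBee/RS232 bridge
    uint8_t wire[] = { 0x7E, 0x01, 0xFF, 0x00, 0x10, 0x20,
                       0x30, 0x40, 0x50, 0x60, 0x70, 0x80 };
    PacketParser parser;
    uint8_t packet[kPacketLen];
    for (uint8_t b : wire)
        if (parser.feed(b, packet))                      // byte 1 = fixture id
            std::printf("bounce from fixture %02X\n", packet[1]);
}
```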

there is a big d-fuse production in the works, where the brief rather wonderfully emphasised interaction with and within the audience. as briefs often do, things have changed a lot since the heady time of working on and winning the pitch, but the core of it is still generative graphics and punter control from the club floor. and so here, courtesy of dr.mo’s crack team of coders, is an in-development iPad app talking over WiFi to a QC plugin, where my two fingers – as proxies for collaborating audience members – are sketching locally, and that sketching is being incorporated on the club’s screens.

tresor backstage, 1am, get to the final compile of here+now for the 3am performance. there is never enough time in this world, and for experimental projects on the side doubly so. the dream of just hanging out at a festival…

there’s no denying it’s like a flight deck of buttons, but props to vdmx’s configurability and plug-in friendliness. full playback control of both single screens and the dualhead span as quicktime sources, and a hardcore quartz composer patch wrapping a lot of custom openGL code, fronted by a UI panel laid out in interface builder.

in barcelona on a solo d-fuse mission: particle to audio by asférico, refactored down to a dualhead screen setup. good gig: refactoring prep worked out, no nasty surprises throughout, performance felt smooth.

after a month with a heavy dose of xcode on the side, it’s up to newcastle clutching two fledgling applications to marry with newcastle library’s digitised photo archive, a bunch of iMacs clustered around a 1080P plasma screen, and the memories of the visiting public. it was a heavy weekend of being there with the installation by day and further coding by night, but out of this a very real and user-tested photobank has been born.

abertura was the first time i really got to throw myself into a ‘particle-esque’ performance, using the d-fuse content with the live setup i’d created. as a warm-up for são paulo, it was a great one: the music and visuals really came together to give an intense show in the relatively small space of abertura’s hall. it really gave me a confidence boost.

beyond visible things like keane3D, lots of work has been going on in the background with d-fuse this past year or so. some of it pitching, some of it pushing along internal projects, and at the moment a massive commercial job under NDA: all things which don’t really make it to this diary. but i’m glad to announce a little preview of something i’ve been working on for a while, which is transforming d-fuse live.

it felt funny there being a quartz composer workshop, me not giving it, and the copy being beyond anything even i could have pumped up (world exclusive! creator of the most widely viewed online video tutorial series!). but it’s definitely a sign of progress in this world, so props to graham (centre back) and lpm. the class had a good time, and the word was spread: hopefully more interesting work will happen because of it.

this was perhaps the most inspiring moment of lpm for me. to set the background: i’ve been planning an entirely new paradigm of vj interface for years now, have a subversion repository with a project to that end, and have had many discussions with people like anton about how flight-decks-of-buttons suck and where we are in our quests to make something better. the short answer is: years in, far from anything demonstrable, due to the overhead of having to write such a thing from scratch.

david, as well as having the content worked out, also had the tools worked out. one laptop as render client, one laptop as controller, linked by ethernet. nice to see this in the wild; it’s the approach i’m going for with *spark cinema. but going back to the photo, check the number of buttons: truly a flight deck!

KineTXT has spurred many custom plug-ins, generally either esoteric or usurped by kineme or the next major release of QC. the latest, however, probably deserves to see the wider light of day, and so here is a snapshot of it having just passed a notional ‘v1.0’. it’s two patches designed to capture and render handwriting and doodles from a tablet, but they should be pretty useful to anyone who wishes for some form of digital graffiti in their QC compositions.

a little sneak peek of a quartz composer plug-in in development: spk-calligraphy, a set of patches for recording and playing back 2d strokes. the basic patch is equivalent to the kineme GL line structure patch, but draws the line as if it were a chisel nib at 45° and with a flow fade-out. the other two are what will enable a big part of the next kinetxt development: handwriting to go alongside the rendered text.
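the nib trick is simple enough to sketch. assuming a stroke sampled as a list of points, each point becomes two vertices offset along a fixed 45° direction, with alpha fading along the stroke for the flow fade-out – hypothetical c++ standing in for the plug-in’s rendering code:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2   { float x, y; };
struct Vertex { float x, y, alpha; };

// returns vertices in triangle-strip order; nibWidth is the pen width,
// flowFade is how much the ink dries up by the end of the stroke (0..1)
std::vector<Vertex> chiselStroke(const std::vector<Vec2>& points,
                                 float nibWidth, float flowFade) {
    const float nx = std::cos(M_PI / 4) * nibWidth * 0.5f;  // nib held at 45°
    const float ny = std::sin(M_PI / 4) * nibWidth * 0.5f;
    std::vector<Vertex> strip;
    for (size_t i = 0; i < points.size(); ++i) {
        float t = points.size() > 1 ? float(i) / (points.size() - 1) : 0.0f;
        float alpha = 1.0f - flowFade * t;                  // flow fade-out
        strip.push_back({ points[i].x - nx, points[i].y - ny, alpha });
        strip.push_back({ points[i].x + nx, points[i].y + ny, alpha });
    }
    return strip;
}

int main() {
    std::vector<Vec2> stroke = { {0, 0}, {10, 5}, {20, 15} };
    for (const Vertex& v : chiselStroke(stroke, 4.0f, 0.6f))
        std::printf("%5.1f %5.1f  a=%.2f\n", v.x, v.y, v.alpha);
}
```

drawn as a strip of triangles, that gives the varying line weight of a real nib for free: thin where the stroke travels along the nib direction, wide where it cuts across it.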

i’ve been somewhat remiss in not posting up the finished mpc screen, but thanks to a little cover work there i had the chance to go round and take some photos of it out in the wild. so here is the one i’ll always remember most, as it was the first screen to be deployed.

it might not be the most exciting photo, but this desk has seen three weeks coding a big project for a soho production house. “would you like to make something akin to ‘front row’ to aggregate the content and services at this facility, to fit on all the 50” plasmas we’re about to get in the building?” “yes please.”

patching away… been waiting a long time as a visualist to finally have an immersive output, and here i was at 9am having had no sleep due to the slot coming forward from the evening and a non-cooperating field-of-view, camera placement, and clip plane.

a productive weekend: after yesterday’s dome-fu, spent today working out how to tether a camera to quartz composer and from that made my contribution to a friend’s wedding: a kind of digital signing-book, taking and displaying group photos atop a slideshow of the couple’s history.

voila. animating, and distorted for projection via the you-don’t-need-a-boutique-fisheye-setup-anymore technique that uses a standard projector and a spherical mirror bought from a security store. all hail paul bourke and the pbmesh patch.

finally making something of a long-talked-about spark project: immersive and interactive visuals in a dome. hoping to start with some basic ‘chuck a ball about in the crowd’ type stuff, and get to a fully playable asteroids or missile command.
first test: can i tile small sprites in 3d space as if they were placed on a sphere? answer here: only sort-of. they are on a sphere, but they’re certainly not in neat rows like they should be, or aligned to the surface.
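for the record, the maths i was aiming for: lay the sprites out in latitude bands, and align each one to the surface by treating the normalised position as its normal. hypothetical c++ of the target layout, not the sort-of-working patch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Sprite { float px, py, pz; float nx, ny, nz; };

// row = latitude band, col = step around that band; the unit normal is
// just position / radius, which is also the sprite's facing direction
Sprite spriteOnSphere(float radius, int rows, int row, int cols, int col) {
    float lat = float(M_PI) * (row + 0.5f) / rows - float(M_PI) / 2;
    float lon = 2 * float(M_PI) * col / cols;
    Sprite s;
    s.nx = std::cos(lat) * std::cos(lon);
    s.ny = std::sin(lat);
    s.nz = std::cos(lat) * std::sin(lon);
    s.px = radius * s.nx; s.py = radius * s.ny; s.pz = radius * s.nz;
    return s;
}

int main() {
    // shrink the column count towards the poles with cos(latitude),
    // so the tiles stay roughly evenly sized across each row
    int placed = 0;
    for (int row = 0; row < 10; ++row) {
        float lat = float(M_PI) * (row + 0.5f) / 10 - float(M_PI) / 2;
        int cols = std::max(1, int(std::round(20 * std::cos(lat))));
        for (int col = 0; col < cols; ++col) {
            spriteOnSphere(5.0f, 10, row, cols, col);
            ++placed;
        }
    }
    std::printf("%d sprites placed\n", placed);
}
```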

a kinetxt self-portrait, with the isight camera on me rather than our bullet cam on graffiti writers. more importantly, this is my contribution to the kinetxt event working: the interactive text-messaging display, ready to hand over to novak for tuesday’s installation / performance. huzzah!

being able to code custom plug-ins is really making quartz composer so much better: not just giving the ability to make different types of ‘teh pretty’, but letting qc’s patching world do what it’s best at – fiddling with views – and leaving the coordination and control aspects to a dedicated lump of code, like a brain sitting in the middle of the patch.
long story short, this kinetxt installation is seeming like a case study in the object-orientated / model-view-controller way.
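a toy c++ version of that brain-in-the-middle split, with hypothetical names: the controller owns the state, and the patching world only ever sees simple values to render:

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Message { std::string text; };          // the model

class Brain {                                  // the controller
    std::vector<Message> queue_;
public:
    void receive(const std::string& sms) { queue_.push_back({sms}); }
    // the only thing the QC 'view' layer ever sees: what to draw right now
    std::string currentText() const {
        return queue_.empty() ? "" : queue_.back().text;
    }
};

int main() {
    Brain brain;
    brain.receive("hello kinetxt");
    std::cout << brain.currentText() << "\n";  // -> rendered by the patch
}
```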

having worked through the hillegass cocoa book, it’s time to start putting that to good use. and project number one was always going to be one of the big glaring omissions in quartz composer to my mind: a means of animating a string on a per-character basis.
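the essence of per-character animation, as a hypothetical c++ sketch – the real thing is a cocoa plug-in feeding quartz composer, but the staggered-timing idea is the same:

```cpp
#include <cstdio>
#include <string>

// progress of one character's animation: eases in over 'duration' seconds,
// with each character starting 'stagger' seconds after the one before
float charProgress(float time, int index, float stagger, float duration) {
    float local = (time - index * stagger) / duration;
    if (local < 0) local = 0;
    if (local > 1) local = 1;
    return local * local * (3 - 2 * local);   // smoothstep ease
}

int main() {
    std::string text = "hello";
    float time = 0.4f;                        // seconds into the animation
    for (size_t i = 0; i < text.size(); ++i) {
        float p = charProgress(time, int(i), 0.1f, 0.5f);
        std::printf("'%c' alpha %.2f offset-y %.1f\n",
                    text[i], p, (1 - p) * 20.0f);
    }
}
```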

hello jasmin and her super-produced content commissioned for the american shows. the car (and i use the word loosely) driving in is something to behold: a “green” monster truck, complete with rocket-launcher-esque hydrogen tanks racked up on the back.

guess what we were up to. if only leopard and reactivision would play nicely together: the qc osc receiver doesn’t seem to be receiving what it should, and it’s beyond me to port the hacked-up tiger plug-in to an official-api leopard one. that is something that will hopefully change after christmas, when i finally embrace cocoa and the live cinema interface.

straight from a marina in italy to a biscuit factory in newcastle, home of the most proper name. they had organised three days of “mentoring”, aka professional development funded by the region, and i was the mentor. first up was quartz composer 101. so glad we waited for leopard to debut, what a change: it’s actually sane now!

this picture is but a snapshot of the revolution. it really feels like that. a real let-down of the geneva motor show pre-production was the inability to translate the creative agency’s after-effects rendered text animations into the live, dynamic setup. there just was no way to implement anything vaguely sophisticated without seeing the framerate drop to near zero. structure record – something driven by video sampling and seemingly tangential to text rendering – is the key to solving that problem… and so here it is solved, as if on cue for the frankfurt motor show.
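to make the connection concrete: record a sampled value every frame into a timestamped structure, and any number of characters can replay that same curve with their own time offsets – after-effects-style staggered animation, computed once per frame. a hypothetical c++ equivalent of the patch logic:

```cpp
#include <cstdio>
#include <vector>

struct Sample { float time, value; };

class StructureRecord {
    std::vector<Sample> samples_;
public:
    void record(float time, float value) { samples_.push_back({time, value}); }
    // look the value up again at an earlier time (linear search + lerp)
    float playback(float time) const {
        if (samples_.empty()) return 0;
        for (size_t i = 1; i < samples_.size(); ++i)
            if (samples_[i].time >= time) {
                const Sample &a = samples_[i - 1], &b = samples_[i];
                float t = (time - a.time) / (b.time - a.time);
                return a.value + t * (b.value - a.value);
            }
        return samples_.back().value;
    }
};

int main() {
    StructureRecord rec;
    for (int f = 0; f < 60; ++f)
        rec.record(f / 60.0f, f % 10 / 10.0f);   // stand-in video sample
    // character n replays the same curve, delayed by n * 0.05s
    for (int n = 0; n < 5; ++n)
        std::printf("char %d -> %.2f\n", n, rec.playback(0.5f - n * 0.05f));
}
```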

if the above picture means anything to you, go check the new kineme.net site. a shell script node - the universal hammer - and structure record - hello video sampler - are my two most wanted features. icing on the cake: open source.

as shown in the ‘pun me this’ entry, the *spark titler was used in nascent form at sheep music, and the promise to tidy-up and release as open-source software has been followed through. so, please find attached: sparktitler-v1.1.zip.

the latest visuals technology development to come off the *spark anvil is a mac-native titler application, made by wrapping a quartz composer patch with some fullscreen code and interface builder bindings. props to roger bolton of quartonian for the guts of the fullscreen xcode project, shared under gpl – so expect to see the titler soon, once it’s been tidied up.

here is ‘re-engaging reactive graphics’ from a post or two ago, in its final projected form (or at least, one frame of one of the permutations i delivered). it’s on tonight too, at 10.30, as part of the south bank’s overture event, the re-opening of the royal festival hall.

here is a prototype/demonstration of using your own image kernel in vdmx. rather than being an effect, this is an A/B mixer that means you can use vdmx in the ‘old skool’ way, by mixing together two video streams rather than rendering the whole stack of layers. it also has controls like a DJ scratch mixer, so as well as a crossfader, you’ve got a fader for each channel, and a fader curve control.
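the per-pixel maths the kernel boils down to, sketched here in c++ rather than the kernel language, with parameter names that are mine rather than vdmx’s:

```cpp
#include <algorithm>
#include <cstdio>

struct RGBA { float r, g, b, a; };

// two channel faders, a crossfader, and a curve control that sharpens
// the crossfade from a smooth blend towards a hard DJ-style cut
RGBA abMix(RGBA A, RGBA B, float faderA, float faderB,
           float crossfade, float curve) {
    // curve > 1 squashes the transition into the middle of the fader travel
    float t = std::clamp((crossfade - 0.5f) * curve + 0.5f, 0.0f, 1.0f);
    float wA = faderA * (1.0f - t);
    float wB = faderB * t;
    return { A.r * wA + B.r * wB, A.g * wA + B.g * wB,
             A.b * wA + B.b * wB, A.a * wA + B.a * wB };
}

int main() {
    RGBA a{1, 0, 0, 1}, b{0, 0, 1, 1};
    RGBA out = abMix(a, b, 1.0f, 1.0f, 0.75f, 2.0f);
    std::printf("r %.2f b %.2f\n", out.r, out.b);
}
```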

back in the early days of visuals in brum with the most talented stef lewandowski, computers could barely hack video, and so the dynamic stuff was a combination of flash and good old mixer twiddling. flash was cool for making loops: at a basic level, if you had a few layers, you made them different lengths and they would loop independently, pretty much always giving you a new combination with each frame… do that with a pre-rendered loop and it becomes clear pretty immediately that you’ve only got a second or two’s worth of frames. then you could bring text read on the fly into your graphics, and all sorts.

today i programmed my first image kernel, another step on the journey of making custom technology to fulfil what i want to do artistically. and this one was with real, low-level code: it’s basically writing a shader for the graphics card, eek! actually not so hard at all – the code was pretty easy, it was the process of working out the maths required that took time to wrap my head back around.
osx is developing so well for video: you can write your image kernel in the appropriate quartz composer node, load that into a layer in the vj app vdmx, and then find all your kernel inputs natively displayed in the vj interface. so now not only do i have the exact kind of mixing i want processing at maximum efficiency, i’ve simplified my vdmx setup no end, with just one panel holding all the controls instead of a mish-mash of filters and blend modes spread around the different layers.

for the nlab remix of people on sunday i made a generative titler, though it didn’t quite work out how i expected, as the final patch didn’t want to load in the vj app despite some earlier testing. so vjing for me was largely reduced to changing the section number directly in the qc patch with its output running fullscreen, seen in the preview above. it was cool though, seeing the titles drawing together from the bag of words i made from watching the clips: just as you don’t exactly know which way the vjing of the clips is going to go, you don’t know exactly what the titles are going to imply…
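the mechanism, boiled down to a hypothetical c++ toy – the real bag held words harvested from watching the clips; these are stand-ins:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <string>
#include <vector>

int main() {
    // stand-in word bag; the real one came from watching the clips
    std::vector<std::string> bag = { "sunday", "berlin", "faces",
                                     "waiting", "streets", "light" };
    std::mt19937 rng(std::random_device{}());
    std::shuffle(bag.begin(), bag.end(), rng);
    std::string title;
    for (int i = 0; i < 3; ++i) title += bag[i] + " ";   // three-word title
    std::printf("%s\n", title.c_str());
}
```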

i hereby resolve to make something seriously cool.
enough talking about the minority report interface, time to actually sketch things out and learn some serious stuff to make it happen. i reckon that given spk.av’s level of preproduction and the way some vj functions have been offloaded to ableton’s control, it’s at least possible – if not sane – to create a custom interface to partner ableton and extend ableton’s audio into a full audiovisual show.
i have this particular vision of what i’d like to perform live cinema with, but in there are some basic design ideas that in themselves would be cool and could be used to make a three-channel mixer controlled by a tablet… providing i can get a vj app to do the heavy lifting. the spk.mxr interface will effectively be the ultimate midi controller, patched onto a vj app.
it’s definitely time to keep the aspirations limited for the time being tho’ – six hours in quartz composer has told me i’m going to have to code this as a proper app using cocoa+quartz. it’s going to be a long journey…