A day exploring, a day presenting, and for me the time in between deep in code with an exacting challenge: it was fun, rewarding work, and everything we did has gone down well. I’m certainly happy with the app I made. Watch the demo above, and read more in the diary posts below.

Live performances involve complex interactions between a large number of co-present people. Performance has been defined in terms of these performer–audience dynamics (Fischer-Lichte 2014), but little is known about how they manifest. One reason for this is the empirical challenge of capturing the behaviour of performers and massed audiences. Video-based approaches typical of human interaction research elsewhere do not scale, and interest in audience response has led to diverse techniques of instrumentation being explored (e.g. physiological in Silva et al. 2013, continuous report in Stevens et al. 2014). Another reason is the difficulty of interpreting the resulting data. Again, inductive discovery of phenomena as successfully practised with video data (e.g. Bavelas 2016) becomes problematic when starting with numerical data sets – you cannot watch a spreadsheet, after all…

A spoken paper presented at the International Symposium on Performance Science, Reykjavík 2017. The talk is a good way to see what I got up to during my PhD… and hey, there’s no stats and lots of pretty pictures.

for history’s sake, and perhaps to help any hackers, attached is a zip of the arduino code i had before the leap to mbed was made, v07 to today’s v18. none of the interface goodness, but it has the fast serial communication technique i came to, along with keying etc.

KineTXT has spurred many custom plug-ins, generally either esoteric or usurped by kineme or the next major release of QC. the latest, however, probably deserves to see the wider light of day, and so here is a snapshot of it having just passed a notional ‘v1.0’. it’s two patches designed to capture and render handwriting and doodles from a tablet, but they should be pretty useful to anyone who wishes for some form of digital graffiti in their QC compositions.

if you want anti-aliasing, you’ll need to leave the QC app behind unfortunately, but if you can run without the patching editor window it’s just three lines of code to add to the qc player sample application and voila: this plugin and all 3D geometry become anti-aliased. vade worked it all out and outlines the territory here: http://abstrakt.vade.info/?p=186.
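for the curious, the change is roughly the following. a minimal sketch, assuming the player builds its pixel format from the usual attribute array; the sample count of 4 is my pick, and vade’s post remains the definitive reference:

```objc
// in the QCPlayer sample's OpenGL setup, extend the pixel format attributes
// with multisampling before the NSOpenGLPixelFormat is created
NSOpenGLPixelFormatAttribute attributes[] = {
    NSOpenGLPFAAccelerated,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFADepthSize, 24,
    // the additions that switch on full-scene anti-aliasing:
    NSOpenGLPFAMultisample,
    NSOpenGLPFASampleBuffers, 1,
    NSOpenGLPFASamples, 4,
    0
};
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
```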

if you want different nibs, pen-on-paper-like textures or suchlike… well i have my needs and ideas, but the source is there. share and share alike!

having worked through the hillegass cocoa book, it’s time to start putting that to good use. and project number one was always going to be one of the big glaring omissions in quartz composer to my mind: a means of animating a string on a per-character basis.

if you want to compete with after-effects, then you need to be able to produce the various type animations directors are used to, and you need to do so at a decent framerate. to animate, say, the entry of a character onto the screen, you would create the animation for one character and then iterate that operation along the string. the problem is, rendering each glyph inside the iterator is both massively expensive and massively redundant, but that’s the only approach qc allows, hacks on the back of hacks apart. a much better approach would be to have a single patch that takes a string and produces a data glob of rendered characters and their sizing and spacing information, firing off just once at the beginning and feeding the result to the animation iterator: at which point you’re just moving sprites around and the gpu barely notices.
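to make that ‘fire once’ idea concrete, here is a minimal cocoa sketch of the sizing-and-spacing half of the job. the function name and dictionary keys are mine for illustration, not the plug-in’s, and the per-glyph image rendering the real patch does is left out:

```objc
#import <Cocoa/Cocoa.h>

// hypothetical sketch: walk the string once, recording each character's size
// and running x offset, so the animation iterator only has to place sprites.
NSArray *GlyphMetricsForString(NSString *string, NSFont *font)
{
    NSMutableArray *metrics = [NSMutableArray array];
    NSDictionary *attributes = [NSDictionary dictionaryWithObject:font
                                                            forKey:NSFontAttributeName];
    CGFloat x = 0.0;
    for (NSUInteger i = 0; i < [string length]; i++) {
        NSString *character = [string substringWithRange:NSMakeRange(i, 1)];
        NSSize size = [character sizeWithAttributes:attributes];
        [metrics addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            character, @"character",
            [NSNumber numberWithDouble:x], @"x",
            [NSNumber numberWithDouble:size.width], @"width",
            [NSNumber numberWithDouble:size.height], @"height",
            nil]];
        x += size.width;   // advance for the next glyph
    }
    return metrics;
}
```

feed a structure like that to the iterator and each iteration is just a sprite placement, which is the cheap part.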

the patch is released under gplv3, and is attached below.

a massive shout to the kineme duo for leading the way with their custom plug-ins and general all-round heroic qualities. in particular their ‘structure tools’ patches were the enabler for those early text sequencing experiments.

as shown in the ‘pun me this’ entry, the *spark titler was used in nascent form at sheep music, and the promise to tidy up and release it as open-source software has been followed through. so, please find attached: sparktitler-v1.1.zip.

the titler’s interface allows you to take between two sets of title/subtitle, with the choice of four backgrounds: black, green, a quicktime movie or a folder of images. the output window will automatically go full-screen on the second monitor if it detects one is available at launch; otherwise it will remain a conventional resizable window.

it is released with the intention that it can be reused for other events without changing a single line of code: you can design the animation and incorporate quicktime movies in the design by editing the ‘GFX’ macro in the quartz composer patch, and replacing the logo in the interface is a matter of drag and drop.

for those who wish to dig deeper and improve the whole package, the source is released under GPL. the xcode project provides an adequate shell for the patch, implemented with just two cocoa classes and a nib file complete with bindings between the qc patch and the interface window. the classes are required to tell the quartz composer patch where to find the resource directory of the application’s bundle (necessary for any ‘image with movie’ nodes), and to subclass the output window so it is sent borderless to the second display if appropriate. features apart, there is certainly room for improvement: an ‘open file’ dialog instead of the raw text fields would be good, likewise solving the text field update issue.
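as a rough illustration of the window half of that, here is the shape of the thing. names are hypothetical and the shipped source is the reference:

```objc
#import <Cocoa/Cocoa.h>

// borderless windows refuse key status by default, hence the subclass
@interface OutputWindow : NSWindow
@end

@implementation OutputWindow
- (BOOL)canBecomeKeyWindow { return YES; }
@end

// at launch: if a second screen is present, fill it with a borderless window;
// otherwise fall back to a conventional resizable window
NSWindow *MakeOutputWindow(void)
{
    NSArray *screens = [NSScreen screens];
    if ([screens count] > 1) {
        NSScreen *second = [screens objectAtIndex:1];
        return [[OutputWindow alloc] initWithContentRect:[second frame]
                                                styleMask:NSBorderlessWindowMask
                                                  backing:NSBackingStoreBuffered
                                                    defer:NO];
    }
    return [[NSWindow alloc] initWithContentRect:NSMakeRect(0.0, 0.0, 640.0, 480.0)
                                       styleMask:(NSTitledWindowMask | NSClosableWindowMask | NSResizableWindowMask)
                                         backing:NSBackingStoreBuffered
                                           defer:NO];
}
```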

here is a prototype/demonstration of using your own image kernel in vdmx. rather than being an effect, this is an A/B mixer, which means you can use vdmx in the ‘old skool’ way, by mixing together two video streams rather than rendering the whole stack of layers. it also has controls like a DJ scratch mixer, so as well as a crossfader, you’ve got a fader for each channel and a fader curve control.

to use, make a layer or group for the A channel and another for the B channel, and a layer at the top of the stack for your output. trigger the qc patch in the output layer, and assign the A and B layers/groups to its video input drop downs.

if you open the qc patch, you’ll see the video inputs get resized to the output res, as image kernels don’t handle different-sized inputs too well, and then all the inputs are fed into an image kernel, i.e. a little filter written specially for the graphics card. in that there is some basic maths for applying a variable crossfade curve, and a line that adds the two inputs together. take a look, it’s not so hard; i have far more trouble with things like translating the crossfader curves into a mathematical expression than with the code itself.
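for reference, the mixing itself amounts to something like the following: a minimal sketch in the core image kernel language, with a pow() exponent standing in for the fader curve. the names and the exact curve maths are my illustration, not necessarily what ships in the patch:

```glsl
kernel vec4 abCrossfade(sampler channelA, sampler channelB, float fade, float curve)
{
    // shape the 0-to-1 fader position with a variable curve
    float t = pow(fade, curve);

    vec4 a = sample(channelA, samplerCoord(channelA));
    vec4 b = sample(channelB, samplerCoord(channelB));

    // the one line that actually mixes the two inputs together
    return a * (1.0 - t) + b * t;
}
```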

so take this as a starter for ten if you’re interested. attached below.