skynoise.net

Zoetroepe Software Review And Johnny DeKam Interview
Tue, 25 Sep 2018

Software plugins are often forecasters of future fatigue – as off-the-shelf solutions, the ease with which they can produce satisfying results can simultaneously dictate how quickly their techniques become widespread and our eyeballs immune to them. Zoetroepe Software‘s recent batch of plugins are attempts at ‘generative design instruments’ (for Final Cut Pro X, Adobe Premiere, Adobe After Effects & Apple Motion) which encourage and reward experimentation – an unsurprising approach, given they spring from the creator of VDMX (real-time video software), Johnny De Kam. (see interview below the reviews)

“My underlying design philosophy is akin to synthesizers in music, that is, visual instruments with oscillators and various visual ‘forms’ as source compositions. I tend to design these instruments to be open, allowing you the greatest flexibility to influence the output, a good dose of ‘meta’ design that can get you results quickly, and finally I strive to build systems with unique aleatoric progression, randomness and style capable of producing unexpected results… I take great joy in crafting shape-specific UV texture maps that preserve the best aspect ratio for maximum video impact.”

– Johnny De Kam

Overall, the Zoetroepe collection of plugins focuses on colour controls, pattern generation and geometric transformations – but it’s the way they’ve been built which distinguishes them from a lot of more directly functional plugins out there. Let’s start with the juiciest:

“An organic, generative design plugin … which lets you explore and iterate simple or complex curving and flowing forms in 3D space, using any video source in your timeline.”

This was one of my favourite Zoetroepe plugins to use – some pretty wild shape distortions are possible, and the automated mode is fun, letting some of the oscillators define and keep animating behaviours, while other parameters can be tuned to fit or juxtapose with them. An adjustable streaked motion blur option is a nice touch too, for muting video textures.

– lacks continuous rotation on some parameters, eg being able to keyframe multiple revolutions of 360 degrees, rather than just 0-360…

– could use some presets as interesting starting points

Capable of some organic and distinctive results, with plenty of surprises rewarding exploration.

“A perpetually folding generative design instrument, manifested as a video effect plugin. With FOLD, you can explore and iterate simple or complex geometric forms in 3D space, using any video source in your timeline.”

There’s a lot of fun to be had adding video textures to objects as easily as this. With the technical details resolved, it’s straight into compositing and animating movement, scale and rotation over time. Duplicate layers to create shadows, composite against colours, textures, backgrounds (muted? coloured? blurred?), scale large for abstraction – everything happens fast with OpenGL acceleration. Enabling the vertex distortion algorithm animates individual vertex points in the model, offering up even more contortions on your video.

– could use mirror (and other modes) for tiling…

GEODE TRANS

Using this same 3D model-based approach, playful transitions are possible by using the alpha channel to define the transparency between A and B video over time. Load one of thirteen models, then define / animate the light source position, and adjust softness to vary the transition from hard edged wipe to gentle fade. In the screenshot above, a geometric transition is revealing the orange cityscape against the wet window clip. The combinations possible through object rotation, changing the light source, and softening the object edges really help tune the dynamics of the transition to the clips.
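The softness control described above can be pictured as a smoothed threshold over a greyscale matte. Here's a minimal sketch of that general idea in Python (function names and structure are my own, not GEODE's actual implementation): as the transition progresses, a threshold sweeps across matte values, and softness widens the band where A and B mix instead of snapping.

```python
def smoothstep(edge0, edge1, x):
    """Hermite interpolation clamped to [0, 1]; a hard step when edges coincide."""
    if edge1 == edge0:
        return 0.0 if x < edge0 else 1.0
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def transition_pixel(a, b, matte, progress, softness):
    """Blend pixel values a and b (floats in 0-1) via a greyscale matte.

    matte: matte value for this pixel (0-1)
    progress: transition position over time (0 = all A, 1 = all B)
    softness: 0 gives a hard-edged wipe, larger values a gentle fade
    """
    # As progress sweeps 0 -> 1, the threshold sweeps across matte values;
    # softness widens the band over which the mix ramps instead of snapping.
    mix = smoothstep(progress - softness, progress + softness, matte)
    # mix = 1 where the matte is still above the moving threshold (show A)
    return a * mix + b * (1.0 - mix)
```

In a real plugin this runs per pixel on the GPU, with the matte coming from the rendered 3D model's alpha channel rather than a static image.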

“Our exclusive collection of 120+ patterns are tightly integrated within a trio of plugins designed for different parts of your workflow: PATTR BKG is a pattern background generator, PATTR TRANS is a transition engine, and PATTR MASK is a pattern mask composite effect. Within each plugin you can seamlessly scale, rotate, translate and animate each pattern”

These work nicely – the scale, rotation and movement options facilitate a surprising amount of variation with each seamless pattern, and all of the patterns are stored in greyscale, which enables an in-built tri-tone colour mixer to mix or hue-morph colours over time.

– being able to use user-defined textures to auto-generate the seamless patterns would be nice
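The greyscale-plus-tri-tone approach mentioned above is simple to picture in code. A hypothetical sketch (my own naming, not PATTR's internals): each pattern pixel's luminance picks a colour along a three-stop gradient, and animating the three stop colours gives the hue-morph over time.

```python
def tritone(luma, shadow, mid, highlight):
    """Map a greyscale value (0-1) to a colour via a three-stop gradient.

    shadow/mid/highlight are (r, g, b) tuples; luma 0 -> shadow,
    0.5 -> mid, 1 -> highlight, with linear blends in between.
    """
    if luma <= 0.5:
        t = luma / 0.5               # 0..1 across the shadow -> mid half
        lo, hi = shadow, mid
    else:
        t = (luma - 0.5) / 0.5       # 0..1 across the mid -> highlight half
        lo, hi = mid, highlight
    return tuple(a + (b - a) * t for a, b in zip(lo, hi))
```

Storing patterns in greyscale is what makes this work: one stored texture can yield any colour scheme, since the colouring is applied at render time.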

As a collection of keyframe-adjustable colour washes, leaks, gradients and vignettes, these offer surprising flexibility for easily generating sophisticated colour looks for existing footage. Offering more than just a set of vintage insta-filters, they’re highly customisable, and even the presets generate a range of uncommon and interesting colouring options.

Requirements:

Verdict:

The Zoetroepe plugins bring some of the fun of real-time visual software to the demands of the keyframable production timeline. Well worth a spin by anyone looking for timeline-convenient ways to explore and experiment with compositing, visual styles and effects.

Interview with Johnny De Kam:

What have you been doing with video since creating VDMX?

I decided to pass the VIDVOX torch to David Lublin when it became clear there were opportunities in large-scale project work: concert tours, installations, media festivals, etc. My first big tour was with Sasha & John Digweed, and my client list grew steadily over time. Touring as video director or visual content director is something I’ve continued to do until very recently. I’ve also kept very engaged with the art world (which is where my formal training began). Through it all, I’ve continued to create custom software, systems and experiments. I’ve been lucky in that I’ve managed to keep a very interdisciplinary practice over the years.

Was there a fulldome phase in there somewhere?

Indeed there was. My first project when transitioning away from VIDVOX, circa 2004, was to create a digital planetarium for a children’s museum. That’s where I met the Elumenati, who had just started marketing their patented fulldome projection lenses. Within a year I started working for them directly, seeing what kind of real-time software we could create to drive immersive fulldome experiences. It was fascinating work, years ahead of its time in terms of VR. We mounted several live dome projects as well as a few permanent installations. Alas, the work was very niche and difficult to sustain, and concert tours kept calling me. I subsequently left to develop a full show for synth-pop icon / renaissance man, Thomas Dolby.

What has interested you about the evolution of live video during that time?

It was a very new idea for concert tours to incorporate realtime visual content. So-called media servers were in their infancy. The electronic music scene was very hip to it all, but outside of this, only a handful of VJs had managed to break out into the larger concert production industry. It was a very interesting place to be. I had spent years of my life pioneering my tools with VIDVOX, and now I was able to see the fruits of my labor at work in front of massive audiences around the world. The pursuit then became more about the content itself, and advancing the ways in which we could create and manipulate it live – how it could seamlessly integrate with the lighting and set design. It has been fascinating to watch the technology continue to evolve.

I am so proud that VDMX continues to be such a powerful tool, used by so many people, on so many productions.

What led you into exploring abstract plugins for editing and post-production software?

With touring, working for Grammy-winning acts, television, the whole thing, I found I was traveling most of the year. Then my wife and I had our daughter, and it quickly became important for me to focus on my family and get off the road.

I struggled a bit to figure out how I could pivot my career. We moved to Boston and I started doing ‘traditional’ video editing and production for the many universities here, such as Harvard and MIT. It dawned on me that I had one true calling that I’ve always loved, and that was video software. FxFactory is based here in Boston, which – if you didn’t know – can use Quartz Composer under the hood to create plugins. I love QC, so it just made a lot of sense.

I saw a unique opportunity to explore some perennial creative ideas I’ve worked with, but in the plugin space. Most plugins out there are purpose-built for utility and saving time. I am more interested in the artistic and generative design possibilities. The downside perhaps is that I’ve curtailed my market by doing so, as they may seem a bit idiosyncratic and weird at first glance, but hopefully there are some folks who realize their true potential.

Which of your plugins do you enjoy using the most?

Each one has its own unique appeal. One of my favorite strategies is to use COLR as a source generator, and then apply FLOW or FOLD to play with shape. It is endlessly fun! I’m also proud of GEODE, my equivalent to ‘basic research’ – it is deceptively simple, but ultimately powerful when you explore its potential.

What opportunities exist for 3D software today?

I think there are a lot of interesting things one can do with 3D, but for years I stayed away from it because I found the mechanics so tedious. People who model and animate 3D really have a special temperament; it is not how I like to work, but I often find I must. I am most interested in live generation and manipulation of the 3D form… to bring the immediacy of raster-based instruments like VDMX to the 3D space. I would like to build visual instruments that use 3D vectors as their source.

You’ve mentioned the plugins work better in FCP X than in the Adobe suite – because they were built using Quartz Composer, which I’ll presume FCP X integrates with better. Is QC the reason behind the large number of plugins that are FCP X only?

It’s not because of QC – it is because Apple allowed Motion to be a plugin and template engine for FCPX. Anyone with mograph experience can get their feet wet making plugins, and even sell them if they like. FxFactory facilitates a lot of this. PixelFilmStudios is another player that works almost exclusively with Apple Motion as their dev tool.

As for Adobe CC, part of the magic of the FxFactory ecosystem is that they ‘wrap’ plugins built natively with Quartz into a package that Premiere and After Effects can use directly, which has allowed people like me to make products without needing to work in Xcode, and to address both Apple’s and Adobe’s ecosystems with the same source. I think my plugins run ‘better’ in FCPX simply because FCPX is faster than Adobe. Mind you, this is a macOS discussion we are having here.

What are your thoughts about the role and future of QC today – within the plugin developer community, and for coder-artists in general?

People have been saying for years that QC is going to die. Yet it hasn’t happened, nor has Apple ever indicated they plan to kill it. I see it a bit like QuickTime: it is so fundamental, and so many products rely on it, that Apple would be shooting itself in the foot to actively kill it. That said, what they are doing is building more tech around Metal and making it easier to code with Swift, while training the young ones in Swift from the start. It’s a powerful long-term strategy when you think about it. Eventually there won’t really be a need for Quartz (in their mind).

There will always be a place for node-based graphical programming, so even if Apple doesn’t release ‘a new Quartz Composer’, I am absolutely certain other tools will be filling the space (as they already are): Vuo, VVVV, FxCore, Max/Jitter, TouchDesigner, etc.

Do you have future Zoetroepe plugins in mind?

I’m working on something completely different from plugins right now. I mentioned earlier an interest in realtime 3D instruments… and this is where I’m heading – a return to application software rather than plugins. I hope it will have a broader appeal. I’m particularly interested in the design community, who I think have much to gain by exploring instruments rather than timelines and canvases. I also have a keen eye on blockchain tech, but I really can’t say anything else about that right now.

Videoclip for Time For Dreams
Wed, 11 Oct 2017

My videoclip for the fabulous and enchanting Melbourne duo, Time For Dreams.

Mitti Video Cue Software Review
Wed, 28 Jun 2017

Software Adventure-Time! The newest* video kid on the block = Mitti, a “modern, feature-packed but easy-to-use pro video cue playback solution for events, theatre, audiovisual shows, performances and exhibitions” – coming from the same stable that brought us Vezér (the timeline-based MIDI/OSC/DMX sequencer) and CoGe (the versatile VJ software). (By *newest, I mean it’s been around since late 2016, but since then, Mitti has enjoyed a steady rate of notable additions and updates.)

After all the work that can go into video material for an event, playback control can sometimes be left as an afterthought – it’s not unknown to see videos being played back from video editing software and ‘live-scrubbed’, or to watch users flipping between their desktop and PowerPoint / Keynote / QuickTime etc. VJ software of course brings flexibility and reliability to playback control – taking care of basics such as fading to black, looping a clip upon finish, cross-fading, or simply avoiding the desktop suddenly appearing on the main projected screen. But the flexibility of most VJ software is also one of its limitations – the strength of real-time effects and mixing tends to make interfaces more convoluted than they need to be for users seeking simple playback. When getting simple playback exactly right becomes important – for events, theatre, installations etc – this is where Mitti is aiming: the ballpark of apps like PlaybackPro, QLab or perhaps Millumin, where cues and critical timing are given more priority than visual effects.

So What’s Mitti Like?

Running at its most minimal, with a playlist of clips, Mitti appears deceptively simple – but it comes dense with custom controls at every level.

The Interface – a strength would seem to be the effort spent making a very clear and intuitive interface: Mitti comes across as clean and easy to navigate, with extended options available where they might be expected. (For further depth, go to the menu bar, choose Mitti, then Preferences, to see a very well organised array of options.)

Timing – exact timing control can be critical, and Mitti boasts low latency, a GPU playback engine, and can run from an SMPTE-based internal clock, or slave to external hardware or software sources. If you know what these Mitti capabilities mean, you’re possibly the intended audience: external MTC (MIDI Timecode), LTC (Linear Timecode), SMPTE offsetting, Jam-Sync.
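Underneath all those sync protocols, timecode is just frame counting. A small illustrative sketch (assuming non-drop-frame SMPTE, which keeps the arithmetic simple) of converting an HH:MM:SS:FF timecode to an absolute frame number and back:

```python
def smpte_to_frames(tc, fps):
    """Convert a non-drop-frame SMPTE timecode 'HH:MM:SS:FF' to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    if ff >= fps:
        raise ValueError("frame field must be below the frame rate")
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_smpte(frames, fps):
    """Inverse: absolute frame count back to 'HH:MM:SS:FF'."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

An SMPTE offset is then just addition in frame space – e.g. adding two seconds at 25 fps means adding 50 frames before converting back. (Drop-frame timecode at 29.97 fps skips frame numbers to stay in step with wall-clock time, and needs extra handling not shown here.)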

Cues – you can create and trigger cues for videos, images, cameras (with native Blackmagic support), Syphon and NDI sources – each with nuanced options – and easily add or adjust in/out points per clip. Cues can be set to loop, and the playlist can be paused between cues until you’re ready to start the next.

Nuanced control of fades and transitions – e.g. individual control per cue over fade-ins and fade-outs, and over 30 ISF-based video transition options.

Screenshots:

Below, showing the various cue preference options:

Launch page:

Video output options:

Project Preferences, which include detailed options for each item on the left:

Support:

This can be important when dealing with unusual projects or computer quirks… and Imimot boasts great support – “Our average first response time was only 3 hours 5 minutes in the past 7 days!” – as well as extensive FAQ / tips and support documentation:

Verdict:

Interview with Tamas Nagy, Creator of Mitti

Tamas, below in Imimot HQ, was nice enough to answer some questions about why he made Mitti….

With the ecosystem of video software that exists – what inspired you to add Mitti to it?

The idea for Mitti came from Vezér feature requests – the funny thing is that Vezér was itself born from a couple of CoGe feature requests. A lot of Vezér users were searching for an easy-to-use but remote-controllable video playback solution which plays nicely with Vezér, or even requested video playback functionality in Vezér itself. Adding video playback functions to Vezér did not seem reasonable to me – the app was not designed that way, and I wanted to leave it as a “signal processor”/show control tool instead of redesigning the whole app. After doing some research, I found there was no app on the market at the time that I could offer to Vezér users which was easy to set up and use, lightweight, and controllable by the various protocols that Vezér supports. So I started to create one. The original plan was to make something really basic, but once I started to talk about the project with my pals and acquaintances, I realised there is a need on the market for an easy-to-use app with pro features – and by pro features I mean timecode sync, multi-output handling, and capture card support.

What are some contexts you imagine Mitti being used in?

Mitti is targeting the presentation, theatre, broadcast and exhibition markets – usually wherever reliable cue-based media playback is needed – and this is where Mitti’s current user base comes from: event production companies, theatres, visual techs of touring artists, composers working with DAWs, etc.

What interests you about NDI?

I believe NDI is the next big thing after Syphon. Now you can share frames between computers – even ones running different operating systems – with minimal or no latency, using cheap network gear or already-existing network infrastructure. And there is even hardware coming with native NDI support!

Another big thing is OSC Query. This is not strictly Mitti- and Vezér-related. OSC Query is a protocol – still in draft – proposed by mrRay from Vidvox for discovering an OSC-enabled app’s OSC address space. As far as I know, only Mitti and Vezér support this protocol on the market, but hopefully others will join pretty soon, since this is going to be a game changer in my opinion.

Why is Mitti priced higher than CoGe?

This is a rather complex topic, but basically Mitti has been designed for a fairly different market than CoGe. Also, CoGe is highly underpriced in my opinion – well, pricing things is far more complex than I imagined when CoGe hit prime time – but that is a whole different topic.

Hospital Basement Video Dreams
Thu, 02 Feb 2017

I spent a few nights in a hospital basement last year, projecting video and controlling lights for The General Assembly – onto a room filled with paper strips – while audiences roamed between rooms for mini-sets. It was part of Melbourne Music Week and super fun – the video below shows it up nicely.
Will be doing projections for TGA again this Saturday at The Toff In Town:

Lumen Software Review
Wed, 30 Nov 2016

Melbourne, as the most Nathan Barley of Australian cities, so easily lampooned for its population of bushranger-bearded baristas with half-baked app ideas, makes a strong argument for being Australia’s Portland. Perfectly placed, then, for reviewing Lumen – new real-time visual software coded by Jason Grlicky in downtown Portland, which adds some contemporary twists to the quirky history of video synthesis.

What is Lumen?

A Mac app (needing OS X 10.8 or later) for ‘creating engaging visuals in real-time’… with a ‘semi-modular design that is both playable and deep… the perfect way to get into video synthesis.’ In other words, it’s a software-based video synthesiser, with all the noodling, head-scratching experiments and moments of delightful serendipity this implies. A visual synthesiser that can build up images from scratch, then rhythmically modify and refine them over time. It has been thoughtfully put together though, so despite the range of possibilities it’s also very quickly ‘playable’ – while always suggesting there’s plennnnttttyyyy of room to explore.

The Lumen Interface

While the underlying principles of hardware video synthesisers are being milked here to good effect, a lot of Lumen’s merits lie in the way these principles have been made easily accessible through well-considered interface design. The interface is divided into three sections – a preset browser (which also features a lovely X/Y pad for interpolating between presets), a knob panel, and a patch panel. It’s a very skeuomorphic design, but it also cleverly takes the software to places hardware couldn’t go (more on that later).

What should be evident in those screengrabs is that experimentation is easy – and there’s a lot of depth to explore. The extensive reference material helps a lot with the latter. And as you can see, they can’t help but organise it beautifully on their site:

Lumen Presets

Lumen comes pre-loaded with 150+ presets, so it’s immediately satisfying upon launch to jump between patches and see what kind of scope and visual flavours are possible.

… and it’s easy to copy and remix presets, or export and swap them – eg on the Lumen slack channel.

MIDI, OSC + Audioreactivity

Although all three are planned, only MIDI exists in Lumen so far – but it’s beautifully integrated. With a MIDI controller (or a phone/tablet app sending OSC to a MIDI-translating app on your computer), Lumen really comes into its own, and the real-time responsiveness can be admired. Once parameters are connected via MIDI, they can effectively be made audioreactive by sending signals from audioreactively controlled parameters in other software. Native integration will be nice when it arrives though.
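Most MIDI mapping of this kind boils down to scaling 7-bit controller values into each parameter's range. A hypothetical sketch of that scaling (my own function, not Lumen's actual API):

```python
def cc_to_param(cc_value, lo, hi, curve=1.0):
    """Scale a 7-bit MIDI CC value (0-127) into a parameter range [lo, hi].

    curve > 1.0 biases resolution toward the low end of the range,
    which suits perceptually nonlinear parameters like rate or frequency.
    """
    t = max(0, min(127, cc_value)) / 127.0   # clamp, then normalise to 0-1
    return lo + (hi - lo) * (t ** curve)
```

An audio-reactive chain works the same way – it just replaces the knob's CC value with one generated from an envelope follower in another app, which is effectively the workaround described above.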

Video Feedback, Software Style

Decent Syphon integration opens up a whole range of possibilities… Lumen’s output can easily be piped into software like VDMX or CoGe for use as a graphic source or texture, or into mapping software like MadMapper. At the moment there are some limitations with aspect ratios and output sizes, but that’s apparently being resolved in a near-future update.

With the ability to import video via Syphon, Lumen can reasonably be considered an external visual effects unit. Lumen can also take in camera feeds for processing, but it’s the ability to take in a custom video feed that makes it versatile – eg video clips created for certain visual ideas, or the output of a composition in a mapping program.

The screengrab below shows the signal going into Lumen from VDMX, and also out of Lumen back into VDMX. Obviously, at some point this inevitably means feedback, and all the associated fun/horror.

Requirements:

macOS 10.8 or newer (Each license activates two computers)

$129US

VERDICT

There’s an army of lovers of abstracted visuals that are going to auto-love Lumen, but it has scope too for others looking for interesting ways to add visual textures, and play with real-time visual effects on video feeds. It could feasibly have an interesting place in a non-real-time video production pipeline too. Hopefully in a few years, we’ll be awash in a variety of real-time visual synthesis apps, but for now Lumen is a delightfully designed addition to the real-time video ecosystem.

Interview with Lumen creator, Jason Grlicky

– What inspired you to develop Lumen?

I’ve always loved synthesizers, but for most of my life that was limited to audio synths. As soon as I’d heard about video synthesis, I knew I had to try it for myself! The concept of performing with a true video instrument – one that encourages real-time improvisation and exploration – really appeals to me.

Unfortunately, video synths can be really expensive, so I couldn’t get my hands on one. Despite not being able to dive in (or probably because of it), my mind wouldn’t let it go. After a couple of failed prototypes, one morning I woke up with a technical idea for how I could emulate the analog video synthesis process in software. At that point, I knew that my path was set…

– When replicating analogue processes within software – what have been some limitations / happy surprises?

There have been so many happy accidents along the way. Each week during Lumen’s development, I discovered new techniques that I didn’t think would be possible with the instrument. There are several presets that I included which involve a slit-scan effect that only works because of the specific way I implemented feedback, for instance! My jaw dropped when I accidentally stumbled on that. I can’t wait to see what people discover next.

My favorite part about the process is that the laws of physics are just suggestions. Software gives me the freedom to deviate from the hardware way of doing things in order to make it as easy as possible for users. The way that Lumen handles oscillator sync is a great example of this.

– Can you describe a bit more about that freedom to deviate from hardware – in how Lumen handles oscillator sync?

In a traditional video synth oscillator, you’ll see the option to sync either to the line rate or to the vertical refresh rate, which allows you to create vertical or horizontal non-moving lines. When making Lumen, I wanted to keep the feeling of control as smooth as possible, so I made oscillator sync a knob instead of a switch. As you turn it clockwise, the scrolling lines created by the oscillator slow down, then stop, then rotate to create static vertical lines. It’s a little thing, but ultimately allows for more versatile output and more seamless live performance than has ever been possible using hardware video synths.
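The sync-as-a-knob idea he describes can be modelled in a few lines. A toy sketch (my own simplification, not Lumen's code): the knob continuously scales the scrolling speed of the oscillator's phase, so at one extreme the lines scroll at full rate and at the other they freeze into static lines – no hard switch anywhere.

```python
import math

def oscillator(x, t, spatial_freq, scroll_hz, sync):
    """Toy model of oscillator sync as a continuous knob.

    sync = 0: lines scroll at the full scroll_hz rate;
    sync = 1: scrolling stops, leaving static lines –
    a smooth sweep instead of a line-rate/frame-rate switch.
    """
    effective_scroll = scroll_hz * (1.0 - sync)   # the knob, not a switch
    phase = spatial_freq * x + effective_scroll * t
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * phase)
```

Evaluated across x for successive frames t, intermediate sync values give lines that drift ever more slowly toward a standstill, which is the smooth feel of control described above.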

– Were there any other hardware limitations whose absence in software you were eager to exploit?

At every turn I was looking for ways to push beyond what hardware allows without losing the spirit of the workflow. The built-in patch browser is probably the number-one example. Being able to instantly recall any synth settings allows you to experiment faster than with a hardware synth, and having a preset library makes it easier to use advanced patching techniques.

The Snapshots XY- Pad, Undo & Redo, and the Transform/K-Scope effects are all other examples of where we took Lumen beyond what hardware can do today. Honestly, I think we’re just scratching the surface with what a software video instrument can be.

– How has Syphon influenced software development for you?

I had an epiphany a couple years back where I took a much more holistic view of audio equipment. After using modular synths for long enough, I realized that on a certain level, the separation between individual pieces of studio equipment is totally artificial. Each different sound source, running through effects, processed in the mixer – all of that is just part of a larger system that works together to create a part of a song. This thinking led me to create my first app, Polymer, which is all about combining multiple synths in order to play them as a single instrument.

For me, Syphon and Spout represent the exact same modular philosophy – the freedom to blend the lines between individual video tools and to treat them as part of a larger system. Being able to tap into that larger system allowed me to create a really focused video instrument instead of having to make it do everything under the sun. Thanks to technologies like Syphon, the future of video tools is a very bright place!

– What are some fun Lumen + Syphon workflows you enjoy – or enjoy seeing users play with?

My favorite workflow involves setting up Syphon feedback loops. You just send Lumen’s output to another VJ app like CoGe or VDMX, put some effects on it, then use that app’s output as a camera input in Lumen. It makes for some really unpredictable and delightful results, and that’s just from the simplest possible feedback loop!

– What are some things you’re excited about on the Lumen roadmap ahead?

We have so many plans for things to add and refine. I’m particularly excited about improving the ways that Lumen connects with the outside world – be that via new video input types, control protocols, or interactions with other programs. We’re working on adding audio-reactivity right now, which is going to be really fun when it ships. Just based on what we’ve seen in development so far, I expect it to add a whole new dimension to Lumen while keeping the workflow intuitive. It’s a difficult balance to strike, but that’s our mission – never to lose sight of the immediacy of control while adding new features.

I recently animated some vintage botanical illustrations for an interactive exhibition installation at The Royal Botanic Gardens, Sydney. It was fun to collaborate with Robert Jarvis (zeal.co), who programmed the interactivity (incorporating children’s webcam photos into the various creature and plant-life storylines), as well as with D.A. Calf (dacalf.com), who brought the world to life so well. And a special shout-out to Luke Dearnley and Sophie Daniel, who produced it.

“Bill Etra, an artist and inventor who, with a partner, created a video animation system in the early 1970s that helped make videotape a more protean and accessible medium for many avant-garde artists, died on Aug. 26 near his home in the Bronx. He was 69.

The cause was heart failure, said his wife, Rozalyn Rouse Etra. Mr. Etra had spinal stenosis for many years and was mostly bedridden when he died.

Mr. Etra and Steve Rutt created the Rutt/Etra video synthesizer, an analog device studded with knobs and dials that let a user mold video footage in real time and helped make video a more expressive art form. Among the artists who used it were Nam June Paik, regarded by many as the father of video art, and Woody and Steina Vasulka, who founded the Kitchen performance space in downtown Manhattan in 1971.”

“The dream was to create a compositional tool that would allow you to prepare visuals like a composer composes music,” Mr. Etra wrote. “I called it then and I call it now the ‘visual piano,’ because with the piano the composer can compose an entire symphony and be sure of what it will sound like. It was my belief then, and it is my belief now after 40 years of working towards this, that this will bring about a great change and great upwelling of creative work once it is accomplished.”

“Developed in 1972, the RUTT/ETRA Video Synthesizer was one of the first commercially available computerized video animation systems. It employed proprietary analog computer technology to perform real time three dimensional processing of the video image. In the first use of computer animation in a major Hollywood picture, Steve Rutt, working directly with Sidney Lumet, used the Rutt/Etra to create the animated graphic for the film’s “UBS” Television Network.”

Software-based tributes to the Rutt/Etra Synthesizer:

Rutt-Etra-Izer is a WebGL emulation of the classic Rutt-Etra video synthesizer by Felix Turner, which ‘replicates the Z-displacement, scanned-line look of the original, but does not attempt to replicate its full feature set’. The demo allows you to drag and drop your own images, manipulate them and save the output. Images are generated by scanning the pixels of the input image from top to bottom, with scan-lines separated by the ‘Line Separation’ amount. For each line generated, the z-position of the vertices depends on the brightness of the pixels.
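That brightness-to-depth mapping is simple enough to sketch. Below is a minimal Python version of the general technique (not Felix Turner’s actual code – the function and parameter names here are illustrative), assuming a grayscale image supplied as rows of 0–255 pixel values:

```python
def rutt_etra_lines(pixels, line_separation=4, z_depth=100.0):
    """Convert a grayscale image (rows of 0-255 values) into scan lines
    of (x, y, z) vertices, where z is driven by pixel brightness --
    the signature Rutt/Etra z-displacement look."""
    lines = []
    for y in range(0, len(pixels), line_separation):  # skip rows by separation
        row = pixels[y]
        # brighter pixel -> vertex pushed further along z
        line = [(x, y, (v / 255.0) * z_depth) for x, v in enumerate(row)]
        lines.append(line)
    return lines

# A tiny 4x2 "image": the brightest pixel displaces its vertex furthest in z.
img = [[0, 128, 255, 0],
       [255, 0, 0, 128]]
lines = rutt_etra_lines(img, line_separation=1, z_depth=100.0)
```

Feeding each resulting line to a renderer as a polyline (as the WebGL demo does with three.js geometry) produces the scanned-line relief effect.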

Tour Visuals for Hermitude
http://www.skynoise.net/2016/03/18/tour-visuals-for-hermitude/ (Fri, 18 Mar 2016)

Hermitude Concert Visuals from jeanpoole on Vimeo.
Am glad to finally upload that edit-medley – because creating a set of concert visuals for Hermitude was one of my favourite projects last year, seeing it through from drawing board and sketch paper to the stage screen. Hermitude had approached me (having worked together on Dr.Seuss Meets Elefant Traks at Sydney’s Graphic Festival in 2012) about developing video for their tour promoting Dark Night, Sweet Light – and wanted a visual set that suited their music, would work well within a hectic stage lighting environment, and was diverse but felt like a coherent, consistent show.
To suit Hermitude’s fun and festive sound and their dynamic live performances – I developed an overall visual style palette to enhance that, and mapped out a visual choreography for the show. And though I was excited about making some Hermitude clips of my own, it was also an exciting opportunity to collaborate with some talented animators, coders and cinematographers. It was fantastic to be able to work with these artists to craft the Hermitude set:

Neil Sanders – a Melbourne hand-drawn illustrator and animator extraordinaire, famous for his signature organic tumblr loops…
Ori Toor – another hand-drawn abstraction loop specialist, beaming pixels to us from the Middle East.
Colin E. White – moodily stylised New York animator.
Brad Hammond – a Melbourne 3D Unity animation ninja + coder. (And shout-out to Keijiro Takahashi from Japan, for his ongoing publishing of Unity software addons…)
Stu Gibson – a Tasmanian surf + aerial cinematographer, who was very generous with his wild coastline footage (which I used to make the Bermuda Bay clip below)

It was also a pleasure to develop this visual set over time, because Luke ‘Dubs’ + Angus ‘El Gusto’ (aka Hermitude) are so down to earth and friendly despite their relentless touring and acclaim – as are the whole Elefant Traks crew, especially their tireless manager (and collaborator) Urthboy, and their wizardly tour manager Luke Dearnley (Sub Bass Snarl), who designed a clever + efficient video rig featuring live cams, for routing and controlling their stage video feeds.

The Act of Killing + The Look of Silence
http://www.skynoise.net/2015/08/12/the-act-of-killing-the-look-of-silence/ (Wed, 12 Aug 2015)

I was lucky enough recently to catch a film-talk panel between director Joshua Oppenheimer and John Safran at the Melbourne International Film Festival. Having just seen The Look of Silence earlier that day, and already in awe of the brave and audacious film-making of its earlier companion film (The Act of Killing), it was humbling and a privilege to hear about some of what went into the making of the film – and what some of its impacts have been since.

Given that Indonesia has not officially or publicly discussed the mass killings that happened in 1965-66 (supposedly to get rid of a communist threat) – and that many of the perpetrators are entrenched in power today – it’s quite remarkable that these two films got made, prompted national discussions, and that the second film was given official recognition:

“On November 10, 2014, 2,000 people came to the official and public premiere of the film in Jakarta, and on December 10, 2014 – International Human Rights Day – there were 480 public screenings of the film across Indonesia. The screenings of the film in Indonesia has been sponsored by the National Human Rights Commission of Indonesia and the Jakarta Arts Council.” (via Wikipedia)

Incredibly, after the first film – which featured the ‘surreal / defensive(?)’ boasting of one of the mass-killers – an Indonesian journalist saw the film and persuaded their magazine to send investigative journalists to document similar perpetrators in 60 different locations across Indonesia. The magazine then published all of these in one go, alongside an in-depth reaction to Oppenheimer’s film – which broke the silence, and allowed Indonesian media to move past the taboo of discussing these events.

Regardless of your awareness of this Indonesian mass killing, these are powerful films on many levels – well worth hunting down.

In the present day, the film focuses on the perpetrators of the Indonesian killings of 1965–66 – killings ostensibly directed at the communist community, in which almost a million people were killed.

Invited by Oppenheimer, Anwar recounts his experiences of killing for the cameras, and makes scenes depicting their memories and feelings about the killings. The scenes are produced in the style of their favorite films: gangster, western, and musical.

The name “Anonymous” appears 49 times under 27 different crew positions in the credits. These crew members still fear revenge from the death-squad killers.

When the government of Indonesia was overthrown by the military in 1965, Anwar and his friends were promoted from small-time gangsters who sold movie theatre tickets on the black market to death squad leaders. They helped the army kill more than one million alleged communists, ethnic Chinese, and intellectuals in less than a year. As the executioner for the most notorious death squad in his city, Anwar himself killed hundreds of people with his own hands. Today, Anwar is revered as a founding father of a right-wing paramilitary organization that grew out of the death squads. The organization is so powerful that its leaders include government ministers, and they are happy to boast about everything from corruption and election rigging to acts of genocide.

The Act of Killing is about killers who have won, and the sort of society they have built.

In The Act of Killing, Anwar and his friends agree to tell us the story of the killings. But their idea of being in a movie is not to provide testimony for a documentary: they want to star in the kind of films they most love from their days scalping tickets at the cinemas. We seize this opportunity to expose how a regime that was founded on crimes against humanity, yet has never been held accountable, would project itself into history.

And so we challenge Anwar and his friends to develop fiction scenes about their experience of the killings, adapted to their favorite film genres – gangster, western, musical. They write the scripts. They play themselves. And they play their victims.

“Through Oppenheimer’s footage of perpetrators of the 1965 Indonesian genocide, a family of survivors discovers how their son was murdered, as well as the identities of the killers. The documentary focuses on the youngest son, an optometrist named Adi, who decides to break the suffocating spell of submission and terror by doing something unimaginable in a society where the murderers remain in power: he confronts the men who killed his brother and, while testing their eyesight, asks them to accept responsibility for their actions. This unprecedented film initiates and bears witness to the collapse of fifty years of silence.”

So Mexico morphed into MOFO… and now it’s late January 2015. Anyways. Here is some documentation for what happened in Hobart, the Mexican samples will have to wait a little longer.

I was in Hobart to do triple-screen video projections at ‘Faux Mo‘, the afterparty venue each night for the MOFO Festival, connected to the MONA gallery in Hobart, Tasmania. It tends to be eclectic – here’s the program.

I’m going to Mexico!!

Am super excited – it’ll be my first time in any of the Americas. From Nov 26 – Dec 22 I’ll be wandering through Mexico City, Oaxaca, Tijuana, as well as Cuernavaca, Metepec and a few other places in between.

Chancha Via Circuito – a favourite listen in recent years – has a new album out: Amansara (Wonderwheel Recordings). I first discovered his enchanting atmospheres and mixing on his wonderful ZZK Records mixtape (promoting his previous album Rio Arriba). His music seems to thrive best in mixtapes (see also Mixtape Cumbiero European Tour 2013 and a mixtape at Testpressing for the new album), reminding me at times of early Future Sound of London and their wandering from soundscape to rhythm and back again. There’s a warmth to this music, and despite a slower tempo, there’s a momentum to it all as well. Recommendo!

Oh and a special shout-out too, for Paula Duro, who makes the enchanting artwork for Chancha (and featured in the backlayer of the collage above), as well as much of her own cool stuff. Check out her playful cosmic palette at flickr.

So I’ve been reading a lot lately. And swimming in words returns me to writing. Or at least – some words about books.

The infinite shelves at Goodreads are responsible for the bulk of the book orders above (want to swap recommendations?). I don’t know what took me so long to finally join Goodreads – I’d long found it tricky to get interesting book recommendations (particularly for good fiction, compared to say music or movies). Amazon has a decent catalogue, but I’ve found it unreliable for recommending new fiction of interest. And while I prefer the hand-curation of, say, the Brainpickings bookshelf, the McSweeney’s journal or DJ Rupture’s Mudd Up Book Club (which includes a pretty great collection of sci-fi set in non-anglo cities), each of those is a pretty limited lens.

Anyways, I seem to have my reading for the next while sorted, which is also going to mean some more words here over time.

And if you’re not already aware of the second-hand booksellers below, this is where I found the bulk of the above:

Book Depository (my first choice – best range, generally cheapest overall to send to Australia)

BetterWorldBooks (good range – but seems deliberately deceptive in the way it offers ‘free postage’: it shows cheap prices in search results, but every single time, clicking through to a book reveals a much higher price when you go to buy it.)

As real-time video software continues to evolve, we’re starting to see some really thoughtfully considered applications – such as Millumin, by Philippe Chaurand, software dedicated to “create and perform audiovisual shows”. In part, Millumin is possible because of today’s easy re-routing of video between applications (thanks to software such as Syphon on Mac and Spout on PC), which has enabled some developers to focus on specialty areas, and allowed others to provide ways of usefully integrating different parts of a video workflow.

Where does Millumin fit in?

While there are a lot of real-time video tools and specialities available, Millumin’s great strength is as over-arching software, providing useful ways to co-ordinate and control other software (eg triggering and manipulating clips inside VJ software, then recompositing, mapping and sequencing that video with Millumin, easily jumping between very complex compositions).

Millumin will especially be of interest to those seeking to sync media in tightly curated shows – eg syncing video with important theatrical cues, conference cues, or a specific sequence of events in a music show. Aside from the time-based controls, it’s also a pretty effective piece of mapping software, with a built-in capacity to edge-blend between projectors.

In other words, Millumin provides good control over time (sequencing) and space (compositing and mapping). It’s a unique recipe – while there are other apps that offer more advanced portions of what Millumin does – eg Vezer‘s sequencing and timeline options, or Madmapper‘s mapping controls – there’s nothing else that quite manages to do what Millumin does. QLab is probably its closest competitor, with the strengths and weaknesses of each meaning one or the other will suit your workflow better.

Millumin’s Workflow?

From their site guide:
1. Drag-and-drop files from the Finder to the Dashboard, and click on the cells to play them
2. Use the Workspace toolbar to move, map, warp, mask… and to rotate and scale the layers directly in the workspace
3. Change blend mode, add effects, transitions and more from the Properties Panel
4. In the Library, manage your files, Syphon servers and inputs
5. Create a Composition, then organize your media in time with keyframes
– includes the ability to play compositions within compositions (like nested compositions in After Effects)
– also like AE, includes adjustable keyframes: change opacity, position, scale or rotation over time to specific values
– cue points can be added
– pause on cue points
6. Import this Composition into the Dashboard + switch between complex compositions easily
7. The magic key is [SHIFT]: hold it to multi-select and snap items

Features?

Control of time / sequencing: Millumin’s key-framable timelines will be warmly familiar to everyone who has used video editing software, and who tends to find such functions missing from VJ software. Most VJ software will show a timeline / playhead for each clip – but much rarer is the capacity to place many clips along a timeline, easily add cue points, and make easy linear arrangements. Example nice touch? Drag and drop a clip onto a timeline, then drag its end to auto-loop it for as long as you need.

Room for improvement? There are lots of little user interface quirks that could be removed or better designed. Admittedly this is partly because Millumin is so reminiscent of video editing and compositing software – which brings a whole bunch of fine-tuned expectations, and sets an unfair benchmark; relatively new software made by one person could hardly be expected to match the resources and foundations of established editing and compositing software.

Control of space / compositing:

Video compositors will find it a pleasure to be able to create complex compositions, and nest and even animate these comps within other comps. In this respect Millumin is the closest thing to a real-time After Effects that exists. Sequencing and switching between various comps is trivial to implement…

.. and these ‘presets’ / ‘dashboard selections’ can be triggered from other software using MIDI or OSC – eg the M1–M10 presets built into a VDMX control surface window below.
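For the curious, an OSC trigger like that is easy to assemble by hand. A minimal stdlib-Python sketch that encodes an OSC 1.0 message with one integer argument and fires it over UDP – note that the `/preset` address and port here are placeholders for illustration, not Millumin’s actual OSC namespace (check the app’s documentation for the real addresses):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-pad an OSC string to the next 4-byte boundary (OSC 1.0 spec)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode an OSC message carrying a single int32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",i")               # type tag string: one int32
            + struct.pack(">i", value))    # big-endian int32 payload

def send_preset(n: int, host: str = "127.0.0.1", port: int = 5000) -> None:
    """Fire-and-forget UDP send of the (placeholder) preset trigger."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/preset", n), (host, port))
    sock.close()

msg = osc_message("/preset", 3)
```

The same encoding works for any OSC-controllable software (VDMX, QLab, etc.) – only the address and argument types change.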

Millumin can also take in as many syphon inputs as can be thrown at it – which integrates it well with VDMX’s capacity to send out many. All of these can be composited differently in Millumin’s compositions, allowing for a huge amount of flexibility and convenience. (Snap below includes sequined ninja in oyster cave footage used at recent Dark Faux Mo festival in Hobart.)

Control of space / mapping:

Millumin features great controls for multiple outputs, and features multi-screen edge blending and feathering of masks:
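As a general technique (this is how edge blending works in principle, not Millumin’s specific implementation), the projectors overlap, and each one fades across the shared region so the combined light stays constant; because projectors apply a gamma curve, the drive values are the linear-light weights raised to 1/gamma. A quick sketch of that maths, with illustrative names:

```python
def blend_ramps(overlap_px: int, gamma: float = 2.2):
    """Drive values across a shared overlap region for the left and right
    projector. In linear light the two weights always sum to 1, so the
    blended seam matches the brightness of the un-overlapped image."""
    left, right = [], []
    for i in range(overlap_px):
        a = 1.0 - i / (overlap_px - 1)       # linear-light weight: 1 -> 0
        left.append(a ** (1.0 / gamma))      # pre-compensate projector gamma
        right.append((1.0 - a) ** (1.0 / gamma))
    return left, right

left, right = blend_ramps(64)
```

Feathered masks use the same falloff idea, applied to a mask edge rather than a projector seam.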

Room for improvement? Being able to work better with multiple projectors that have different aspect ratios to each other.

Visual Effects?

As a standalone application, Millumin has a limited range of visual effects. On the other hand, deep Syphon integration means easy piping-in of video from other software for sequencing or compositing, and Quartz Composer integration means being able to easily add customised QC elements, effects and compositions to any of that piped-in video.

Requirements:
– Mac OS X 10.6 or later. (PC version in the pipeline)

– €599 (VAT not included) for a license covering Millumin on 2 different computers. Educational and rental pricing available by negotiation.

Verdict:
Millumin is very thoughtfully crafted software, with a nicely expanding feature set. And while it lacks refinement or more detailed control in a few places, it continues to develop and evolve into a fantastic and versatile tool for live video, especially with multi-screen compositing.

– Lotech (NZ), for inspirational use of Millumin @ Splore (each VJ could send a signal into the machine running Millumin, which effectively let them play on a pre-mapped structure), and for ongoing feedback about Millumin over time.

– Jem the Misfit (NZ/Aus/Ger) – for highlighting how creatively Millumin could be used for compositing.

LOGLINE: The story of legendary cult film director Alejandro Jodorowsky’s staggeringly ambitious but ultimately doomed film adaptation of the seminal science fiction novel DUNE.

SYNOPSIS: In 1974, Chilean director Alejandro Jodorowsky, whose films EL TOPO and THE HOLY MOUNTAIN launched and ultimately defined the midnight movie phenomenon, began work on his most ambitious project yet. Starring his own 12 year old son Brontis alongside Orson Welles, Mick Jagger, David Carradine and Salvador Dali, featuring music by Pink Floyd and art by some of the most provocative talents of the era, including H.R. Giger and Jean ‘Mœbius’ Giraud, Jodorowsky’s adaptation of Frank Herbert’s classic sci-fi novel DUNE was poised to change cinema forever.

“For me, Dune will be the coming of a god. I wanted to make something sacred, free, with new perspective. Open the mind!”– Alejandro Jodorowsky

Finally got to see Jodorowsky’s Dune recently (with long-time fan, and comic-book genius, Gregory Mackay, who is great at detailing the sordid history of sci-fi illustration). It’s an incredible story of an incredibly audacious film…. and although that feature never got completed, the documentary shows how a lot of the creative energy involved was rewarded elsewhere later.

Backgrounder?

Alejandro Jodorowsky : “I ask of film what most North Americans ask of psychedelic drugs. The difference being that when one creates a psychedelic film, he need not create a film that shows the visions of a person who has taken a pill; rather, he needs to manufacture the pill.”

Holy Mountain – another quest for enlightenment, with vast and ambitious set designs… thanks to an increased budget of $1 million from a Beatles business associate (John Lennon was a huge fan of El Topo).

Following the success and acclaim for Holy Mountain… his next wish was to adapt Frank Herbert’s Dune, as a psychedelic space opera, a spiritual film for transformation… And in 1974 French producer Michel Seydoux offered to finance the start of it.

and Salvador Dali (as the mad emperor of the universe, who insisted on helicopters, a burning giraffe and “an Emperor’s throne / toilet made from intersected dolphins, the tails forming the feet and the mouths to receive piss and shit separately. (He thought it terribly bad taste to mix the two.) .. and $100,000 an hour to sit on it”)

So the world’s greatest ‘psychedelic space opera’ never got made… but director Frank Pavich did a great job of teasing out the possibilities, ably assisted by the subtle storyboard animations of Syd Garon (director and animator of Wave Twisters, yo!). More importantly though, Pavich was able to reunite producer Seydoux with Jodorowsky (after 30 years!), and Seydoux not only co-produced Jodorowsky’s Dune, but also went on to produce Jodorowsky’s first film in over 20 years: Dance of Reality (another incredible film!).

(UPDATE: the video workshop was archived on YouTube, and is viewable at the bottom of this page)

David will be delivering the workshop from New York, with high bandwidth streams of both a video-conferencing computer and his VDMX video output for workshop attendees to view. ACMI will be providing laptops with VDMX pre-installed, although people are welcome to bring their own.

This is an incredible opportunity to hear about VDMX from a core developer, and get inside knowledge about real-time video manipulation.

Tickets are limited for the workshop, and pitched at a video-artist-friendly $25. Book via ACMI.

(Drop a line if they sell out, we can reserve you a spot if you bring your own laptop.)

The workshop will also be streamed on YouTube and later archived online (see link at bottom of page):

Comments can be left by online viewers, and there will be a dedicated section at the end, where online questions will be answered by David.

“Rail trails are shared-use paths recycled from abandoned railway corridors. Rail trails link big and small country towns and meander through scenic countryside just as railways did in the past.. Railway engines have always had difficulty climbing hills. The steepest grade of a railway line is never more than 1 in 30.. no sharp rises and no sharp bends, just sweeping curves and gentle undulations.. abandoned rail lines make superb pathways for walking and riding.” – Rail Trails Australia.

The ride itself is awesome – it’s a gorgeous, mostly tree-shaded ride, with plenty of great views, though Warburton has no station, so you need to plan for an 80km round trip from Lilydale. The Warburton end makes it all worthwhile – with the cute bicycle themed Cog Bike Cafe greeting riders at the end of the ride, and just nearby… jutting out from the trailside foliage – the temple of Boinga Bob, a sprawling marvel of DIY architecture and evolving artwork installations. I’ll let the temple photos speak for themselves.

Above – a motion graphic medley made from clips I created recently for each song of Audego‘s latest album, Beneath the Static and the Low. It’s pretty gorgeous music, listen for yourself – bandcamp / soundcloud / itunes. The Audego brief: ‘retro-abstract motion graphics we can project behind us while we play’.

Right Here, Right Now: Interactive Installations @ RMIT
http://www.skynoise.net/2013/11/18/right-here-right-now-interactive-installations-rmit/ (Mon, 18 Nov 2013)

Over the past three semesters I’ve had the pleasure of co-steering a studio elective with Caroline Vains, for Interior Design students within the school of Architecture and Design at RMIT. Loosely, we’ve been exploring the intersection of video projection, built objects and interior design.

Most interior design students are already highly visually literate, great at quickly visualising their ideas in many ways, and unlike most video oriented people I know – are fantastic at working with materials and constructing models. As well as adapting very quickly to the world of projected video, they also bring along considerable materials testing, research and construction skills.

This semester though, we added interactivity as a requirement. In addition to learning video editing, video composition, animation, and projection mapping – they needed to think about how they would include some simple lo-fi interactivity in their projects. It’s quite satisfying to report that they responded wonderfully to that challenge, and I’m happy to share some of those efforts below.

Pictured above – the crowd-pleasing bicycle powered installation by Jimmy Liu, David Dai and Nick Hsu. (‘Team Brothers!’ ). A series of reflective gears were connected to pedals and a bicycle seat, causing the carefully mapped projections to reflect around the space. This was a nice evolution from their earlier experiments which included tight geometric mapping sequences, and three dimensional arrangements of laser-cut paint splashes.

Part of the pleasure with this studio – is seeing how different skills and ideas merge, and evolve, over time. For example, Hexin Bi used his experience from scuba diving as the inspiration for his audiovisual installation, rigorously analysing the rhythm of his underwater breath…

.. while Jacinta Birchmore explored repetitive forms and texture, beginning with the intricate model below, before iterating through other shapes and surfaces.

Together for their final piece – they continued to explore scuba diving, but took it in a new direction, creating a breath activated installation. Their structure featured a layer of styrofoam balls which obscured the projector light from shining through, unless someone blows through the mouthpiece – which scatters the balls and enables the animation to bounce around inside their mirrored space (a process accentuated by a breath-powered spinning reflector ). Guest blower: Ramesh Ayyar.

Tisha Sara Dewi, Jing Yang and Ranqi Liu created a beautifully made cone structure, to be viewed from underneath:

.. later combined to produce a quite beautiful structure (which unfortunately had a few interactive / mechanical problems).

Tahlia Landrigan produced an intricate response to the music of Nicholas Jaar…

and Fenella McGowan’s structure built from cotton buds responded to projection beautifully, as did Nikita Demetriou’s tissue-paper hangings…

Together that trio diligently combined their efforts to produce a wooden bicycle work – which featured a pair of revolving wooden cylinders, each with precision cut holes, and a bicycle wheel on top for spinning the cylinders, altering the light patterns being emitted from within.

Aside from the prowess with construction, another trait interior design students seem to share – is a flair for dynamic documentation, very comfortably playing with formats and materials to best express their projects. Below, a couple of examples by Stacy + Jimmy.

Below, the projects grouped as part of the final exhibition … (note the bicycle pedals for activating the ‘Team Brothers’ piece at the top of the page).

And finally, studio co-ordinator Caroline, wondering where the semester has gone to…

Let There be Light..

Winter in Tasmania isn’t an obvious time and place for a festival, but MONA isn’t your average museum / gallery. And so in 2013 began MONA’s DARK MOFO (Jun 13–23), an annual festival that riffs on the idea of the winter solstice with pagan celebrations of light (+ art, fires, lasers, feasting, etc..). This included: the Red Queen exhibition @ MONA, performative chefs, Skywhale (a sculptural / sky-breasted hot air balloon by Patricia Piccinini), Robin Fox laser performances, and a whole host of other light and projection related artworks…

Oh, You Mean *LIGHT*..

… All of which were made irrelevant by Ryoji Ikeda‘s 5KM HIGH BEAM OF LIGHT INTO THE SKY ( aka ‘Spectra‘).

Simultaneously over at the gallery, Ryoji exhibited his Datamatics work. Really enjoyed this more than expected. It’s a very well documented and promoted work, but none of that captures the oddly calming oceanic presence it has.

Claustrophobic, Stroboscopic, Light…

The Beam in Thine Own Eye exhibition gathered together a range of works exploring the limits of perception. Zee by Kurt Hentschlager was the most spectacular of these: an intensely stroboscopic, smoke-filled room, which came with pages of warnings, had medical staff on standby, and completely blurred the capacity to distinguish between what was happening in front of or behind your eyes. There’s a great interview with Kurt here. Other standouts:

Ivana Franke – “We Close Our Eyes and See A Flock of Birds” – a cylinder-shaped room with central seating, facing out at LED-covered curved walls, which proceed to strobe and flash their way through a range of sequences.

Anish Kapoor – “Imagined Monochrome” – an artwork experienced one person at a time, because it involved lying down and having your *eyeballs* massaged by a professional eyeball masseuse. I missed getting an appointment for this, but apparently it was fantastic.

.. And Dark (Faux Mo).

DARK FAUX MO, the festival club – is what I was there for – projection mapping a disused double-storey theatre space each night. Performers included Miles Brown, Super Wild Horses, ZOND, My Disco, Zanzibar Chanel, Mixmasters ( who cooked soup + dumplings on stage while they DJ-ed some tracks), Andee Frost, Rainbow Connection DJs and more. It was a wild space, delightfully decorated, with lots of roving performers – so it came up great in photos. (Eg collage at the top of this post – or see flickr photo set)

“There’s a lot of wonderful possibilities for real-time visual compositing with Quartz Composer. Most existing QC learning resources though, tend to emphasise the generative graphics capabilities of QC. For those with a post-production, animation, motion graphics or VJ background – QC’s composition potential can be difficult to unleash.”

Hoping to make the transition to Quartz Composer a bit easier for the above kinda folk – I’ve gone and made a page which documents how various animation + post production techniques and processes can be recreated inside QC…

“Whether you dream of live visuals, interactive installations, Cocoa apps, dashboard widgets, or extra awesomeness for your film and motion graphics projects, Quartz Composer will enable you to develop beautiful solutions in amazingly short periods of time…”

“….To make up for all the gaps in video tutorials and forum posts scattered around the interwebs we wrote a book…”

A Quartz Composer book has been long desired by the real-time video community, given the combination of its unique capabilities and severely undercooked documentation online. Hats off to Graham and Surya for rising to that challenge, and helping expose QC’s potential for visual artists of many flavours.

These days a book inevitably also means an accompanying DVD of video tutorials (which can also be accessed online by those who buy the PDF, book code needed), and an extended support website (ILoveQC).

Who Should Read This Book?

According to the authors – maker types, motion graphics designers, film makers, VJs, artists, interactive programmers, and Cocoa developers. If that’s you, this book will help – “…even the unsophisticated user into creating art projects, visuals for a band or party, wild screensavers, and RSS-powered trade-show kiosks. For anyone with a programming background, the material quickly opens up a new world of visual potential”.

Who shouldn’t? “Advanced Quartz Composer users looking for detailed knowledge about using GLSL and OpenCL, or creating your own plugins in Objective-C..”

How’d that work out for me?

“Coming from a non-programming background, I’ve found some of the concepts and structural logic of Quartz hard to grasp, and the engineerish manual doesn’t help much. Kineme.net and the QC mailing list – seem helpful, but also populated by mostly advanced discussions – which tends to stifle introductory questions and beginner problems. So I found myself trying to learn QC by forcing myself to explain what I was learning about it as I explored it.”

This scattered learning approach led me to writing up these QC tutorials…

What I found myself really craving was a learning resource that broke down the structural logic of QC, and which explained some of the principles in ways that related to how I wanted to use it as a compositing tool. And this, the ILQC book mostly delivered – using deliberately plain and simple language, and making no presumptions about animation or programming knowledge. A quick glance over their contents page gives an idea of the book's scope:

– The examples are well chosen, and build up in difficulty as the book progresses
– The book examples and video tutorials correspond really nicely to each other
– There's a good emphasis on concrete examples, while explaining the principles that make them possible
– That said, I occasionally found myself wanting more explanation of underlying concepts
– Gaps? Would’ve liked some more advanced exploration of:

how ‘timelines’ and ‘queues’ can be utilised within patches

‘structure’ and ‘multiplex’ related patches

‘render in image’ and ‘rendering’

the composition process in QC, explained relative to composition software such as After Effects… giving a bit more of an explanation of how the overall 2D / 3D possibilities work, and how they could be utilised / explored in many directions..

Ok, ENESS – you had me at 'projection mapped kinetic sculpture'. The Creation Cinema – seen above, is now installed at the Melbourne Museum as part of First Peoples, an exhibition celebrating 'the history, culture, achievements and survival of Victoria's Aboriginal people.' It's a gorgeous installation, located inside a circular room, which is in turn enclosed by intricate layers of wood. Once inside – the sublime smoothness and grace of motion immediately captivates. It's something that animators strive for with onscreen movements, but is so much more satisfying to witness with moving physical parts. Within that darkened egg of a room, the sounds, video and slow relentless movements of the wing fragments all add up to quite a mesmerising effect. Fantastic installation, and viewable for the next 10 years!

The slow fade out of Melbourne’s summer = an opportune time to re-spark the skynoise engines. It’s also likely I just miss writing things longer than 140 characters. Especially since I’m just about to finish reading a 3000 page novel ( The Baroque Cycle, Yo!). Regardless of the roots….. expect some fruits.. scattered across the next few months – riffs about visual culture, and likely some weirder tangents too. That’s what ma bones are saying. And sooner than that – some long overdue reviews:

Mar 2013:
At the Adelaide Festival – did live video for The Cumbia Cosmonauts at the fun pop-up venue, Barrio. The theme for the night was 'Animal House' – which meant there was a camel, piglets and geese nearby, as well as a dog masseur doing live demonstrations on a table.

And Then It Was Now:
– Developing an audiovisual performance for Wide Open Spaces, a wonderful desert festival held out near Alice Springs in May. Longtime collaborator Suckafish P Jonez is back from Barcelona, and we’re excited to be exploring AV again. Weekly rehearsals!

– Am co-hosting a studio elective at RMIT within the design faculty, looking at video production, projection and installation – from an interior design perspective (which tends to include a lot more materials and building related research / development). It’s a fun studio, which uses mapping processes, and comic / graphic novel storytelling techniques to help inform video installations.

– Am slowly rolling out a series of updates to the skynoise.net/projects page, finally uploading documentation from a range of projects… including the snippets below, developed for 360 last year.

Above : more proof that Space Is The Place…. at least when it comes to Mexi-Australian tropical bass genres.

That’s the fruits of a few quick projection and filming sessions with the Cumbia Cosmonauts, featuring custom graphics made by the CC VJ – Martin Hadley (I especially liked his spaceship control deck!). I’d like to think if there’s ever a Mexi-Australian space program, that it looks something like this… ie has that Ed Wood in space vibe about it, maybe with styling by Lee Scratch Perry & Sun Ra.

The Cumbia Cosmonauts are a Melbourne band celebrated around the world for their take on Mexico's cumbia music, and so, fittingly, they release their new album, Tropical Bass Station, on the Berlin label Chusma Records on Nov 23, 2012. The track 'Our Journey To The Moon (And Back)' comes from that album.

Developed and performed for the Graphic Festival – it was an audacious project – inside a tiny time frame, create 18 songs and animations to reinterpret or remix the books of Dr.Seuss for the stage. It never felt like enough time – and yet, the amazing zoo / crew at Elefant Traks pulled it together and nailed a dynamic audiovisual smorgasbord (that apparently had some of the Seuss publishing folk moved to tears!).

My role was to develop and live-trigger the animations for the show, which was akin to developing a feature film in 6 or so weeks… while liaising with around 20 different musicians… "hey man, I've got this new idea for a beat / I'll get you those lyrics soon.. etc etc" – so I wasn't surprised to find myself still rendering out clips on stage, right up to the last minute.

I’m going to put up some more animation info later, over at skynoise.net/projects, but for now, while still floating, I wanted to put out a huge thank you to:

– Jono ‘Dropbear‘ Chong + Darin Bendall, who did an amazing job, animating half of the tracks between them.
– Urthboy – who oversaw the crazy production, as well as performed throughout the show
– Unkle Ho, who helped tie together the visual production, and developed his own flash-based interactive visuals for the show, AV jamming on a wii-board to Green Eggs & Ham, with Jim from Sietta + Angus from Hermitude.
– Luke Snarl Dearnley, who did a stellar job as technical producer, keeping the whole show smooth as butter.
– Owen Field, who covered all the logistics with grace and calm…

And that list could go on and on – there were endless Elefants who were such a pleasure to collaborate with…

Some Elefant clips:

X-Continental, a clip I did for the Herd back in 2001.
Urthboy, Ozi Batla, Solo, The Tongue and L-FRESH: Cipher at the Opera House
and below, Dropbear’s fantastic animation for ‘And To Think That I Saw It on Mulberry st’, which was performed as the first track of the show, by Urthboy, Jane Tyrrell + Angus from Hermitude. Ozi Batla had just given his show-intro in an aviator costume, and hooded Urthboy came on to do a quick rap about Dr Seuss, before pulling back the hood as the lights came up, the decks started up, and MCs roamed the stage with this as backdrop:

I’d first learned of his cancer diagnosis a few months ago, after wandering once again to his youtube page, and noticing a short and simple message underneath his most recent short film:

Down With The Dawn is Run Wrake's usual virtuosic animation, but knowing that this 8 minute short film was his response to being diagnosed with cancer made it quite confrontational viewing. I was shocked then, but somehow presumed he was turning things around – that he was on the slow path to recovery, and that everything would be okay.

“It is with incredible sadness that I have to let you know that our darling Run passed away very suddenly at 5am on Sunday morning as an end result of his cancer. He had spent a beautiful Saturday with his two children Florence and Joe, his sister Fiona and myself. We left him at 7pm doing what he loved best- drawing and animating with peg bar and paper.
I was with him for his last moments. We love you Run.
Lisa Wrake.”

Above, a hard-drive snapshot of some of my favourite Run Wrake animations.

I first learned of Run Wrake around 10 years ago, through his compilation Gas DVD, "Dinnertime". Somehow it had lain unwatched in a pile of media for a few months, until late one evening I spied it again and lazily inserted it, then pressed play. What followed was dizzying and overwhelming – that mix of exhilaration and exhaustion when discovering an artist so consistently good, so relentlessly inventive, and so utterly prolific that you're left wondering if they exist under different laws of time and space.

Where did ‘Run Wrake’ come from?
Actually a nickname earned whilst keeping wicket particularly badly during a game of cricket aged 11. A friend was sent in for sarcastically shouting ”Run”, as the ball went thru’ my legs for four.

With so much animation under your belt, what has it taught you?
It’s taught me that I’m very lucky to have the desire and ability to scrape a living doing what I enjoy, and that you will never make a piece of work with which you are entirely satisfied.

To what extent do you storyboard your clips? Or how do you approach narrative?
”Rabbit” is the first film that I have rigorously boarded, with a view to telling a story, and I thoroughly enjoyed the discipline.

Any desire for feature films, or longer works?
Absolutely, watch this space*.

(*As of 2012: Wrake was developing an animated feature, The Way to a Whole New You, with writer Neil Jaworski for BBC Films.)

One of my questions was whether Run Wrake had ever animated a skateboarder, and Run Wrake was kind enough to add a note at the end saying that he’d done an ad featuring a skater, and that he’d attached a little quicktime movie of it for me. One of those wow moments – a favourite artist sending me something they’d made?? Below, a screenshot sequence from it, which demonstrates one of his trademark ‘perpetual zoom outs’…

A glimpse at his biography (have you seen a more delightful online CV?), showed some of how this was all possible. Run Wrake had gone through the Chelsea College of Art and Design, and the Royal College of Art, before achieving a breakthrough with his 1990 student film Anyway on MTV’s Liquid Television. With Anyway, several strengths were already evident – an eagerness to playfully deconstruct form, an ability to adapt and incorporate many kinds of media and animation styles, and an incredible capacity for fluid transitions – smoothly morphing into wildly different scenarios or character transformations.

The DVD documents the development of all those strengths, as well as introducing others – a highly attuned sense of animation rhythm and pacing, and a flair for visualising sound and loops. That kinship with music was partially nurtured over time by his job as an illustrator for NME magazine (the DVD includes a virtual gallery of these illustrations, narrated by a flying turtle-armed boy), but is most evident across his trajectory of music videos, most notably those with long-time collaborator, Howie B.

“my first job, commissioned by an Elvis-suited Jonathan Ross to make a title sequence…making Jukebox, my first animate! commission, a two year slog…meeting and working with Howie B, initially on a short film to accompany the release of his album Music For Babies, and subsequently on a series of freeform promos…presenting storyboards to Roy Lichtenstein in his New York studio for U2's Popmart Tour visuals…and the critical acclaim for Rabbit, a short film completed in 2005.”

Less easy to understand is why Run Wrake wasn’t better known, even amongst animators. Even though he worked on U2 tours, and Rabbit won plenty of awards, it still felt that there was an animation giant walking amongst us, and not enough recognition of how much terrain his work covered. That was at least partially remedied, earlier this year, with a Run Wrake Retrospective at the Ottawa International Animation Festival, with the title referencing one of his favourite characters:

(Above, a messy example VDMX interface of mine. Click screenshot to see full version)

Here’s a review brewed since I got my review copy back in 2005 (when VDMX first turned 5, says the Vidvox software museum*). Now that it’s 2012 and we’re at Beta version 8.0.8.1, it seems as good a time as any to declare VDMX 5 ripe and ready. Let’s do this.

What is VDMX 5?

What does that even mean? The six-word executive summary by @Protostarrr:
'A hipster's version of After Effects' is cute, but misses a crucial difference – VDMX is software built for real-time usage, ie no waiting around for rendering; it means live adjusting, manipulating and sequencing of video clips and video parameters – during a theatre performance, while musicians play on stage, within an installation, or to create some hybrid of what might be called live cinema. Just as hiphop and electronic music producers have long been playing live with audio samples, we now have the ability to shift from a studio production mentality towards using video samples in a live setting. This means VDMX must be capable of letting its users adapt and respond to any unfolding events – and the importance of having that flexibility is reflected in how Vidvox define their software:

“VDMX5 is a program that lets you assemble custom realtime video processing applications. This is an important distinction – instead of being stuck with a fixed processing engine and a static interface, it gives you the freedom to assemble not only whatever custom processing backend you desire, but it allows you a great deal of creative control over how you wish to interact with your backend.”

(Example search for ‘VDMX interface’ )

So what can VDMX 5 do?

– Trigger separate clips for playback across different projectors (a desktop with multiple outputs, or an external graphics card for a laptop, is also needed)
– Mix several clips together to create layered collages and compositions (multi-blend mode options / compositing options / cross-fade options / customisable quartz transition modes)
– Map separate video layers onto physical objects (VDMX5 has basic perspective mapping functions, or can send video layers via syphon to other mapping software)
– Organise video layers into groups (which allows composition or FX parameters to be adjusted per layer or per group)
– Re-route any video layers into other layers / compositions (enables easy creation of visual feedback loops, or addition of more organic complexity with FX)
– Adjust or control any video parameter or Fx parameter easily with an onscreen slider or button – and in turn, control these by various data sources (eg mouse / midi / audio analysis from built-in laptop microphone / LFO oscillators and wave values / midi + OSC controllers / wii controller / iOS or android controller etc ), and these values can be flexibly refined by using a range of in-built math behaviours ( eg invert values, smooth values, multiply values etc).
– Build Control Surface Plug-ins – which are ways to consolidate various controls into a customised interface ( eg have 4 meta sliders, each of which may control any number of other parameters, when activated )
– Capture camera inputs, apply effects to these. Can also record and playback camera samples in real-time.
– Capture the visual output from a window of any other application running, and re-route this through the VDMX signal chain (eg mix in a live webcast from a browser, bring in a photoshop sketching window, bring in a skype window etc )
– Record your clip-triggering and visual FX experiments to disk (Fast and reliable, records directly into a VDMX media bin for immediate re-triggering / remixing / recording and etc etc )
– Use a built in step sequencer for arranging clip-triggering or FX over time.
– Save and trigger presets in extensive ways (global, per layer, per FX chain, and per slider. And more recently, we can cut and paste parameter settings between sliders. Very useful for quickly copying refined parameter and interactivity settings from one effect to another.)
– Tightly integrate customised quartz composer patches and FX, including customised interface elements – where each of these can be controlled by the various methods described above. (It’s hard to overemphasise how useful and powerful this is).
– Use flash, text and HTML files, as well as Freeframe FX.
– New : send DMX (Artnet) data – to control / interact with lights / lighting desks… (I’m yet to play with this, but it’s a great addition. Requires a computer to DMX box such as the Enttec ODE. )

There’s much more, but you get the idea – it’s flexible, and can be adapted to suit your project-by-project needs. These open-ended possibilities are both a strength and weakness of VDMX – it’s fantastic being able to make your own customised interface to suit a particular workflow or project, but first-time users can find it daunting to approach.
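To make that data-source point concrete: VDMX sliders can be assigned OSC addresses and then driven from any language that can send UDP packets. Here's a minimal Python sketch that hand-packs a one-argument OSC float message with no third-party libraries – note the /layer1/opacity address and port 1234 are placeholder assumptions, so substitute whatever address and input port you've actually configured in VDMX.

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded out to a 4-byte boundary
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    # A simple OSC message: padded address, padded typetag ",f",
    # then the argument as a big-endian 32-bit float
    return _pad(address.encode("ascii")) + _pad(b",f") + struct.pack(">f", value)

# Hypothetical address/port - use whatever you've assigned in VDMX
def send_to_vdmx(value: float, address="/layer1/opacity",
                 host="127.0.0.1", port=1234):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_float_message(address, value), (host, port))
    sock.close()
```

That's the entire wire format for a single-float message, which is why OSC is so easy to script against – a few lines of any language can fade a layer, nudge an FX parameter, or sequence slider moves over time.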

Below, an example of 3 layers being mapped to suit particular shapes. (The canvas controls can be enlarged for easier mapping / alignment, with pixel increment adjustments on corners, available by pressing arrow keys )

Understanding the VDMX Workflow

With the above multitude of options, getting to know the ropes is pretty important. Here’s a few learning pathways:

1. Plug N Play… aka 'explore': Even within the downloadable demo software, VDMX5 comes with built-in template projects that can be accessed through the top-screen menu. These can be easily modified and used as a foundation for your own projects. Playing with each template will show some of the features and variety on offer.
2. Vidvox Wiki: Extensive, detailed listing and explanation of the program's various parameters. Read over, then go back to step 1 and play some more.
3. tutorials.vidvox.net: In-depth video tutorials from the pixelated horse's mouth.
4. VDMX forums: Over time, I've probably learnt more about the program here than anywhere else – as with any software of depth, the possible solutions to any particular problem are multiple and varied, and I'm regularly learning new ways to use VDMX through the discussions here. The developers also contribute frequently, debugging problems, clarifying how various aspects work, and helping point beginners in the right direction.

Requirements :

Mac computer with an Intel processor

Mac OS X 10.6 or later

NVIDIA or ATI Graphics Card

4+ GB of RAM

$349US – Refreshingly, this licences the user to run VDMX on up to three different computers for personal use. On one level it's a very generous licence – but on the other, it's merely acknowledging the likely practices of most digital artists (across many workplaces, home, venues, installations, multi-screen set-ups etc). At any rate, very handy.
Educational pricing = $199
There’s also a ‘Starving Artist Discount’ – ‘Put your skills to work helping out the VDMX community and you can get a license of VDMX5 for only $199 USD.’

Verdict?

While VDMX 5 is overkill for some people, and others might prefer the complexities of, say, Max/MSP or coding their own software, for me it strikes a great balance of depth and accessibility. Complex results and interfaces are possible with relatively little mental investment. Once that initial learning has happened, it's a very versatile tool, easily refined to suit each project (eg for this gig, let's make the playback timeline fill the whole screen, so we can fine tune tiny little loops more easily – or let's create 3 media bins so it's very clear which samples to trigger for each of 3 stage characters – or let's emphasise the FX palette here.. etc etc). VDMX 5 has evolved over many years, taking on board much user feedback, introducing users to better ways of approaching video signals, and adding all manner of nuanced interface elements and processes. There is a lot of significant functionality in the program, but it's in the nuanced details of those features that the merits of VDMX 5 really come into play. Take it for a test drive…

For the TZU 'Beautiful' music video, I recently found myself out near Hanging Rock with plastic-wrapped laptop, projector, camera, lights, and a mini-crew – filming ghost projections in the winter night rain. With the weather drastically mismatching the forecast and slowing everything to a snail's pace, we salvaged the situation as best we could, reworking the storyboard around some of the less exposed areas, and soldiered on until about 5am. Not the end result we'd aimed for, but I'm happy with what we managed in the circumstances. So it goes. Full credits/links, and a series of behind-the-scenes photos, over at the project page.