Category Archives: Theory

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how, Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon or reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:
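The original script isn’t preserved in this archive, but a minimal stand-in for what it likely contained might be:

```python
# A minimal sketch of the blocking approach described above.
# Paste this into a text DAT and run it - TouchDesigner will
# freeze for the duration of the sleep.
import time

print('waiting for 1 second')
time.sleep(1)
print('oh, hello there')
```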

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method. There’s lots of interesting nuance to this method, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:
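The snippet itself isn’t included in this archive; a sketch of the non-blocking equivalent, using the td module’s run() method (a built-in that only exists inside TouchDesigner), might look like this – assuming a 60 fps timeline, so 60 delayFrames is roughly one second:

```python
# Non-blocking version: run() schedules the script and returns
# immediately, so the timeline keeps playing while we wait.
print('waiting for 1 second')
run("print('oh, hello there')", delayFrames=60)
```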

If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:
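The original example isn’t preserved here; a hedged sketch of the idea (the operator name ‘constant1’ is hypothetical, and this only runs inside TouchDesigner) might be:

```python
# Schedule ten delayed runs, one second apart. Each run receives
# a new value through the args tuple - positional arguments after
# the script string populate args[] inside the delayed script.
for each_val in range(10):
    run("op('constant1').par.value0 = args[0]", each_val,
        delayFrames=each_val * 60)
```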

What you should see is that your constant CHOP increments every second:

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

With a start on point lights, one of the next questions you might ask is “what about cone lights?” Well, it just so happens that there’s a way to approach a deferred pipeline for cone lights just like with point lights. This example still has a bit to go, with a few lingering misbehaviors, but it is a start for those interested in looking at complex lighting solutions for their realtime scenes.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

This approach is very similar to point lights, with the additional challenge of needing to think about lights as directional. We’ll see that the first and last stages of this process are consistent with our Point Light example, but in the middle we need to make some changes. We can get started again with color buffers.

Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

position – renderselect_postition

normals – renderselect_normal

color – renderselect_color

uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

Here we’ll begin to see a divergence from our previous approach.

We are still going to compute and pack data for the position, color, and falloff for our point lights like in our previous example. The difference now is that we also need to compute a look-at position for each of our lights. In addition to our falloff data we’ll need to also consider the cone angle and delta of our lights. For the time being cone angle is working, but cone delta is broken – pardon my learning in public here.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our null SOP null_lightpos. We convert this to CHOP data and use the attributes from this null (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene. In this case we’re using a null since we want to position the look-at points at some other position than our lights themselves. Notice that our circle has one transform SOP to describe light position, and another transform SOP to describe look-at position. In the next stage we’ll use our null_light_pos CHOP and our null_light_lookat CHOP for the lighting calculations – we’ll also end up using the results of our object CHOP null_cone_rot to be able to describe the rotation of our lights when rendering them as instances.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

position – null_light_pos

color – null_light_color

falloff – null_light_falloff

light cone – null_light_cone

This means that sample 0 from each of these four CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

The additional light cone attribute here is used to describe the radius of the cone and the degree of softness at the edges (again pardon the fact that this isn’t yet working).

Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights location and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in OpenGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

If you look at the final pieces of our for loop you’ll find that much of this process is borrowed from the example Malcolm wrote (Thanks Malcolm!). This starting point serves as a baseline to help us get started from the position of how other lights are handled in Touch.

Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Our divergence here is that rather than using spheres, we’re instead using cones to represent our lights. In a future iteration the width of the cone base should scale along with our cone angle, but for now let’s celebrate the fact that we have a way to see where our lights are coming from. You’ll notice that the rotate attributes generated from the object CHOP are used to describe the rotation of the instances. Ultimately, we probably don’t need these representations, but they sure are handy when we’re trying to get a sense of what’s happening inside of our shader.

Post Processing for Final Output

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

A bit ago I wanted to get a handle on how one might approach real time rendering with LOTS of lights. The typical OpenGL pipeline has some limitations here, but there’s a lot of interesting potential with Deferred Lighting (also referred to as deferred shading). Making that leap, however, is no easy task and I asked Mike Walczyk for some help getting started. There’s a great starting point for this idea on the Derivative forum, but I wanted a 099 approach and wanted to pull it apart to better understand what was happening. With that in mind, this is a first pass at looking through using point lights in a deferred pipeline, and what those various stages look like.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

position – renderselect_postition

normals – renderselect_normal

color – renderselect_color

uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

Next we’re going to compute and pack data for the position, color, and falloff for our point lights.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our Circle SOP circle1. We convert this to CHOP data and use the attributes from this circle (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

position – null_light_pos

color – null_light_color

falloff – null_light_falloff

This means that sample 0 from each of these three CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights location and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in OpenGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see them represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Post Processing for Final Output

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don’t. Your mileage may vary here, so like any general advice please think through the implications of your decisions before committing to an idea to implement.

The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit.

I’ll be wrong about some things. The scope of anyone’s knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don’t work, or that something I said wouldn’t work actually does. As with all advice, it’s okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what’s the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you’re up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn’t matter where you do it, just give yourself an outline to follow – otherwise you’ll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren’t always sexy, but they’re more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

What existing tools do you like – what’s their data structure / solution?

How is your data organized – arrays, dictionaries, etc.?

Do you have a readme to refer back to when you extend your project in the future?

Do you have a way to add entries?

Do you have a way to recall entries?

Do you have a way to update entries?

Do you have a way to copy entries?

Do you have a validation process in-line to ensure your entries are valid?

Do you have a means of externalizing your cues and other config data?
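To make the questions above concrete, a cue store can be as simple as a dictionary keyed by cue name, with small helpers for adding, recalling, and validating entries. This is only a sketch – all the names and required keys here are invented for illustration:

```python
# Hypothetical cue store: a dict keyed by cue name. Each entry
# holds whatever data we want to recall later; validation runs
# in-line every time an entry is added.
REQUIRED_KEYS = {'tox', 'fade_time'}

cues = {}

def add_cue(name, entry):
    """Validate and store a cue entry."""
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError('cue %r missing keys: %s' % (name, missing))
    cues[name] = dict(entry)

def recall_cue(name):
    """Return a copy of a stored cue so callers can't mutate it."""
    return dict(cues[name])

add_cue('opening', {'tox': 'toxes/opening.tox', 'fade_time': 2.0})
print(recall_cue('opening')['tox'])
```

Externalizing this structure is then just a matter of serializing the dictionary to JSON and reading it back at start.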

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to time provided by your controller.

The importance of how you work with time in your TOXes, and in your project in general, can’t be overstated. Whatever you decide in regards to time, just make sure it’s a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you’re really after here, so it’s a good start to really dig into what you’re expecting your project to accomplish. If you’re just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are up and downsides to using this method so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you’re being careful with how you’re driving their parameters. Python offers an amazing extensible scripting language for Touch, but it’s worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of Movie File In TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are a simple example of dependencies – a given visuals TOX wouldn’t operate correctly in a network that didn’t have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that’s all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven’t started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you’re going to find. In the past I’ve used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it’s on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it’s turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
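A skeletal version of that Play(state) pattern might look like the class below. In TouchDesigner this would live in a text DAT attached to the component as an extension; here the class name and the setup / teardown bodies are placeholders:

```python
# Sketch of a module extension with a Play(state) method.
class ModuleExt:

    def __init__(self, ownerComp):
        self.ownerComp = ownerComp
        self.Active = False

    def Play(self, state):
        """Turn the module on or off."""
        if state:
            self._setup()
        else:
            self._teardown()
        self.Active = bool(state)

    def _setup(self):
        # un-bypass ops, reset timers, pre-roll movies, etc.
        pass

    def _teardown(self):
        # clean up and reset for the next Play(True) call
        pass
```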

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That’s all well and good Matt, but I don’t get it – why should my TOXes all have a fixed number of custom pars? Let’s consider building a data structure for cues. Let’s say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:
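The sample dictionary from the original post isn’t preserved in this archive; a hypothetical stand-in, with invented tox paths and par names, might look like:

```python
# Every tox has differently named custom pars, so every entry
# has a different shape - each one needs special handling.
cues = {
    'cue1': {'tox': 'toxes/sphere.tox',
             'Bloom': True, 'Radius': 0.5},
    'cue2': {'tox': 'toxes/particles.tox',
             'Count': 4000, 'Seed': 11, 'Wind': 0.2},
}
```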

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the Circle K – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy Python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a name space to the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:
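Again the original dictionary isn’t preserved; a hypothetical uniform version (par names invented for illustration) might be:

```python
# With a standardized set of custom pars, every entry has the
# same shape - par names are the keys, so one generic loop can
# apply any cue.
cues = {
    'cue1': {'Tox': 'toxes/sphere.tox',
             'Float1': 0.5, 'Float2': 0.0, 'Toggle1': True},
    'cue2': {'Tox': 'toxes/particles.tox',
             'Float1': 0.2, 'Float2': 11.0, 'Toggle1': False},
}
```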

Okay, so that’s prettier… So what? If we look back at our lesson on dictionary for loops we’ll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially we’re able to store the par name as the key and the target value as the value in our dictionary – we’re just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
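With that uniform structure, applying a cue collapses into one generic loop. This is a sketch only – the component name and cue contents are hypothetical, pars() is the td method that looks up parameters by name, and it only runs inside TouchDesigner:

```python
# Hypothetical cue where par names are the dictionary keys.
cue = {'Float1': 0.5, 'Float2': 0.0, 'Toggle1': True}
target = op('base_module1')  # hypothetical target component

# One loop handles every cue, regardless of its contents.
for par_name, val in cue.items():
    target.pars(par_name)[0].val = val
```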

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you’ll never drop below 60, it’s better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black when your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having a realistic expectation will prevent you from getting frustrated as well, or point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

I’d always suggest keeping the UI on another machine or in a separate instance. It’s handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined and helps you when you need to start putting previz tools etc. in. I’ve been very careful to take care of the little details in the ui too, such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP and that’s a slider – sometimes even they don’t need them.

I’m with Richard 100% here on all fronts. That said, be mindful of why and when you’re splitting up your processes. It might be tempting to do all of your video handling in one process that gets passed to a process only for rendering 3D, before going to a process that’s for routing and mapping.

Settle down there, cattle rustler.

Remember that for all the separating you’re doing, you need strict methodology for how these interchanges work, how you send messages between them, how you debug this kind of distribution, and on and on and on.

There’s a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

“What problem am I solving by adding this level of additional complexity?”

“Is there another way to solve this problem without an additional process?”

“What are the possible problems / issues this might cause?”

“Can I test this in a small way before re-factoring the whole project?”

Don’t Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process, it’s important to know what you’re loading at start, and what’s only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren’t going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they’re ready to be called without any frame drops. Or you might think about a way to automate your start up so you can test to make sure you have all your assets (especially if your dev computer isn’t the same as your performance / installation machine).
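As a sketch of that idea, a pre-cook helper might look something like this. The function and parameter names are hypothetical; `cook(force=True)` is the method TouchDesigner operators expose for forcing a cook. Passing the operator lookup in as an argument keeps the helper easy to exercise outside of Touch:

```python
def precook_assets(asset_paths, lookup):
    """Force cook each operator found at the given paths.

    asset_paths: list of operator path strings
    lookup: a function that resolves a path to an operator -
            inside TouchDesigner you'd pass op() here; it's an
            argument so the helper can be tested outside of Touch
    """
    cooked = []
    for path in asset_paths:
        target = lookup(path)
        if target is None:
            continue  # skip anything that isn't in the network
        target.cook(force=True)
        cooked.append(path)
    return cooked
```

Inside Touch you’d call something like `precook_assets(my_shared_asset_paths, op)` from your start-up script, and the returned list doubles as a quick check that everything you expected was actually found.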

Logging and Errors

It’s not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it’ll be important to see where things went wrong. Sometimes the default logging methods aren’t enough, or they happen too fast. A good logging methodology and format can help with that. You’re welcome to make your own; you’re also welcome to use and modify the one I made.
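As a starting point, a minimal logger built on Python’s standard `logging` module might look like this (the file name and message format here are just placeholders to adapt to your own methodology):

```python
import logging

def make_logger(name="project", logfile="project_log.txt"):
    """Build a simple file logger with a timestamped format."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # avoid stacking duplicate handlers on re-run
        handler = logging.FileHandler(logfile)
        formatter = logging.Formatter(
            "%(asctime)s | %(levelname)s | %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    return logger
```

Then `logger = make_logger()` followed by `logger.info("project started")` gives you a timestamped breadcrumb trail you can read back after a crash.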

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it’s at. Simple, complete proof-of-concept tests that aren’t baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren’t always fun to make, but they let you strip down your idea to the bare bones and sort out simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.
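As a tiny illustration of that habit, a standalone proof-of-concept script in plain Python might look like this – the function being tested and the time budget are stand-ins for whatever idea you’re vetting:

```python
import timeit

def candidate(n):
    """A stand-in for the idea being tested."""
    return sum(i * i for i in range(n))

def run_test(budget_seconds=1.0):
    # Correctness first: does it do what we expect?
    assert candidate(10) == 285

    # Then performance: does it fit inside our budget?
    elapsed = timeit.timeit(lambda: candidate(10_000), number=100)
    assert elapsed < budget_seconds, "too slow for our frame budget"
    return elapsed
```

The point isn’t the specifics – it’s that the test lives outside your project, states its benchmark up front, and has to pass before the idea earns a place in the real build.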

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard, even I have a hard time doing it – but it’s so so so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

Taking a little time to better understand the channel class provides a number of opportunities for getting a stronger handle on what’s happening in TouchDesigner. This can be especially helpful if you’re working with CHOP executes or just trying to really get a handle on what on earth CHOPs are all about.

To get started, it might be helpful to think about what’s really in a CHOP. Channel Operators are largely arrays (lists in Python lingo) of numbers. These arrays might hold only a single value, or they might be a long set of numbers. In any given CHOP all of the channels will have the same length (we could also say that they have the same number of samples). That’s helpful to know, as it might shape the way we think of channels and samples.

Before we go any further let’s stop to think through the above just a little bit more. Let’s first think about a constant CHOP with a channel called ‘chan1’. We know we can write a python reference for this CHOP like this:

op( 'constant1' )[ 'chan1' ]

or like this:

op( 'constant1' )[ 0 ]

Just as a refresher, we should remember that the syntax here is:
op( stringNameToOperator )[ channelNameOrIndex ]
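As a plain-Python aside, that name-or-index lookup pattern can be modeled with a toy class. This is just an analogy for how the syntax behaves, not TouchDesigner’s actual implementation:

```python
class ToyCHOP:
    """A toy stand-in for a CHOP: named channels, each a list of samples."""
    def __init__(self, channels):
        self._names = list(channels.keys())
        self._chans = list(channels.values())

    def __getitem__(self, key):
        if isinstance(key, int):                       # numeric index
            return self._chans[key]
        return self._chans[self._names.index(key)]     # channel name

constant1 = ToyCHOP({'chan1': [0.5]})
```

Here `constant1['chan1']` and `constant1[0]` hand back the very same channel – exactly the equivalence the two references above rely on.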

That’s just great, but what happens if we have a pattern CHOP? If we drop down a default pattern CHOP (which has 1000 samples), and we try the same expression:

op( 'pattern1' )[ 'chan1' ]

We now get a constantly changing value. What gives?! Well, we’re now looking at a big list of numbers, and we haven’t told Touch where in that list of values we want to grab an index – instead Touch is moving through that list with me.time.frame-1 as the position in the array. If you’re scratching your head, that’s okay – we’re going to pull this apart a little more.

Okay, what’s really hiding from us is that CHOP references have a default sample position that’s provided for us. While we often simplify the reference syntax to:

op( stringNameToOperator )[ channelNameOrIndex ]

the full syntax actually has a third piece:

op( stringNameToOperator )[ channelNameOrIndex ][ sampleIndex ]

In single sample CHOPs we don’t usually need to worry about this third argument – if there’s only one value in the list Touch very helpfully grabs the only value there. In a multi-sample CHOP channel, however, we need more information to know what sample we’re really after. Let’s try narrowing our reference down to a single sample in that pattern CHOP. Let’s say we want sample 499:

op( 'pattern1' )[ 'chan1' ][ 499 ]

With any luck you should now be seeing that you’re only getting a single value. Success!

But what does this have to do with the Channel Class? Well, if we take a closer look at the documentation ( Channel Class Wiki Documentation ), we might find some interesting things, for example:

Members

valid (Read Only) True if the referenced channel value currently exists, False if it has been deleted. Identical to bool(Channel).

index (Read Only) The numeric index of the channel.

name (Read Only) The name of the channel.

owner (Read Only) The OP to which this object belongs.

vals Get or set the full list of Channel values. Modifying Channel values can only be done in Python within a Script CHOP.

Okay, that’s great, but so what? Well, let’s practice our python and see what we might find if we try out a few of these members.

We might start by adding a pattern CHOP. I’m going to change my pattern CHOP to only be 5 samples long for now – we don’t need a ton of samples to see what’s going on here. Next I’m going to set up a table DAT and try out the following bits of python:

So far that’s not terribly exciting… or is it?! The real power of these Class Members comes from CHOP executes. I’m going to make a quick little example to help pull apart what’s exciting here. Let’s add a Noise CHOP with 5 channels. I’m going to turn on time slicing so we only have single sample channels. Next I’m going to add a Math CHOP and set it to ceiling – this is going to round our values up, giving us a 1 or a 0 from our noise CHOP. Next I’ll add a null. Next I’m going to add 5 circle TOPs, and make sure they’re named circle1 – circle5.

Here’s what I want – Every time the value is true (1), I want the circle to be green, when it’s false (0) I want the circle to be red. We could set up a number of clever ways to solve this problem, but let’s imagine that it doesn’t happen too often – this might be part of a status system that we build that’s got indicator lights that help us know when we’ve lost a connection to a remote machine (this doesn’t need to be our most optimized code since it’s not going to execute all the time, and a bit of python is going to be simpler to write / read). Okay… so what do we put in our CHOP execute?! Well, before we get started it’s important to remember that our Channel class contains information that we might need – like the index of the channel. In this case we might use the channel index to figure out which circle needs updating. Okay, let’s get something started then!
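Here’s a sketch of what that CHOP Execute might contain. The pure helper below maps a channel’s index and value to a circle name and fill color, which keeps the logic easy to check outside of Touch; the commented onValueChange function shows how it might be wired up (the Circle TOP fill color parameter names are assumptions to verify against your build):

```python
def circle_state(channel_index, val):
    """Map a channel's index and value to a circle name and RGB fill.

    channel_index counts from 0, while the circles are named
    circle1-circle5, hence the + 1.
    """
    name = 'circle{}'.format(channel_index + 1)
    rgb = (0.0, 1.0, 0.0) if val else (1.0, 0.0, 0.0)  # green / red
    return name, rgb

# Inside a CHOP Execute DAT in Touch this might be used like so:
#
# def onValueChange(channel, sampleIndex, val, prev):
#     name, (r, g, b) = circle_state(channel.index, val)
#     circle = op(name)
#     circle.par.fillcolorr = r
#     circle.par.fillcolorg = g
#     circle.par.fillcolorb = b
#     return
```

Notice that it’s the Channel class’s .index member doing the heavy lifting – it tells us which circle to update without any extra bookkeeping.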

Alright! That works pretty well… but what if I want to use a select and save some texture memory?? Sure. Let’s take a look at how we might do that. This time around we’ll only make two circle TOPs – one for our on state, one for our off state. We’ll add 5 select TOPs and make sure they’re named select1-select5. Now our CHOP execute should be:
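In the same spirit, here’s a sketch for the select-based version: instead of recoloring a circle, we point each select TOP at one of our two source circles. The helper is plain Python so the mapping is easy to verify, and the source circle names are assumptions for illustration:

```python
def select_target(channel_index, val):
    """Map a channel's index and value to a select TOP name and the
    circle it should pull from: one shared 'on' circle, one 'off'."""
    name = 'select{}'.format(channel_index + 1)
    target = 'circle_on' if val else 'circle_off'
    return name, target

# Inside a CHOP Execute DAT this might become (the select TOP's
# TOP parameter is what chooses its source):
#
# def onValueChange(channel, sampleIndex, val, prev):
#     name, target = select_target(channel.index, val)
#     op(name).par.top = target
#     return
```

The win here is texture memory: five selects all borrow from just two circle TOPs instead of five independent ones.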

Okay… I’m going to add one more example to the sample code, and rather than walk you all the way through it I’m going to describe the challenge and let you pull it apart to understand how it works – challenge by choice, if you’re into what’s going on here take it all apart, otherwise you can let it ride.

Okay… so, what I want is a little container that displays a channel’s name, an indicator if the value is > 0 or < 0, another green / red indicator that corresponds to the >< values, and finally the text for the value itself. I want to use selects when possible, or just set the background TOP for a container directly. To make all this work you’ll probably need to use .name, .index, and .vals.

Questions for the professor:
1) How can I find out which sample index in the channel is the current sample?
2) How is that number calculated? That is, what determines which sample is current?

If we’re talking about a multi sample channel let’s take a look at how we might figure that out. I mentioned this in passing above, but it’s worth taking a little longer to pull this one apart a bit. I’m going to use a constant CHOP and a trail CHOP to take a look at what’s happening here.

Let’s start with a simple reference one more time. This time around I’m going to use a pattern CHOP with 200 samples. I’m going to connect my pattern to a null (in my case this is null7). My resulting python should look like:

op( 'null7' )[ 'chan1' ]

Alright… so we’re speeding right along, and our value just keeps wrapping around. We know that our multi sample channel has an index, so for fun games and profit let’s try using me.time.frame:

op( 'null7' )[ 'chan1' ][ me.time.frame ]

Alright… well. That works some of the time, but we also get the error “Index invalid or out of range.” WTF does that even mean?! Well, remember an array or list has a specific length – when we try to grab something outside of that length we’ll see an error. If you’re still scratching your head that’s okay – let’s take a look at it this way.
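A plain Python list misbehaves in exactly the same way, so we can see the error outside of Touch:

```python
my_list = [0, 1, 2, 3]  # four items, living at indices 0 through 3

print(my_list[3])  # the last item - this works fine

try:
    my_list[4]  # there is no item at index 4, so this raises IndexError
except IndexError as err:
    print(err)  # list index out of range
```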

If we try to grab an index past the end of a list – say index 4 in a four item list – we’ll get an out of range error, because there’s nothing in that position in our list / array. Okay, Matt – so how does that relate to our error earlier? The error we were seeing earlier is because me.time.frame (in a default network) evaluates up to 600 before going back to 1. So, to fix our error we might use modulo:

op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ]

Wait!!! Why 200? I’m using 200 because that’s the number of samples I have in my pattern CHOP.

Okay! Now we’re getting somewhere.
The only catch is that if we look closely we’ll see that our reference with an index, and how Touch is interpreting our previous reference, are different:

reference | value
op( 'null7' )[ 'chan1' ] | 0.6331658363342285
op( 'null7' )[ 'chan1' ][ me.time.frame % 200 ] | 0.6381909251213074

WHAT GIVES MAAAAAAAAAAAAAAT!?
Alright, so there’s one more thing for us to keep in mind here. me.time.frame starts sequencing at 1. That makes sense, because we don’t usually think of frame 0 in animation – we think of frame 1. Okay, cool. The catch is that our list indexes from the 0 position – in programming languages the first element lives at index 0. So what we’re actually seeing here is an off-by-one error.
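We can model the whole thing with a plain list to see the fix: subtract 1 from the frame number before taking the modulo (200 here stands in for the pattern CHOP’s sample count):

```python
num_samples = 200
samples = [i / num_samples for i in range(num_samples)]  # stand-in channel

# Frames count 1, 2, 3 ... 600; list indices count 0, 1, 2 ...
frame = 1
current = samples[(frame - 1) % num_samples]  # frame 1 -> sample 0
assert current == samples[0]

frame = 600
current = samples[(frame - 1) % num_samples]  # frame 600 -> sample 199
assert current == samples[199]
```

Back in Touch, the matching reference would be op( 'null7' )[ 'chan1' ][ ( me.time.frame - 1 ) % 200 ], which lines up exactly with the value Touch gives us by default.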

Here’s our second stop in a series about planning out part of a long-term installation’s UI. We’ll focus on looking at the calibration portion of this project, and while that’s not very sexy, it’s something I frequently set up gig after gig – how you get your projection matched to your architecture can be tricky, and if you can take the time to build something reusable it’s well worth the time and effort. In this case we’ll be looking at a five-sided room that uses five projectors. In this installation we don’t do any overlapping projection, so edge blending isn’t a part of what we’ll be talking about in this case study.

As many of you have already found, there’s a wealth of interesting examples and useful tools tucked away in the palette in TouchDesigner. If you’re unfamiliar with this feature, it’s located on the left hand side of the interface when you open Touch, and you can quickly summon it into existence with the small drawer and arrow icon:

Tucked away at the bottom of the tools list is the stoner. If you’ve never used the stoner it’s a killer tool for all your grid warping needs. It allows for keystoning and grid warping, with a healthy set of elements that make for fast and easy alterations to a given grid. You can bump points with the keyboard, you can use the mouse to scroll around, and there are options for linear curves, bezier curves, perspective mapping, and bilinear mapping. It is an all around wonderful tool. The major catch is that using the TOX as-is runs you about 0.2 milliseconds when we’re not looking at the UI, and about 0.5 milliseconds when we are looking at the UI. That’s not bad – in fact that’s downright snappy in the scheme of things – but it’s going to have limitations when it comes to scale, and to using multiple stoners at the same time.

That’s slick. But what if there was a way to get almost the same results at a cost of 0 milliseconds for photos, and only 0.05 milliseconds when working with video? As it turns out, there’s a gem of a feature in the stoner that allows us to get just this kind of performance, and we’re going to take a look at how that works as well as how to take advantage of that feature.

Let’s start by taking a closer look at the stoner itself. We can see now that there’s a second outlet on our op. Let’s plug in a null to both outlets and see what we’re getting.

Well hello there, what is this all about?!

Our second output is a 32 bit texture made up of only red and green channels. Looking closer we can see that it’s a gradient of green in the top left corner, and red in the bottom right corner. If we pause here for a moment we can look at how we might generate a ramp like this with a GLSL Top.

If you’re following along at home, let’s start by adding a GLSL Top to our network. Next we’ll edit the pixel shader.

So what do we have here exactly? For starters we have an explicit declaration of our out vec4 (in very simple terms, the texture that we want to pass out of the main loop), and a main loop where we assign values to our output texture.
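Based on that description, the pixel shader might look something like this minimal sketch (in a TouchDesigner GLSL TOP, vUV is provided for us; the out variable’s name is our choice):

```glsl
// our output texture - one vec4 (RGBA) per pixel
out vec4 fragColor;

void main()
{
    // red and green come from the uv coordinate,
    // blue is always 0, alpha is always 1
    fragColor = vec4(vUV.st, 0.0, 1.0);
}
```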

What’s a vec4?

In GLSL vectors are a data type. We use vectors for all sorts of operations, and as a datatype they’re very useful to us, as we often want variables that hold several values. Keeping in mind that GLSL is used in pixeltown (one of the largest boroughs on your GPU), it’s helpful to be able to think of variables that carry multiple values – like, say, information about the red, green, blue, and alpha values for a given pixel. In fact, that’s just what our vec4 is doing for us here: it represents the RGBA values we want to associate with a given pixel.

vUV is an input variable that we can use to locate the texture coordinate of a pixel. This value changes for every pixel, which is part of the reason it’s so useful to us. So what is this whole vec4( vUV.st, 0.0, 1.0 ) business? In GL we can fill in the values of a vec4 with a vec2 – vUV.st is our uv coordinate as a vec2. In essence what we’ve done is say that we want to use the uv coordinates to stand in for our red and green values, blue will always be 0, and our alpha will always be 1. It’s okay if that’s a bit wonky to wrap your head around at the moment. If you’re still scratching your head you can read more at the links below.

Let’s move around our stoner a little bit to see what else changes here.

That’s still not very sexy – I know, but let’s hold on for just one second. We first need to pause for a moment and think about what this might be good for. In fact, there’s a lovely operator that this plays very nicely with: the Remap TOP. Say what now? The Remap TOP can be used to warp input1 based on a map in input2. Still scratching your head? That’s okay. Let’s plug in a few other ops so we can see this in action. We’re going to rearrange our ops here just a little and add a Remap TOP to the mix.

Here we can see that the red / green map is used on the second input of our Remap TOP, and our movie file is used on the first input.
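Conceptually, the remap treats the second input’s red and green channels as u and v lookup coordinates into the first input. A plain-Python toy version of that idea (ignoring filtering, texture orientation, and assuming 0–1 coordinates) might look like:

```python
def remap(image, uv_map):
    """Warp image by sampling it at the (u, v) coordinates in uv_map.

    image: 2D list of pixel values
    uv_map: 2D list of (u, v) tuples in the range 0-1, where u and v
            stand in for the map's red and green channels
    """
    height, width = len(image), len(image[0])
    out = []
    for row in uv_map:
        out_row = []
        for u, v in row:
            x = min(int(u * width), width - 1)
            y = min(int(v * height), height - 1)
            out_row.append(image[y][x])
        out.append(out_row)
    return out
```

With an identity map – where each pixel’s (u, v) is just its own position, exactly what the stoner’s second output gives you before any warping – the image passes through unchanged; bend the map and the image bends with it.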

Okay. But why is this anything exciting?

Richard Burns just recently wrote about remapping, and he very succinctly nails down exactly why this is so powerful:

It’s commonly used by people who use the stoner component as it means they can do their mapping using the stoner’s render pipeline and then simply remove the whole mapping tool from the system, leaving only the remap texture in place.

Just like Richard mentions we can use this new feature to essentially completely remove or disable the stoner in our project once we’ve made maps for all of our texture warping. This is how we’ll get our cook time down to just 0.05 milliseconds.

Let’s look at how we can use the stoner to do just this.

For starters we need to add some empty bases to our network. To keep things simple for now I’m just going to add them to the same part of the network where my stoner lives. I’m going to call them base_calibration1 and base_calibration2.

Next we’re going to take a closer look at the stoner’s custom parameters. On the Stoner page we can see that there’s now a place to put a path for a project.

Let’s start by putting in the path to our base_calibration1 component. Once we hit enter we should see that our base_calibration1 has a new set of inputs and outputs:

Let’s take a quick look inside our component to see what was added.

Ah ha! Here we’ve got a set of tables that will allow the stoner UI to update correctly, and we’ve got a locked remap texture!

So, what do we do with all of this?

Let’s push around the corners of our texture in the stoner and hook up a few nulls to see what’s happening here.

You may need to toggle the “always refresh” parameter on the stoner to get your destination project to update correctly. Later on we’ll look at how to work around this problem.

So far so good. Here we can see that our base_calibration1 has been updated with the changes we made to the stoner. What happens if we change the project path now to be base_calibration2? We should see that inputs and outputs are added to our base. We should also be able to make some changes to the stoner UI and see two different calibrations.

Voila! That’s pretty slick. Better yet if we change the path in the stoner project parameter we’ll see that the UI updates to reflect the state we left our stoner in. In essence, this means that you can use a single stoner to calibrate multiple projectors without needing multiple stoners in your network. In fact, we can even bypass or delete the stoner from our project once we’re happy with the results.

There are, of course, a few changes that we’ll make to integrate this into our project’s pipeline, but understanding how this works will be instrumental in what we build next. Before we move ahead, take some time to look through how this works and read through Richard’s post as well as some of the other documentation. Like Richard mentions, this approach to locking calibration data can be used in lots of different configurations and means that you can remove a huge chunk of overhead from your projects.

Next we’ll take the lessons we’ve learned here combined with the project requirements we laid out earlier to start building out our complete UI and calibration pipeline.

WonderDome

In 2012 Dan Fine started talking to me about a project he was putting together for his MFA thesis. A fully immersive dome theatre environment for families and young audiences. The space would feature a dome for immersive projection, a sensor system for tracking performers and audience members, all built on a framework of affordable components. While some of the details of this project have changed, the ideas have stayed the same – an immersive environment that erases boundaries between the performer and the audience, in a space that can be fully activated with media – a space that is also watching those inside of it.

Fast forward a year, and in mid October of 2013 the team of designers and our performer had our first workshop weekend where we began to get some of our initial concepts up on their feet. Leading up to the workshop we assembled a 16 foot diameter test dome where we could try out some of our ideas. While the project itself has an architecture team that’s working on a portable structure, we wanted a space that roughly approximated the kind of environment we were going to be working in. This test dome will house our first iteration of projection, lighting, and sound builds, as well as the preliminary sensor system.

Both Dan and Adam have spent countless hours exploring various dome structures, their costs, and their ease of assembly. Their research ultimately landed the team on using a kit from ZipTie Domes for our test structure. ZipTie Domes has a wide variety of options for structures and kits. With a 16 foot diameter dome to build we opted to only purchase the hub pieces for this structure, and to cut and prep the struts ourselves – saving us the costs of ordering and shipping this material.

In a weekend and change we were able to prep all of the materials and assemble our structure. Once assembled we were faced with the challenge of how to skin it for our tests. In our discussion about how to cover the structure we eventually settled on using a parachute for our first tests. While this material is far from our ideal surface for our final iteration, we wanted something affordable and large enough to cover our whole dome. After a bit of searching around on the net, Dan was able to locate a local military base that had parachutes past their use period that we were able to have for free. Our only hiccup here was that the parachute was multi colored. After some paint testing we settled on treating the whole fabric with some light gray latex paint. With our dome assembled, skinned, and painted we were nearly ready for our workshop weekend.

Media

There’s a healthy body of research and methodology for dome projection on the web, and while reading about the challenge prepped the team for what we were about to face, it wasn’t until we got some projections up and running that we began to realize what we were really up against. Our test projectors are InFocus 3118 HD machines, which are great. They are not, however, great when it comes to dome projection. One of our first realizations in getting some media up on the surface of the dome was the importance of short throw lensing. Our three HD projectors at a 16 foot distance produced a beautifully bright image, but covered less of our surface than we had hoped. That said, our three projectors gave us a perfect test environment to begin thinking about warping and edge blending in our media.

TouchDesigner

One of the discussions we’ve had in this process has been about what system is going to drive the media inside of the WonderDome. One of the most critical elements to the media team in this regard is the ability to drop in content that the system is then able to warp and edge blend dynamically. One of the challenges in the forefront of our discussions about live performance has been the importance of a flexible media system that simplifies as many challenges as possible for the designer. Traditional methods of warping and edge blending are well established practices, but their implementation often lives in the media artifact itself, meaning that the media must be rendered in a manner that is distorted in order to compensate for the surface that it will be projected onto. This method requires that the designer both build the content, and build the distortion / blending methods. One of the obstacles we’d like to overcome in this project is to build a drag and drop system that allows the designer to focus on crafting the content itself, knowing that the system will do some of the heavy lifting of distortion and blending. To solve that problem, one of the pieces of software that we were test driving as a development platform is Derivative’s TouchDesigner.

Out of the workshop weekend we were able to play both with rendering 3D models with virtual cameras as outputs, as well as with manually placing and adjusting a render on our surface. The flexibility and responsiveness of TouchDesigner as a development environment made this process relatively fast and easy. It also meant that we had a chance to see lots of different kinds of content styles (realistic images, animation, 3D rendered puppets, etc.) in the actual space. Hugely important was a discovery about the impact of movement (especially fast movement) coming from a screen that fills your entire field of view.

TouchOSC Remote

Another hugely important discovery was the implementation of a remote triggering mechanism. One of our other team members, Alex Oliszewski, and I spent a good chunk of our time talking about the implementation of a media system for the dome. As we talked through our goals for the weekend it quickly became apparent that we needed for him to have some remote control of the system from inside of the dome, while I was outside programming and making larger scale changes. The use of TouchOSC and Open Sound Control made a huge difference for us as we worked through various types of media in the system. Our quick implementation gave Alex the ability to move forward and backwards through a media stack, zoom, and translate content in the space. This allowed him the flexibility to sit away from a programming window to see his work. As a designer who rarely gets to see a production without a monitor in front of me, this was a huge step forward. The importance of having some freedom from the screen can’t be overstated, and it was thrilling to have something so quickly accessible.

Lights

Adam Vachon, our lighting designer, also made some wonderful discoveries over the course of the weekend. Adam has a vested interest in interactive lighting, and to this end he’s also working in TouchDesigner to develop a cue based lighting console that can use dynamic input from sensors to drive his system. While this is a huge challenge, it’s also very exciting to see him tackling this. In many ways it really feels like he’s doing some exciting new work that addresses very real issues for theaters and performers who don’t have access to high end lighting systems. (You can see some of the progress Adam is making on his blog here)

Broad Strokes

While it’s still early in our process it’s exciting to see so many of the ideas that we’ve had take shape. It can be difficult to see a project for what it’s going to be while a team is mired in the work of grants, legal, and organization. Now that we’re starting to really get our hands dirty, the fun (and hard) work feels like it’s going to start to come fast and furiously.

Thoughts from the Participants:

From Adam Vachon

What challenges did you find that you expected?

The tracking; I knew it would be hard, and it has proven to be even more so. While a simple proof-of-concept test was completed with a Kinect, a blob tracking camera may not be accurate enough to reliably track the same target continuously. More research is showing that an Ultra Wide Band RFID Real Time Location System may be the answer, but such systems are expensive. That said, I am now in communication with a rep/developer for TiMax Tracker (a UWB RFID RTLS) who might be able to help us out. Fingers crossed!

What challenges did you find that you didn’t expect?

The computers! Just getting some of the computers to work the way they were “supposed” to was a headache! That said, it is nothing more than what I should have expected in the first place. Note for the future: always test the computers before workshop weekend!

DMX addressing might also become a problem with TouchDesigner, though I need to do some more investigation on that.

How do you plan to overcome some of these challenges?

Bootcamping my macbook pro will help in the short term computer-wise, but it is definitely not a final solution. I will hopefully be obtaining a “permanent” test light within the next two weeks as well, making it easier to do physical tests within the Dome.

As for TouchDesigner, more playing around, forum trolling, and attending Mary Franck’s workshop at the LDI institute in January.

What excites you the most about WonderDome?

I get a really exciting opportunity: working to develop a super flexible, super communicative lighting control system with interactivity in mind. What does that mean exactly? Live tracking of performers and audience members, and giving away some control to the audience. An idea that is becoming more and more important to me as an artist is finding new ways for the audience to directly interact with a piece of art. In our current touch-all-the-screens-and-watch-magic-happen culture, interactive and immersive performance is one way for an audience to have a more meaningful experience at the theatre.

From Julie Rada

What challenges did you find that you expected?

From the performer’s perspective, I expected to wait around. One thing I have learned in working with media is to have patience. During the workshop, I knew things would be rough anyway and I was there primarily as a body in space – as proof of concept. I expected this and didn’t really find it to be a challenge but as I am trying to internally catalogue what resources or skills I am utilizing in this process, so far one of the major ones is patience. And I expect that to continue.

I expected there to be conflicts between media and lights (not the departments, the design elements themselves). There were challenges, of course, but they were significant enough to necessitate a fundamental change to the structure. That part was unexpected…

Lastly, directing audience attention in an immersive space I knew would be a challenge, mostly due to the fundamental shape of the space and audience relationship. Working with such limitations for media and lights is extremely difficult in regard to cutting the performer’s body out from the background imagery and the need to raise the performer up.

What challenges did you find that you didn’t expect?

Honestly, the issue of occlusion on all sides had not occurred to me. Of course it is obvious, but I have been thinking very abstractly about the dome (as opposed to pragmatically). I think that is my performer’s privilege: I don’t have to implement any of the technical aspects and therefore, I am a bit naive about the inherent obstacles therein.

I did not expect to feel so shy about speaking up about problem solving ideas. I was actually kind of nervous about suggesting my “rain fly” idea about the dome because I felt like 1) I had been out of the conversation for some time and I didn’t know what had already been covered and 2) every single person in the room at the time has more technical know-how than I do. I tend to be relatively savvy with how things function but I am way out of my league with this group. I was really conscious of not wanting to waste everyone’s time with my kindergarten talk if indeed that’s what it was (it wasn’t…phew!). I didn’t expect to feel insecure about this kind of communication.

How do you plan to overcome some of these challenges?

Um. Tenacity?

What excites you the most about WonderDome?

It was a bit of a revelation to think of WonderDome as a new performance platform and, indeed, it is. It is quite unique. I think working with it concretely made that more clear to me than ever before. It is exciting to be in dialogue on something that feels so original. I feel privileged to be able to contribute, and not just as a performer, but with my mind and ideas.

Notes about performer skills:

Soft skills: knowing that it isn’t about you, patience, sense of humor
Practical skills: puppeteering, possibly the ability to run some cues from a handheld device

Case Study: Vesturport’s Woyzeck

The challenge of re-imagining a classic work often lies in finding the right translation of ideas, concepts, and imagery for a modern context. Classic pieces of theatre carry many pieces of baggage to the production process: their history, the stories of their past incarnations, the lives of famous actors and actresses who performed in starring roles, the interpretation of their designers, and all the flotsam and jetsam that might be found with any single production of the piece in question. A classic work, therefore, is not just the text of the author but a historical thread that traces the line of the work from its origin to its current manifestation. The question that must be addressed in the remounting of a classic work is, why: why this classic work, why now, why does this play matter more than any other?

In 2008 Iceland’s Vesturport theatre company presented their re-imagining of Büchner’s Woyzeck, a work about class, status, and madness. Written between 1836 and 1837, Büchner’s play tells the story of Woyzeck, a lowly soldier stationed in a German town. He lives with Marie, with whom he has had a child. For extra pay Woyzeck performs odd jobs for the Captain and takes part in medical experiments for the Doctor. Over the course of the play’s serialized vignettes Woyzeck’s grasp on the world begins to break apart as the result of his confrontation with an ugly world of betrayal and abuse. The play ends with a jealous, psychologically shattered, and cuckolded Woyzeck who ruthlessly lures Marie to the pond in the woods, where he kills her. There is some debate about the actual ending of Büchner’s play. While the version that is most frequently produced leaves Woyzeck unpunished, there is some speculation that one version of the play ended with the lead character facing trial for his crime. As a historical note, Büchner’s work is loosely based on the true story of Johann Christian Woyzeck, a wigmaker who murdered the widow with whom he lived. Tragically, Büchner died in 1837 of typhus and never saw Woyzeck performed. It was not, in fact, performed until 1913. In this respect, Woyzeck has always been a play performed outside of its original moment in history. It has always been a window backwards to a different time, while simultaneously being a means for the theatre to examine the time in which it is being produced.

It therefore comes as no surprise that in 2008 a play offering a commentary on the complex social conditions of class and status opened in a country standing at the edge of a financial crisis that would shape the next three years of its economic position in the world. A play about the use and misuse of power in a world where a desperate Woyzeck tries to explain to a bourgeois captain that the poor are “flesh and blood… wretched in this world and the next…” (Büchner) rings as a warning about what that corner of the world was soon to face.

The Response to Vesturport’s Aesthetic

From the moment of its formation, Vesturport has been a company that often appropriates material and looks to add an additional element of spectacle – early in their formation as a troupe they mounted productions of Romeo and Juliet and Titus Andronicus. This additional element of spectacle is specifically characterized by a gymnastic and aerial (contemporary circus) aesthetic. The company’s connection to a circus aesthetic is often credited to the gymnastics background of Gisli Örn Gardarsson, the company’s primary director (Vesturport). The use of circus as a mechanism for storytelling is both compelling and engaging. Peta Tait captures this best as she talks about what circus represents:

Circus performance presents artistic and physical displays of skillful action by highly rehearsed bodies that also perform cultural ideas: of identity, spectacle, danger, transgression. Circus is performative, making and remaking itself as it happens. Its languages are imaginative, entertaining and inventive, like other art forms, but circus is dominated by bodies in action [that] can especially manipulate cultural beliefs about nature, physicality and freedom. (Tait 6)

The very nature of circus as a performance technique, therefore, brings a kind of translation to Vesturport’s work that is unlike the work of other theatre companies. They are also unique in their use of language, as their productions frequently feature translations that fit the dominant language of a given touring venue. More than a company that features the use of circus as a gimmick, Vesturport uses the body’s relationship to space as a translation of ideas into movement, just as their use of language itself is a constant flow of translation.

Vesturport’s production of Woyzeck invites the audience to play with them as “Gardarsson’s gleefully physical staging of Büchner’s masterpiece … is played out on an industrial set of gleaming pipes, green astroturf, and water-filled plexiglass tanks” (Vesturport). Melissa Wong, writing for Theatre Journal, sees a stage that “resembled a swimming pool and playground” and fills the stage with a “playful illusion.” The playful atmosphere of the production, however, is always in flux as a series of nightmarish moments of abuse are juxtaposed against scenes of slapstick comedy and aerial feats. Wong later sees a Woyzeck who “possessed a vulnerability that contrasted with the deliberately grotesque portrayals of the other characters.” Wong’s ultimate assessment of the contrasting moments of humor and spectacle is that they “served to emphasize the pathos of the play, especially at the end when the fun and frolicking faded away to reveal the broken man that Woyzeck had become.” Not all American critics, however, shared her enthusiasm for Vesturport’s production. Charles Isherwood, writing for the New York Times, sees the use of circus as a distraction, writing that “the circus is never in serious danger of being spoiled by that party-pooping Woyzeck…it’s hard to fathom what attracted these artists to Büchner’s deeply pessimistic play, since they so blithely disregard both its letter and its spirit.” Jason Best shares a similar frustration with the production, writing “by relegating Büchner’s words to second place, the production ends up more impressive as spectacle than effective as drama.” Ethan Stanislawski was frustrated by a lack of depth in Gardarsson’s production, saying “this Woyzeck is as comical, manic, and intentionally reckless as it is intellectually shallow.”

Circus as an Embodied Language

Facing such sharp criticism, why does this Icelandic company use circus as a method for interrogating text? Certainly one might consider the mystique of exploring new dimensions of theatricality, or notions of engaging the whole body in performance. While these are certainly appealing suggestions, there is more to the idea of circus as a physical manifestation of idea. Tait writes “… aerial acts are created by trained, muscular bodies. These deliver a unique aesthetic that blends athleticism and artistic expression. As circus bodies, they are indicative of highly developed cultural behavior. The ways in which spectators watch performers’ bodies – broadly, socially, physically and erotically – come to the fore with the wordless performance of an aerial act.” Spivak reminds us that:

Logic allows us to jump from word to word by means of clearly indicated connections. Rhetoric must work in the silence between and around words in order to see what works and how much. The jagged relationship between rhetoric and logic, condition and effect of knowing, is a relationship by which a world is made for the agent, so that the agent can act in an ethical way, a political way, a day-to-day way; so that the agent can be alive in a human way, in the world. (Spivak 181)

Woyzeck’s challenge is fundamentally about understanding how to live in this world: a world that is unjust, exploitative, and frequently characterized by subjugation. Gardarsson uses circus to depict a world that is both ugly and beautiful. He uses circus to call our attention to these problems as embodied manifestation. The critics miss what’s happening in the production, and this is especially evident when looking at what Tait has to say about the role of new circus as a medium:

New circus assumes its audience is familiar with the format of traditional live circus, and then takes its artistic inspiration from a cultural idea of circus as identity transgression and grotesque abjection, most apparent in literature [and] in cinema. Early [new circus in the 1990’s] shows reflected a trend in new circus practice to include queer sexual identities and expand social ideas of freakish bodies. Artistic representation frequently exaggerates features of traditional circus…. (Tait 123)

What Isherwood misses is that the use of garish spectacle that makes light of an ugly world is, in fact, at the very heart of what Gardarsson is trying to express. The working-poor Woyzeck who questions, and thinks, and is criticized for thinking is ruining the Captain and the Doctor’s circus-filled party. Woyzeck’s tragedy lies in his fight to survive, to be human, in the inhuman world that surrounds him – what could be more “deeply pessimistic” (as Isherwood calls it) than a vision of the world where fighting to be human drives a man to destroy the only anchor to the world (Marie) that he ever had?

Conclusions

Melissa Wong best sums up the production in seeing the tragedy in a Woyzeck “who seemed in some ways to be the most humane character in the production…the one who failed to survive.” Her assessment of Gardarsson’s use of levity is that it points “to the complicity of individuals [the audience] who, as part of society, had watched Woyzeck’s life as entertainment without fully empathizing with the depth of his existential crisis” (Wong). She also rightly points out that the use of humor in the play “enabled us to access questions that in the bleakness of their full manifestation might have been too much to bear” (Wong). Tait also reminds us that the true transformative nature of circus as a medium is not what is happening with the performer, but how the experience of viewing the performer is manifest in the viewer.

Aerial motion and emotion produce sensory encounters; a spectator fleshes culturally identifiable motion, emotionally. The action of musical power creates buoyant and light motion, which corresponds with reversible body phenomenologies in the exaltation of transcendence with and of sensory experience. The aerial body mimics the sensory motion of and within lived bodies in performance of delight, joy, exhilaration, and elation. Aerial bodies in action seem ecstatic in their fleshed liveness. (Tait 152)

Here circus functions as a mechanism for translation and confrontation in a play whose thematic elements are difficult to grapple with. Vesturport’s method and execution look to find the spaces between words, and while not perfect, strive to push the audience into a fleshed and lived experience of Büchner’s play rather than a purely intellectual theatrical exercise.

The newly devised piece that I’ve been working on here at ASU finally opened this last weekend. Named “The Fall of the House of Escher,” the production explores concepts of quantum physics, choice, fate, and meaning by combining the works of MC Escher and Edgar Allan Poe. The production has been challenging in many respects, but perhaps one of the most challenging elements, largely invisible to the audience, is how we technically move through this production.

Early in the process the cohort of actors, designers, and directors settled on adopting a method of storytelling that drew its inspiration from the Choose Your Own Adventure books originally published in the 1970s. In these books the reader gets to choose what direction the protagonist takes at pivotal moments in the drama. The devising team was inspired by the idea of audience choice and audience engagement in the process of storytelling. Looking for an opportunity to more deeply explore the meaning of audience agency, the group pushed forward in looking to create a work where the audience could choose what pathway to take during the performance. While Escher was not as complex as many of its source materials, its structure presented some impressive design challenges.

Our production works around the idea that there are looping segments of the production. Specifically, we repeat several portions of the production in a Groundhog Day-like fashion in order to draw attention to the fact that the cast is trapped in a looped reality. Inside the looped portion of the production there are three moments when the audience can choose what pathway the protagonist (Lee) takes, with a total of four possible endings before we begin the cycle again. The production is shaped to take the audience through the choice section two times, and on the third time through the house the protagonist chooses a different pathway that takes the viewers to the end of the play. The number of internal choices in the production means that there are a total of twelve possible pathways through the play. Ironically, the production only runs for a total of six shows, meaning that at least half of the pathways through the house will go unseen.

This presents a tremendous challenge to any designers dealing with traditionally linear storytelling technologies: lights, sound, media. Conceiving of a method to navigate through twelve possible production permutations in a manner that any board operator could follow was daunting, to say the least. This was compounded by a heavy media presence in the production (70 cued moments), and the fact that the script was continually in development up until a week before the technical rehearsal process began. This meant that while much of the play had a rough shape, changes that influenced the technical portion of the show were being made nearly right up until the tech process began. The consequences of this approach were manifest in three nearly sleepless weeks between the crystallization of the script and opening night – while much of the production was largely conceived and programmed, making it all work was its own hurdle.

In wrestling with how to approach this non-linear method, I spent a large amount of time trying to determine how to efficiently build a cohesive system that allowed the story to jump forwards, backwards, and sideways through a system of interactive inputs and pre-built content. The approach that I finally settled on was thinking of the house as a space to navigate. In other words, media cues needed to live in the respective rooms where they took place. Navigating, then, was a matter of moving from room to room. This ideological approach was made easier with the addition of a convention for the “choice” moments in the play when the audience chooses what direction to go. Having a space that was outside of the normal set of rooms in the house allowed for easier visual movement from space to space, while also providing visual feedback for the audience to reinforce that they were in fact making a choice.

Establishing a modality for navigation grounded the media design in an approach that made the rest of the programming process easier – in that establishing a set of norms and conditions creates a paradigm that can be examined, played with, even contradicted in a way that gives the presence of the media a more cohesive aesthetic. While thinking of navigation as a room-based activity made some of the process easier, it also introduced an additional set of challenges. Each room needed a base behavior, an at-rest behavior that was different from its reactions to various influences during dramatic moments of the play. Each room also had to contain all of the possible variations that existed within that particular place in the house – a room might need to contain three different types of behavior depending on where we were in the story.

I should draw attention again to the fact that this method was adopted, in part, because of the nature of the media in the show. The production team committed early on to looking for interactivity between the actors and the media, meaning that a linear, asset-based playback system like Dataton’s Watchout was largely out of the picture. It was for this reason that I settled on using TroikaTronix Isadora for this particular project. Isadora also offered opportunities for tremendous flexibility, Quartz Composer integration, and non-traditional playback methods; methods that would prove to be essential in this process.

In building this navigation method it was first important to establish the locations in the house, and create a map of how each module touched the others in order to establish the required connections between locations. This process involved making a number of maps to help translate these movements into locations. While this may seem like a trivial step in the process, it ultimately helped solidify how the production moved, and where we were at any given moment in the various permutations of the traveling cycle. Once I had a solid sense of the process of traveling through the house I built a custom actor in Isadora to allow me to quickly navigate between locations. This custom actor allowed me to build the location actor once, and then deploy it across all scenes. Encapsulation (creating a sub-patch) played a large part in the process of this production, and this is only a small example of this particular technique.
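The room-as-graph idea behind that custom actor can be sketched in ordinary code. The production itself lived in an Isadora patch, not a script, and the room names and connections below are hypothetical, but the underlying data structure is the same: each location knows its neighbors, and navigation is only legal between adjacent spaces (with the out-of-house "choice" space reachable as needed).

```python
# A minimal sketch of the "house as navigable space" idea.
# Room names and connections are hypothetical -- the real production
# used an Isadora patch, but the adjacency-map concept is the same.

HOUSE_MAP = {
    "foyer":   ["parlor", "stairs"],
    "parlor":  ["foyer", "library"],
    "library": ["parlor", "choice"],
    "stairs":  ["foyer", "choice"],
    "choice":  ["foyer", "parlor", "stairs"],  # the space outside the normal rooms
}

class HouseNavigator:
    def __init__(self, start="foyer"):
        self.location = start

    def go(self, room):
        """Move to an adjacent room; refuse moves the map doesn't allow."""
        if room not in HOUSE_MAP[self.location]:
            raise ValueError(f"no path from {self.location} to {room}")
        self.location = room
        return self.location

nav = HouseNavigator()
nav.go("parlor")
nav.go("library")
print(nav.location)  # library
```

Encoding the map once and validating every move against it is what makes wrong turns during a performance impossible, which is exactly the guarantee a board operator needs in a twelve-pathway show.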

The real lesson to come out of non-linear storytelling was the importance of planning and mapping for the designer. Ultimately, the most important thing for me to know was where we were in the house / play. While this seems like an obvious statement for any designer, the challenge was compounded by the nature of our approach: a single control panel would have been too complicated, and likewise a single trigger (space bar, mouse click, or the like) would never have had the flexibility for this kind of a production. In the end each location in the house had its own control panel, and displayed only the cues corresponding to actions in that particular location. For media, conceptualizing the house as a physical space to be navigated through was ultimately the solution to the complex question of how to handle non-linear storytelling.

In early June I was traveling with my partner, Lauren Breunig, to an aerial acrobatics festival in Denver, Colorado. Lauren is an incredibly beautiful and talented aerialist. One of the apparatuses that she performs on is what she calls “sliding trapeze.” This is essentially a trapeze bar with fabric loops instead of ropes.

Earlier this year Lauren was invited to perform at the Aerial Acrobatics Arts Festival of Denver as a performer in their “innovative” category. As an aerialist Lauren has already performed in many venues across the country, both on her invented apparatus as well as on more traditional circus equipment. In all of these cases she’s had to submit information about her apparatus, clearance requirements, and possible safety concerns.

So when it came time to answer some questions about rigging for the festival, it seemed like old hat. One of the many things that Lauren had to submit was the height requirement for her bar, given that a truss would be suspended somewhere between 27 and 29 feet above the floor of the stage. In her case the height of the truss was less critical than the height of her bar: the minimum distance from the floor to her rigging points is 15.5 feet. At this height her apparatus is high enough off of the ground that she can safely perform all of her choreography. This is also the lower limit of a height where she can jump to her bar unaided. Where this gets tricky is how one makes up the difference between the required rigging points and the height of the truss. The festival initially indicated that they would drop steel cable to make up the differences between required heights and the height of the truss, making it seem as though the performers only needed to worry about bringing their apparatus.
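The arithmetic here is simple but worth making explicit, because it is exactly the number the festival and the artist have to agree on. Using the figures from this story (27 to 29 foot truss, 15.5 foot minimum rigging-point height), the drop that someone has to supply in cable or span sets works out as:

```python
# Worked arithmetic sketch using the numbers from this post.
# The drop is the length of cable/span sets needed to bring the
# rigging points down from the truss to the apparatus height.

TRUSS_LOW_FT, TRUSS_HIGH_FT = 27.0, 29.0  # truss range quoted by the festival
MIN_RIG_POINT_FT = 15.5                   # minimum rigging-point height for the bar

drop_min = TRUSS_LOW_FT - MIN_RIG_POINT_FT    # 11.5 ft if the truss sits at 27 ft
drop_max = TRUSS_HIGH_FT - MIN_RIG_POINT_FT   # 13.5 ft if the truss sits at 29 ft
print(drop_min, drop_max)  # 11.5 13.5
```

Eleven and a half to thirteen and a half feet is a lot of hardware to conjure on site, which is why the question of who supplies it matters so much.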

When we dropped off Lauren’s equipment we discovered that the realities of the rigging were slightly different than what the email correspondence had indicated. The truss had been set at a height of 27 feet, but the festival was no longer planning on dropping any cables for performers. Additionally, they told us that they only had limited access to span sets and other equipment for making up the height difference. Luckily Lauren had packed some additional span sets, and had thought through some solutions that used webbing (easily available from REI) to make up any discrepancies that might come up. This also, unfortunately, made her second-guess the specs she had originally sent to the festival, and left her wondering if she had accurately determined the correct heights for her apparatus.

Memory Measurements

Having rigged and re-rigged this apparatus in numerous venues, Lauren had a strong sense of how her equipment worked with ceilings less than 20 feet. This also meant that she didn’t have any fixed heights, and instead had lots of numbers bouncing around her head – one venue was rigged at 15.5 feet, but the ceiling was really at 17 feet; at another the beams were at 22 or 23 feet, and the apparatus had been rigged at heights between 15.5 and 17 feet; and so on. Additionally, she typically rigs her own equipment, and is therefore able to make specific adjustments based on what she’s seeing and feeling in a given space. For the festival, this wasn’t a possibility. So, after the miscommunication about the rigging situation, and suddenly feeling insecure about the measurements she had sent ahead, we found ourselves talking through memories of other venues and trying to determine what height she actually needed.

Reverse engineering heights

We started by first talking through previously rigged situations – how high were the beams, how long is the apparatus, how far off the ground was she. Part of the challenge here was that this particular apparatus hangs at two different lengths because the fabric ropes stretch. This means that without a load it sits at a different distance from the floor than with a load. While this isn’t a huge difference, it’s enough to prevent her from being able to jump to her bar if it’s rigged too high, or to put her in potential danger of smashing her feet if it’s rigged too low. While there were several things we knew, it was difficult to arrive at a hard and fast number with so many variables that were unknown or only known as ranges.
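The loaded/unloaded problem can also be written down as two simple equations, which is roughly what we ended up doing on paper. The 15.5 foot rigging-point height is from this story; the apparatus length and stretch figures below are purely illustrative assumptions, not Lauren's actual measurements:

```python
# Sketch of the loaded vs. unloaded bar height. The fabric loops stretch
# under load, so the bar lives at two heights. Only the rigging-point
# height comes from the post; the other numbers are hypothetical.

rig_point_ft = 15.5         # minimum rigging-point height (from the post)
length_unloaded_ft = 8.0    # hypothetical apparatus length with no load
stretch_ft = 0.5            # hypothetical fabric stretch under load

bar_unloaded = rig_point_ft - length_unloaded_ft               # what she jumps to
bar_loaded = rig_point_ft - (length_unloaded_ft + stretch_ft)  # where her feet swing

print(bar_unloaded, bar_loaded)  # 7.5 7.0
```

Half a foot sounds trivial until you remember that one of those numbers governs whether she can reach the bar and the other governs whether her feet clear the floor.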

Drawing it out

Ultimately what helped the most was sitting down and drawing out some of the distances and heights. While this was far from perfect, it did finally give us some reference points to point to rather than just broadly talk through. A diagram goes a long way toward providing a concrete representation of what you’re talking about, and it’s worth remembering the real value in this process. It meant that we were suddenly able to talk about things that we knew, only remembered, or guessed. This process, however, still didn’t solve all of the problems Lauren was facing. We still had some questions about the wiggle room in our half-remembered figures, and about making sure that she would be rigged at a height that was both safe and visually impressive. Finally, after an hour of drawing, talking, and drawing again, we got to a place where we were reasonably confident about how she might proceed the next day. In thinking about this process, I realized that we could have made our lives a lot easier if we had done a little more homework before coming to the festival.

What she really needed

A Diagram

A complete drawing of the distances, apparatus, performer, rigging range, and artist-provided equipment would have made a lot of this easier. While the rigging process went without a hitch once she was in the theater, being able to send a drawing of what her apparatus looked like and how it needed to be rigged would have put us at ease and ensured that all parties were on the same page. A picture codifies concepts that might otherwise be difficult to communicate, and in our case this would have been a huge help.

A Fuller tech rider

While Lauren did send a tech rider with her submission, it occurred to us that a fuller tech rider would have helped the festival, and it would have helped us. When dealing with an apparatus that she has to jump to reach, it would have been helpful for us to know exactly how high she could jump. There’s also a sweet spot that’s not too high for this apparatus, but where Lauren still needs a boost to reach the bar; this would have been another helpful range to have already known. While we have a reasonable amount of rigging materials, there’s also some equipment that we don’t have. Specifying what we plan to provide, or can provide with adequate notice, would have been a helpful inclusion in the conversation she was having with the festival. In hindsight, some of the statements that should have been added to her rider include:

the artist can jump for heights of

the artist needs assistance for heights

the artist will provide rigging for

the artist requires confirmation by
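One way to keep those rider statements from staying vague is to treat them as structured data rather than prose, so nothing can be left blank by accident. The sketch below is a hypothetical example of that idea; the field names and all values except the 15.5 foot rigging-point height are illustrative assumptions, not Lauren's actual rider:

```python
# A hedged sketch: a tech rider as a structured record. Field names and
# values are hypothetical, except the 15.5 ft figure from the post.

from dataclasses import dataclass, field

@dataclass
class RiggingRider:
    apparatus: str
    min_rig_point_ft: float       # lowest safe rigging-point height
    unaided_jump_max_ft: float    # highest bar the artist can reach unaided
    assisted_range_ft: tuple      # heights where the artist needs a boost
    artist_provides: list = field(default_factory=list)
    confirm_by: str = ""          # deadline for venue confirmation

rider = RiggingRider(
    apparatus="sliding trapeze",
    min_rig_point_ft=15.5,
    unaided_jump_max_ft=7.5,          # hypothetical
    assisted_range_ft=(7.5, 9.0),     # hypothetical
    artist_provides=["span sets", "webbing"],
    confirm_by="two weeks before load-in",
)
print(rider.apparatus)  # sliding trapeze
```

The point isn't the code: it's that a form with required fields forces the conversation that an open-ended email thread lets you skip.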

What does this have to do with projectors?

Let’s face it, tech riders are not the most exciting part of the production world. That said, by failing to specify what you need and what you are planning on providing, it’s easy to suddenly find yourself in a compromising position. While the consequences are different for an aerialist vs. a projectionist, the resulting slow-down in the tech process, or the need to reconfigure some portion of a performance, are very real concerns. The closer you are to a process or installation, the more difficult it becomes to really see all of the moving parts. Our exposure to any complicated process creates blind spots in the areas that we’ve automated, set up once, or take for granted simply because they seem factual and straightforward. These are the privileges, and pitfalls, of working with the same equipment or apparatus for extended periods of time – we become blind to our assumptions about our process. Truly, this is the only way to work with a complicated system. At some point, some portion of the process becomes automated in our minds or in our practice in order to facilitate higher-order problem solving. Once my projectors are hung and focused, I don’t think about the lensing when I’m trying to solve a programming problem.

While this may well be the case when you’re on your home turf, it’s another thing entirely to think about setting up shop somewhere new. When thinking about a new venue, it becomes imperative to look at your process with eyes divorced from your regular practice, and to instead think about how someone with unfamiliar eyes might look at your work. That isn’t to say that those eyes don’t have any experience, just that they’re fresh to your system / apparatus. In this way it might be useful to think of the tech rider as a kind of pre-flight checklist. Pilots have long known that there are simply too many things to remember when looking over a plane before take-off. Instead, they rely on checklists to ensure that everything gets examined. Even experienced pilots rely on these checklists, and even obvious items get added to the list.

Knowing your equipment

Similarly, it’s not enough to just “know” your equipment. While intuition can be very useful, it’s also desperately important to have documentation of your actual specifications – what are the actual components of your machine, what are your software version numbers, how much power do you need, etc. There are always invisible parts of our equipment that are easy to take for granted, and it’s these elements that are truly important to think about when you’re setting up in a new venue. Total certainty may well be a pipe-dream, but it isn’t impractical to take a few additional steps to ensure that you’re ready to tackle any problems that may arise.

Packing your Bags

The real magic of this comes down to packing your bags. A solid rider, and an inventory of your system will cover most of your bases but good packing is going to save you. Finding room for that extra roll of gaff tape, or that extra power strip, or that USB mouse may mean that it takes you longer or that you travel one bag heavier but it will also mean a saved trip once you’re at the theatre. Including an inventory in your bags may seem like a pain, but it also means that you have a quick reference to know what you brought with you. It also means that when you’re in the heat of strike you know exactly what goes where. Diagrams and lists may not be the sexiest part of the work we do, but they do mean saved time and fewer headaches. At the end of the day, a few saved hours may mean a few more precious hours of sleep, or better yet a chance to grab a drink after a long day.