Overview

At some point you’ll need to split up the work of a single project into multiple processes. This happens for lots of reasons – maybe you want to break your control interface out from your output elements, or maybe you want to start up another tool you’ve built – you name it, there are lots of reasons you might want to launch another process, and if you haven’t found a reason to yet… chances are you will soon.

The good news is that we can do this with a little bit of python. We need to import a few extra libraries, and we need to do a little leg work – but once we get a handle on those things we have a straightforward process on our hands.

Getting Started

First things first, start by downloading or cloning the whole repo. We’ll start by opening the process-management.toe file. You might imagine that this is the toe file that you’re launching processes from, or you might think of this as your main control toe file. You’ll notice that there’s also a toe file called other-app.toe. This is the file we’re going to launch from within TouchDesigner. At this point feel free to open up that file – you’ll see that it starts in perform mode and says that it’s some other process. Perfect. You should also notice that it says “my role is,” but nothing else. Don’t worry, it’s this way on purpose.

Process-management.toe

In this toe file you’ll see three buttons:

Launch Process

Quit Process

Quit Process ID None

Launch Process

This button will run the script in text_start_process.

So, what’s happening here? First we need to import a few other libraries that we’re going to use – os and subprocess. From there we need to identify the application we’re going to use, and the file we’re going to launch. Said another way, we want to know what program we’re going to open our toe file with. You’ll see that we’re doing something a little tricksy here. In Touch, the app class has a member called binFolder – this tells us the location of the TouchDesigner binary files, which happens to include our executable. Rather than hard coding a path to our binary we can instead use the path provided by Touch – this has lots of advantages and should mean that your code is less likely to break from machine to machine.

So far so good. You should also see that we’re setting an environment variable with os.environ. This is an interesting place where we can actually set variables for a Touch process at start. Why do this? Well, you may find that you have a single toe file that you want to configure differently for any number of reasons. If you’re using a single toe file configuration, you might want to launch your main file to default as a controller, another instance of the same file in an output configuration, and maybe another instance of the same app to handle some other process. If that’s not interesting to you, you can comment out that line – but it might at least be worth thinking about before you add that pound sign.
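
As a quick illustration of the other side of that exchange, here’s a minimal sketch of how the launched file might read the variable back – the op name text_role is hypothetical, not something from the example files:

import os

# read back the ROLE variable that was set before this process launched
role = os.environ.get('ROLE', 'no role set')
op('text_role').par.text = 'my role is {}'.format(role)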

Next we use a subprocess.Popen() call to start our process – by providing the executable and the file as arguments in a list. We can also grab our process ID (we’ll use that later) while we’re here.

Finally we’ll build a little dictionary of our attributes and put that all in storage. I’m using a dictionary in this example since you might find that you need to launch multiple instances, and having a nice way to keep them separate is handy.
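
Putting those pieces together, a minimal sketch of the launch script might look something like the following. The executable name, the ROLE value, and the storage key are assumptions for illustration – check them against your own TouchDesigner version and project:

import os
import subprocess

# the TouchDesigner binaries live in the folder reported by the app class
touch_exe = os.path.join(app.binFolder, 'TouchDesigner099.exe')  # exe name depends on your version
other_toe = '{}/other-app.toe'.format(project.folder)

# set an environment variable that the child process can read at start
os.environ['ROLE'] = 'render1'

# launch the new process and hang on to it so we can quit it later
process = subprocess.Popen([touch_exe, other_toe])

# stash what we need in storage for the quit scripts
parent().store('process_info', {'process': process, 'pid': process.pid})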

Okay. To see this work, let’s make that button viewer active and click it – tada! At this point you should see another TouchDesigner process launch and this time around the name that was entered for our ROLE environment variable shows up in our second touch process: “some other process, my role is render1”

Good to know: if you try to save your file now you’ll get an error. That’s because we’ve put a subprocess object into storage, and it can’t persist between closing and opening. Your file will be saved, but our little dictionary in storage will be lost.

Quit Process

This little button kills our other-app.toe instance.

Much simpler than our last script, this one first grabs our dictionary that’s in storage, grabs the subprocess object, and then kills it. Next we unstore all of the bits in storage so we can save our toe file without any warning messages.
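
A minimal sketch of that quit script, assuming the same storage key as the launch sketch above:

# grab the dictionary we stored at launch and kill the subprocess
process_info = parent().fetch('process_info')
process_info['process'].kill()

# clear storage so the toe file saves without warnings
parent().unstore('process_info')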

Quit Process ID

Okay – so what happens if you want to just kill a process by its ID, not by storing the whole subprocess object? You can do that.

In this case we can use the os module to issue a kill call with our process id. If we look at the os documentation we’ll see that we need a pid and a sig – which is why we’re also importing signal.
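
A hedged sketch of that approach – the pid here is a stand-in for wherever you actually saved it:

import os
import signal

pid = 12345  # hypothetical process id saved at launch
os.kill(pid, signal.SIGTERM)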

Takeaways

This may or may not be useful in your current workflow, but it’s handy to know that there are ways to launch and quit another toe file. Better yet, this same idea doesn’t have to be limited to use with Touch. You might use this to launch or control any other application your heart desires.

With a start on point lights, one of the next questions you might ask is “what about cone lights?” Well, it just so happens that there’s a way to approach a deferred pipeline for cone lights just like with point lights. This example still has a bit to go, with a few lingering misbehaviors, but it is a start for those interested in looking at complex lighting solutions for their realtime scenes.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

This approach is very similar to point lights, with the additional challenge of needing to think about lights as directional. We’ll see that the first and last stages of this process are consistent with our Point Light example, but in the middle we need to make some changes. We can get started, again, with color buffers.

Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

position – renderselect_postition

normals – renderselect_normal

color – renderselect_color

uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

Here we’ll begin to see a divergence from our previous approach.

We are still going to compute and pack data for the position, color, and falloff for our point lights like in our previous example. The difference now is that we also need to compute a look-at position for each of our lights. In addition to our falloff data we’ll need to also consider the cone angle and delta of our lights. For the time being cone angle is working, but cone delta is broken – pardon my learning in public here.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our null SOP null_lightpos. We convert this to CHOP data and use the attributes from this null (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene. In this case we’re using a null since we want to position the look-at points at some other position than our lights themselves. Notice that our circle has one transform SOP to describe light position, and another transform SOP to describe look-at position. In the next stage we’ll use our null_light_pos CHOP and our null_light_lookat CHOP for the lighting calculations – we’ll also end up using the results of our object CHOP null_cone_rot to be able to describe the rotation of our lights when rendering them as instances.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

position – null_light_pos

color – null_light_color

falloff – null_light_falloff

light cone – null_light_cone

This means that sample 0 from each of these four CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

The additional light cone attribute here is used to describe the radius of the cone and the degree of softness at the edges (again pardon the fact that this isn’t yet working).

Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in OpenGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

If you look at the final pieces of our for loop you’ll find that much of this process is borrowed from the example Malcolm wrote (Thanks Malcolm!). This starting point serves as a baseline to help us get started from the position of how other lights are handled in Touch.

Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see the lights represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Our divergence here is that rather than using spheres, we’re instead using cones to represent our lights. In a future iteration the width of the cone base should scale along with our cone angle, but for now let’s celebrate the fact that we have a way to see where our lights are coming from. You’ll notice that the rotate attributes generated from the object CHOP are used to describe the rotation of the instances. Ultimately, we probably don’t need these representations, but they sure are handy when we’re trying to get a sense of what’s happening inside of our shader.

Post Processing for Final Output

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

A bit ago I wanted to get a handle on how one might approach real time rendering with LOTS of lights. The typical OpenGL pipeline has some limitations here, but there’s a lot of interesting potential with Deferred Lighting (also referred to as deferred shading). Making that leap, however, is no easy task and I asked Mike Walczyk for some help getting started. There’s a great starting point for this idea on the derivative forum, but I wanted a 099 approach and wanted to pull it apart to better understand what was happening. With that in mind, this is a first pass at looking through using point lights in a deferred pipeline, and what those various stages look like.

TouchDesigner networks are notoriously difficult to read, and this doc is intended to help shed some light on the ideas explored in this initial sample tox that’s largely flat.

Color Buffers

These four color buffers represent all of that information that we need in order to do our lighting calculations further down the line. At this point we haven’t done the lighting calculations yet – just set up all of the requisite data so we can compute our lighting in another pass.

Our four buffers represent:

position – renderselect_postition

normals – renderselect_normal

color – renderselect_color

uvs – renderselect_uv

If we look at our GLSL material we can get a better sense of how that’s accomplished.

Essentially, the idea here is that we’re encoding information about our scene in color buffers for later combination. In order to properly do this in our scene we need to know point position, normal, color, and uv. This is normally handled without any additional intervention by the programmer, but in the case of working with lots of lights we need to organize our data a little differently.

Light Attributes

Next we’re going to compute and pack data for the position, color, and falloff for our point lights.

For the sake of sanity / simplicity we’ll use a piece of geometry to represent the position of our point lights – similar to the approach used for instancing. In our network we can see that this is represented by our Circle SOP circle1. We convert this to CHOP data and use the attributes from this circle (number of points) to correctly ensure that the rest of our CHOP data matches the correct number of samples / lights we have in our scene.

When it comes to the color of our lights, we can use a noise or ramp TOP to get us started. These values are ultimately just CHOP data, but it’s easier to think of them in a visual way – hence the use of a ramp or noise TOP. The attributes for our lights are packed into CHOPs where each sample represents the attributes for a different light. We’ll use a texelFetchBuffer() call in our next stage to pull the information we need from these arrays. Just to be clear, our attributes are packed in the following CHOPs:

position – null_light_pos

color – null_light_color

falloff – null_light_falloff

This means that sample 0 from each of these three CHOPs all relate to the same light. We pack them in sequences of three channels, since that easily translates to a vec3 in our next fragment process.

Combining Buffers

Next up we combine our color buffers along with our CHOPs that hold the information about our lights’ locations and properties.

What does this mean exactly? It’s here that we loop through each light to determine its contribution to the lighting in the scene, accumulate that value, and combine it with what’s in our scene already. This assemblage of our stages and lights is “deferred” so we’re only doing this set of calculations based on the actual output pixels, rather than on geometry that may or may not be visible to our camera. For loops are generally frowned on in OpenGL, but this is a case where we can use one to our advantage and with less overhead than if we were using light components for our scene.

Here’s a look at the GLSL that’s used to combine our various buffers:

Representing Lights

At this point we’ve successfully completed our lighting calculations, had them accumulate in our scene, and have a slick looking render. However, we probably want to see the lights represented in some way. In this case we might want to see them just so we can get a sense of whether our calculations and data packing are working correctly.

To this end, we can use instances and a render pass to represent our lights as spheres to help get a more accurate sense of where each light is located in our scene. If you’ve used instances before in TouchDesigner this should look very familiar. If that’s new to you, check out: Simple Instancing

Post Processing for Final Output

Finally we need to assemble our scene and do any final post process bits to make things clean and tidy.

Up to this point we haven’t done any anti-aliasing, and our instances are in another render pass. To combine all of our pieces, and take off the sharp edges, we need to do a few final pieces of work. First we’ll composite our scene elements, then do an anti-aliasing pass. This is also where you might choose to do any other post process treatments like adding a glow or bloom to your render.

Recently when I was teaching at LDi 2017 a participant asked if I might take some time to document how we use git in our projects at Obscura. Version control isn’t always a sexy topic, but it is a vital piece of our pipeline and process, and one well worth considering if you’re moving away from being a lone developer on projects. It’s also worth getting a handle on version control approaches if you’re looking to join a software team in general.

A Blanket Disclaimer

Git is complicated… ask anyone that’s used git. XKCD has my favorite description of working with git on a regular basis:

Which is to say that I’m not a git expert, often have issues of my own, and have certainly done the dance of copying my files to somewhere else to ensure that I don’t lose all of my work. I don’t mean to suggest that you shouldn’t use git – you should – but rather that like all things that are new or different to you, this one comes with its own set of challenges. Okay okay… so why should I use it then Matt?! We’ll see below what makes it so powerful, and when it comes to working in teams this far outpaces any other approach – but you have to be patient, thoughtful, and considerate.

Many of you, dear readers, are probably landing here because you’re curious about how this works with TouchDesigner. In Touch, the default behavior is that every time you save a file you get a new version – project.1.toe, project.2.toe, project.3.toe, and on and on and on. The idea of version control is similar except that it allows you to avoid the process of having additional files – you only ever have one version of your file, but you can reach back in time and find other versions. Similar to Apple’s Time Machine idea, or Dropbox’s version retrieval. You only ever see one file in your directory, but if you realize that you need a previous one, you can reach back and fetch it.

To achieve this, git uses the convention of a commit. When you’re ready to add something to your history, you commit your changes. That is, when you have a version of your file that you want to hold onto, you specifically choose to commit it to your history along with a message about what changed. If you’re working on a team this is great – it means you can see the history of every commit, and what another developer had to say about what they were doing and why they made the change. Git also helps you differentiate between versions – in text based files you can call a command to see exactly what changed between versions and who made that change.

More than that, git also lets you build branches of your project. Let’s say you’re working on an installation. It’s working great, but you want to be able to try out some other ideas or work on some updates for the project. You don’t want to make this change to the project file that’s running, and you don’t want to interrupt the installation’s operation. Moving to another branch lets you work in a parallel project with all of the same file names where you can make all the changes you want, and then decide when to roll that into your actual project.

This becomes especially interesting when you’re working with multiple people. Once you’re working on a large project it often becomes important to have several folks contributing… git helps organize this distributed work, and keep you from overwriting one another’s contributions.

Further, using something like Bitbucket or GitHub means that your project is hosted on an outside server – so even if your machine gives up the ghost, your work isn’t interrupted.

Challenges in using TouchDesigner and Git

At this point you might be sold on the idea of using git.

YAY! Welcome to the git party!

Before you get too excited, there are some important issues to consider when it comes to using git with TouchDesigner.

Most Touch files are binary – what does that mean Matt? Well, it means that unlike a .py or .json file, your toe or tox file is actually made up of hex strings that are difficult to parse outside of the context of Touch. That makes them very difficult to diff – that is, to tell what’s different between committed versions.

Toe Files are whole projects – a toe file holds a whole project, which is great, but makes collaborating very difficult. Part of the beauty of git is that multiple people can work together at the same time without overwriting one another’s changes. That doesn’t make a whole lot of sense if you can’t see what’s different inside of a toe file, and if the whole project is stuck inside of a single file.

Using git means learning shell commands or a tool to use git – If you’re going to use git, you’ll have to learn some shell commands, or learn another tool that interfaces with git (GitHub has a great desktop tool). That’s not terrible, but it’s not the same as using something like Dropbox or Google Drive.

Our Solution

Okay… so how do we make this work then?

Well, first things first, you start by thinking about toxes for your modules / component work-spaces, and you start externalizing your scripts. That’s a big change in workflow for lots of folks, and if you’re not ready for that change that’s okay. If you’re working on big projects, however, and working with other people, now is the time to level up. Let’s look at a simple project build so we can see how this might work in practice.

First we need to set up our git project. I’m going to use GitHub – it’s free to use the public version, and it’s got one of the best desktop utilities. GitHub has a great tutorial for their app so I’m not going to cover that here; instead we’ll look at how the shell commands work.

For starters, you’ll want to install git. I’m also going to use git bash instead of just the Windows command line (it’s a little easier to read, though you can use either), so if you want to follow along you’ll want to make sure that you hit that checkbox when you install.

There are lots of ways to start a new repo, but I’m going to start mine from my github account. Once I create an account, I’ll need to use the plus button in the top right corner of the page to create a new repo, then choose my settings for it:

Once I’ve created my repo online, I need to clone this to my computer. Cloning my repository means that I’m going to create a local copy where I can make and track changes, and commit to my online repo. To do this, I need to first navigate to where I want my project to live. I’ve created a dummy directory called example on my D:\ drive:

Here in this directory, I’m going to right click and choose “git bash here” to open up a git bash terminal at this directory location.

In our git bash terminal we’ll enter our first clone commands. We’ll first need to copy the URL from the clone drop down menu on the web-page:

Alright! Now we have our git project cloned into our windows directory:

From here on in we need to make sure that our git terminal is inside of our newly added directory. We can repeat the same step we used earlier – navigate into the directory, right click, and open a git bash here; or we can navigate there in our terminal window. Let’s do this right from our existing terminal window. We’ll need to change our directory to touch_git_example, the name of the newly cloned repo. We can do this with the command:

cd touch_git_example/

Now we’re ready to start working! Let’s start by creating a new folder called toxes. All of our toxes are going to go into this directory:

Next let’s open Touch and save a project file in the root of our directory. I’m going to call mine touch_git_example.toe:

Inside of Touch I’m going to start by getting rid of my project component in the root, and I’m going to create a new base that’s called base_project:

Next I’m going to set up a few more things. Inside of base_project I’m going to create a few elements:

container_display – this will be the display elements for my project

base_com – this will hold the communication elements for my project

Next I need to externalize these elements. To do this, we right click on them and choose “Save Component tox…” from the drop down menu:

I’m going to save both of these elements in the toxes directory:

Now we need to make sure both of these components point to their external tox files. We can do this by going to the Common page and locating our toxes in the External .tox parameter field. We also want to make sure that we turn off the “Save Backup of External” parameter:

Notice that these are relative, not absolute paths – this is VERY IMPORTANT. We want our paths to be relative to our project directory. This will help ensure that our externalization process doesn’t break when we move to another computer.

Finally we need to save both of our toxes one more time, and save our project one more time.

YIKES! That’s a lot of steps… what did we do exactly here Matt?

Well, first we set up our project and saved our toe file. Then we created some components that we want in our project, but that we want to be able to edit independently of one another (com and display). After we saved them both the first time we had to point them back to their external files so they open correctly. We saved them a second time to make sure that relative path parameter was saved with our toxes. Finally, we saved the whole project again to make sure that our toxes with external paths were correctly set up in the toe file.

Whew… okay, why?!

Well, at this point, unless we add another component in our base_project layer, we never save our toe file again – we only save the toxes. This also means that the work can be split up… one developer can work in base_com, and another in container_display, and they won’t overwrite one another’s work. Keep in mind, if we add another component in base_project we’ll need to save the toe file; we’ll also need to change the toe file if we make changes to the project (like the perform window, project settings, and the like).

Let’s go back to our git window to see how we commit all of this to git.

Back in our terminal we can add all of our new elements at once with:

git add -A

This adds all of our files as tracked elements that we’ll now keep an eye on. Next we need to commit these changes. Let’s also add a message so we know what we did:

git commit -m "Our initial commit with toe file and two components"

At this point our changes are committed, and we’re ready to push them back up to our github repo. We can do this with a push command:

git push

NICE WORK!

Now we can head back over to github to see our project:

Better yet, if we click on the commits link we’ll see our entry history of contributions:

At this point it’s time to start working. Now as you work you can create snapshots to return to. That process usually looks like: first adding your files with git add, then committing your files with a log message using git commit, and finally pushing your changes with git push. You can retrieve the work that other team members have done with git pull.

You’ll also notice that with a history of our changes it means we can move back to any of those snapshot moments in our project. If you’ve ever had a time when you made a change that broke everything… and couldn’t figure out how to undo that change, version control is for you. This lets you move back in time to find a working version of the single module that you changed rather than breaking out in a cold sweat of pure panic.

You can also externalize scripts, glsl, channel data, geometry data, and and and. Generally speaking, you don’t put assets in your repo. Git LFS (Large File Storage) helps with some of that, but for the most part you don’t want to fill up a repo with video. We sometimes will put in a single calibration frame, but it’s important to be very careful when adding large files to your repo as that can make for big headaches.

At Obscura we built out a save process that automates a lot of the above. We also make our repos mirror our touch structure. This means that if we know where an element is in touch, we know where it is in our repo.

Like all things, there is a TON more to learn on this front, but hopefully this gives you some ideas about where to get started with a version control system and working across machines and with other developers.

I do my best to talk with lots of folks using Touch, and sometimes I get questions about my approach and perspective on projects from students approaching their thesis project. The exchange below comes from an email series between me and @desn.joshmichael.

What ways do you find interacting with audio interesting?

Audio isn’t really my wheelhouse, so this one is a little hard for me. What I can say is that I always love working with folks who love sound, and are compelled by its nature. I like working with audio engineers and artists because they see the world differently than I do, which always makes for interesting conversations and typically pushes boundaries. One of my favorite collaborations looked at how we could use video to mix sound through an array of 40+ overhead channels. Video was our gain control, so the shape of the video drove the mix of the audio. Neither the sound artist nor I could have gotten there without the other.

All that aside, I think the bigger question here is what makes interacting with audio interesting to you? At the end of the day that’s what really matters, not what I say or anyone else. What drives you to work with audio, and what do you find most compelling about it?

What role do you think audio plays in an immersive experience?

It can play lots of roles – it can be place making, or provide subtext, or context, or motivation. It can drive action, compel participants or audience members to linger and reflect, or motivate action and decision. Like any medium it can be used to great effect for many purposes. Again, it comes back to questions about what you want it to do. If it can be anything, what do you want to shape it into?

It’s easy to think of an artistic form as having a particular fixed role – it’s much more challenging to consider that the form is role-less; that its purpose is shaped by the designer / artist / engineer – conjured and manipulated in ways both unexpected and familiar. A question actors always confront is to consider the opposite intention of their character. The words in a script seem to indicate that Mary loves Paul… but what if she hates him? What if she despises Paul, but still says loving words to him? What does that do to Paul? What does the audience see and feel? Do they even have to know that Mary hates him, or is it enough that the actor knows?

Ask yourself what role you think audio should play – then ask yourself what would happen if you tried to make it do the opposite thing.

For the general public, what type of audio-visual experiences have you seen success with?

I’ve seen all manner of things succeed that should have failed, and things that should have failed succeed. Worrying about what will or won’t work is to evaluate the art while making it – don’t do that. Make work that’s compelling and interesting to you. After the dust settles you can take time to evaluate what could have been done differently, what was missing, what you needed more or less of. I’ve seen so many artists fall into the trap of being critic and designer at the same time – consider what’s compelling to you, reduce it to its most essential ideas and work from there.

What types of practical applications do you think an immersive audio-visual experience could have?

Questions about practicality are always frustrating to me – they aim to reduce an expressive medium into something that can be a commodity, something to be marketed and sold. There are lots of ways to chase this question, but I almost always end up feeling miserable thinking about them. A brilliant actor I once knew used to say that an actor’s goal should be to do nothing more than to change the way people in the audience breathed. If you can make someone hold their breath in anticipation, or sigh in relief, or snort uncontrollably – you’ve done your job as an actor. As a video artist I try to think the same way. If I can make something that inspires someone to stop and linger – to pull out their phone and Instagram something, that’s success.

That might not seem practical, but for a marketing campaign maybe it is – we live in a strange time, a time when it’s often enough if you’re able to just disrupt the regularity of people’s lives, to interrupt and disrupt the regularly scheduled monotony and inspire a moment of reflection and observation.

How do you feel about audio-visual generation as the sole experience? Should there be more to it?

Do you think all immersive experiences need a message or greater purpose to be interesting? Why / Why not?

One of the most important lessons I learned in my grad program was to question if an installation / experience needed to be diegetic (explained) or not. Some experiences (educational, informational, etc) lend themselves to diegesis, others don’t. For the most part, I prefer art that isn’t. I like to trust that the viewer / audience is the epicenter of meaning making, and to trust that a human will strive to construct a narrative and meaning even if I don’t supply one.

Humans are largely biological pattern recognition machines – we strive to make programs to recognize patterns, and we thrive in situations where pattern recognition is important; we even often find patterns in data that doesn’t lend itself to actual patterns (see confirmation bias and any number of fallacies formal and informal). An observer will make meaning in the meaningless. That may be discouraging, but it’s also tremendously freeing. Make the art that’s interesting to you, and know that your audience will weave their own story around what they’re seeing… regardless of whether or not you help them get there.

Any other advice for a beginner in designing an entire experience for the first time?

Learn and use a version tracking system.

I remember my grandmother telling me that my eyes were bigger than my stomach when I was young. That’s a colloquial way of saying that you’ve bitten off more than you can chew – that you’ve taken more food than you can eat… though I don’t know that I really understood that until I was older. One of the hard lessons I’ve learned is that it’s easy to love all of your ideas, to say “yes and” to all of the permutations of an idea. When you’re brainstorming and dreaming that’s very important – but it’s also tremendously important to revisit those ideas and reduce them to their most basic ingredients. Time and again I see artists fall into the trap of making something that tries to do everything – and they end up with something that’s generally mediocre and flat.

If you can, resist that urge.

Chase an idea to its most basic representation; cut away all the fluff and bullshit and find the thing that’s really interesting and focus on that. Cultivate your laser focus and aim to execute a small idea with precision and excellence… then encourage that idea to grow. Iterate before you abandon an idea, and iterate even when you don’t like some of the things you’re doing. Find the patience to keep exploring an idea even when you feel like it’s exhausted and boring – push through that stuck feeling, and stay attentive.

Learn and use a version tracking system.

With enough time you can make anything with anything – that’s great, but don’t fall into the trap of believing that you have to reinvent the wheel.

Be attentive to details. I can’t tell you how many times I’ve gotten lost in code trying to solve a problem that was really a bad cable, or trash connector. Cultivate a debugging and troubleshooting routine that will help you identify where a problem lies.

Think through your decisions before you commit to them – make good system diagrams, map out your work flow, make a careful and measured plan before you start coding yourself into a corner.

Unit test. Explore an idea or code based solution in isolation before you wrap it into your project – there’s no worse feeling than having to work around your own kludges.

Learn and use a version tracking system.

Play the long game – when I started my grad program I knew that everything I blogged about was eventually going to be a part of a larger portfolio piece. So before I even knew what my thesis was going to be, I knew that I had to keep writing to feed the larger project. Find ways to integrate elements of other assignments / shows / installations / explorations into your thesis. If you can, find a way to tie everything back to your larger thesis project – think about all the component pieces you’ll need for your thesis and figure out how to work on those pieces in other assignments or classes. I can’t tell you how many people I’ve seen spread themselves across all manner of disparate projects only to realize too late that none of what they’ve done is applicable to the work they really want to do.

Comment your code. You’ll thank yourself a thousand times over for keeping notes, even if they’re bad ones. Any breadcrumb is better than nothing when you’re trying to figure out what your 3 AM over-caffeinated and tipsy self was trying to make work.

Don’t play it safe. If you have to choose between a bold and dangerous move and tepid one, choose the dangerous option. Experiences that result in strong feelings from participants – even if they’re negative feelings – are more useful as a critique than tepid ones. Strong responses are easier to read, interpret, and course correct from than bland reactions.

Finally – remember that failure is an option when you’re in school. When a client has paid you a ton of money for a project you’ve got lots of pressure to successfully execute an idea, which means that you’ll likely have to compromise on some artistic element. As a student it’s okay if something goes completely wrong, or is awful in every way. Your journey and process is as valuable and important as what you make – that’s hard to find in the art-for-trade world; so be bold and own your failures – they’re as valuable as your successes and will probably teach you much more in the long run.

Hello Matthew,
I’ve been following your videos about TouchDesigner – great work! Really appreciate the content and learning resources you’ve made.

We are experimenting with TouchDesigner, so that we could perhaps use it in our dynamic / interactive installations.

Recently I have been researching usage of TD with dmx/artnet lighting and have come to a certain problem, which I just cannot solve easily. I’ve been looking all over the internet, read two books about TD and still cannot figure this out. So I wanted to get in touch with you and perhaps ask for advice (if you have time of course 🙂 )

Let me introduce the problem:

Imagine you have a 3D lighting sculpture, where there are DMX light sources all over a certain 3D space.

The lights are not uniformly spaced relative to each other (no cubic shape or something like that), they are randomly placed in the space.

Now, I want to take a certain shape (for example a sphere) and map it as a lighting effect to the lighting fixtures. The sphere would, for example, have its radius increased / decreased in time, and the application should “map” which light source should light up when the sphere “crosses” it in space.

I would then somehow sample the color of the points and use that information and feed it to a DMX chop after some other operations…

It’s kinda difficult to explain, but hopefully I got it right.. 🙂

Do you know of any tricks or components I could use, so that I could “blend” 3d geometry with points in space in order to control lighting?

I’m certainly able to work out how DMX works and all the other stuff, I just don’t know how to achieve the effect in 3D.

(In 2D, it would be really simple. For example for a LED screen, it’s pretty straightforward – I would just draw a circle or whatever on a TOP and then sample it..)

Thanks a lot,
I would appreciate any tips or advice, really.. 🙂

Best regards,

Great question!

A sphere is a pretty easy place to start, and there are a few ways we can tackle this.

The big picture idea is to sort out how you can compute the distance from your grid points to the center of your sphere. By combining this with the diameter of your sphere we can then determine if a point is inside or outside of that object.

We could do this in SOP space and use a group SOP – this is the most straightforward to visualize, but also the least efficient – the grouping and transformation operations on SOPs are pretty expensive, and while this is a cool technique, you bottle-neck pretty quickly with this approach.

To do this in CHOPs what we need is to first compute the difference between our grid points and our sphere – we can do this with a math CHOP set to subtract. From there we’ll use another math CHOP to compute the length of our vector. In essence, this tells us how far away any given point is from the center of our sphere. From here we have a few options – we might use a delete CHOP to remove samples that are outside of our sphere, or we might use a logic CHOP to tell us if we’re inside or outside of our sphere.

From there we should be able to pretty quickly see results.
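
If it helps to see the same math outside of CHOPs, here’s a plain Python sketch of the idea – the point list and sphere values are made up purely for illustration:

# made-up points and sphere for illustration
points = [(0.0, 0.0, 0.0), (0.5, 0.25, 0.0), (2.0, 1.0, 0.5)]
center = (0.5, 0.0, 0.0)
radius = 1.0

for x, y, z in points:
    # subtract (the first math CHOP), then take the vector length (the second math CHOP)
    dx, dy, dz = x - center[0], y - center[1], z - center[2]
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    # the logic CHOP step - is this point inside the sphere?
    print(distance, distance <= radius)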

Attached set of examples made in 099.

base_SOPs – this illustrates how this works in SOP space using groups

base_concept – here you can see how the idea works out with just a flat regular distribution of points. It’s easier to really pull apart the mechanics of this idea starting with a regular distribution first as it’s much easier to debug.

base_volume – the same ideas but applied to a 3D volume.

base_random – here you can see this process applied to a pseudo-random distribution of points. This is almost the same network as we looked at in base_concept, with a few adjustments to compensate for the different point density.

Looking for generic advice on how to make a tox loader with cues + transitions, something that is likely a common need for most TD users dealing with a playback situation. I’ve done it for live settings before, but there are a few new pre-requisites this time: a looping playlist, A-B fade-in transitions and cueing. Matthew Ragan‘s state machine article (https://matthewragan.com/…/presets-and-cue-building-touchd…/) is useful, but since things get heavy very quickly, what is the best strategy for pre-loading TOXs while dealing with the processing load of an A to B deck situation?

I’ve been thinking about this question for a day now, and it’s a hard one. Mostly this is a difficult question as there are lots of moving parts and nuanced pieces that are largely invisible when considering this challenge from the outside. It’s also difficult as general advice is about meta-concepts that are often murkier than they may initially appear. So with that in mind, a few caveats:

Some of the suggestions below come from experience building and working on distributed systems, some from single server systems. Sometimes those ideas play well together, and sometimes they don’t. Your mileage may vary here, so like any general advice please think through the implications of your decisions before committing to an idea to implement.

The ideas are free, but the problems they make won’t be. Any suggestion / solution here is going to come with trade-offs. There are no silver bullets when it comes to solving these challenges – one solution might work for the user with high end hardware but not for cheaper components; another solution may work well across all component types, but have an implementation limit.

I’ll be wrong about some things. The scope of anyone’s knowledge is limited, and the longer I work in TouchDesigner (and as a programmer in general) the more I find holes and gaps in my conceptual and computational frames of reference. You might well find that in your hardware configuration my suggestions don’t work, or that something I suggest won’t work actually does. As with all advice, it’s okay to be suspicious.

A General Checklist

Plan… no really, make a Plan and Write it Down

The most crucial part of this process is the planning stage. What you make, and how you think about making it, largely depends on what you want to do and the requirements / expectations that come along with what that looks like. This often means asking a lot of seemingly stupid questions – do I need to support gifs for this tool? what happens if I need to pulse reload a file? what’s the best data structure for this? is it worth building an undo feature? and on and on and on. Write down what you’re up to – make a checklist, or a scribble on a post-it, or create a repo with a readme… doesn’t matter where you do it, just give yourself an outline to follow – otherwise you’ll get lost along the way or forget the features that were deal breakers.

Data Structures

These aren’t always sexy, but they’re more important than we think at first glance. How you store and recall information in your project – especially when it comes to complex cues – is going to play a central role in how you solve problems for your endeavor. Consider the following questions:

What existing tools do you like – what’s their data structure / solution?

How is your data organized – arrays, dictionaries, etc.

Do you have a readme to refer back to when you extend your project in the future?

Do you have a way to add entries?

Do you have a way to recall entries?

Do you have a way to update entries?

Do you have a way to copy entries?

Do you have a validation process in-line to ensure your entries are valid?

Do you have a means of externalizing your cues and other config data?

Time

Take time to think about… time. Silly as it may seem, how you think about time is especially important when it comes to these kinds of systems. Many of the projects I work on assume that time is streamed to target machines. In this kind of configuration a controller streams time (either as a float or as timecode) to nodes on the network. This ensures that all machines share a clock – a reference to how time is moving. This isn’t perfect and streaming time often relies on physical network connections (save yourself the heartache that comes with wifi here). You can also end up with frame discrepancies of 1-3 frames depending on the network you’re using, and the traffic on it at any given point. That said, time is an essential ingredient I always think about when building playback projects. It’s also worth thinking about how your toxes or sub-components use time.

When possible, I prefer expecting time as an input to my toxes rather than setting up complex time networks inside of them. The considerations here are largely about sync and controlling cooking. CHOPs that do any interpolating almost always cook, which means that downstream ops depending on that CHOP also cook. This makes TOX optimization hard if you’re always including CHOPs with constantly cooking footprints. Providing time to a TOX as an expected input makes handling the logic around stopping unnecessary cooking a little easier to navigate. Providing time to your TOX elements also ensures that you’re driving your component in relationship to time provided by your controller.

The importance of how you work with time in your TOXes, and in your project in general, can’t be overstated. Whatever you decide in regards to time, just make sure it’s a purposeful decision, not one that catches you off guard.

Identify Your Needs

What are the essential components that you need in a modular system? Are you working mostly with loading different geometry types? Different scenes? Different post process effects? There are several different approaches you might use depending on what you’re really after here, so it’s a good idea to really dig into what you’re expecting your project to accomplish. If you’re just after an optimized render system for multiple scenes, you might check out this example.

Understand / Control Component Cooking

When building fx presets I mostly aim to have all of my elements loaded at start so I’m only selecting them during performance. This means that geometry and universal textures are loaded into memory, so changing scenes is really only about scripts that change internal paths. This also means that my expectation of any given TOX that I work on is that its children will have a CPU cook time of less than 0.05ms and preferably 0.0ms when not selected. Getting a firm handle on how cooking propagates in your networks is as close to mandatory as it gets when you want to build high performing module based systems.
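
One way to keep yourself honest here is a quick check of children’s cook times from the textport – a rough sketch, with a made-up module path:

# print any children of a module that are still spending CPU time cooking
target = op('/project1/base_fx1')  # hypothetical module path
for child in target.findChildren(depth=1):
    if child.cookTime > 0.05:
        print(child.path, child.cookTime)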

Some considerations here are to make sure that you know how the selective cook type on null CHOPs works – there are up and downsides to using this method so make sure you read the wiki carefully.

Exports vs. Expressions is another important consideration here as they can often have an impact on cook time in your networks.

Careful use of Python also falls into this category. Do you have a hip tox that uses a frame start script to run 1000 lines of python? That might kill your performance – so you might need to think through another approach to achieve that effect.

Do you use script CHOPs or SOPs? Make sure that you’re being careful with how you’re driving their parameters. Python offers an amazing extensible scripting language for Touch, but it’s worth being careful here before you rely too much on these op types cooking every frame.

Even if you’re confident that you understand how cooking works in TouchDesigner, don’t be afraid to question your assumptions here. I often find that how I thought some op behaved is in fact not how it behaves.

Plan for Scale

What’s your scale? Do you need to support an ever expanding number of external effects? Is there a limit in place? How many machines does this need to run on today? What about in 4 months? Obscura is often pushing against boundaries of scale, so when we talk about projects I almost always add a zero after any number of displays or machines that are going to be involved in a project… that way what I’m working on has a chance of being reusable in the future. If you only plan to solve today’s problem, you’ll probably run up against the limits of your solution before very long.

Shared Assets

In some cases developing a place in your project for shared assets will reap huge rewards. What do I mean? You need look no further than TouchDesigner itself to see some of this in practice. In ui/icons you’ll find a large array of moviefile in TOPs that are loaded at start and provide many of the elements that we see when developing in Touch:

Rather than loading these files on demand, they’re instead stored in this bin and can be selected into their appropriate / needed locations. Similarly, if your tox files are going to rely on a set of assets that can be centralized, consider what you might do to make that easier on yourself. Loading all of these assets on project start is going to help ensure that you minimize frame drops.

While this example is all textures, they don’t have to be. Do you have a set of model assets or SOPs that you like to use? Load them at start and then select them. Selects exist across all Op types, don’t be afraid to use them. Using shared assets can be a bit of a trudge to set up and think through, but there are often large performance gains to be found here.

Dependencies

Sometimes you have to make something that is dependent on something else. Shared assets are one example of dependencies – where a given visuals TOX wouldn’t operate correctly in a network that didn’t have our assets TOX as well. Dependencies can be frustrating to use in your project, but they can also impose structure and uniformity around what you build. Chances are the data structure for your cues will also become dependent on external files – that’s all okay. The important consideration here is to think through how these will impact your work and the organization of your project.

Use Extensions

If you haven’t started writing extensions, now is the time to start. Cue building and recalling are well suited for this kind of task, as are any number of challenges that you’re going to find. In the past I’ve used custom extensions for every external TOX. Each module has a Play(state) method where state indicates if it’s on or off. When the module is turned on it sets off a set of scripts to ensure that things are correctly set up, and when it’s turned off it cleans itself up and resets for the next Play() call. This kind of approach may or may not be right for you, but if you find yourself with a module that has all sorts of ops that need to be bypassed or reset when being activated / deactivated this might be the right kind of solution.
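
A minimal sketch of what that kind of extension might look like – the class name, op names, and parameters here are all illustrative rather than a drop-in implementation:

class ModuleExt:
    def __init__(self, ownerComp):
        self.ownerComp = ownerComp

    def Play(self, state):
        if state:
            # set up whatever this module needs when it becomes active
            self.ownerComp.allowCooking = True
            self.ownerComp.op('moviefilein1').par.play = True
        else:
            # clean up and reset so the module is ready for the next Play() call
            self.ownerComp.op('moviefilein1').par.play = False
            self.ownerComp.allowCooking = False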

Develop a Standard

In that vein, cultivate a standard. Decide that every TOX is going to get 3 toggles and 6 floats as custom pars. Give every op access to your shared assets tox, or to your streamed time… whatever it is, make some rules that your modules need to adhere to across your development pipeline. This lets you standardize how you treat them and will make you all the happier in the future.

That’s all well and good Matt, but I don’t get it – why should my TOXes all have a fixed number of custom pars? Let’s consider building a data structure for cues. Let’s say that all of our toxes have a different number of custom pars, and they all have different names. Our data structure needs to support all of our possible externals, so we might end up with something like:
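
A hypothetical version of that structure (the tox paths and par names are invented for illustration) might be:

cues = {
    'cue1': {
        'tox': 'toxes/particles.tox',
        'pars': {'Birthrate': 500, 'Turbulence': 0.2, 'Wind': 1.5}
    },
    'cue2': {
        'tox': 'toxes/feedback.tox',
        'pars': {'Decay': 0.95, 'Blursize': 4, 'Huestep': 0.01, 'Monochrome': True}
    }
}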

That’s a bummer. Looking at this we can tell right away that there might be problems brewing at the circle k – what happens if we mess up our tox loading / targeting and our custom pars can’t get assigned? In this set-up we’ll just fail during execution and get an error… and our TOX won’t load with the correct pars. We could swap this around and include every possible custom par type in our dictionary, only applying the value if it matches a par name, but that means some tricksy python to handle our messy implementation.

What if, instead, all of our custom TOXes had the same number of custom pars, and they shared a namespace with the parent? We can rename them to whatever makes sense inside, but in the loading mechanism we’d likely reduce the number of errors we need to consider. That would change the dictionary above into something more like:
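
Again as a hypothetical sketch, the same cues with a standardized set of custom pars might read:

cues = {
    'cue1': {
        'tox': 'toxes/particles.tox',
        'pars': {'Toggle0': True, 'Float0': 500.0, 'Float1': 0.2, 'Float2': 1.5}
    },
    'cue2': {
        'tox': 'toxes/feedback.tox',
        'pars': {'Toggle0': False, 'Float0': 0.95, 'Float1': 4.0, 'Float2': 0.01}
    }
}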

Okay, so that’s prettier… So what? If we look back at our lesson on dictionary for loops we’ll remember that the pars() call can significantly reduce the complexity of pushing dictionary items to target pars. Essentially we’re able to store the par name as the key and the target value as the value in our dictionary, and we’re just happier all around. That makes our UI a little harder to wrangle, but with some careful planning we can certainly think through how to handle that challenge. Take it or leave it, but a good formal structure around how you handle and think about these things will go a long way.
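
For the sake of completeness, a rough sketch of pushing one of those cue entries to a loaded module with pars() – the target path here is hypothetical:

target = op('base_fx1')  # hypothetical loaded module
for par_name, value in cues['cue1']['pars'].items():
    # pars() returns the matching custom parameters on the target component
    for par in target.pars(par_name):
        par.val = value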

Cultivate Realistic Expectations

I don’t know that I’ve ever met a community of people with such high standards of performance as TouchDesigner developers. In general we’re a group that wants 60 fps FOREVER (really we want 90, but for now we’ll settle), and when things slow down or we see frame drops be prepared for someone to tell you that you’re doing it all wrong – or that your project is trash.

Whoa, is that a high bar.

Lots of things can cause frame drops, and rather than expecting that you’ll never drop below 60, it’s better to think about what your tolerance for drops or stutters is going to be. Loading TOXes on the fly, disabling / enabling containers or bases, loading video without pre-loading, loading complex models, lots of SOP operations, and so on will all cause frame drops – sometimes big, sometimes small. Establishing your tolerance threshold for these things will help you prioritize your work and architecture. You can also think about where you might hide these behaviors. Maybe you only load a subset of your TOXes for a set – between sets you always fade to black when your new modules get loaded. That way no one can see any frame drops.

The idea here is to incorporate this into your planning process – having realistic expectations will keep you from getting frustrated, and will point out where you need to invest more time and energy in developing your own programming skills.

Separation is a good thing… mostly

I’d always suggest keeping the UI on another machine or in a separate instance. It’s handier and much more scalable if you need to fork out to other machines. It forces you to be a bit more disciplined, and it helps you when you need to start putting previz tools and the like in. I’ve been very careful to take care of the little details in the UI too, such as making sure TOPs scale with the UI (but not using expressions) and making sure that CHOPs are kept to a minimum. Only one type of UI element really needs a CHOP, and that’s a slider – and sometimes even sliders don’t need them.

I’m with Richard 100% here on all fronts. That said, be mindful of why and when you’re splitting up your processes. It might be tempting to do all of your video handling in one process, pass that to another process just for rendering 3D, and then hand that off to yet another process for routing and mapping.

Settle down there cattle rustler.

Remember that for all the separating you’re doing, you need a strict methodology for how these interchanges work, how you send messages between processes, how you debug this kind of distribution, and on and on and on.

There’s a lot of good to be found in how you break up parts of your project into other processes, but tread lightly and be thoughtful. Before I do this, I try to ask myself:

“What problem am I solving by adding this level of additional complexity?”

“Is there another way to solve this problem without an additional process?”

“What are the possible problems / issues this might cause?”

“Can I test this in a small way before re-factoring the whole project?”

Don’t Forget a Start-up Procedure

How your project starts up matters. Regardless of your asset management process, it’s important to know what you’re loading at start and what’s only getting loaded once you need it in Touch. Starting in perform mode, there are a number of bits that aren’t going to get loaded until you need them. To that end, if you have a set of shared assets you might consider writing a function to force cook them so they’re ready to be called without any frame drops. Or you might think about a way to automate your start-up so you can test to make sure you have all of your assets (especially if your dev computer isn’t the same as your performance / installation machine).
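A force-cook pass at start-up can be as small as this sketch – the list of asset paths is an assumption you’d fill in for your own project:

```python
# force cook shared assets at start-up - paths are placeholders
SHARED_ASSETS = ['/shared/fonts', '/shared/palettes', '/shared/overlays']

def precook_assets():
    for path in SHARED_ASSETS:
        target = op(path)
        if target is not None:
            # force=True cooks the op even if it isn't flagged as dirty
            target.cook(force=True)
```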

Logging and Errors

It’s not much fun to write a logger, but they sure are useful. When you start to chase this kind of project it’ll be important to see where things went wrong. Sometimes the default logging methods aren’t enough, or things happen too fast to catch. A good logging methodology and format can help with that. You’re welcome to make your own; you’re also welcome to use and modify the one I made.
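If you do roll your own, a bare-bones file logger is only a few lines. This isn’t the logger mentioned above – just a sketch of the shape such a thing might take, with the path and format as assumptions:

```python
# a bare-bones file logger - path and message format are just one possible choice
import datetime

LOG_PATH = project.folder + '/log.txt'

def log(message, level='INFO'):
    stamp = datetime.datetime.now().isoformat()
    with open(LOG_PATH, 'a') as log_file:
        log_file.write('{} | {} | {}\n'.format(stamp, level, message))
```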

Unit Tests

Measure twice, cut once. When it comes to coding, unit tests are where it’s at. Simple, complete proof-of-concept tests that aren’t baked into your project or code can help you sort out the limitations or capabilities of an idea before you really dig into the process of integrating it into your project. These aren’t always fun to make, but they let you strip down your idea to the bare bones and sort out the simple mechanics first.

Build the simplest implementation of the idea. What’s working? What isn’t? What’s highly performant? What’s not? Can you make any educated guesses or speculation about what will cause problems? Give yourself some benchmarks that your test has to prove itself against before you move ahead with integrating it into your project as a solution.

Document

Even though it’s hard – DOCUMENT YOUR CODE. I know that it’s hard – even I have a hard time doing it – but it’s so, so very important to have a documentation strategy for a project like this. Once you start building pieces that depend on a particular message format, or sequence of events, any kind of breadcrumbs you can leave for yourself to find your way back to your original thoughts will be helpful.

As a follow-up to the Book of Shaders port from last week, I wanted to add another resource that I read through several times when first getting my bearings with GL: GLSL 2D Tutorials, an example that’s currently up on Shadertoy – https://www.shadertoy.com/view/Md23DV.

From Uğur:

by Uğur Güney. March 8, 2014.

Hi! I started learning GLSL a month ago. The speedup gained by using GPU to draw real-time graphics amazed me. If you want to learn how to write shaders, this tutorial written by a beginner can be a starting place for you.

Please fix my coding errors and grammar errors.

Getting your bearings with GLSL can be a bit of a rodeo when you’re starting out. Uğur’s 2D tuts were a huge help to me when I was first getting started, and they’re often a little more granular than The Book of Shaders.

Hopefully this set of examples will help you get started and find your GL bearings here in Touch.

I’ve copied the examples as faithfully as possible. That means there may be better ways to approach some of these challenges, but what you’ll find here is as close to the original tutorial as I can manage.

For TouchDesigner programmers who are accustomed to the nodal environment of TD, working with straight code might feel a bit daunting – and making the transition from Patricio’s incredible resource to Touch might feel hard. It certainly did for me at first. This repo is really about helping folks make that jump.

Here you’ll find the incredible examples made by Patricio and Jen ported to the TouchDesigner environment. There are some differences here, and I’ll do my best to help provide some clarity about where those come from.

This particular set of examples is made in TouchDesigner 099. In the UI you’ll find a list of examples below the rendered shader in the left pane; on the right you’ll find the shader code and the contents of an Info DAT. You can live-edit the shader code – you just have to click off of the pane for the code to be updated under the hood. If you hit the escape key you can dig into the network to see how everything is organized.

Each ported shader exists as a stand-alone file – making it easy to drop the pixel shader into another network. When possible I’ve tried to preserve the shader from the original exactly, though there are some cases where small alterations have been made. In the case of Touch-specific uniforms I’ve tried to make sure there are notes for the programmer to see what’s happening.