Category Archives: How-To

If you’re reading this, you’re probably a fan of Elburz Sorkhabi and his work. That makes two of us! Elburz and I are always trying to find ways to collaborate and get into trouble, but we’re often on opposite sides of the world from one another. We thought it would be fun to do some guest blog posts for each other’s readers. So today you’re in for a treat with a guest post from Elburz himself! If you like a little variety in life, give Elburz’s blog a read over at Elburz.io.

TouchDesigner is a feature-rich piece of software. It can be daunting for new users to know what they should be learning and what they’re missing in their toolbox. I thought it would be nice to share two TouchDesigner tricks that are easy to learn and will provide a lot of value over your career. Some of these things can elude even experienced users, as features arrive quickly in new updates and over time it can be hard to keep track of them all. With that said, let’s dive in!

Custom parameters

Custom parameters are one of the best features to come to TouchDesigner in the last few years. I use them all the time. Sometimes I use them to wrap complex functionality inside of a component while providing easy-to-use controls. Other times I use them for more architectural elements of a project, such as creating internal APIs. The great thing is that any parameter type already available in TouchDesigner can be used in your own custom components. One thing to note is that you can only add custom parameters to COMP operators. The first step is to make a COMP, which will usually be a Container COMP or Base COMP, then right click on it and select Customize Component…

This will open the Component Editor. In this window you can do things like make new extensions, and, more importantly for us, it gives us a visual way to create custom parameters. The next step is to type a name into the top-left string field that will be used to name the new parameter page. Then go ahead and click Add Page. I use names like Settings or Controls for my parameter pages. You can confirm everything worked by checking the parameters of your COMP and looking for your new parameter page. It’ll be blank for now.

The next step is to start adding parameters to our new parameter page. Click on the parameter page you just created on the left side of the Component Editor. Now you can enter a name for the parameter you’re about to create in the second string field. Then use the drop-down menu to choose the type of parameter you’d like to create. As I mentioned earlier, you can create any of the existing types of parameters, including pulse buttons, toggles, colour pickers, file/folder selectors, and more. The drop-down menu to the right of the parameter type has numbers from 1 to 4. These numbers represent the number of value parts for parameters such as floats and integers. For example, if you make an integer with 3 selected, you’ll get an integer parameter with 3 separate values, similar to an RGB parameter field. For this example, let’s select Toggle, name it Toggle Button, and click Add Par. You’ll immediately see the parameter appear in your parameter window as well as in the Component Editor.
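The same setup can also be scripted rather than built through the Component Editor. Here’s a minimal sketch using the COMP and Page methods appendCustomPage() and appendToggle() – the function name and the assumption that you pass in your own COMP (e.g. op('base1')) are mine:

```python
# A scripted version of the steps above - a sketch; you supply the COMP
# you want to customize, e.g. build_toggle_page(op('base1'))
def build_toggle_page(comp):
    # add a new custom parameter page called 'Settings'
    page = comp.appendCustomPage('Settings')
    # add a toggle whose scripting name will be 'Togglebutton'
    page.appendToggle('Togglebutton', label='Toggle Button')
    return page
```

Either route ends with the same result: a Settings page holding a Toggle Button parameter on your COMP.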

That’s it! That’s all there is to making custom parameters. The process is the same for any type of parameter, you just have to choose what you want from the drop down menu. How to use the values is our final step here, but it’s also quite easy.
The quickest way to access the parameter values of a COMP is to use the Parameter CHOP. When you drop one inside of the COMP, by default it’ll already be set up to show you only the values of your custom parameters. For most parameter types, it isn’t any more complicated than using these CHOP channels as you would any other channels for referencing.

For the parameters that hold non-numeric data, such as the file/folder selectors or string fields, you’ll need to access the data through some simple Python scripting. In this case, we can place a Parameter DAT right next to our Parameter CHOP. In the OP parameter enter .. (which will select the parent container), for the Parameters enter *, and finally turn off the Built-In button. The Parameter DAT has callbacks like many of the other Execute-type DATs in TouchDesigner. This means it has different functions where you add your code based on triggers. To access our same toggle button via the Parameter DAT, we could add this under the onValueChange() callback:

def onValueChange(par, prev):
    if par.name == "Togglebutton":
        print(par.val)

This checks each custom parameter by name (par.name), finds the toggle parameter by its scripting name (Togglebutton – the name you see when you click on a parameter and view its second name in the expanded area), and then prints its value (val).

Operator snippets

I’m still surprised to this day that Operator Snippets isn’t talked about every day by new users. It’s the most helpful resource for learning how to use just about any operator. Operator Snippets is a project file built into TouchDesigner that contains tons and tons of examples of how to use operators. It’s similar to Max/MSP’s per-object examples. To access them, click Help in the top menu and then select Operator Snippets. This will open up a new window. The left side of the screen looks similar to the OP Create Dialogue (the menu you get when you double click on the network background) and allows you to choose which operator you want to find examples for. Underneath the operator selection area are a handful of big buttons with different names. Clicking on one of these switches between the different examples for the operator you’ve selected. On the right hand side of the window is the network area with the example in action. The great thing about Operator Snippets is that the example is a live network running in real time! You can copy the example, paste it into your project, and then tweak it to your needs. How useful is that?

Wrap up

These two tricks may sound simple, but they’re game changing. Being able to make your own custom parameters quickly and easily means you can build complex components that anyone can use, or that can be dropped into your own projects and controlled with ease. Operator Snippets gives you the greatest documentation you could ever ask for: live networks running in real time. I bet if you spent a little bit of time just browsing the snippets, you’d find awesome little examples you’ll be eager to copy and paste into your next project. With that said, enjoy these two beginner tricks and happy programming! And if you’re interested in more content like this, check out elburz.io

Springtime is lots of things – flowers, holidays, vigorous allergies, and the TouchDesigner Spring Update. For the second year running, this is the time of year when features graduate from experimental builds to the full-fledged stable release. Wowza.

This spring we’re seeing a feature that’s flat-out amazing, and likely a bit of a sleeper: bindings. Elburz gave a great shout-out to bindings in a recent blog post, and I wanted to take some time to dig in and step through an example covering why they’re important, the paradigm they’re built on, and why they matter.

So what are parameter bindings anyway? The Derivative wiki has a great segment describing bindings:

Bound parameters keep their values in sync and will respond to changes from either parameter. For each parameter that is a bind master, it will have one or more bind reference parameters. The bind master holds the current value. It can have exports and expressions and generally works like any other parameter, but its value can be also changed indirectly by the bind references. A bind reference holds the location of the bind master. A bind reference is in a fourth “Bind” Parameter Mode, and will show up as purple text in parameter dialogs. It can only be changed via its bind master, its UI, and its val property.

Model View Controller – MVC

Errm. Okay, so what does that mean exactly? Bindings are based on an interface architecture paradigm called Model – View – Controller, or MVC. Wikipedia has a nice starter debrief on the idea, but it’s easy to understand if you’ve spent much time working in Touch, either for a live set or for a client application. Let’s consider a live set to help us get our footing here. Suppose you have a parameter that can be updated from multiple touch points, both in your own UI and in the TD UI. We all know this game: as soon as you export a CHOP to a parameter, you can no longer change that parameter except through the exported CHOP.

Fine.

So maybe instead of exporting from a single CHOP you instead write a bunch of scripts to handle this operation – only now you’re in a real pickle. Why? Well, because your script changes the parameter, but doesn’t update all of the UI elements that reflect the state of that parameter. So you write another script to update the UI. But now you’ve managed to save your project in a state where the UI and the parameters are not aligned. As soon as you change the UI everything is in sync – so you save over a few things, commit your changes, and now everything should be great. Until you need to load a saved preset state from disk. Now you’ve gotta write another set of scripts to do all of that updating, or hunker down for a more generalized solution – which probably means a code refactor. Who wanted to make some more sweet visuals anyway? There goes your night off. There goes your margin. Sigh.

The real world example of this is light switches. If you’ve ever lived in an apartment where multiple light switches control the same light / outlet, you understand this issue intimately. How do you know if the light is on or off? Only by looking at the light, because once the states of the light switches are out of phase they perform the opposite action.

The Model – View – Controller paradigm is a design architecture that’s intended to help resolve this issue. The MVC approach decouples the control of a UI element from the data it is manipulating. You could do this in Touch before; you were just on the hook for doing all the set-up. This probably meant you had a master table or storage dictionary somewhere that was updated whenever a parameter was changed, and that in turn would update all the other touch points. That’s a huge hassle, but it was the only way to solve this problem. It’s also the kind of silly thing you could have a strangely strong opinion about – and consequently be convinced that your collaborators were doing it all wrong.

Enter Bindings

Okay, so as a refresher, Bindings are a new parameter mode – that’s the little multi-colored set of dots next to any parameter. This new mode is purple – one of the many colors of awesome. At the end of all of this, we’ll take a peek at the new widgets – and the UI redesign they offer, but to get started let’s build a use case for bindings so we can get a sense of what they’re good for.

Slider

We’re going to start with a good old fashioned slider. Why a slider?! Well, this is the kind of parameter we end up using all the time, and the fundamental nature of this UI piece should be foundational enough that if we can get a handle on this one, the jump to more abstract ideas should be a little easier.

Let’s get started by first adding a slider from the Op Create dialogue. We’re just going to add a run of the mill slider for now.

adding a slider

From here we’re going to customize our slider – let’s add a page called “Settings” and then a float custom parameter that we call “Slider” for now.

customizing our slider

So far so good. The next step is where it’s gonna get a little weird, and where it’ll get different than before. From here, let’s dive into our slider. We’re going to delete our Panel CHOP, and add our own Constant CHOP.

changing slider internals

Okay. Now here’s the wild part. On our new Constant CHOP we’ll use the new bindings parameter mode to write parent().par.Slider – that’s the reference to our newly created custom parameter.

write our binding expressing directly

If you’re not into that whole writing expressions exercise, you can also do this with a little drag-and-drop action:

drag and drop binding

Okay… so why is this interesting? Well, let’s see what happens when we move our new custom parameter, or move our Constant CHOP:

binding in action

Slick. Okay. That’s fly. So if we update our parameter or our CHOP, both changes are reflected in the other operator. Now let’s make a few final changes so that when we interact with the slider’s panel we update both of these values. We can do this with a Panel Execute DAT. Let’s add our DAT and modify its contents with the following:

We should now see that if we move the slider that our Slider parameter updates, and our Constant CHOP updates.

panel and parameter binding

We’re very close now. All we need to do is to update the knob component in our slider. We need to change the expression there to be:

parent().par.Slider*parent().width-me.par.panelw/2

updated panel script

There we have it. Now we can change our custom parameter, our Constant CHOP, or the slider and all three stay in sync. MAGIC.

binding all around

Widgets?

Early on I mentioned that we find this same behavior in Widgets. What exactly are widgets, you ask? Currently Widgets are rolling out as a huge overhaul to the TUIK interface-building kit that was relatively ubiquitous in TouchDesigner networks. Widgets are a more modern take on UI building, and offer significant advances to UI-building approaches for Touch. There’s too much to cover about widgets here, but it’s worth pointing out that the same binding approach you see above is a fundamental element of the widget system. It allows for bidirectional control of both user-interaction elements and parameters – the core principle we just explored. You can dig in and learn a little more about widgets by reading through the article on the Derivative Wiki.

Why Bind…

You might be looking at this and feeling like it’s outside of your wheelhouse, or your workflow.

A reasonable reflection.

Regardless, I’d encourage you to think about the times when you’ve wanted to control a value from more than one location – the parameter itself, as well as some other control interface. If nothing else, give bindings a try to see where they might fit – if they’re no good for you, don’t use them… though I suspect you’ll find they have all sorts of exciting uses.

I recently had the good fortune of being able to collaborate with my partner, Zoe Sandoval, on their MFA thesis project at UC Santa Cruz – { 𝚛𝚎𝚖𝚗𝚊𝚗𝚝𝚜 } 𝚘𝚏 𝚊 { 𝚛𝚒𝚝𝚞𝚊𝚕 }. Thesis work is strange, and even the best of us who have managed countless projects will struggle to find balance when our own work is on the line – there is always the temptation to add more, do more, extend the piece a little further, or add another facet for the curious investigator. Zoe had an enormous lift in front of them, and I wanted to help streamline some of the pieces that already had functioning approaches, but would have benefited from some additional attention and optimization. Specifically, how cues and states operated was an especially important area of focus. I worked closely with the artist to capture their needs around building cues / states and translate that into a straightforward approach that had room to grow as we made discoveries and needed to iterate during the last weeks leading up to opening.

The Big Picture

{ remnants } of a { ritual } is an immersive installation comprised of projection, lighting, sound, and tangible media. Built largely with TouchDesigner, the installation required a coordinated approach for holistically transforming the space with discrete looks. The projection system included four channels of video (two walls, and a blended floor image); lighting involved one overhead DMX-controlled instrument (driven by an ENTTEC USB Pro) and four IoT Philips Hue lights (driven by network commands – you can find a reusable approach on github); sound consisted of two channels driven by another machine running QLab, which required network commands sent as OSC. The states of each of these end points, the duration of the transition, and the duration of the cue were all elements that needed to be both recorded and recalled to create a seamless environmental experience.

Below we’re going to step through some of the larger considerations that led to the solution that was finally used for this installation. Before we get there, though, it’s helpful to have a larger picture of what we actually needed to make. Here’s a quick run-down of some of the big ideas:

a way to convert a set of parameters to a Python dictionary – using custom parameters rather than building a custom UI is a fast way to create a standardized set of controls without the complexity of lots of UI building in Touch.

a reusable way to use storage in TouchDesigner to keep a local copy of the parameters for fast reference – typically in these situations we want fast access to our stored data, and that largely means Python storage; more than just dumping pieces into storage, we want to make sure that we’re thoughtfully managing a data structure with a considered and generalized approach.

a way to write those collections of parameters to file – JSON in this case. This ensures that our preset data doesn’t live in our toe file and is more easily transportable or editable without having TouchDesigner open. Saving cues to file means that we don’t have to save the project when we make changes, and it also means that we have a portable version of our states / cues. This has lots of valuable applications, and is generally something you end up wanting in lots of situations.

a way to read those JSON files and put their values back into storage – it’s one thing to write these values to file, but it’s equally important to have a way to get the contents of our file back into storage.

a way to set the parameters on a COMP with the data structure we’ve been moving around – it’s great that we’ve captured all of these parameters, but the need here is thinking through what to do with that data once we have it captured.

Cuing needs

One of the most challenging, and most important, steps in the process of considering a cuing system is to identify the granularity and scope of your intended control. To this end, I worked closely with the artist to understand both their design intentions and their needed degrees of control. For example, the composition of the projection meant that the blended floor projection was treated as a single input image source; similarly, the walls were a single image that spanned multiple projectors. In these cases, rather than thinking of managing all four projectors, it was sufficient to think only in terms of the whole compositions that were being pulled. In addition to the images, it was important to the artist to be able to control the opacity of the images (in the case of creating a fade-in / out) as well as some image adjustments (black level, brightness, contrast, HSV offset). Lighting and sound had their own sets of controls – and we also needed to capture a name for the cue that was easily identifiable.

As lovely as it would be to suggest that we knew all of these control handles ahead of time, the truth is that we discovered which controls were necessary through a series of iterative passes – each time adding or removing controls that were either necessary or too granular. Another temptation in these instances is to imagine that you’ll be able to figure out your cuing / control needs on your feet – while that may be the case in some situations, it’s tremendously valuable to instead do a bit of planning about what you’ll want to control or adjust. You certainly can’t predict everything, and it’s a fool’s errand to imagine that you’re going to use a waterfall model for these kinds of projects. A more reasonable approach is to make a plan, test your ideas, make adjustments, test, rinse, repeat. An agile approach emphasizes smaller incremental changes that accumulate over time – this requires a little more patience, and a willingness to refactor more frequently, but has huge advantages when wrestling with complex ideas.

Custom Pars

In the past I would have set myself to the task of handling all of these controls in custom-built UI elements – if I was creating an interface for a client and had sufficient time to address all of the UI / UX elements, I might have taken that approach here, but since there was considerable time pressure it was instead easier (and faster) to work with custom parameters. Built-in operators have their own set of parameters, and Touch now allows users to customize Component operators with all of the same parameter types you find on other ops. This customization technique can be used to build control handles that might otherwise not need complete UI elements, and can be further extended by using the Parameter COMP – effectively creating a UI element out of the work you’ve already done while specifying the custom parameters. The other worthwhile consideration to call out here is your ability to essentially copy parameters from other operators. Consider the black level, contrast, and brightness pars included above. One approach would be to create each par individually and set its min, max, and default values. It would, however, be much faster if we could just copy the attributes from the existing Level TOP. Luckily we can do just that with a small trick.

We start by creating a Base COMP (or any Component operator), right clicking on the operator, and selecting Customize Component…

This opens the Customize Component dialogue where we can make alterations to our COMP. Start by adding a new page to your COMP and notice how this now shows up in the component’s parameter pages:

For now let’s also add a Level TOP so we can see how this works. From your Level TOP, click and drag a parameter over to the Customize Component dialogue – dragging specifically to the Parameter column:

This process faithfully captures the source parameter’s attributes – type, min, max, and default values – without you needing to input them manually. In this style of workflow, the approach is to first build your operator networks so you know which ops you’ll want to control. Rather than designing the UI and later adding the operator chain, you instead start with the operator chain and only expose the parameters you need / want to control. In this process you may find that you need more or fewer control handles, and this style of working easily accommodates that kind of iteration.

Capturing Pars

Creating a component with all of your parameters is part of the battle; meaningfully extracting those values is another kettle of fish. When possible it’s always handy to take advantage of the features of a programming language or environment. In this case, I wanted to do two things – first, to be able to stash cues locally in the project for fast retrieval; second, to have a way to write those cues to disk so they weren’t embedded in a toe or tox file. I like JSON as a data format for these kinds of pieces, and the Python equivalent of JSON is the dictionary. Fast access in TD means storage. So here we have an outline of our big-picture ideas – capture the custom parameters, stash them locally in storage, and write them to disk.

One approach here would be to artisanally capture each parameter in my hipster data structure – and while we could do that, time spent fighting with these types of ideas has taught me that a more generalized approach will likely be more useful, even if it takes a little longer to get right. So what does that look like exactly?

To get started, let’s create a simple set of custom parameters on a Base COMP. I’m going to use the trick we learned above to move over a handful of parameters from a Level TOP: Black Level, Contrast, and Opacity:

To create a dictionary out of these pars I could write something very specific to my approach that might look something like this snippet:

At first glance that may seem like an excellent solution, but as time goes on this approach will let us down in a number of ways. I won’t bother to detail all of them, but it is worth capturing a few of the biggest offenders here. This approach is not easily expanded – if I want to add more pars, I have to add them directly to the method itself. For a handful, this might be fine, but past ten it will start to get very messy. This approach also requires duplicate work – the hand-written key names mean I need to manually verify that each key name and parameter name match (we don’t have to keep them matching, but we’ll see later how doing so saves us a good chunk of work), and if I misspell a word here I’ll be very sorry for it later. The scope of this approach is also very narrow – very. In this case the target operator variable is set inside of the method, meaning that this approach will only ever work for this operator, at this level in the network. All of that and more means that while I can use this approach, I’m going to be sorry for it in the long run.

Instead of the rather arduous process above, we might consider a more generalized approach to solving this problem. Lucky for us, we can use the pars() method for this kind of work, the major catch being that pars() will return all of the parameters on a given object. That’s all well and good, but what I really wanted here was to capture only the custom parameters on a specific page, and to be able to ignore some parameters (I didn’t, for example, need / want to capture the “save cue” parameter). What might this kind of approach look like? Let’s take a look at the snippet below.

Abstract Reusable code segment
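A sketch of that generalized approach – the function and argument names are my own, and the filtering leans on the Par attributes isCustom and page:

```python
def page_to_dict(target_op, page_name, preset_name, skip_pars=None):
    """Convert one page of custom parameters into a dictionary.

    Args
    ----
    target_op - the COMP whose custom parameters we want to capture
    page_name (str) - the custom parameter page to convert
    preset_name (str) - the key for the newly captured preset
    skip_pars (list) - scripting names of any pars to ignore

    Returns
    -------
    dict in the form {preset_name: {par_name: par_value, ...}}
    """
    skip_pars = skip_pars or []
    preset = {}
    # pars() returns every parameter on the op, so filter down to the
    # custom pars on the requested page, minus anything we're skipping
    for par in target_op.pars():
        if not par.isCustom:
            continue
        if par.page.name != page_name:
            continue
        if par.name in skip_pars:
            continue
        preset[par.name] = par.eval()
    return {preset_name: preset}
```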

What exactly is happening here? First off, this one is full of documentation so our future selves will know what’s happening – in fact there is probably more docstring than code. The big-picture idea is that rather than thinking about this problem one parameter at a time, we instead want to think in terms of entire custom pages of parameters. Chances are we want to re-use this, so it’s been made fairly general – we pass in an operator, the name of the page we want to convert to a Python dictionary, the name of our newly made preset, and a list of any parameters we might want to skip over. Once we pass all of those pieces into our function, what we get back is a dictionary full of those parameters.

Capture to Storage

Simply converting the page of parameters to a dictionary doesn’t do much for us – while it is a very neat trick, it’s really about what we do with these values once we have them in a data structure. In our case, I want to put them into storage. Why storage? We certainly could put these values into a table – though there are some gotchas there. Table values in TouchDesigner are always stored as strings – we might think of this as text. That matters because programs care a great deal about the difference between words, whole numbers, and numbers with decimal values. Programmers refer to words as strings, whole numbers as integers or ints, and numbers with decimal values as floats. Keeping all of our parameter values in a table DAT means they’re all converted to strings. Touch mostly does an excellent job of making this invisible to you, but when it goes wrong it tends to go wrong in ways that can be difficult to debug. Using storage places our values in a Python dictionary where our data types are preserved – not converted to strings. If you’re only working with a handful of cues and a handful of parameters this probably doesn’t matter – but if you’re thinking about 20 or more parameters, it doesn’t take many cues before working in native data types makes a big difference. For reference, an early iteration of the cuing system for this project would have needed the equivalent of a table DAT with over 1,000 rows to accommodate the stored parameters. These things add up quickly – more quickly than you first imagine they might.

Okay, so what’s an example of a simple and reusable function we might use to get a dictionary into storage:
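A minimal sketch, assuming we keep a dictionary of cues under a single key in an op’s storage (the function and key names are my own):

```python
def dict_to_storage(target_op, store_key, preset):
    # fetch the existing cue dictionary (or start fresh), merge in the
    # new preset, and put the result back into the op's storage
    cues = target_op.fetch(store_key, {})
    cues.update(preset)
    target_op.store(store_key, cues)
    return cues
```

Because we merge rather than overwrite, saving a second cue adds to the stored dictionary instead of replacing it.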

Write to file

Similar to the above, we likely want a simple way to write our stored cues to disk in the same format we’re using internally. Python dictionaries and JSON are nearly interchangeable data structures and for our needs we can think of them as being the same thing. We do need to import the JSON module to get this to work correctly, but otherwise this is a straightforward function to write.

What you end up with will look like this:
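A sketch of that write-to-disk step – json is the only import we need, and the function names continue the assumptions from earlier snippets:

```python
import json

def storage_to_file(target_op, store_key, file_path):
    # pull the cue dictionary out of storage and write it to disk as
    # human-readable JSON
    cues = target_op.fetch(store_key, {})
    with open(file_path, 'w') as f:
        json.dump(cues, f, indent=4)
    return file_path
```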

Reading from file

We’re close now to having a complete approach for working with cues / states. Our next puzzle piece is a way to read our JSON from disk and replace what we have in storage with the file’s contents.

What you end up with here might look like this:
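A sketch of the read step – the mirror image of the write function, replacing whatever is in storage with the file’s contents:

```python
import json

def file_to_storage(target_op, store_key, file_path):
    # read the JSON file and replace whatever is currently in storage
    with open(file_path) as f:
        cues = json.load(f)
    target_op.store(store_key, cues)
    return cues
```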

Loading pars – does it work?

This part is the trickiest. The big idea here is to create a duplicate operator that has all of the same custom parameters as our preset maker. Why? Well, that would mean that all of the parameter names match – which would make loading parameters significantly easier and more straightforward. The other trick here is to leave out any of the parameters on our ignore list – thinking back, this is to ensure that we don’t use any of the parameters that we don’t want / need outside of recording them. We can start this process by making a copy of the operator that’s being used to capture our parameters and then deleting the pars we don’t need. Next we need to write a little script to handle moving around all of the values. That should look something like this:
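A sketch of that loading script – it leans on the fact that the preset’s keys match the duplicate operator’s scripting names, and quietly skips anything that doesn’t match:

```python
def dict_to_pars(target_op, preset):
    # assumes the scripting names on target_op match the keys captured
    # in the preset; any key without a matching par is skipped quietly
    for par_name, value in preset.items():
        par = getattr(target_op.par, par_name, None)
        if par is not None:
            par.val = value
```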

Making a Module

All of this is a good bit of work, and if you’ve been following along, you probably now have a handful of Text DATs doing all of it. For the sake of keeping tidy, we can instead put all of this into a single DAT that we can use as a Python module. Wrapping all of these pieces together will give us something like this:
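A condensed sketch of the earlier snippets gathered into one Text DAT – all the names are assumptions from this article, and in Touch you’d then call the functions as, say, mod('cue_tools').page_to_dict(...):

```python
# cue_tools - the earlier snippets collected into one text DAT that can
# be used as a module
import json

def page_to_dict(target_op, page_name, preset_name, skip_pars=None):
    # capture one page of custom parameters as a dictionary
    skip_pars = skip_pars or []
    preset = {p.name: p.eval() for p in target_op.pars()
              if p.isCustom and p.page.name == page_name
              and p.name not in skip_pars}
    return {preset_name: preset}

def dict_to_storage(target_op, store_key, preset):
    # merge a preset into the cue dictionary held in storage
    cues = target_op.fetch(store_key, {})
    cues.update(preset)
    target_op.store(store_key, cues)
    return cues

def storage_to_file(target_op, store_key, file_path):
    # write the stored cues to disk as JSON
    with open(file_path, 'w') as f:
        json.dump(target_op.fetch(store_key, {}), f, indent=4)

def file_to_storage(target_op, store_key, file_path):
    # replace the stored cues with the file's contents
    with open(file_path) as f:
        cues = json.load(f)
    target_op.store(store_key, cues)
    return cues

def dict_to_pars(target_op, preset):
    # push a preset's values back onto matching parameters
    for name, value in preset.items():
        par = getattr(target_op.par, name, None)
        if par is not None:
            par.val = value
```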

TD JSON – another alternative

There’s another alternative to this approach – the new TDJSON elements that now ship with TouchDesigner. You can read about them on Derivative’s wiki. These tools are a promising alternative, and you can do very similar things with them. In particular, we might use something like pageToJSONDict() to do what we’re after. That might look something like this:

That’s slick, but what we get back is almost 75 lines’ worth of JSON. That feels like overkill for what we’re after here – there’s lots of potential in these tools, but they might be a little more than we need for this particular implementation. Maybe not, though – depending on how you want to change this process, they might be just perfect.

Safety Rails

There are some pieces missing in the approach above that I ended up including in the actual implementation for the artist – I’m not going to dig into all of them, but it is worth calling attention to some of the other elements that were included. The biggest pieces that needed to be addressed were how to handle failure and duplicates, how to provide confirmation of an operation, and how to warn the user about possibly unintended operations.

The artist, for example, wanted both to have the UI flash and to get a message printed to the textport when a preset was successfully saved. The artist also wanted to make sure that a preset wasn’t automatically overwritten – instead they wanted to see a pop-up message warning that a preset was about to be overwritten, allowing the user to confirm or cancel that operation.

That may seem unnecessary for a tool you build for yourself… until it’s 2am and you haven’t slept, or you’re working fast, or there’s a crit in 10 minutes and you want to make one more adjustment, and and and, or or or. Handling these edge cases can not only add peace of mind, but also ensure you keep your project on the straight and narrow.

Additionally, how you handle failure in these situations is important to plan – we never want these pieces to fail, but having a graceful solution for these moments is tremendously important to both work through and plan. If nothing else, that means handling the failure elegantly and printing a message – better still if you give yourself a clue about what went wrong. A few breadcrumbs can go a long way towards helping you find the trail you lost. In my opinion, failing code gets a bad rap – it’s something we grumble over, not something we celebrate. The truth of the matter, however, is that failures are how we make projects better. Being able to identify where things went wrong is how you improve your process. It’s a little thing, but if you can shift (even if only slightly) how you feel about a failing process, it will give you some room to embrace an iterative process more easily.

Conclusions

Managing states / cues is tricky. It’s easy to consider this a rather trivial problem, and it isn’t until you really take time to think carefully about what you’re trying to achieve that you uncover the degree of complexity in how you manage the flow of information in your network. You won’t get it right the first time, but chances are you didn’t ride a bike without a few falls, and you probably didn’t learn to play that instrument without getting a few scales wrong. It’s okay to get it wrong, it’s okay to refactor your code, it’s okay to go back to the drawing board as you search for what’s right – that’s part of the process, and it’s part of what will ultimately make for a better implementation.

No matter what, hang in there… keep practicing, leave yourself breadcrumbs – you’ll get there, even if it takes you longer than you want.

Happy programming.

Zoe Sandoval’s { remnants } of a { ritual }

You can see { remnants } of a { ritual } and the work of the DANM MFA Cohort through May 12th at UC Santa Cruz.

TouchDesigner Version

099 2018.26750

OS Support

Windows 10

macOS

Summary

Working with git and TouchDesigner isn’t always easy, but it’s often an essential part of tracking your work and collaborating with others. It also encourages you to begin thinking about how to make your projects and components more modular, portable, and reusable. Those aren’t always easy practices to embrace, but they make a big difference in the amount of time you invest in future projects. It’s often hard to plan for the gig in six months when you’re worried about the gig on Friday – and we all have those sprints or last minute changes.

It’s also worth remembering that no framework will ever be perfect – all of these things change and evolve over time, and that’s the very idea behind externalizing pieces of your project’s code-base. An assembly of concise, individually maintainable tools is often easier to maintain than a Rube Goldbergian contraption – and while it’s certainly less cool, it does make it easier to hit deadlines.

So, what does all this have to do with saving external tox files? TOX files are the modules of TouchDesigner – they’re component operators that can be saved as individual files and dropped into any network. These custom operators are made out of other operators. In 099 they can be set to be private if you have a pro license – keeping prying eyes away from your work (if you’re worried about that).

That makes these components excellent candidates for externalization, but it takes a little extra work to keep them saved and synced. In a perfect world we would use the same saving mechanism that’s employed to save our TOE file to also save any external file, or better yet, to ask us if we want to externalize a file. That, in fact, is the aim of this TOX.

Supported File Types

.tox

.py

.glsl

.json

In addition to externalizing tox files, it’s often helpful to also externalize any files that can be diffed in git – that is, any files you can compare meaningfully. When it comes to your version control tool, this means that you can track the changes you or a team member have made from one commit to another. Being able to see what changed over time can help you determine why one version works and another does not. Practically speaking, this usually comes in the form of python, glsl, or json files. This little tool supports the above file types, and goes a little further.

“What’s further mean?” you ask – and I’m so glad you did. Further means that if you change a file outside of Touch – say in a text editor like Sublime or Visual Studio Code – this TOX module will watch to see if that file has changed, and if it has, pulse-reload the operator that’s referencing it. Better still, if it’s an extension, the parent() operator will have its extensions reinitialized. There’s a little set-up and convention required, but it’s well worth it if you use extensions on a regular basis.

Parameters

Extension Flag

The Extension Flag is the tag you will add to any text DAT that you’re using as an extension. This ensures that we can easily identify which text DATs are being used as externally edited extensions, and both reload the contents of the DAT and reinitialize the extensions for the parent() operator. You can use any descriptor here that you like – I happen to think that something like EXT works well.

Log to Texport

If you want to track when and where your external files are being saved, or if you’re worried that something might be going wrong, you can turn on the Logtotextport parameter to see the results of each save operation logged for easy viewing and tracking.

Default Color

The default color is set as a read-only parameter used to reset the network worksheet background color. This is used in conjunction with the following two parameters to provide visual indicators for when a save or load operation has happened.

BG Color

This is the color that the network background will flash when you externalize a TOX – it’s the visual indicator that your tox has been successfully saved.

Save Color

This is the color that the network background will flash when you save a text-based file in an external editor – it’s the visual indicator that your file has been reloaded.

EXT Color

This is the color used to set the node color of your newly externalized tox – this can help ensure that at a glance you can tell which operators have been externalized.

Version

The version number for this tool.

Operation

reinitextensions.pulse()

If you want to use this in conjunction with extensions, you’ll need to follow a few conventions:

The text DAT that references an extension needs to be inside of the COMP that uses it as an extension. For example – let’s say you have a text DAT that holds an extension called Project; this needs to live inside of the COMP that is using it as an extension.

The file you’re editing needs to end in .py. This might seem obvious, but it’s important that the file you’re editing is a python file. There are a number of checks that happen to make sure that we don’t just reinit COMPs willy nilly, and this is one of those safety measures.

The text DAT holding the extension needs to be tagged EXT – or whatever Extension Flag you’ve set in the parameters for the TOX. This makes sure that we don’t reinit the extensions of our parent every time any .py file is saved, but only when that file is being read by a text DAT that’s marked as being an extension.

ctrl+s

The way you’ll use this tox is just as if you were working normally. Only, when you hit ctrl + s, if you’re inside of a COMP that hasn’t been saved externally, you’ll be asked if you want to externalize that module. If you select yes, you’ll next be asked where you want to save it. This module will then create a folder that has the same name as your component, and save the tox inside of that folder (the tox will also have the same name as the component). Better yet, this module will auto-populate the path to the external tox with the location you’ve selected. When you press ctrl + s again it will warn you that you’re about to overwrite your tox. If you confirm that you want to replace your tox, it will save the updated version right where the previous one was located.

Using a text editor

If you’re using a text editor for supported externalized files, then work as you normally would. When you save your file in your editor, Touch will automatically reload it. If your text DAT is tagged EXT it will also reinit the extensions of the text DAT’s parent().

Suggested Workflow

Externalization Only

Create a directory for your project

Open TouchDesigner and save your .TOE file in your new directory. This is an important step – saving your project makes sure that the member project.folder correctly points to your .TOE file.

Drop the base_save.tox from touchdesigner-save-external\release into your network – I’d recommend doing this at the root of your project, or in a place in your project specifically designed to hold other tools. I like to create a base called tools where I keep all the things that I use for development, or that any machine might need (anything a machine requires when you’re working with a single .TOE file that’s configured based on a machine’s role).

Create a new folder in your project folder called td-modules (this is my suggestion, though you can use any name you like). Navigate into this folder and complete the save process.

Check Finder (macOS) or Explorer (Windows) to see that in td-modules you now have a new directory for your tox, and inside of that directory is your saved tox file.

Notice that the color of your tox has changed so you know that it’s externalized.

Continue to work and save. Note that when you use ctrl+s both your project and your tox are saved. If you happen to create an external .TOX inside of a tox that’s already externalized, you’ll be prompted to save both the parent() and the current COMP or just the current COMP.

Using Git

Create a new repo

Clone / Initialize your repo locally

Open TouchDesigner and save your .TOE file in your repo

Drop the base_save.tox from touchdesigner-save-external\release into your network – I’d recommend doing this at the root of your project, or in a place in your project specifically designed to hold other tools. I like to create a base called tools where I keep all the things that I use for development, or that any machine might need (anything a machine requires when you’re working with a single .TOE file that’s configured based on a machine’s role).

Create a new folder in your project folder called td-modules (this is my suggestion, though you can use any name you like). Navigate into this folder and complete the save process.

Check Finder (macOS) or Explorer (Windows) to see that in td-modules you now have a new directory for your tox, and inside of that directory is your saved tox file.

Notice that the color of your tox has changed so you know that it’s externalized.

Continue to work and save. Note that when you use ctrl+s both your project and your tox are saved. If you happen to create an external .TOX inside of a tox that’s already externalized, you’ll be prompted to save both the parent() and the current COMP or just the current COMP.

Commit and push your work.

External Text based files

Start by following the instructions above to set up your project with the base_save.tox

Create a folder in your project for scripts or modules.

Add a new text DAT to your network, right click and save externally.

Set path to your external file in your text DAT and turn on the load on start parameter.

Now open your text file in your external editor and work directly with your text file. When you save your file you should see the background of TouchDesigner flash, and the contents of your text DAT reload.

External Extensions

Start by following the instructions above to set up your project with the base_save.tox

Follow the instructions above for externalizing a python file – this time, make sure you save your .py file inside of your tox’s folder, and make sure that the text DAT is inside of the component that will use the extensions.

Tag your text DAT with EXT or whatever extension flag you’ve chosen.

Set up a simple extension.

Now open your extension in your external editor and work directly with your .py file. When you save your file you should see the background of TouchDesigner flash, the contents of your text DAT reload, and your extension will be reinitialized.

Additional Considerations and Suggestions

At this point, you might have guessed that this kind of approach works best in well structured projects. Some suggestions for organization and approach:

Think about Order and Structure – while I’ve structured projects lots of different ways, it’s worth finding a file structure that you like and sticking with it. That might be a deeply nested structure (watch out – that’ll bite you if you get too deep, at least on Windows), or it might be something more flat. Regardless, think about a structure and stay with it.

Make Small Simple Tools – to the best of your ability, try to make sure your modules are independent islands. That’s not always possible, but if you think carefully about creating dependencies, you’ll be happier for it. Use custom parameters on your components to keep modules independent from one another. Use select operators, or Ins and Outs, to build connections.

Reuse that TOX – while this approach is fancy and fun, especially when working with git, it’s also about making your future self happier. Think carefully about how you might make something re-usable and portable to another project. The more you can think through how to make pieces that easily move from project to project, the more time you can spend on the fun stuff… not on the pieces that are fussy and take lots of time.

An Example Project

In the folder called sample_project open the Sample_project.toe to see how this might work.

Credits

Inspired by the work of:

Anton Heestand and Willy Nolan. I’ve had the great fortune of working with both of these fine developers. I regularly use an externalization tool authored by the two of them, and this TOX is partially inspired by their work. Many thanks for a tool that keeps on working and makes using git with TouchDesigner reasonable.

Thinking about how to approach re-usability isn’t a new topic here, in fact there’s been plenty of discussion about how to re-use components saved as tox files, how to build out modular pieces, and how to think about using Extensions for building components you want to re-use with special functions.

That’s wonderful and exciting, but any of us who have built deployed projects have quickly started to think about how to build a standard project that any given installation is just a variation of… a flavor, if you will, of a standard project build. Much of that customization can be handled by a proper configuration process, but there are some outliers in every project… that special method that breaks our beautiful project paradigm with some feature that’s only useful for a single client or application.

What if we could separate our functions into multiple classes – those that are universal to every project we build, and another that’s specific to just the single job we’re working on? Could that help us find a way to preserve a canonical framework with beautiful abstractions while also making space for developing the one-off solutions? What if we needed a solution to handle the above in conjunction with sending messages over a network? Finally, what if we paired this with some thoughts about how we handle switch-case statements in Python? Could we find a way to help solve this problem more elegantly so we could work smarter, not harder?

Well, that’s exactly what we’re going to take a look at here.

First a little disclaimer, this approach might not be right for everyone, and there’s a strong assumption here that you’ve got some solid Python under your belt before you tackle this process / working style. If you need to tool up a little bit before you dig in, that’s okay. Take a look at the Python posts to help get situated then come back to really dig in.

Getting Set-up

In order to see this approach really sing we need to do a few things to get set-up. We’ll need a few operators in place to see how this works, so before we dig into the python let’s get our network in order.

First let’s create a new base:

Next, inside of our new base let’s set up a typical AB Deck of TOPs with a constant CHOP to swap between them:

Above we have two moviefilein TOPS connected to a switch TOP that’s terminated in a null TOP. We also have a constant CHOP terminated in a null whose chan1 value is exported to the index parameter of our switch TOP.

Let’s also use the new Layout TOP in another TOP chain:

Here we have a single layout TOP that’s set-up with an export table. If you’ve never used DAT exports before you might quickly check out the article on the wiki to see how that works. The dime tour of that idea is that we use a table DAT to export vals to another operator. This is a powerful approach for setting parameters, and certainly worth knowing more about and exploring.

Whew. Okay, now it’s time to set up our extensions. Let’s start by creating three text DATs: messageParserEXT, generalEXT, and jobEXT.

The Message Parser

A quick note about our parser. The idea here is that a control machine is going to pass along a message to a set of other machines also running on the network. We’re omitting the process of sending and receiving a JSON blob over UDP, but that would be the idea. The control machine passes a JSON message over the network to render nodes, which in turn need to decode the message and perform some action. We want a generalized approach to sending those blobs, and we want both the values and the control messages to be embedded in that JSON blob. In our very simple example our JSON blob has only two keys, messagekind and vals:
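A minimal example of what such a blob might look like – the key names messagekind and vals come from the post, while the values themselves are illustrative:

```python
import json

# Hypothetical message blob: messagekind names the method the receiver
# should call, vals carries the data that method needs.
message = json.dumps({
    'messagekind': 'Change_switch',
    'vals': 1,
})
decoded = json.loads(message)
```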

In this example, I want the messagekind key to be the same as a method name in our General or Specific classes.

Pero, like why?!

Before we get too far ahead of ourselves, let’s first copy and paste the code below into our messageParserEXT text DAT, add our General and Specific classes, and finish setting up our Extensions.
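The embedded code block isn’t in this copy of the post, but the heart of the parser can be sketched in a few lines – the method and key names follow the post’s description, and the rest is an assumption:

```python
# A minimal MessageParser sketch: look up a method by the messagekind key
# and call it with the whole blob. hasattr() guards against unknown
# commands so a bad message logs a note instead of raising an error.

class MessageParser:
    def ProcessMessage(self, message):
        messagekind = message.get('messagekind')
        if messagekind and hasattr(self, messagekind):
            getattr(self, messagekind)(message)
        else:
            print('no method found for', messagekind)
```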

The General Code Bits

In our generalEXT we’re going to create a General class. This works hand in hand with our parser. The parser is going to be our helper class to handle how we pass around commands. The General class is going to handle anything that we need to have persist between projects. The examples here are not representative of the kind of code you’d have in your project; instead they’re just here to help us see what’s happening in this approach.

The Specific Code Bits

Here in our Specific class we have the operations that are specific to this single job – or maybe they’re experimental features that we’re not ready to roll into our General class just yet. Regardless, these are methods that don’t yet have a place in our canonical code base. For now let’s copy this code block into our jobEXT text DAT.
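Again the embedded code is missing from this copy; a self-contained sketch of the whole inheritance chain might look like the following, with stand-in method bodies that record values rather than driving real operators (the real versions would set parameters on the constant CHOP and layout TOP):

```python
class MessageParser:
    """Routes a message blob to the method named by its messagekind key."""
    def ProcessMessage(self, message):
        messagekind = message.get('messagekind')
        if messagekind and hasattr(self, messagekind):
            getattr(self, messagekind)(message)
        else:
            print('no method found for', messagekind)

class General(MessageParser):
    """Operations that persist between projects."""
    def __init__(self):
        self.switch_index = 0

    def Change_switch(self, message):
        # in Touch this would set the constant CHOP's value0 parameter
        self.switch_index = message.get('vals')

class Specific(General):
    """Job-specific or experimental operations."""
    def __init__(self):
        super().__init__()
        self.image_order = []

    def Image_order(self, message):
        # in Touch this would update the layout TOP's export table
        self.image_order = message.get('vals')
```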

At this point we’re just about ready to pull apart what on earth is happening. First let’s make sure our extension is correctly set-up. Let’s go back up a level and configure our base component to have the correct path to our newly created extension:

Wait wait wait… that’s only one extension? What gives? Part of what we’re seeing here is inheritance. Our Specific class inherits from our General class, which inherits from our MessageParser. If you’re scratching your head, consider that a null TOP is also a TOP, which is also an OP. In the same way, we’re taking advantage of Python’s object-oriented nature so we can treat a Specific class as a special kind of General operation that’s related to sending messages between our objects. All of this leads me to believe that we should really talk about object-oriented programming… but that’s for another post.

Alright… ALMOST THERE! Finally, let’s head back inside of our base and create three buttons. Let’s also create a panel execute for each button:

Our first panel execute DAT needs to be set up to watch the state panel value, and to run on Value Change:

If we make our button viewer active, and click our button, we should see our constant1 CHOP update, and our switch TOP change:

AHHHHHHHHH!

WHAT JUST HAPPENED?!

The Black Magic

The secret here is that our messagekind key in our panel execute DAT matches an existing method name in our General class. Our ProcessMessage() method accepts a dictionary then extracts the key for messagekind. Next it checks to see if that string matches an existing method in either our General or Specific classes. If it matches, it then calls that method, and passes along the same JSON message blob (which happens to contain our vals) to the correct method for execution.

In this example the messagekind key was Change_switch. The parser recognized that Change_switch was a valid method for our parent() object, and then called that method and passed along the message JSON blob. If we take a look at the Change_switch() method we can see that it extracts the vals key from the JSON blob, then changes the constant CHOP’s value0 parameter to match the incoming val.

This kind of approach lets you separate out your experimental or job-specific methods from your tried and true methods, making it easier in the long run to move from job to job without having to crawl through your extensions to see what can be tossed and what needs to be kept. What’s better still is that this imposes minimal restrictions on how we work – I don’t need to call a separate extension, or create complex branching if-else trees to get the results I want. You’ll also see that in the MessageParser we have a system for managing elegant failure with a simple hasattr() check – this ensures that we log that something went wrong without throwing an error. You’d probably want to also print the key for the method that wasn’t successfully called, but that’s up to you in terms of how you want to approach this challenge.

Next see if you can successfully format a message to call the Image_order() method with another panel execute.

What happens if you call a method that doesn’t exist? Don’t forget to check your text port for this one.

If you’re really getting stuck you might check the link to the repo below so you can see exactly how I’ve set this process up.

If you got this far, here are some other questions to ponder:

How would you use this in production?

What problems does this solve for you… does it solve any problems?

Are there other places you could apply this same idea in your projects?

At the end of the day this kind of methodology is really looking to help us stop writing the same code bits and bobs, and instead to figure out how to build soft modules for our code so we can work smarter not harder.

Hang onto your socks programmers, we’re about to dive deep. What are we up to here today? Well, we’re going to look into switch statement alternatives in Python (if you don’t know what a switch statement is don’t worry we’ll cover that bit), how you might use that in a practical real-world situation, and why that’s even an idea worth considering. With that in mind let’s dig-in and start to pull apart what Switch Statements are, and why you should care.

From 20,000 feet, switch-case statements are an approach to handling different situations by way of a look-up table rather than with a series of if-else statements. If you’re furrowing your brow, consider situations where you may have encountered complex if-else statements in which one change breaks everything… statements that run on for far longer than you might want. Also consider what happens if you want to extend that if-else ladder into something more complicated… maybe you want to call different functions or methods based on input conditions, maybe you need to control a remote machine and suddenly you’re scratching your head as you ponder how on earth you’re going to handle complex logic statements across a network. Maybe you’re just after a better code-segmentation solution. Or maybe you’ve run into a function so long you’re starting to lose cycles to long execution times. These are just a few of the situations you might find yourself in, and a switch statement might just be the right tool to help – except that there are no switch-case statements in Python.

What gives?!

While there aren’t any switch-case statements, we can use dictionary mappings to get to a similar result… a result so powerful we’re really in for a treat. Before we get there though, we need to look at the situation we’re trying to avoid.

So what exactly is that situation? Let’s consider a problem where we want to only call one function and then let that code block handle all of the various permutations of our actions. That might look like our worst case solution below.

Worst
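The embedded gist is missing from this copy of the post; reconstructed from the description that follows, the worst-case version might look like this – one function that buries every operation in a single if-else ladder:

```python
# Naive "switch": every operation lives inside one function.
def switcher(operation, val1, val2):
    if operation == 'Add':
        result = val1 + val2
    elif operation == 'Subtract':
        result = val1 - val2
    elif operation == 'Multiply':
        result = val1 * val2
    elif operation == 'Divide':
        result = val1 / val2
    else:
        result = None  # unknown operation
    return result
```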

To get started, what do we have above? We have a single function called switcher() that takes three arguments – the name of the function we want to call, and two values. In this example we have four different math operations, and we want to be able to access any of the four as well as pass in two values and get a result just by calling a single function. That doesn’t seem so hideous on the face of it, so why is this the worst approach?

This example probably isn’t so terrible, but what it does do is bury all the functional mathematical portions of our code inside of a single function. It means we can’t add and test a new element without possibly breaking our whole functional code block, we can only access these operations from within switcher(), and if we decide to add additional operations in the future our code block will just continue to accrue lines of code. It’s a naive approach (naive in the programming sense – as in the first brute force solution you might think of), but it doesn’t give us much room for modularity or growth that doesn’t also come with some unfortunate side effects.

Okay… fine… so what’s a good solution then?

Good
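The original code block isn’t embedded in this copy; a reconstruction of the segmented version described next gives each operation its own function, though dispatch is still an if-else ladder:

```python
def Add(val1, val2):
    return val1 + val2

def Subtract(val1, val2):
    return val1 - val2

def Multiply(val1, val2):
    return val1 * val2

def Divide(val1, val2):
    return val1 / val2

def switcher(operation, val1, val2):
    # still an if-else ladder, but the real work now lives in
    # independently testable functions
    if operation == 'Add':
        return Add(val1, val2)
    elif operation == 'Subtract':
        return Subtract(val1, val2)
    elif operation == 'Multiply':
        return Multiply(val1, val2)
    elif operation == 'Divide':
        return Divide(val1, val2)
```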

A good solution segments our functions into their own blocks. This allows us to develop functions outside of our switcher() function, call them independently, and have a little more flexible modularity. You might well be thinking that this seems like a LOT more lines… can we really say this is better?! Sure. The additional lines are worth it if we also get some more handles on what we’re doing. It also means we probably save some serious debugging time by being able to isolate where a problem is happening. In our worst case approach we’re stuck with a single function that if it breaks, none of our functions work… and if our logic got sufficiently complex we might be sifting through a whole heap of code before we can really track down what’s happening. Here at least there’s a better chance that a problem is going to be isolated to a single function block – that alone is a HUGE help.

All that said, we’re still not really getting to switch-case statements… we’re still stuck in if-else hell where we’ll have to evaluate our incoming string against potentially all of the possible options before we actually execute our actual code block. At four functions this isn’t so bad, but if we had hundreds we might really be kicking ourselves.

So how can we do better?

Better
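Again reconstructed from the description that follows, the dictionary-mapping version uses the dict as the look-up table standing in for a switch statement:

```python
def Add(val1, val2):
    return val1 + val2

def Subtract(val1, val2):
    return val1 - val2

def Multiply(val1, val2):
    return val1 * val2

def Divide(val1, val2):
    return val1 / val2

def switcher(operation, val1, val2):
    # the dictionary is our look-up table: names map straight to functions
    functions = {
        'Add': Add,
        'Subtract': Subtract,
        'Multiply': Multiply,
        'Divide': Divide,
    }
    active_function = functions.get(operation)
    if active_function is not None:
        return active_function(val1, val2)
```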

Better is to remember that the contents of a python dictionary can be any data type – in fact they can even be function names, or Python objects. How does that help us? Well, it means we can look up what function we want to call on the fly, call it, and even pass in variables. In the example above our switcher() function holds a dictionary of all the possible functions at our disposal – when we call our switcher we pass in the name of the function along with the variables that will in turn get passed to that function. Above, our active_function variable becomes the function that’s fetched from our dictionary, which we in turn pass our incoming variables along to.

That’s great in a lot of ways, but especially in that it gets us away from long complicated if-else trees. We can also use this as a mechanism for handling short-hand names for our methods, or multiple assignments – we might want two different keys to access the same function (maybe “mult” and “Multiply” both call the same function, for example).

So far this is far and away a better approach, so how might we make it better still?

Best
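A sketch of the arbitrary-vals version described below – Add and Subtract loop over any number of values, while Multiply and Divide are limited to two values, as in the post:

```python
def Add(vals):
    result = 0
    for val in vals:
        result += val
    return result

def Subtract(vals):
    result = vals[0]
    for val in vals[1:]:
        result -= val
    return result

def Multiply(vals):
    # limited to two values for the sake of the example
    if len(vals) != 2:
        return None
    return vals[0] * vals[1]

def Divide(vals):
    if len(vals) != 2:
        return None
    return vals[0] / vals[1]

def switcher(operation, vals):
    functions = {'Add': Add, 'Subtract': Subtract,
                 'Multiply': Multiply, 'Divide': Divide}
    active_function = functions.get(operation)
    if active_function is not None:
        return active_function(vals)
```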

We might take this one step further and start to consider how we might accept an arbitrary number of vals. Above we have a simple way to tackle this – probably not what you’d end up with in production, but something that should hopefully get you thinking. Here the variable vals becomes a list that can hold any number of values. In the case of both our Add() and Subtract() functions we loop through all of the values – adding each val, or subtracting each val respectively. In the case of our Multiply() and Divide() functions we limit these operations to only two values for the sake of our example. What’s interesting here is that we can start to think about error handling based on the array of values that’s coming into our function.

The above is great, of course, but it’s really just the beginning of the puzzle. Where this really starts to become interesting is how you might think of integrating this approach in your python extensions.

Or if vals is a dictionary in its own right rather than a simple list.

Or if you can send a command like this over the network.

Or if you can start to think about how to build out blocks of code that are specific to a single job, and universal blocks that apply to all of your projects.

Next we’ll start to pull apart some of those very ideas and see where this concept really gets exciting and creates spaces for building tools that persists right alongside the tools that you have to build for a single job.

In the meantime, experiment with some Python style switch statements to see if you can get a handle on what’s happening here, and how you might take better advantage of this method.

It’s hard to appreciate some of the stranger complexities of working in a programming environment until you stumble on something good and strange. Strange how, Matt? What a lovely question, and I’m so glad that you asked!

Time is a strange animal – our relationship to it is often changed by how we perceive the future or the past, and our experience of the now is often clouded by what we’re expecting to need to do soon, or by reflections of what we did some time ago. Those same ideas find their way into how we program machines, or expect operations to happen – I need something to happen at some time in the future. Well, that’s simple enough on the face of it, but how do we think about that when we’re programming?

Typically we start to consider this through operations that involve some form of delay. I might issue the command for an operation now, but I want the environment to wait some fixed period of time before executing those instructions. In Python we have a lovely option for using the time module to perform an operation called sleep – this seems like a lovely choice, but in fact you’ll be oh so sorry if you try this approach:
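For reference, the blocking approach looks something like this – please don't actually leave this in a project:

```python
# The blocking approach: time.sleep() halts the whole process,
# which in TouchDesigner means the timeline stops for the duration.
import time

time.sleep(1)  # everything waits here for one full second
print("oh, hello there")
```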

But whyyyyyyyy?!

Well, Python is blocking inside of TouchDesigner. This means that all of the Python code needs to execute before you can proceed to the next frame. So what does that mean? Well, copy and paste the code above into a text DAT and run this script.

If you keep an eye on the timeline at the bottom of the screen, you should see it pause for 1 second while the time.sleep() operation happens, then we print “oh, hello there” to the text port and we start back up again. In practice this will seem like Touch has frozen, and you’ll soon be cursing yourself for thinking that such a thing would be so easy.

So, if that doesn’t work… what does? Is there any way to delay operations in Python? What do we do?!

Well, as luck would have it there’s a lovely method called run() in the td module. That’s lovely and all, but it’s a little strange to understand how to use this method. There’s lots of interesting nuance to this method, but for now let’s just get a handle on how to use it – both from a simple standpoint, and with more complex configurations.

To get started let’s examine the same idea that we saw above. Instead of using time.sleep() we can instead use run() with an argument called delayFrames. The same operation that we looked at above, but run in a non-blocking way would look like this:
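Assuming a timeline running at 60 fps, so that 60 frames is roughly one second, that might look like:

```python
# Non-blocking delay using TouchDesigner's run() method.
# The script is passed as a string and executed after delayFrames frames.
run("print('oh, hello there')", delayFrames=60)  # roughly one second at 60 fps
```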

If you try copying and pasting the code above into a text DAT you should have much better results – or at least results where TouchDesigner doesn’t stop running while it waits for the Python bits to finish.

Okay… so that sure is swell and all, so what’s so complicated? Well, let’s suppose you want to pass some arguments into that script – in fact we’ll see in a moment that we sometimes have to pass arguments into that script. First things first – how does that work?

Notice how when we wrote our string we used args[some_index_value] to indicate how to use an argument. That’s great, right? I know… but why do we need that exactly? Well, as it turns out there are some interesting things to consider about running scripts. Let’s think about a situation where we have a constant CHOP whose parameter value0 we want to change each time in a for loop. How do we do that? We need to pass a new value into our script each time it runs. Let’s try something like:
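A sketch of that loop might look like this – the operator name constant1 is just an example, and each delayed script receives its value through args:

```python
# Each pass through the loop schedules a delayed script, handing it a new
# value via args[0]. Staggering delayFrames spaces the runs one second apart.
for i in range(5):
    run("op('constant1').par.value0 = args[0]", i, delayFrames=i * 60)
```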

What you should see is that your constant CHOP increments every second:

But that’s just the tip of the iceberg. We can run strings, whole DATs, even the contents of a table cell.

This approach isn’t great for everything… in fact, I’m always hesitant to use delay scripts too heavily – but sometimes they’re just what you need, and for that very reason they’re worth understanding.

If you’ve gotten this far and are wondering why on earth this is worth writing about – check out this post on the forum: Replicator set custom parms error. It’s a pretty solid example of how and why it’s worth having a better understanding of how delay scripts work, and how you can make them better work for you.

Programming is a strange practice. It’s not uncommon that in order to make what’s really interesting, or what you promised the client, or what’s driving a part of your project, you have to build another tool.

You want to detect motion, so you need to build out a means of comparing frames, and then determining where the most change has occurred. You want to make visuals that react to audio, but first you need to build out the process for finding meaningful patterns in the audio. And on and on and on.

And so it goes that I’ve been thinking about finding dominant color in an image. There are lots of ways to do this, and one approach is to use a technique called KMeans clustering. This approach isn’t without its faults, but it is interesting and relatively straightforward to implement. The catch is that it’s not fast enough for a realtime application – at least not if you’re using Python. So what can we do? Well, we can still use KMeans clustering, but we need to understand how to use multi-threading in python so we don’t block our main thread in TouchDesigner.

The project / tool / example below does just that – it’s a mechanism for finding dominant color in an image with an approach that uses a background thread for processing that request.

TouchDesigner Dominant Color

An approach for finding dominant color in an image using KMeans clustering with scikit learn and openCV. The approach here is built for realtime applications using TouchDesigner and python multi-threading.

TouchDesigner Version

099
Build 2018.22800

Python Dependencies

numpy

scipy

sklearn

cv2

Overview

A tool for finding Dominant Color with openCV.

Here we find an attempt at locating dominant colors from a source image with openCV and KMeans clustering. The large idea is to sample colors from a source image, build averages from clustered samples, and return a best estimation of dominant color. While this works well, it’s not perfect, and in this class you’ll find a number of helper methods to resolve some of the shortcomings of this process.

Procedurally, you’ll find that the process starts by saving out a small resolution version of the sampled file. This is then handed over to openCV for some preliminary analysis before being again handed over to sklearn (scikit-learn) for the KMeans portion of the process. While there is a built-in function for KMeans sorting in openCV, the sklearn method is a little less cumbersome and has better reference documentation for building functionality. After the clustering process each resulting sample is processed to find its luminance. Luminance values outside of the set bounds are discarded before assembling a final array of pixel values to be used.
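Condensed into a sketch, the pipeline looks roughly like this – this isn't the module's exact code, and the helper name, defaults, and luminance weights are illustrative:

```python
# A condensed sketch of the dominant color pipeline described above.
# Assumes cv2, numpy, and sklearn are importable in your Touch environment.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_path, n_clusters=5, lum_min=0.1, lum_max=0.9):
    # load the small cached image and flatten it to a list of RGB pixels
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    pixels = img.reshape(-1, 3).astype(float)

    # cluster the samples; each cluster center is a candidate dominant color
    centers = KMeans(n_clusters=n_clusters).fit(pixels).cluster_centers_

    # discard clusters whose relative luminance falls outside the set bounds
    luminance = (centers @ np.array([0.2126, 0.7152, 0.0722])) / 255
    in_bounds = (luminance >= lum_min) & (luminance <= lum_max)
    return centers[in_bounds].astype(int)
```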

It’s worth noting that this method relies on a number of additional python libraries. These can all be pip installed, and the recommended build approach here would be to use Python35. In the developer’s experience this produces the least number of errors and issues – and boy did the developer stumble along the way here.

Other considerations you’ll find below are that this extension supports a multi-threaded approach to finding results.

Using this Module

To use this module there are a few essential elements to keep in mind.

Getting Python in Order

If you haven’t worked with external Python Libraries inside of Touch yet, please take a moment to familiarize yourself with the process. You can read more about it on the Derivative Wiki – Importing Modules

Before you can run this module you’ll need to ensure that your Python environment is correctly set-up. I’d recommend that you install Python 3.5+ as that matches the Python installation in Touch. In building out this tool I ran into some wobbly pieces that largely centered around installing sklearn using Python 3.6 – so take it from someone who’s already run into some issues, you’ll encounter the fewest challenges / configuration issues if you start there. Sklearn (the primary external library used by this module) requires both scipy and numpy – if you have pip installed the process is straightforward. From a command prompt you can run each of these commands consecutively:

pip install numpy
pip install scipy
pip install sklearn

Once you’ve installed the libraries above, you can confirm that they’re available in python by invoking python in your command prompt, and then importing the libraries one by one. Testing to make sure you’ve correctly installed your libraries in a Python only environment first, will help ensure that any debugging you need to do in TouchDesigner is more straightforward.

Working with TouchDesigner

Python | Importing Modules

If you haven’t imported external libraries in TouchDesigner before there’s an additional step you’ll need to take care of – adding your external site-packages path to TouchDesigner. You can do this with a regular text DAT and by modifying the example below:
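A minimal version of that script might look like this – mypath here is a placeholder for your own directory:

```python
# Add your external site-packages directory to Python's search path so
# TouchDesigner can import libraries installed outside of Touch.
import sys

# replace this with the path to your own Python externals
mypath = "C:/Program Files/Python35/Lib/site-packages"

if mypath not in sys.path:
    sys.path.append(mypath)
```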

Copy and paste the above into your text DAT, and modify mypath to be a string that points to your Python externals site-packages directory.

If that sounds a little out of your depth, you can use a helper feature on the Dominant Color module. On the Python page, navigate to your Python Externals directory. It should likely be a path like: C:\Program Files\Python35\Lib\site-packages

Your path may be different, especially if when you installed Python you didn’t use the checkbox to install for all users. After navigating to your externals directory, pulse the Check imports parameter. If you don’t see a pop-up window then sklearn was successfully imported. If you do see a pop-up window then something is not quite right, and you’ll need to do a bit of leg-work to get your Python pieces in order before you can use the module.

Using the Dominant Color

With all of your Python elements in order, you’re ready to start using this module.

The process for finding dominant color uses a KMeans clustering algorithm for grouping similar values. Luckily we don’t need to know all of the statistics that goes into that mechanism in order to take full advantage of the approach, but it is important to know that we need to be mindful of a few elements. For this to work efficiently, we’ll need to save our image out to an external file. For this to work you need to make sure that this module has a cache for saving temporary images. The process will verify that the directory you’ve pointed it to exists before saving out a file, and will create a directory if one doesn’t yet exist. That’s mostly sanity checking to ensure that you don’t have to lose time trying to figure out why your file isn’t saving.

Given that this process happens in another thread, it’s also important to consider that this functions based on a still image, not on a moving one. While it would be slick to have a fast operation for finding KMeans clusters in video, that’s not what this tool does. Instead the assumption here is that you’re using a single frame of reference content, not video. You point this module to a target source by dropping a TOP onto the Source Image parameter.

Next you’ll need to define the number of clusters you want to look for. Here the term clusters is akin to the target number of dominant colors you’re looking to find – the top 3, the top 10, the top 20? It’s up to you, but keep in mind that more clusters takes longer to produce a result. You’re also likely to want to bound your results with some luminance measure – for example, you probably don’t want colors that are too dark, or too light. The luminance bounds parameters are for luminance measures that are normalized as 0 to 1. Clusters within bounds, then, tells you how many clusters were returned from the process that fell within your specified regions. This is, essentially, a way to know how many swatches work within the brightness ranges you’ve set.

The output ramp from this process can be interpolated and smooth, or Nearest Pixel swatches. You can also choose to output a ramp that’s any length. You might, for example, want a gradient that’s spread over 100 or 1000 pixels rather than just the discrete samples. You can set the number of output pixels with the ramp width parameter.

On the other side of that equation, you might just want only the samples that came out of the process. In the Output Image parameter, if you choose clusters from the drop down menu you’ll get only the valid samples that fell within your specified luminance bounds.

Finally, to run the operation pulse Find Colors. As an operational note, this process would normally block / lock-up TouchDesigner. To avoid that unsavory circumstance, this module runs the KMeans clustering process in another thread. It’s slightly slower than if it ran in the main thread, but the benefit is that Touch will continue running. You’ll notice that the Image Process Status parameter displays Processing while the separate thread is running. Once the result has been returned you’ll see Ready displayed in the parameter.

Multi-threading is no easy task to wrap your head around, and there are plenty of pit-falls when it comes to using it in Touch. Below we have three simple examples of seeing how that works. It’s not the most thrilling topic in the world, until it’s something that you need – desperately – then it might just save your project.

Three Examples

text_pyThreads
A simple example of creating a function that runs in another thread. As noted in the forum reference post it’s important to consider what operations can potentially create race conditions in Touch. The suggested consideration here is to avoid the use of any operations that will interact with a touch Object. Looking more closely at this example we can see here that we’re using only Pythonic approaches to editing an external file. In our first example we use a simple text file approach to ensure that we have the simplest possible exploration of a concept.

text_pyThreads_openCV
In the openCV example we look at how one might consider taking the approach of working with image processing through the openCV library. While this example only creates a random red circle, with a little imagination we might see how this would be useful for doing an external image processing pass – finding image features, identifying colors, etc. Running this as a for loop makes it easy to see how this process can block TouchDesigner’s main thread, and why it would be useful to have a means of executing this function in a way that minimizes impact on the running project.

text_pyThreads_queue
While the example pyThreads_openCV is an excellent start, that doesn’t help us if we need to know when an outside operation has completed. The use of Queue helps resolve this issue. A Queue object can be placed into storage and act as an interchange between threads. It’s marked as a thread safe operation, and in our case is used to help track when our worker function is “Processing” or “Ready”. You’ll notice that the complication here is that we need an execute DAT running a frame start script to check for our completed status each frame – less than ideal, but a reasonable solution for an otherwise blocking operation. Notice that the execute DAT will disable the frame start script after it reports “Ready”. This kind of approach helps to ensure that the execute DAT only runs the frame start script when necessary and not every frame.
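Stripped down to its essentials, the pattern looks something like this – the status strings match the ones used above, though the worker here stands in for the real image processing:

```python
# A Queue as a thread-safe status interchange: the worker posts "Processing"
# when it starts and "Ready" when it finishes. In Touch, a frame start
# script would poll this queue each frame instead of calling join().
import threading
import queue

status_queue = queue.Queue()

def worker(q):
    q.put("Processing")
    # ... long-running work (our KMeans pass, for example) happens here ...
    q.put("Ready")

t = threading.Thread(target=worker, args=(status_queue,))
t.start()
t.join()
```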

Forum Example

An example from the forum used to sort out the essential pieces of working with multiple threads, queues, and how to approach this issue without crashing Touch. Many thanks to the original authors for their work helping to shed some light on this murky part of working in TouchDesigner.

Overview

At some point you’ll need to split up the work of a single project into multiple processes. This happens for lots of reasons – maybe you want to break your control interface out from your output elements, or maybe you want to start up another tool you’ve built – you name it, there are lots of reasons you might want to launch another process, and if you haven’t found a reason to you… chances are you will soon.

The good news is that we can do this with a little bit of python. We need to import a few extra libraries, and we need to do a little leg work – but once we get a handle on those things we have a straightforward process on our hands.

Getting Started

First things first, start by downloading or cloning the whole repo. We’ll start by opening the process-management.toe file. You might imagine that this is the toe file that you’re launching processes from, or you might think of this as your main control toe file. You’ll notice that there’s also a toe file called other-app.toe. This is the file we’re going to launch from within TouchDesigner. At this point feel free to open up that file – you’ll see that it starts in perform mode and says that it’s some other process. Perfect. You should also notice that it says “my role is,” but nothing else. Don’t worry, it’s this way on purpose.

Process-management.toe

In this toe file you’ll see three buttons:

Launch Process

Quit Process

Quit Process ID None

Launch Process

This button will run the script in text_start_process.

So, what’s happening here? First we need to import a few other libraries that we’re going to use – os and subprocess. From there we need to identify the application we’re going to use, and the file we’re going to launch. Said another way, we want to know what program we’re going to open our toe file with. You’ll see that we’re doing something a little tricksy here. In Touch, the app class has a member called binFolder – this tells us the location of the Touch binary files, which happens to include our executable file. Rather than hard coding a path to our binary we can instead use the path provided by Touch – this has lots of advantages and should mean that your code is less likely to break from machine to machine.

So far so good. You should also see that we’re setting an environment variable with os.environ. This is an interesting place where we can actually set variables for a Touch process at start. Why do this? Well, you may find that you have a single toe file that you want to configure differently for any number of reasons. If you’re using a single toe file configuration, you might want to launch your main file to default as a controller, another instance of the same file in an output configuration, and maybe another instance of the same app to handle some other process. If that’s not interesting to you, you can comment out that line – but it might at least be worth thinking about before you add that pound sign.

Next we use a subprocess.Popen() call to start our process – by providing the app path and the file as arguments in a list. We can also grab our process ID (we’ll use that later) while we’re here.

Finally we’ll build a little dictionary of our attributes and put that all in storage. I’m using a dictionary in this example since you might find that you need to launch multiple instances, and having a nice way to keep them separate is handy.
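Pieced together, the launch script looks roughly like this – the executable name, the ROLE variable, and the storage layout here follow the description above, but they’re illustrative and may differ slightly from the repo’s exact code:

```python
# Launch another toe file as a separate process from inside TouchDesigner.
import os
import subprocess

# build the path to the Touch executable from the running binary's location
# (the exact executable name may vary by build)
touch_exe = "{}TouchDesigner.exe".format(app.binFolder)
our_file = "{}/other-app.toe".format(project.folder)

# environment variables set here are inherited by the new process at start
os.environ["ROLE"] = "render1"

proc = subprocess.Popen([touch_exe, our_file])

# keep the subprocess object and its process id in storage for later,
# in a dictionary so multiple instances stay easy to keep separate
processes = {"render1": {"proc": proc, "pid": proc.pid}}
me.parent().store("processes", processes)
```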

Okay. To see this work, let’s make that button viewer active and click it – tada! At this point you should see another TouchDesigner process launch and this time around the name that was entered for our ROLE environment variable shows up in our second touch process: “some other process, my role is render1”

Good to know is that if you try to save your file now you’ll get an error. That’s because we’ve put a subprocess object into storage and it can’t persist between closing and opening. Your file will be saved, but our little dictionary in storage will be lost.

Quit Process

This little button kills our other-app.toe instance.

Much simpler than our last script this one first grabs our dictionary that’s in storage, grabs the subprocess object, and then kills it. Next we unstore all of the bits in storage so we can save our toe file without any warning messages.
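Roughly, that script comes down to three moves – fetch, kill, unstore (the dictionary keys here follow the launch example’s illustrative layout):

```python
# Fetch the stored dictionary, kill the subprocess object, then clear
# storage so the toe file saves without warnings.
processes = me.parent().fetch("processes")
processes["render1"]["proc"].kill()
me.parent().unstore("processes")
```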

Quit Process ID

Okay – so what happens if you want to just kill a process by its ID, not by storing the whole subprocess object? You can do that.

In this case we can use the os module to issue a kill call with our process id. If we look at the os documentation we’ll see that we need a pid and a sig – which is why we’re also importing signal.
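A sketch of that approach – here a throwaway Python child process stands in for the other toe file so the snippet is self-contained:

```python
# Killing a process by id alone: os.kill() needs both a pid and a signal,
# which is why we also import signal.
import os
import signal
import subprocess
import sys

# a stand-in long-running process; in practice this pid would come from storage
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

os.kill(proc.pid, signal.SIGTERM)
proc.wait()
```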

Take Aways

This may or may not be useful in your current workflow, but it’s handy to know that there are ways to launch and quit another toe file. Better yet, this same idea doesn’t have to be limited to use with Touch. You might use this to launch or control any other application your heart desires.