The new capability of loading user-provided GLSL shaders into Processing’s P2D and P3D renderers opens up the possibility of customizing all the rendering operations in Processing, as well as of creating interactive graphics that would be very hard or impossible to generate otherwise. On the web, WebGL supports only programmable pipelines through GLSL shaders, and this has motivated the creation of online repositories of shader effects that can be run directly inside any web browser that supports WebGL. Sites like the GLSL sandbox or Shader Toy hold large collections of shader effects that can be edited and controlled interactively through the browser. This post explains how to integrate GLSL shaders from the GLSL sandbox and Shader Toy websites into a Processing sketch.

Update: With the release of Processing 2.0 final, some of the contents in this post are outdated; please check this tutorial for a detailed description of the finalized shader API.

Post-processing filters

The 2.0a7 release includes several shader examples under the OpenGL/Shaders category. A typical application of shaders is image post-processing filters, such as blur, edge detection, emboss, etc. The example OpenGL/Shaders/EdgeDetect shows how to apply a simple edge detection filter to an image. The sketch code is fairly simple: a PShader object is created in setup() by loading the fragment shader file containing the filter implementation. The shader type is TEXTURED, since this shader will be applied to the rendering of textured geometry (see the previous post about the shader types). Also note that no vertex shader is specified, since we don’t need to modify the default vertex stage. The new shader is set with the shader() function, and can be disabled at any time by calling the resetShader(type) function. Since all the draw() function contains is an image() call (which is just a wrapper for beginShape(QUADS)/endShape()), the only geometry this sketch sends down to the GPU is a simple textured rectangle, which is then handled by the TEXTURED shader we loaded in the setup of the sketch:
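A minimal sketch along these lines could look as follows. The file names edges.glsl and leaves.jpg are placeholders, and the exact signatures of the alpha-era API may differ slightly from what is sketched here:

```java
PImage img;
PShader edges;

void setup() {
  size(640, 360, P2D);
  img = loadImage("leaves.jpg");
  // Load only the fragment shader; the default vertex stage is kept.
  edges = loadShader(PShader.TEXTURED, "edges.glsl");
}

void draw() {
  shader(edges);     // enable the edge-detection filter
  image(img, 0, 0);  // a single textured quad sent to the GPU
  // resetShader(PShader.TEXTURED) would restore the default shader
}
```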

In some other situations, the entire rendering output of the sketch needs to be post-processed by a shader. Consider the OpenGL/Shaders/FishEye example. The shader in this sketch applies the inverse of the angular fish-eye transformation, so that when its output is projected onto a spherical surface, such as a planetarium dome, it will look undistorted. In this case, the fish-eye shader is also a post-processing filter that operates on textures. So one way to get a TEXTURED shader like this one to be applied on the sketch output is to do the rendering into an offscreen PGraphics surface, which will store the output as a texture that we can then pass through the shader, as we did for the PImage in the edge detection filter before:
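A sketch of this offscreen approach, with fisheye.glsl and the aperture uniform as assumed names, might be:

```java
PGraphics canvas;
PShader fisheye;

void setup() {
  size(640, 640, P3D);
  canvas = createGraphics(width, height, P3D);  // offscreen surface
  fisheye = loadShader(PShader.TEXTURED, "fisheye.glsl");
  fisheye.set("aperture", 180.0);
}

void draw() {
  // 1. Render the 3D scene into the offscreen surface.
  canvas.beginDraw();
  canvas.background(0);
  canvas.lights();
  canvas.box(150);  // placeholder scene
  canvas.endDraw();

  // 2. Draw the offscreen texture through the fish-eye shader.
  shader(fisheye);
  image(canvas, 0, 0, width, height);
}
```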

This method involves doing the rendering into an offscreen PGraphics object. This is in fact the most efficient way to apply post-processing filters to the sketch output, but Processing also includes a filter(PShader flt) function that accepts a PShader of type TEXTURED and automatically applies it to everything that has been drawn up to that point, without the need to create an extra PGraphics object or change the default TEXTURED shader. This method might be a bit slower than the first approach since, under the hood, it grabs the contents of the screen, copies them into a temporary PGraphics object, and so on, but its advantage is the simplicity of the resulting code:
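Using the same assumed fisheye.glsl shader, the filter() version of the sketch reduces to something like this:

```java
PShader fisheye;

void setup() {
  size(640, 640, P3D);
  fisheye = loadShader(PShader.TEXTURED, "fisheye.glsl");
  fisheye.set("aperture", 180.0);
}

void draw() {
  // Draw the scene normally with the default shaders...
  background(0);
  lights();
  box(150);  // placeholder scene
  // ...then post-process everything rendered so far in one call.
  filter(fisheye);
}
```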

Note that loadShader(filename) is equivalent to loadShader(TEXTURED, filename).

Using shaders from GLSL sandbox and Shader Toy

GLSL sandbox, from Mr.doob and others, and Inigo Quilez’s Shader Toy are both web-based interfaces for editing and running GLSL shaders directly in browsers that support WebGL. Most of these effects can be run in a Processing sketch with minor modifications to the GLSL shader code.

Let’s start with the Deform effect from Shader Toy. It applies a dynamic transformation to the uv coordinates of the input texture in order to achieve a classic “demoscene” look. The sketch code to run this effect in Processing is very similar to the edge detection example discussed before, but with a couple of additions. First, the input texture needs to repeat itself when the uv coordinates run outside the [0, 1] range, which is achieved, for the moment, by calling textureWrap(Texture.REPEAT) on the main renderer object before loading the texture. Since this function is not part of the PApplet API, it requires an additional cast of the g object. Secondly, the shader uses a few uniform variables to control the parameters of the visualization: the resolution of the output screen, the mouse position (which makes the effect interactive), and the time expressed in seconds.
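Putting both additions together, the sketch could look like this. The file names tex1.jpg and deform.glsl are placeholders, and the cast to PGraphicsOpenGL reflects the alpha-era API described above:

```java
PImage tex;
PShader deform;

void setup() {
  size(512, 384, P2D);
  // textureWrap() is not in the PApplet API yet, so the renderer
  // object g needs a cast before loading the texture.
  ((PGraphicsOpenGL)g).textureWrap(Texture.REPEAT);
  tex = loadImage("tex1.jpg");
  deform = loadShader(PShader.TEXTURED, "deform.glsl");
  deform.set("resolution", float(width), float(height));
}

void draw() {
  // The uniforms that make the effect animated and interactive.
  deform.set("time", millis() / 1000.0);
  deform.set("mouse", float(mouseX), float(mouseY));
  shader(deform);
  image(tex, 0, 0, width, height);
}
```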

The fragment shader needs some minor modifications, though. The texture sampler is called tex0 in the original GLSL code from Shader Toy, but as mentioned in Part 1 of this series, Processing expects shaders of the TEXTURED type to have a sampler called textureSampler. So the modified fragment shader should look as follows:
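The following is a condensed variant showing the structure with the renamed sampler; the math in the actual Shader Toy original is somewhat more elaborate:

```glsl
#ifdef GL_ES
precision mediump float;
#endif

// Renamed from tex0: Processing binds the input texture of a
// TEXTURED shader to a uniform with this name.
uniform sampler2D textureSampler;

uniform vec2 resolution;
uniform vec2 mouse;
uniform float time;

void main(void) {
  vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
  vec2 m = -1.0 + 2.0 * mouse.xy / resolution.xy;

  // Deform: uv coordinates derived from angle and radius around the mouse.
  float a = atan(p.y - m.y, p.x - m.x);
  float r = sqrt(dot(p - m, p - m));
  vec2 uv = vec2(r + time * 0.2, a / 3.1416);

  vec3 col = texture2D(textureSampler, uv).xyz;
  gl_FragColor = vec4(col * r, 1.0);
}
```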

Another effect from Shader Toy is Monjori. This one is entirely procedural and doesn’t use any texture as input. It basically goes through each pixel on the screen and does some clever math to create a psychedelic tunnel-like structure. Even though it doesn’t require any input vertex data to operate on, it still needs a stream of pixels covering the screen so that the fragment shader is executed once for each pixel. This stream can be triggered simply by drawing a rectangle from (0, 0) to (width, height):
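A sketch along these lines, assuming the shader is saved as monjori.glsl, might be:

```java
PShader monjori;

void setup() {
  size(640, 360, P2D);
  noStroke();
  monjori = loadShader(PShader.FLAT, "monjori.glsl");
  monjori.set("resolution", float(width), float(height));
}

void draw() {
  monjori.set("time", millis() / 1000.0);
  shader(monjori);
  // A single screen-covering rectangle triggers one fragment per pixel.
  rect(0, 0, width, height);
}
```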

Since the rectangle is an unlit, non-textured piece of geometry, the shader type to specify in this case is FLAT. The GLSL shader code can be used without any modifications from the original in Shader Toy.

Many effects from the GLSL sandbox also work in this way, using a well-known demoscene technique called ray marching distance fields (presentation, tutorial, list of distance functions); see, for example, this one. The fragment shader code can be used unmodified in the Processing sketch since it doesn’t define any uniform that is part of the interface with the renderer. The shader expects a few additional uniform variables to control the animation and interaction:
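GLSL sandbox shaders conventionally read the uniforms time, mouse, and resolution, so the sketch just has to set them every frame. The file name landscape.glsl is a placeholder:

```java
PShader landscape;

void setup() {
  size(640, 360, P2D);
  noStroke();
  landscape = loadShader(PShader.FLAT, "landscape.glsl");
}

void draw() {
  // Standard GLSL sandbox uniforms, read by name in the shader.
  landscape.set("time", millis() / 1000.0);
  landscape.set("mouse", float(mouseX), float(mouseY));
  landscape.set("resolution", float(width), float(height));
  shader(landscape);
  rect(0, 0, width, height);  // screen-covering quad
}
```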

The renderer in this case can be either P2D or P3D. Since the entire geometry is generated by the fragment shader, including camera movements and lights, the camera and any other transformations handled by the renderer in Processing are ignored. The only requirement for this kind of effect to work is to draw a quad covering the entire output area.

9 responses to “Shaders in Processing 2.0 – Part 2”

Not really — there have been several shader editor projects over the past few years, but all the ones I know of have gone unsupported/abandoned… FXComposer from NVidia is still around, but it is Windows-only. One option is just to use XCode, which has GLSL syntax highlighting. I’ve also seen this Shader Studio app, but it apparently runs only on iOS (I haven’t tried it myself). After some searching, I found a couple of references to this Kick.JS shader editor.

Hello, there were a few changes in the shader API between the alphas these posts were based on and the beta. I will write a brief post describing the differences.

Most importantly, the PShader.FLAT and PShader.TEXTURED constants are no longer needed; Processing will try to determine the type of shader by analyzing the code. Take a look at the shader examples included in 2.0b3 to see how PShader objects work in the beta.

Hi Andres, Very nice work on GLSL shaders integration.
I’m trying to play around with it now on Processing 2.0b7, but I have a few questions: when I try to set the shaders’ variables from Processing, nothing seems to change. To be more precise, in the landscape example (where you set the “time”, “resolution” and “mouse” variables in Processing) it does not seem to react to the mouse at all, and if I comment out those lines the shader works exactly the same. Any idea why?

Hello, what happens is that in 2.0b7 Processing automatically sets the values of the uniforms resolution, mouse and time, so you don’t need to do it explicitly from the sketch. If you do, your values will be overwritten by Processing. You can always add your own mouse, resolution and time uniforms; just give them different names from “mouse”, “resolution”, etc.
However, don’t take this as written in stone. We are still discussing the details of the shader API with the rest of the processing team, so this behavior/naming conventions might change in the upcoming releases.

Cool, I’ve made a small experiment with shaders here http://blog.goodthink.biz/letthedogout/ (there are more shaders appearing towards the end of the video). It’s working very well so far, even with a basic graphics card (NVIDIA GeForce 9400M). Cheers!

Hello, your tutorials have helped me a lot until now, but I have no idea how to solve the following problem:
The goal is to create a fragment shader which fades out the whole window to a specific color with a specific speed for each color channel.
(see: http://wiki.processing.org/w/Fading_the_screen_to_black/any_color)
The problem with the above source is just that the time to calculate the values is way too high, growing exponentially with the window size.
Now I have used following simple processing code (2.0b6):
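The snippet that followed is a reconstruction of the kind of feedback sketch the comment describes; the shader file name fadeout.glsl and the speed uniform sp are guesses based on the surrounding discussion:

```java
PShader fadeout;
float sp = 0.01;  // fade speed per color channel (guessed parameter)

void setup() {
  size(400, 400, P2D);
  fadeout = loadShader("fadeout.glsl");
  fadeout.set("sp", sp);
  background(255);
}

void draw() {
  // Feed the current screen contents back through the fade shader.
  shader(fadeout);
  image(get(), 0, 0);
  resetShader();
}
```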

At the beginning it fades out perfectly, but it gets stuck somewhere near the specified goal with low values for sp.
The problem is also described in the snippet mentioned above.
To me it seems to be a problem with the precision of the color values stored in the sampler2D, but I have no idea how to increase its precision.

I think the problem is due to the fact that even though color values are handled as floats inside the shader, they are then clamped to the RGBA type, where each component is in fact an unsigned byte (with 256 possible values). So, if the change in color is too small to register as a 1/255 step, it won’t result in a visible difference in color. One way to get around this is to use an auxiliary float variable on the CPU side to accumulate the exponential decrease, and pass it as a uniform to the shader. Something like this:
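Here is a sketch of that idea, assuming a fadeout.glsl shader that simply multiplies the incoming texture color by a uniform named fade. Since the scene is redrawn at full brightness each frame and only scaled once by the accumulated factor, the 1/255 quantization of the screen never compounds:

```java
PShader fadeout;
float fade = 1.0;  // accumulated fade factor, kept at full float precision
float sp = 0.99;   // per-frame decay rate

void setup() {
  size(400, 400, P2D);
  fadeout = loadShader("fadeout.glsl");
}

void draw() {
  // Redraw the scene at full brightness every frame...
  background(255);
  fill(0, 102, 153);
  ellipse(width / 2, height / 2, 200, 200);

  // ...accumulate the exponential decrease on the CPU, where precision
  // is not limited to 1/255 steps...
  fade *= sp;
  fadeout.set("fade", fade);

  // ...and scale the whole frame by the accumulated factor.
  filter(fadeout);
}
```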

Notice that I’m using filter(fadeout) instead of image(get(), 0, 0) followed by shader(fadeout). Both approaches are basically equivalent, but using filter(PShader) is more efficient. This example fades to black, but you can easily adapt it to fade to a chosen color. I hope this helps.