Additional Requirements

Note: The examples in this article include the following projects: ChannelScrambler, CheckerFill, GaussianBlur, GrainBlend, HardLightBlend, MultiplyBlend, and ScreenBlend. The SWF content for the examples is displayed in this article. To examine the examples more closely, download the sample files and open them in the Pixel Bender Toolkit and Flex Builder.

Pixel Bender is a graphics processing engine supported by Adobe Flash Player 10, Adobe After Effects, and (soon) Adobe Photoshop. The language is based on fragment shader languages, such as OpenGL Shading Language (GLSL), used to optimize pixel drawing operations in 3D rendering. In Flex, you can use Pixel Bender programs to create filters, blends, area fills, and line fills.

Pixel Bender effects can be applied to any display object, including images, vector graphics, and even digital video. The execution speed is extremely fast; effects that would have taken seconds per frame to execute in ActionScript can now be achieved in real time (see Figure 1).

Figure 1. Pixel Bender effects can be applied to any display object.

Pixel Bender programs, called kernels, are written and compiled using the Adobe Pixel Bender Toolkit. The compiled bytecode produced by the toolkit can be loaded into a Shader object and used by your SWF content. The Pixel Bender Toolkit is installed automatically when you install Flash CS4 Professional.

Documentation for using Pixel Bender in Adobe Flex Builder is available in the Programming ActionScript 3.0 chapter Working with Pixel Bender shaders and in the ActionScript Component and Language Reference in the Shader class section. This documentation contains a detailed description of the objects you can use with Pixel Bender in Flex and Flash. Documentation on the Pixel Bender language is available from the Help menu of the Pixel Bender Toolkit.

If you want to see what's possible, as well as learn some new tricks, I recommend reviewing examples of Pixel Bender programs. In addition to the examples provided in this article, you can find a public repository of kernels hosted by Adobe at the Pixel Bender Exchange. The authors of these kernel programs have kindly agreed to share them with the wider Flash developer community. Perhaps your kernel will be the next one posted? There is also a forum for discussing Pixel Bender programming. Also visit the Pixel Bender Technology Center.

Pixel Bender language overview

Before jumping into kernel coding, take a brief look at the Pixel Bender language. You probably won't become an expert in the language just from reading this article, but the following discussion should help you understand programs written in the Pixel Bender language. For far more detail, refer to the Adobe Pixel Bender Language 1.0 Tutorial and Reference Guide, available from the Help menu of the Pixel Bender Toolkit application.

Pixel Bender uses procedural syntax similar to languages such as C, Java, and ActionScript. It includes built-in data types and functions targeted at image processing. If you are already familiar with ActionScript, then learning to write Pixel Bender kernels should be reasonably straightforward. The main differences in syntax between ActionScript and Pixel Bender language include:

Type declarations for variables go in front of the variable name, instead of after it. The var keyword is not used. Thus, instead of declaring an integer value with:

var foo:int;

you use the syntax:

int foo;

New objects cannot be created at runtime. Thus, the new keyword is not supported (or needed).

Pixel Bender uses the float data type instead of ActionScript's Number type for representing real numbers. The term float stands for floating-point (which just means that the decimal point can "float" to any position in the number).

It is important to remember that when you type a floating point number in a Pixel Bender program, you must include a decimal point. Otherwise, the Pixel Bender compiler will treat the number as an integer rather than a float value. Unlike ActionScript, Pixel Bender does not do any implicit data conversion. So if, for example, you type 1 instead of 1.0, you will get an abstruse error message from the Pixel Bender Toolkit.

Pixel Bender includes built-in vector data types. A vector type can be recognized by the number appended to the base type name. For example, a float3 type is a vector containing three elements of type float.

The following statement illustrates a typical use of a vector data type in Pixel Bender:

float4 rgbaPixel = float4( 1.0, 0.3, 0.2, 0.8 );

The statement above declares a new vector variable named rgbaPixel and assigns it a color value. The expression, float4( 1.0, 0.3, 0.2, 0.8 ), defines a literal float4 vector constant, which is assigned to the rgbaPixel variable.

Pixel Bender supports vector swizzling.

The members of a vector can be accessed using dot notation and one of three sets of elements names: r,g,b,a; x,y,z,w; or s,t,p,q. Swizzling lets you rearrange the elements simply by reordering the element names. For example, the following statement swaps the red and green color channels of a pixel vector when assigning the pixel value to another variable:

pixel4 mixedUp = rgbaPixel.grba;

You can also repeat channels:

pixel4 allRed = rgbaPixel.rrra;

and drop channels:

pixel2 redAndBlue = rgbaPixel.rb;

Note: The choice of which set of element names to use with a vector variable is up to you. For example, myVector.r is the same as myVector.x. A good practice is to use the rgba set for colors, and the xyzw or stpq sets for positions. You cannot mix names from more than one set in the same reference.

Pixel Bender also supports built-in matrix data types. A matrix data type is similar to a vector, but contains a two-dimensional array of numbers. The float3x3 matrix type, for example, contains three vectors of three elements.

Pixel Bender in Flash Player does not support loops, custom functions (other than the evaluatePixel() function), or arrays.

Note: When developing Pixel Bender kernels for Flash Player or AIR, be sure to enable the Turn on Flash Player Warnings and Errors option (under the Build menu of the toolkit window). With this option enabled, the compiler will inform you immediately when you are using unsupported Pixel Bender language features. (Otherwise, the toolkit won't report the errors until you try to export the kernel for Flash Player.)

Kernel walkthrough

The typical Pixel Bender kernel performs the following tasks:

Samples a pixel from the input image

Applies a calculation to the sampled pixel color

Assigns the modified value to the output pixel

The following simple Pixel Bender kernel does each of these tasks. The program defines a kernel named ChannelScrambler:
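A minimal version of this kernel might look like the following sketch (the metadata values and the exact channel ordering are illustrative):

<languageVersion : 1.0;>

kernel ChannelScrambler
<
    namespace : "com.example";
    vendor : "Example Vendor";
    version : 1;
    description : "Reorders the color channels of an image.";
>
{
    input image4 inputImage;
    output pixel4 outPixel;

    void evaluatePixel()
    {
        // Sample the pixel at the current output coordinates.
        pixel4 sourcePixel = sampleNearest( inputImage, outCoord() );
        // Reorder the color channels with a swizzle (one possible ordering).
        outPixel = sourcePixel.gbra;
    }
}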

The kernel declares an input image, named inputImage, and an output pixel, named outPixel. In the evaluatePixel() function, the pixel at the output coordinates currently being processed is accessed using two built-in functions, sampleNearest() and outCoord(). Next, the sampled pixel is assigned to the outPixel variable, using swizzling to reorder the color channels.

This kernel produces the following result shown in Figure 2.

Figure 2. The result of the ChannelScrambler kernel.

The required elements of a Pixel Bender program include the languageVersion tag:

<languageVersion: 1.0;>

and the kernel name declaration:

kernel ChannelScrambler{...}

Inside the kernel, there must be a single output declaration, such as output pixel4 outPixel, and an evaluatePixel() function. A kernel may have any number of inputs, including no inputs at all. However, the way you use a kernel in Flex creates additional requirements on the number of inputs. A shader used as a blend requires two inputs, a shader used as a filter requires one input, and a shader used as a fill does not require any inputs at all.

A Pixel Bender kernel is run once for every pixel in the output image. No state information is saved between runs, so it isn't possible to collect an average pixel value by accumulating the pixels sampled in each run. Each run of the kernel must compute all the information it needs—although you can pre-compute information in ActionScript and pass it in as an input or parameter.

Sampling

To access the pixel values in an input image, you must use the sampling functions. Sampling is accomplished with the following built-in functions:

sampleNearest() returns a vector containing the channel values of the pixel closest to the specified coordinates. (There is also a sampleLinear() function that behaves slightly differently.)

outCoord() returns the coordinates of the current output pixel.

As Pixel Bender processes an image, it executes the kernel for every pixel in the output image. The outCoord() function returns the coordinates of the current pixel. The Pixel Bender coordinate system is similar to that used by Flash Player and AIR. The origin is registered at the top-left corner. Positive values increase to the right and down. Pixels are always square.

You are not limited to sampling the pixel directly at the outCoord() position. For example, you could use the following sampling statement to sample the pixel that is 10 pixels right and 5 pixels down from the current pixel:
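Assuming an input named inputImage, the statement might look like this:

pixel4 shiftedPixel = sampleNearest( inputImage, outCoord() + float2( 10.0, 5.0 ) );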

The expression, outCoord() + float2( 10.0, 5.0 ), adds a two-element vector to the vector of coordinates produced by the outCoord() function. An equivalent way to code the same expression is: float2(outCoord().x + 10.0, outCoord().y + 5.0).

If the sample coordinates go beyond the bounds of the input image, then a color vector containing all zeros is returned. For example, if the input is of type image4, then sampling an exterior pixel will return a completely transparent black pixel. If the input is of type image3, then it will return a black pixel (without an alpha channel). The Pixel Bender coordinate space theoretically extends to infinity. There are, of course, practical limits on the range of coordinates that can be expressed, as well as limits on the usefulness of sampling data that does not exist.

Working with pixels

Once you have a vector representing a pixel, there are a number of ways to work with the color values. For example, if you have a pixel4 variable named pix, you can address the individual color channels, or combinations of channels in the following ways:

Red channel: pix.r or pix[0]

Green channel: pix.g or pix[1]

Blue channel: pix.b or pix[2]

Alpha channel: pix.a or pix[3]

Red and alpha channels: pix.ra

All the color channels with red and blue swapped and no alpha channel: pix.bgr

A single color channel is a 32-bit floating-point number, normally between 0.0 (black) and 1.0 (white). The output color values can be outside this range, but they won't change the rendered appearance. In other words, pixel3(-1.0, -1.0, -1.0) is just as black as pixel3(0.0, 0.0, 0.0) when rendered as a bitmap. (However, if you run multiple filters on the same image, the difference can be significant since the second filter will see (-1.0, -1.0, -1.0) not (0.0, 0.0, 0.0) when sampling that pixel.)

You can perform arithmetic on a pixel vector with either scalar or vector values. When using scalar values, the operation is applied to each channel. For example, the following operation will divide the value of each channel in half (including the alpha channel):

pixel4 pix = sampleNearest( inputImage, outCoord());
pix = pix / 2.0;

The same operation could also be written using vectors:

pix = pix * pixel4( 0.5, 0.5, 0.5, 0.5 );

Although Pixel Bender images have 32 bits per channel, graphics in Flash Player and AIR have only 8 bits per channel. When a kernel is run, the input image data is converted to 32 bits per channel and then converted back to 8 bits per channel when kernel execution is complete.

Defining inputs

Inputs are declared with the input keyword:

input image4 sourceImage;

Inputs can be declared using the data types: image1, image2, image3, or image4. The different inputs in a kernel can have different numbers of channels. The number of channels in the output image produced by the shader is determined by the data type of the output pixel, not by the data types of the inputs.

You can declare more inputs than are required for the way a kernel is used in Flex. However, the Flash Player or AIR runtime will only automatically assign image data to the required inputs. You must assign an image to the extra inputs before assigning the kernel as a blend, filter or fill. For example, if your filter kernel used an additional image to create a textured effect, you would have to assign the texture as an input to the kernel before assigning the ShaderFilter containing the kernel to the filter array of a display object (more on how to do that later in this article).

Defining parameters

In addition to input images, you can supply other values to a kernel as parameters. Parameters are declared with the parameter keyword and can be any data type except image (or region, which you can't use in kernels written for Flash Player or AIR anyway). You can declare metadata for parameters to specify the default, minimum, and maximum values. You can also supply a description. The metadata is declared between angle brackets (< >) and can be accessed in ActionScript code. It is always a good idea to define a reasonable default value for a parameter.

The following parameter statement declares a float3 parameter with metadata:
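For example (the parameter name, description, and value ranges here are illustrative):

parameter float3 weights
<
    minValue : float3( 0.0, 0.0, 0.0 );
    maxValue : float3( 1.0, 1.0, 1.0 );
    defaultValue : float3( 0.5, 0.5, 0.5 );
    description : "The weight applied to each color channel.";
>;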

To access parameter values in ActionScript code, you use the data property of the ActionScript Shader object containing the kernel. The current, minimum and maximum values of the above parameter can be accessed in ActionScript with the following statements (assuming that myShader is a Shader object containing a kernel with this parameter):
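Assuming the kernel declares a float3 parameter named weights, the access might look like this (the metadata entries become properties of the ShaderParameter object, named as they appear in the kernel source):

var currentValues:Array = myShader.data.weights.value;
var minimumValues:Array = myShader.data.weights.minValue;
var maximumValues:Array = myShader.data.weights.maxValue;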

Since the parameter is a float3 vector type, the ActionScript arrays returned will contain three elements. If the parameter was a scalar type, such as float, then the arrays returned would contain a single element.

Exporting and loading a kernel

Use the Export Kernel Filter for Flash Player command from the Pixel Bender Toolkit to compile and export the kernel for use in the Flash Player and AIR runtimes. Kernels are exported with a file extension .pbj (see Figure 3).

Figure 3. Exporting the kernel from the Pixel Bender Toolkit

To load a Pixel Bender kernel in a Flex application, you must either embed or load the compiled kernel.

The Embed tag instructs the ActionScript compiler to embed the Pixel Bender kernel when it creates the SWF file. You must include the MIME type declaration, as shown in the following example:
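For example (the file name and class variable name are illustrative):

[Embed( source="ChannelScrambler.pbj", mimeType="application/octet-stream" )]
private var ChannelScramblerFilter:Class;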

To use the kernel, create an instance of the class, in this case, ChannelScramblerFilter. The following code uses an embedded kernel to create new Shader and ShaderFilter objects, which are then applied to an Image instance:
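A sketch of this step (assuming an Image instance with the id myImage):

var shader:Shader = new Shader( new ChannelScramblerFilter() );
var shaderFilter:ShaderFilter = new ShaderFilter( shader );
myImage.filters = [ shaderFilter ];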

Using the Embed tag is typically the simplest method of loading Pixel Bender kernels, but you can also load kernels at runtime. The following example shows how to use the URLLoader class to load a kernel:
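One possible version (the kernel file name, the handler name, and the myImage target are illustrative):

private var loader:URLLoader = new URLLoader();

private function loadKernel():void
{
    // Kernel bytecode must be loaded as binary data.
    loader.dataFormat = URLLoaderDataFormat.BINARY;
    loader.addEventListener( Event.COMPLETE, onKernelLoaded );
    loader.load( new URLRequest( "ChannelScrambler.pbj" ) );
}

private function onKernelLoaded( event:Event ):void
{
    // The loaded bytes can be passed directly to the Shader constructor.
    var shader:Shader = new Shader( loader.data );
    myImage.filters = [ new ShaderFilter( shader ) ];
}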

Using Pixel Bender kernels in AIR applications

You can use Pixel Bender kernels in an AIR application exactly as you would in a browser-targeted application. If you load a kernel dynamically, the kernel must be included in the application package. In Flex Builder, kernel files in the source directory are typically included automatically when you export the AIR file. (If you embed the kernel, it is already included in the application SWF file, so it doesn't need to be added to the AIR package.)

The examples in this article are all targeted at the browser so that you can view the examples live. To convert one of the examples to an AIR application, you just need to create an application descriptor. For example, the following shows a minimal application descriptor for the ChannelScrambler example:
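Such a descriptor might look like this sketch (the id and filename values are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<application xmlns="http://ns.adobe.com/air/application/1.5">
    <id>com.example.ChannelScrambler</id>
    <filename>ChannelScrambler</filename>
    <version>1.0</version>
    <initialWindow>
        <content>ChannelScrambler.swf</content>
    </initialWindow>
</application>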

Note that the namespace specified in the xmlns attribute must target AIR 1.5. If you target 1.0 or 1.1, only Flash Player 9 APIs will be available to the application.

Using blends

A blend combines the colors in the display object to which the blend is applied with the colors below the object on the Stage. The Flash Player API supports several built-in blends, defined in the BlendMode class. As a learning exercise, we duplicate a couple of the built-in blends. Then we create a blend that can't be easily achieved using the built-in options.

To apply a blend to a display object, create a Shader object with the loaded kernel bytecode and assign it to the blendShader property of the display object. A blend kernel must have two inputs. The first input is the foreground display object (whose blendShader property is set). The second input is whatever is behind the foreground object. If you use additional inputs, perhaps for creating masks or textures, you must assign an image in the form of a BitmapData object to these inputs yourself before applying the blend.

Multiply

In a multiply blend, each color in the foreground object is multiplied by the color of the background object. This blend darkens the result, except in the scenario where one of the images is pure white as shown in Figure 4.

Figure 4. A multiply blend darkens the result, except where one of the images is pure white.

The following kernel declares two inputs, named foreground and background, and an output, named result. In the evaluatePixel() function, the pixel at the current coordinate is sampled in each image using the sampleNearest() function. The pixels are then multiplied together.
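The heart of such a kernel might look like this (kernel metadata omitted for brevity):

input image4 foreground;
input image4 background;
output pixel4 result;

void evaluatePixel()
{
    pixel4 fgPixel = sampleNearest( foreground, outCoord() );
    pixel4 bgPixel = sampleNearest( background, outCoord() );
    // Multiplying the two colors channel by channel darkens the result.
    result = fgPixel * bgPixel;
}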

Screen

A screen blend inverts the colors, multiplies them together, and then inverts the result. This has the opposite effect of the multiply blend: the screen blend lightens the result, except in the scenario where one of the images is black (see Figure 5).
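One way to sketch the screen operation, using the same two-input structure described for the multiply blend:

void evaluatePixel()
{
    pixel4 fgPixel = sampleNearest( foreground, outCoord() );
    pixel4 bgPixel = sampleNearest( background, outCoord() );
    pixel4 white = pixel4( 1.0, 1.0, 1.0, 1.0 );
    // Invert both colors, multiply them, then invert the result.
    result = white - ( white - fgPixel ) * ( white - bgPixel );
}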

As you can see, this kernel is almost identical to the multiply kernel. Only the mathematical operation used to produce the result has changed. This example uses the same ActionScript code, except it is loading and applying a different shader.

Hard light

A hard light blend is a combination of the multiply and screen blends. If the foreground pixel is lighter than 50% gray, then a screen blend is performed. Otherwise, a multiply blend is performed (see Figure 6).

Figure 6. A hard light blend is a combination of the multiply and screen blends.

The hard light blend is a bit more complicated than the previous two. First, the gray level of the pixel in the foreground image is calculated by averaging the color channels. Then, an if statement is used to select either a multiply or a screen blend operation.
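A sketch of the branching logic (the doubling factors follow the common hard light formulation; exact alpha handling is omitted):

void evaluatePixel()
{
    pixel4 fgPixel = sampleNearest( foreground, outCoord() );
    pixel4 bgPixel = sampleNearest( background, outCoord() );
    pixel4 white = pixel4( 1.0, 1.0, 1.0, 1.0 );

    // Average the color channels to estimate the gray level.
    float gray = ( fgPixel.r + fgPixel.g + fgPixel.b ) / 3.0;

    if ( gray < 0.5 )
    {
        // Darker than 50% gray: multiply (scaled by 2).
        result = 2.0 * fgPixel * bgPixel;
    }
    else
    {
        // Lighter than 50% gray: screen (scaled by 2).
        result = white - 2.0 * ( white - fgPixel ) * ( white - bgPixel );
    }
}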

Again, this example uses the same ActionScript code, but it loads and applies a different shader.

Perlin grain

Now for something that is a bit more difficult to achieve with the built-in blend modes. The next blend uses a noise texture and sin() functions to generate a wood grain or marbling effect. The effect depends on the characteristics of the noise texture and generally looks best with Perlin-type noise (see Figure 7).

Figure 7. A noise texture fed through sin() functions produces a wood grain or marbling effect.

The shader works by sampling the pixels values from the noise image. Instead of using the noise pixels directly in the image, the shader feeds the noise value into a series of sin() functions. The background is multiplied by the result. A turbulence parameter is used to control the curviness of the resulting effect:
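A rough sketch of the idea follows. Here the foreground input is the noise texture; the downloadable example chains several sin() terms, so the single term below merely stands in for that series, and the parameter range is illustrative:

parameter float turbulence
<
    minValue : 0.01;
    maxValue : 1.0;
    defaultValue : 0.2;
>;

void evaluatePixel()
{
    pixel4 noisePixel = sampleNearest( foreground, outCoord() );
    pixel4 bgPixel = sampleNearest( background, outCoord() );

    // Feed the noise value into a sine wave; turbulence controls the frequency.
    float grain = 0.5 + 0.5 * sin( noisePixel.r / turbulence );

    // Multiply the background by the resulting grain value.
    result = bgPixel * grain;
    result.a = bgPixel.a;
}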

The example uses a slider to control the shader turbulence parameter. The minimum, maximum and starting values of the slider are set according to the parameter metadata. Then, the updateFilter() method is used to change the parameter value whenever a change event is dispatched by the Slider object. Because a shader object is cloned when you set the blendShader property of a display object, you cannot simply change the parameter value of the original Shader object. You must also reassign the updated Shader object to the blendShader property.

For simplicity, this example uses a bitmap for the noise texture. You can also use the perlinNoise() function of the BitmapData class to create a suitable texture.

Using filters

Shaders used as filters are applied to a single image. In addition to creating a Shader object, as we did for blends, you must also create a ShaderFilter object, passing in the Shader containing the kernel:
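For example (kernelBytes stands for the embedded or loaded kernel bytecode):

var shader:Shader = new Shader( kernelBytes );
var shaderFilter:ShaderFilter = new ShaderFilter( shader );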

The ShaderFilter object "wraps" the shader and allows you to use the shader like a built-in filter, by adding it to the filters array of a display object:

displayObject.filters = [ shaderFilter ];

The object to which a shader is applied is automatically set as the first input of the kernel. If a filter kernel takes additional images as inputs, these must be set before the filter is assigned to a display object.

We've already looked at a simple filter, ChannelScrambler, so let's go straight to a more complex example, the Gaussian blur.

Gaussian blur

A Gaussian blur is an example of a convolution filter. (Convolution is a fancy word for a filter that computes a weighted average of nearby pixels.) Although the Flash Player API includes a built-in class for creating convolution filters of arbitrary size, programming a Gaussian blur in Pixel Bender is a good exercise that demonstrates several important aspects of Pixel Bender (see Figure 8).

General-purpose convolution filters aren't easy to achieve in Pixel Bender because the Flash Player and AIR runtimes do not support loops in kernel code. So, instead of using a for loop, we have to write an individual program statement to sample each pixel in the neighborhood. This example kernel creates a Gaussian blur that can have a sampling radius between 1 and 6 (corresponding to convolution matrices ranging in size between 3 × 3 to 13 × 13).

The filter takes advantage of the fact that a Gaussian blur is separable, which simply means that you can perform the operation in two passes. One pass blurs the image horizontally, and the other pass blurs the image vertically. This saves a substantial amount of work: for each final pixel, only about 4 × radius pixels have to be sampled, weighted, and averaged, rather than about (2 × radius)² pixels. For example, at the largest radius supported by this filter, 26 input pixels are sampled for each final output pixel in the vertical and horizontal passes combined. If the filter computed the blur in a single pass, 169 input pixels would have to be sampled for each output pixel. The visual and mathematical results are identical.

To overcome the lack of a for loop, the kernel treats each integer radius value separately. For each allowed radius value, the two pixels located at that distance to either side of the current pixel are sampled. The Gaussian weights are the same for both pixels, so they are added together. Once all the necessary pixels are sampled, the weight and scale factors are applied.

The following kernel code is used for the horizontal pass (a similar kernel is used for the vertical pass):
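Because the full kernel repeats the same pattern for each radius step, this sketch truncates it at radius 2 (the downloadable example continues the pattern up to radius 6; the weights here are illustrative):

input image4 src;
output pixel4 result;

parameter int radius
<
    minValue : 1;
    maxValue : 2;
    defaultValue : 1;
>;

void evaluatePixel()
{
    float2 coord = outCoord();

    // Illustrative Gaussian weights; the sum is renormalized below.
    float w0 = 0.375;
    float w1 = 0.25;
    float w2 = 0.0625;

    pixel4 total = w0 * sampleNearest( src, coord );
    float totalWeight = w0;

    // The two pixels at distance 1 share the same weight, so sum them first.
    total += w1 * ( sampleNearest( src, coord - float2( 1.0, 0.0 ) )
                  + sampleNearest( src, coord + float2( 1.0, 0.0 ) ) );
    totalWeight += 2.0 * w1;

    if ( radius >= 2 )
    {
        total += w2 * ( sampleNearest( src, coord - float2( 2.0, 0.0 ) )
                      + sampleNearest( src, coord + float2( 2.0, 0.0 ) ) );
        totalWeight += 2.0 * w2;
    }

    // Divide by the accumulated weight so the channels stay in range.
    result = total / totalWeight;
}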

Both kernels must be applied as filters for the complete effect. It does not matter which order the filters are applied in. The ActionScript code shown below uses a slider to control the radius parameter:
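A sketch of the slider handler (the Shader variable names, the radiusSlider control, and the myImage target are illustrative):

private function updateFilter( event:Event ):void
{
    // Keep the horizontal and vertical passes in sync.
    horizontalBlur.data.radius.value = [ int( radiusSlider.value ) ];
    verticalBlur.data.radius.value = [ int( radiusSlider.value ) ];

    // Reassign the filters so the new parameter values take effect.
    myImage.filters = [ new ShaderFilter( horizontalBlur ),
                        new ShaderFilter( verticalBlur ) ];
}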

The example sets the radius of each filter to the same value. As with blends, the shader filters must be reassigned to the display object after changing parameter values.

Using fills

To use a kernel as an area fill, create a Shader object containing the kernel bytecode and pass that to the beginShaderFill() function of the display object's graphics property when drawing the object. Like bitmap fills, shader fills are registered to the origin of the display object. This registration can be adjusted by using a translation matrix.

Shaders used as area or line fills are not automatically assigned an input image. (If you need an image as part of the fill algorithm, you must explicitly assign an image to the input before calling the beginShaderFill() method.)

Checker fill

The next fill example creates a checker pattern. The size and color of the checker squares are controlled by kernel parameters (see Figure 9).

Figure 9. The size and color of the checker squares are controlled by kernel parameters.

The checker algorithm uses the modulo function to compare the current pixel position to multiples of the checker size:

float vertical = mod(position.x, checkerSize * 2.0);

This modulo function returns the remainder produced by dividing the x coordinate by checkerSize multiplied by 2. If, for example, checkerSize is equal to 10, then we get the pattern 0-19, 0-19, 0-19,... as x increases across the image. So whenever the result is less than the checker size, the kernel draws color A, otherwise it draws color B. That creates stripes. To produce the checker pattern, we have to apply the technique in both the horizontal and vertical directions, like this:
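Putting the two directions together, the evaluatePixel() function might look like this sketch:

void evaluatePixel()
{
    float2 position = outCoord();
    float vertical = mod( position.x, checkerSize * 2.0 );
    float horizontal = mod( position.y, checkerSize * 2.0 );

    // Draw colorA where the horizontal and vertical stripes agree,
    // and colorB where they differ, producing the checker pattern.
    if ( ( vertical < checkerSize && horizontal < checkerSize ) ||
         ( vertical >= checkerSize && horizontal >= checkerSize ) )
    {
        result = colorA;
    }
    else
    {
        result = colorB;
    }
}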

The ActionScript code used for this example is slightly more complex than the previous examples, both because more parameters are used for the kernel and because the parameters themselves are more complex.

In the example, the init() function is called at the applicationComplete event. This function creates the Shader object and uses the metadata of the kernel parameters to set up the initial values for the controls. It then calls the drawShape() function, which draws a circle using a shader fill.

To update the fill, the drawShape() function is called again whenever a change event is dispatched by one of the controls. The function sets the kernel parameters based on the current control values, clears the current graphics, and redraws the shape:
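A sketch of drawShape() (the control ids, the drawing target, and the circle geometry are illustrative; colorToVector() is the helper function described later in this article):

private function drawShape():void
{
    // Transfer the current control values to the kernel parameters.
    shader.data.checkerSize.value = [ sizeSlider.value ];
    shader.data.colorA.value = colorToVector( colorPickerA.selectedColor );
    shader.data.colorB.value = colorToVector( colorPickerB.selectedColor );

    // Clear and redraw the shape with a shader fill.
    var g:Graphics = canvas.graphics;
    g.clear();
    g.beginShaderFill( shader );
    g.drawCircle( 100, 100, 100 );
    g.endFill();
}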

In the earlier examples we incorporated a slider to set the kernel parameters. The color picker controls used in this example are not quite as straightforward. Parameter values are accessed in ActionScript as an array. In the case of a scalar parameter, such as checkerSize, the array holds a single value. For vector types such as colors, the array contains an element for each element of the vector. Since the colorA and colorB parameters are of type pixel4, there are four values in the array, one for each channel in the pixel. A color in ActionScript, on the other hand, is represented as a single 32-bit uint value containing all the channel information. In addition, the alpha channel is the first channel in a uint color, but it is the last channel in the Shader object parameter array. The example functions, vectorToColor() and colorToVector(), translate the colors between the two forms so that we can transfer color selections between the ColorPicker objects and the shader parameters.

The vectorToColor() function works by multiplying each color channel returned by the Pixel Bender kernel by 255 (hexadecimal value 0xff). The resulting value is then shifted to the corresponding argb position within the 32-bit uint value using the bitwise left shift operator (<<). Finally, the four channels are combined into a single uint with the bitwise OR operator ( | ) and returned.

The colorToVector() function performs the inverse operation. For each channel, the function does a bitwise right shift operation (>>), masks out the bits that belong to any other channel with the bitwise AND operator (&) and divides the result by 256 (0xff) to scale the value between 0 and 1 (decimal). The results are assigned to the appropriate element of an array, in the order expected by Pixel Bender, and then returned.