Matrix Concatenation: Once multiple transformation matrices are pre-multiplied into one, the resulting matrix can be used to transform vertices. This is faster than applying each matrix to every vertex separately.

Inverse Matrix: Can be used to transform between spaces in the reverse direction.

Background
One day, an art director I used to work with said, “Programmers always seem to pursue super-realistic 3D graphics, but most gamers get excited about non-realistic graphics with a great style.” I thought about this for a while, and now I completely agree. It is always fun to see programmers find comfort in mathematical correctness, but highly successful games often put more stress on artistic style. Street Fighter 4, Team Fortress and Journey are some great examples of this.

The recent trend in 3D graphics has mostly been realistic rendering, but non-realistic techniques are occasionally introduced to fulfill our artistic needs. The toon shading technique covered in this chapter is one of those. Toon is a short form of cartoon. If you read a comic book, you will probably notice that the shading of an object changes very abruptly, while real-world objects have very smooth shading. The shading technique seen in comic books is toon shading. Still not sure what it is? Then please take a look at Figure 6.1.

Figure 6.1 Toon shader we are going to implement in this chapter

Let’s take a close look at Figure 6.1. How is it different from ordinary diffuse lighting? With diffuse lighting, the shading changes very smoothly across the surface, but, with toon shading, it stays constant for a while and then changes suddenly. In other words, it changes discretely, as if you were walking down stair steps. Now, let’s turn this observation into a graph for a better understanding.

Figure 6.2 Difference between diffuse lighting and toon shading

It makes much more sense, right? No? Um, then how about a comparison table?

Diffuse Lighting    Toon Shading
0                   0
0 ~ 0.2             0.2
0.2 ~ 0.4           0.4
0.4 ~ 0.6           0.6
0.6 ~ 0.8           0.8
0.8 ~ 1             1

Table 6.1 Comparison between diffuse lighting and toon shading

This table should give you a much better idea. Table 6.1 shows that we simply need to turn every range into its upper bound in order to convert diffuse lighting to toon shading. That sounds super easy, so let’s go to RenderMonkey, right away!

Initial Step-by-Step Setup
After launching RenderMonkey, add a new DirectX effect and change the name from Default_DirectX_Effect to ToonShader. You see a matrix, named matViewProjection, right? Please delete it.

Figure 6.1 showed a teapot model, which is often used in 3D graphics because its curvature is really good for demonstrating shader techniques. Therefore, we will use it in this chapter as well. Inside RenderMonkey, find Model in the Workspace panel. Then, right-click on it and select Change Model > Teapot.3ds.

To do toon shading, we should calculate diffuse lighting first, right? So, we need the light position and normal information, just as we did in Chapter 4. First, we will declare a variable for the light position. Right-click on ToonShader and select Add Variable > Float > Float4. A new variable will be added. Now change the name to gWorldLightPosition and set the value to (500, 500, -500, 1). Next, it’s time to retrieve normal information from the vertex buffer. It is as simple as double-clicking on Stream Mapping and adding a NORMAL field. Make sure to set the data type to FLOAT3 and the Index to 0.

Let’s look at Figure 6.1 again. The teapot is green, right? There are different ways of coloring the teapot, but we will simply use a global variable to specify the color. Once a new variable is added via right-clicking on ToonShader and selecting Add Variable > Float > Float3, change its name to gSurfaceColor. Now, double-click on the variable and change the value to (0, 1, 0). You haven’t forgotten that colors are represented as percentage values between 0 and 1, right?

Now we are going to add some matrices. Until now, we always declared three separate matrices for the world, view and projection transforms, but we will do something a bit different here to make things faster. If you concatenate all the matrices into one, you can multiply the resulting matrix to a vector once instead of multiplying the matrices separately. For example, we used to multiply the world, view and projection matrices to a vector in order. But we can multiply these three matrices together first to get another matrix, and multiply that single matrix to the vector to produce the same result. Performance-wise, this is faster because multiplying by one matrix is cheaper than multiplying by three.
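As a quick sketch in HLSL, the two approaches look like this (the separate matrix names here follow the earlier chapters and are illustrative):

```hlsl
// separate multiplications: the vertex is transformed three times
float4 pos = mul(Input.mPosition, gWorldMatrix);
pos = mul(pos, gViewMatrix);
pos = mul(pos, gProjectionMatrix);

// concatenated: the three matrices are combined once (outside the shader),
// so each vertex needs only a single mul()
float4 pos2 = mul(Input.mPosition, gWorldViewProjectionMatrix);
```

Both produce the same position because matrix multiplication is associative; the saving comes from doing the matrix-matrix multiplications once per object instead of once per vertex.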

For the reasons we just discussed, we will concatenate matrices here. Therefore, we only need one global variable to hold the concatenated matrix in the end. Right-click on ToonShader and select Add Variable > Matrix > Float(4x4). Then, change the variable name to gWorldViewProjectionMatrix, and right-click on it to select Variable Semantic > WorldViewProjection.

Multiplying by a matrix only once sounds really great, but don’t we still need a world matrix to calculate diffuse lighting? The light position is defined in world space, so to find a light vector, we need the vertex position in world space. It is the same story with the normal vector, as well. So yes, it seems we should pass a world matrix and do these multiplications twice. But if you think about it a bit more, you can do the same thing with only one matrix multiplication.

The reason why we transformed the vertex position and normal vector into world space was to make all the operands share the same space. You remember that the calculation is wrong if any of them is in a different space, right? The light position is already defined in world space, so our logic was “why not transform the other information into world space?” But here is another way. You can also transform the light position into the local space (of the object being drawn), while leaving the vertex position and normal vector unchanged in local space. This also produces correct results since all the operands are in the same space. Even better, this way is faster because we transform only one vector instead of two.

Then how do we transform something from world space to local space? Simply by multiplying by the inverse of the world matrix. To add the inverse matrix to RenderMonkey, right-click on ToonShader and select Add Variable > Matrix > Float(4x4). Then, change the variable name to gInvWorldMatrix. Finally, right-click on the variable and select Variable Semantic > WorldInverse to finish all the setup!

Figure 6.3 RenderMonkey project after the initial setup

Vertex Shader
I will show you the full source code first, and provide line-by-line explanation after.

The output structure is nothing special, either. The vertex shader will calculate diffuse lighting and pass the result through this structure. If you are not sure what I am talking about here, please review Chapter 4 again.
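The declarations themselves are not reproduced here; based on the variables and Stream Mapping we set up above, they should look roughly like this (a sketch, not the book’s exact listing):

```hlsl
// global variables matching the RenderMonkey setup
float4x4 gWorldViewProjectionMatrix;
float4x4 gInvWorldMatrix;
float4   gWorldLightPosition;

struct VS_INPUT
{
   float4 mPosition : POSITION;
   float3 mNormal : NORMAL;
};

struct VS_OUTPUT
{
   float4 mPosition : POSITION;
   float3 mDiffuse : TEXCOORD1;
};
```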

I believe these were all the variables we added in RenderMonkey, so now we can move on to the vertex shader function.

Vertex Shader Function
First, we will perform the most important role of any vertex shader: transforming the vertex position into projection space. Since the world, view and projection matrices were already merged into one, this can be done with one line of code.
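That line, together with the local-space light vector we just discussed, would look roughly like this (a sketch assuming the variable names above):

```hlsl
// transform the position straight to projection space with the combined matrix
Output.mPosition = mul(Input.mPosition, gWorldViewProjectionMatrix);

// bring the light position into the object's local space, then build the
// light vector without ever leaving local space
float3 lightDir = Input.mPosition.xyz - mul(gWorldLightPosition, gInvWorldMatrix).xyz;
lightDir = normalize(lightDir);
```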

After the above code, the light and normal vectors are both in local space, so we can take the dot product of the two to calculate diffuse lighting correctly.

Output.mDiffuse = dot(-lightDir, normalize(Input.mNormal));

Do you see that Input.mNormal is normalized here? Usually the normals stored in a vertex buffer are already normalized, but we are calling normalize() again just in case.

Now simply return Output.

return( Output );
}

There was nothing hard about the vertex shader function because we already knew everything from Chapter 4. The only difference was that we worked in local space in this chapter. But I believe that’s not a hard idea to understand. Now, let’s take a look at the pixel shader.

Pixel Shader
As with the vertex shader, I’ll show you the full source code first.

First, let’s declare global variables and input structure of the pixel shader. The surface color is declared as a global variable, and the diffuse lighting, calculated in the vertex shader, is passed to PS_INPUT.

float3 gSurfaceColor;

struct PS_INPUT
{
   float3 mDiffuse : TEXCOORD1;
};

Now it’s time to look at the pixel shader function. First, we clamp away the meaningless negative values in mDiffuse.
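As in Chapter 4, this clamp can be done with saturate() (a sketch of the omitted line):

```hlsl
// saturate() clamps each component into [0, 1],
// discarding negative dot-product results
float3 diffuse = saturate(Input.mDiffuse);
```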

Now we will divide diffuse into 5 discrete steps so that each step’s width becomes 0.2. We can use the ceil() function to do this. ceil() rounds the input up to the nearest integer, but what we need is rounding up to the nearest multiple of 0.2. The following code solves our problem:

diffuse = ceil(diffuse * 5) / 5.0f;

Let’s look at the above formula more closely. diffuse is in [0, 1], so multiplying by 5 gives [0, 5]. When ceil() is applied, the result becomes one of 0, 1, 2, 3, 4 or 5. Dividing that by 5 gives us one of these values: 0, 0.2, 0.4, 0.6, 0.8 or 1. This is what we are looking for, right? Figure 6.2 and Table 6.1 say so!

The last thing to do in the pixel shader is multiplying by the surface color, as shown below.

return float4( gSurfaceColor * diffuse.xyz, 1 );
}

Now press F5 to compile the vertex and pixel shaders separately, and look at the preview window. You will see a teapot like the one in Figure 6.1. What? The background color is different? Then right-click inside the preview window and select Clear Color to change it.

(Optional) DirectX Framework
This is an optional section for readers who want to use shaders in a C++ DirectX framework.

First, make a copy of the framework used in Chapter 3 and save it into a new folder. Then, we will save the shader and 3D model that we used in RenderMonkey so that they can be used in the DirectX Framework.

From Workspace panel, find ToonShader and right-click on it. A pop-up menu will appear.

From the pop-up menu, select Export > FX Exporter.

Browse to the folder we just created and save it as ToonShader.fx.

From Workspace panel, find Model and right-click on it. A pop-up menu will appear.

From the pop-up menu, select Save > Geometry Saver.

Browse to the folder we just created and save it as Teapot.x.

Now open the solution file in Visual C++.

First, let’s update the global variables. Find all instances of gpTextureMappingShader and change them to gpToonShader. Also change all instances of the gpSphere variable to gpTeapot. There’s a texture variable, gpEarthDM, too. Since we don’t use any texture in this chapter, please remove all instances of gpEarthDM from the code.

What new global variables did we add to the shader? There were two: the light position and the surface color. Add the following code:
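The listing was omitted here; following the framework’s conventions and the values we used in RenderMonkey, it would be something like this (variable names assumed):

```cpp
// light position and surface color, matching the RenderMonkey values
D3DXVECTOR4 gWorldLightPosition(500.0f, 500.0f, -500.0f, 1.0f);
D3DXVECTOR4 gSurfaceColor(0.0f, 1.0f, 0.0f, 1.0f);
```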

Next is RenderScene(), which actually draws the scene. We need to pass two new matrices: the concatenated world/view/projection matrix and the inverse of the world matrix. First, to find the inverse matrix, add the following lines below the code calculating the world matrix.
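A sketch of the omitted lines (matWorld is assumed to be the world matrix computed just above):

```cpp
// the transpose doubles as the inverse here; see the explanation below
D3DXMATRIXA16 matInvWorld;
D3DXMatrixTranspose(&matInvWorld, &matWorld);
```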

The D3DXMatrixTranspose() function used in the above code finds the transpose of a matrix. The reason we can use the transpose instead of computing a true inverse is that our world matrix contains rotation only, which makes it orthogonal, and the inverse of an orthogonal matrix is the same as its transpose. (If the world matrix contained translation or scaling, we would have to call D3DXMatrixInverse() instead.)

Now it’s time to multiply the world, view and projection matrices together. To do so, we use the D3DXMatrixMultiply() function like this:
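A sketch of the omitted code, assuming matWorld, matView and matProjection are the matrices already available in RenderScene():

```cpp
// combine world, view and projection into one matrix,
// in the same order the vertex would have been multiplied
D3DXMATRIXA16 matWorldViewProjection;
D3DXMatrixMultiply(&matWorldViewProjection, &matWorld, &matView);
D3DXMatrixMultiply(&matWorldViewProjection, &matWorldViewProjection, &matProjection);
```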

Why do different objects have different colors? It is because they absorb and reflect different spectrums of incoming light. For example, a black surface looks black because it absorbs all the spectrums, and a white surface is white because it reflects everything. Likewise, a red surface reflects the red spectrum while absorbing the others.

Then how do we represent this absorption property in a shader? If a surface has only one uniform color across the whole surface, we can just use a global variable for it, right? However, most surfaces have more complicated patterns than this, which means each pixel needs to have a different color. So let’s think of it this way. We are going to “draw” an image on a surface, and this image defines which color will be reflected at each pixel. Once this image is saved as a texture, we can look it up inside a pixel shader and apply it to the lighting result.

Do you remember that we calculated diffuse and specular light separately in Chapter 4? Then do you think we need to multiply this texture by the sum of diffuse and specular light, or not? As mentioned earlier, the reason we can recognize an object is mainly diffuse lighting. (The specular light, on the other hand, adds a tight highlight to it.) Therefore, it is good enough to apply the texture to the diffuse lighting result only. Since this texture is applied only to diffuse lighting, we will call it a diffuse map.

Then how about specular lighting? It is okay to use the same diffuse map for specular lighting, too, but usually a different texture is used, for the following two reasons. First, some surfaces reflect different spectrums for diffuse and specular lighting. Second, specular maps are often used to turn off specularity on certain pixels without affecting diffuse light, which is applied more globally. One great example is our face. When you observe specular light on a human face, do you expect to see a smooth highlight without any noise, as shown in Chapter 4? If you look into a mirror, you will see that the forehead and nose have more specularity. Furthermore, you will see that not every part of your nose or forehead has visible specularity. That is because your face is not perfect, due to pores and facial hair. With specular mapping, this effect can be simulated to some degree. So, we will be using two separate diffuse and specular maps in this chapter.

There is another thing that affects the color of an object: the light color. If you cast a red light on a white object, the object looks reddish, right? The color of light can easily be defined as a global variable.

Initial Step-by-Step Setup
First, make a copy of the RenderMonkey project used in the last chapter, and save it inside another folder. If you forgot to save it in the last chapter, you can use samples\04_lighting\lighting.rfx file from the accompanying code samples.

Now open this file inside RenderMonkey, and change the effect name to SpecularMapping. Once the name is changed, it’s time to add the images that will be used as diffuse and specular maps. Right-click on the effect name, and select Add Texture > Add 2D Texture > Fieldstone.tga from the pop-up menu. You will now see a texture named Fieldstone. Change the name to DiffuseMap.

Then right-click on Pass 0 and select Add Texture Object > DiffuseMap. You will see a texture object, named Texture0. Change the name to DiffuseSampler.

We need to add a specular map now, but I was not able to find a good candidate in the RenderMonkey installation folder. So I hand-made a specular map and included it with the accompanying code samples. Let’s first take a look at what this specular map looks like, side by side with the diffuse map.

Figure 5.1 Diffuse map (left) and specular map (right)

Do you see that the seams between the stone bricks are colored black in the specular map? These seams won’t reflect any specular light. (Still, there is diffuse light, as you can guess from the diffuse map.) One thing to note here is that textures do not always contain color information, as you can see with the specular map in Figure 5.1. The information stored in a specular map is not color; instead, it defines the amount of specular light that will be reflected from each pixel. More generally, it is very common to use a texture whenever there is a value that needs to be controlled per pixel. You will see this technique again later when we implement normal mapping.

Tip: Use textures for the variables that need to be controlled per pixel.

Then, find Samples\05_DiffuseSpecularMapping\Fieldstone_SM.tga file from the accompanying code samples and add it to the RenderMonkey project. To do so, you can simply drag and drop the file onto the effect name. Also change the texture name to SpecularMap, and texture object name to SpecularSampler.

Next, we will add the light color. Right-click on the effect name and select Add Variable > Float > Float3. Change the variable name to gLightColor. Now double-click on the variable to assign a value. To make the light bluish, we will use (0.7, 0.7, 1.0) for the light color.

Last up is Stream Mapping. Unlike in Chapter 4, a texture is used here, so UV coordinates are needed. Double-click on Stream Mapping to add TEXCOORD0. The data type is FLOAT2, of course.

Once all the above steps are finished, your RenderMonkey Workspace should look like Figure 5.2.

Figure 5.2 RenderMonkey project after the initial setup

Vertex Shader
Now let’s look at the vertex shader. I will show you the full source code first, and then explain the newly added code.

Were there any new global variables that need to be added? Yes: a light color and two texture samplers. But texture samplers are used in the pixel shader, so they don’t need to be declared here. Then what about the light color? It can simply be multiplied in the pixel shader too, right? So, nothing needs to be added here!

Then how about the input/output structures? There is one thing to add: the UV coordinates. They will be used by the pixel shader to sample the textures. Let’s add the following line to both the input and output structures of the vertex shader.

float2 mUV: TEXCOORD0;

We also need to add only one line to the vertex shader function. The line below passes the UV coordinates on to the pixel shader.

Output.mUV = Input.mUV;

Pretty simple, right? That’s it for the vertex shader.

Pixel Shader
As we did earlier, the full pixel shader code is listed first, followed by an explanation of the newly added code.

Now, it’s time to sample the diffuse map. Add the following line to the top of the pixel shader function.

float4 albedo = tex2D(DiffuseSampler, Input.mUV);

The sampling result, albedo, is the color that the current pixel reflects. I said we need to multiply this by the diffuse lighting amount and the light color, right? Change the previous code that calculated diffuse to this:

float3 diffuse = gLightColor * albedo.rgb * saturate(Input.mDiffuse);

Now press F5 twice to compile the vertex and pixel shaders, separately. Then look at the preview window.

Figure 5.3 Result with diffuse map only

You should be able to see the brick wall texture, as well as the bluish light. However, the specular lighting does not look right; even the seams have specular light! That is because we haven’t applied the specular map yet. So let’s add the specular map.

Tip: How to Rotate an Object in the Preview Window
To see the specular light on the seams as in Figure 5.3, you will need to rotate the object. To rotate an object in the Preview window, move the mouse around while holding down the left mouse button inside the window. If you want translation or scaling instead of rotation, click the second icon from the right on the toolbar to switch to Overloaded Camera Mode.

Add the following code, which samples the specular map, below where we applied pow() to specular.

float4 specularIntensity = tex2D(SpecularSampler, Input.mUV);

We need to multiply this, as well as the light color, into specular, right? The code that does it is shown below.
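The line was omitted here; assuming the specular variable from the Chapter 4 shader, it would be something like:

```hlsl
// modulate the specular term by the specular map and the light color
specular *= specularIntensity.rgb * gLightColor;
```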

Another problem with the result shown in Figure 5.3 is that the details of the diffuse texture disappear wherever the diffuse light disappears. That is because a constant (0.1, 0.1, 0.1) is used for the ambient light. Since ambient light is indirect light hitting the surface, it should be modulated by the diffuse map as well. Find the code where ambient is calculated and change it like this:

float3 ambient = float3(0.1f, 0.1f, 0.1f) * albedo.rgb;

Again, compile the vertex and pixel shaders, and see the preview window.

Figure 5.4 Result with diffuse/specular map and ambient light

You can see the obvious difference from Figure 5.3, right? The specular light is not that strong anymore, and the seams look perfect, too! In addition, you can see traces of the diffuse map even in the very dark pixels.

(Optional) DirectX Framework
This is an optional section for readers who want to use shaders in a C++ DirectX framework.

First, make a copy of the framework used in Chapter 4 and save it into a new folder. Next, save the shader and 3D model that we used in RenderMonkey into SpecularMapping.fx and Sphere.x files so that they can be used in the DirectX framework. Also copy the two textures used in RenderMonkey into the framework folder; the file names are Fieldstone_DM.tga and Fieldstone_SM.tga.

Now open the Visual C++ solution, and change all instances of gpLightingShader variable to gpSpecularMappingShader in the source code.

Next, go to the global variable section and declare pointers to the two textures, plus a variable to store the light color, as shown below:
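The declarations were omitted here; they would look something like this (the pointer names are assumed for illustration):

```cpp
// two texture pointers and the light color (names assumed)
LPDIRECT3DTEXTURE9 gpStoneDM = NULL;   // diffuse map
LPDIRECT3DTEXTURE9 gpStoneSM = NULL;   // specular map

// the same bluish light color used in RenderMonkey
D3DXVECTOR4 gLightColor(0.7f, 0.7f, 1.0f, 1.0f);
```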

You see that we are using the exact same bluish color, (0.7, 0.7, 1.0), for the light, right? Now we will add some code to release the newly added 3D resources: the two textures declared above. Add the following code to the CleanUp() function.
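A sketch of the release code, using the same assumed pointer names:

```cpp
// release the two textures added in this chapter
if (gpStoneDM)
{
    gpStoneDM->Release();
    gpStoneDM = NULL;
}

if (gpStoneSM)
{
    gpStoneSM->Release();
    gpStoneSM = NULL;
}
```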

Lastly, we will look at the RenderScene() function. The shader is doing all the work already, so we simply need to assign the new variables. Remember where the SetMatrix() calls were made before? Add the following code below those calls.
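The listing was omitted here; assuming the shader pointer and texture names from above, it would be something like:

```cpp
// pass the light color and the two textures to the shader;
// texture parameters need the _Tex postfix, as explained in Chapter 3
gpSpecularMappingShader->SetVector("gLightColor", &gLightColor);
gpSpecularMappingShader->SetTexture("DiffuseSampler_Tex", gpStoneDM);
gpSpecularMappingShader->SetTexture("SpecularSampler_Tex", gpStoneSM);
```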

The above code passes the light color and the two textures to the shader. You still remember that the _Tex postfix must be added when assigning a texture map, right? It was mentioned in Chapter 3.

If you compile and execute the program, you will see the same result you saw in RenderMonkey.

As you probably know by now, there is not much to do in a DirectX Framework when shaders are used. Loading D3D resources and managing shader variables and render states are the only things that the framework should take care of.

Summary

The reason we can recognize the color of an object is that different objects absorb and reflect different spectrums of light.

In 3D graphics, diffuse and specular maps are used to simulate this absorption and reflection of light. A specular map mainly controls the amount of specular reflection on each pixel.

Light color also contributes to the final color.

Textures don’t always store color information. A specular map is a good example of this exception.

This chapter was much easier than expected, right? That’s because we started from the lighting shader implemented in the last chapter. Like diffuse and specular mapping, there are many techniques that can be implemented very easily by extending basic shaders.

However, do not take the material covered in this chapter lightly. Most recent games mix this technique with normal mapping to create pretty impressive visuals.