Getting started

The vertex shader and pixel shader are contained in the same code file, called an Effect. The vertex shader is responsible for transforming geometry from object space into screen space, usually using the world, view, and projection matrices. The pixel shader's job is to calculate the color of every pixel onscreen. It is given information about the geometry visible at whatever point onscreen it is being run for, and takes lighting, texturing, and so on into account.

For your convenience, I've provided the starting code for this article here.

Assigning a shader to a model

In order to draw a model with XNA, it needs to have an instance of the Effect class assigned to it. Recall from the first chapter that each ModelMeshPart in a Model has its own Effect. This is because each ModelMeshPart may need to have a different appearance, as one ModelMeshPart may, for example, make up armor on a soldier while another may make up the head. If the two used the same effect (shader), then we could end up with a very shiny head or a very dull piece of armor. Instead, XNA provides us the option to give every ModelMeshPart a unique effect.

In order to draw our models with our own effects, we need to replace the BasicEffect of every ModelMeshPart with our own effect loaded from the content pipeline. For now, we won't worry about the fact that each ModelMeshPart can have its own effect; we'll just be assigning one effect to an entire model. Later, however, we will add more functionality to allow different effects on each part of a model.

However, before we start replacing the instances of BasicEffect assigned to our models, we need to extract some useful information from them, such as which texture and color to use for each ModelMeshPart. We will store this information in a new class that each ModelMeshPart will keep a reference to using its Tag property:
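The original listing for this class was not preserved here, so the following is a sketch of what the MeshTag class and a generateTags() helper might look like, based on the surrounding text (the exact field names and types are assumptions):

```csharp
public class MeshTag
{
    public Vector3 Color;
    public Texture2D Texture;
    public float SpecularPower;
    public Effect CachedEffect = null;

    public MeshTag(Vector3 color, Texture2D texture, float specularPower)
    {
        this.Color = color;
        this.Texture = texture;
        this.SpecularPower = specularPower;
    }
}

// Store each ModelMeshPart's original diffuse color, texture, and
// specular power (from its BasicEffect) in a MeshTag on its Tag property
private void generateTags()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (part.Effect is BasicEffect)
            {
                BasicEffect effect = (BasicEffect)part.Effect;
                MeshTag tag = new MeshTag(effect.DiffuseColor,
                    effect.Texture, effect.SpecularPower);
                part.Tag = tag;
            }
}
```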

This function will be called along with buildBoundingSphere() in the constructor:

...

buildBoundingSphere();
generateTags();

...

Notice that the MeshTag class has a CachedEffect variable that is not currently used. We will use this value as a location to store a reference to an effect that we want to be able to restore to the ModelMeshPart on demand. This is useful when we want to draw a model using a different effect temporarily without having to completely reload the model's effects afterwards. The functions that will allow us to do this are as shown:

// Store references to all of the model's current effects
public void CacheEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            ((MeshTag)part.Tag).CachedEffect = part.Effect;
}
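The text refers to "functions" in the plural, but only the caching half survives here. A likely companion that restores the cached effects might look like this (a sketch, mirroring the structure of CacheEffects):

```csharp
// Restore the effects referenced by the model's cache
public void RestoreEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (((MeshTag)part.Tag).CachedEffect != null)
                part.Effect = ((MeshTag)part.Tag).CachedEffect;
}
```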

We are now ready to start assigning effects to our models. We will look at this in more detail in a moment, but it is worth noting that every Effect has a dictionary of effect parameters. These are variables that the Effect takes into account when performing its calculations—the world, view, and projection matrices, or colors and textures, for example. We modify a number of these parameters when assigning a new effect, so that each ModelMeshPart can be informed of its specific properties, such as its texture:

// Sets the specified effect parameter to the given effect, if it
// has that parameter
void setEffectParameter(Effect effect, string paramName, object val)
{
    if (effect.Parameters[paramName] == null)
        return;

    if (val is Vector3)
        effect.Parameters[paramName].SetValue((Vector3)val);
    else if (val is bool)
        effect.Parameters[paramName].SetValue((bool)val);
    else if (val is Matrix)
        effect.Parameters[paramName].SetValue((Matrix)val);
    else if (val is Texture2D)
        effect.Parameters[paramName].SetValue((Texture2D)val);
}

This function's copyEffect parameter is very important. If we specify false—telling the CModel not to copy the effect for each ModelMeshPart—any changes made to the effect will be reflected everywhere else the effect is used. This is a problem if we want each ModelMeshPart to have a different texture, or if we want to use the same effect on multiple models. Instead, we can specify true to have the CModel copy the effect for each mesh part so that each part can set its own effect parameters:
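The SetModelEffect listing itself was not preserved here; a sketch consistent with the description above (the MeshTag fields and parameter names are assumptions carried over from earlier) might look like:

```csharp
public void SetModelEffect(Effect effect, bool copyEffect)
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            Effect toSet = effect;

            // Copy the effect if necessary, so this part can have
            // its own parameter values
            if (copyEffect)
                toSet = effect.Clone();

            MeshTag tag = (MeshTag)part.Tag;

            // If this ModelMeshPart has a texture, set it to the effect
            if (tag.Texture != null)
            {
                setEffectParameter(toSet, "BasicTexture", tag.Texture);
                setEffectParameter(toSet, "TextureEnabled", true);
            }
            else
                setEffectParameter(toSet, "TextureEnabled", false);

            // Set our remaining parameters to the effect
            setEffectParameter(toSet, "DiffuseColor", tag.Color);
            setEffectParameter(toSet, "SpecularPower", tag.SpecularPower);

            part.Effect = toSet;
        }
}
```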

Finally, we need to update the Draw() function to handle Effects other than BasicEffect:
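The updated Draw() body was lost in this copy; a sketch of the relevant loop (names such as modelTransforms, baseWorld, and CameraPosition are assumed to be members of the CModel class) might look like:

```csharp
foreach (ModelMesh mesh in Model.Meshes)
{
    Matrix localWorld = modelTransforms[mesh.ParentBone.Index] * baseWorld;

    foreach (ModelMeshPart meshPart in mesh.MeshParts)
    {
        Effect effect = meshPart.Effect;

        if (effect is BasicEffect)
        {
            // The stock effect exposes matrices as properties
            ((BasicEffect)effect).World = localWorld;
            ((BasicEffect)effect).View = View;
            ((BasicEffect)effect).Projection = Projection;
            ((BasicEffect)effect).EnableDefaultLighting();
        }
        else
        {
            // Custom effects receive matrices through named parameters
            setEffectParameter(effect, "World", localWorld);
            setEffectParameter(effect, "View", View);
            setEffectParameter(effect, "Projection", Projection);
            setEffectParameter(effect, "CameraPosition", CameraPosition);
        }
    }

    mesh.Draw();
}
```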

Creating a simple effect

We will create our first effect now, and assign it to our models so that we can see the result. To begin, right-click on the content project, choose Add New Item, and select Effect File. Call it something like SimpleEffect.fx:

The code for the new file is as follows. Don't worry, we'll go through each piece in a moment:
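The full listing did not survive extraction; assembled from the pieces described in the breakdown below (gray output, vs_1_1/ps_2_0, the struct and function names given later), the complete file plausibly looks like:

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    // Transform from object space to screen space
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Every pixel covered by the model is drawn gray
    return float4(.5, .5, .5, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
```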

To assign this effect to the models in our scene, we need to first load it in the game's LoadContent() function, then use the SetModelEffect() function to assign the effect to each model. Add the following to the end of the LoadContent function:
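A sketch of those LoadContent() additions (the models collection and its indices are assumptions about the starting code):

```csharp
// Load the custom effect and assign a cloned copy to each model
Effect simpleEffect = Content.Load<Effect>("SimpleEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);
```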

If you were to run the game now, you would notice that the models appear both flat and gray. This is the correct behavior, as the effect doesn't have the code necessary to do anything else at the moment. After we break down each piece of the shader, we will add some more exciting behavior:

Let's begin at the top. The first three lines in this effect are its effect parameters. These three should be familiar to you—they are the world, view, and projection matrices (in HLSL, float4x4 is the equivalent of XNA's Matrix class). There are many types of effect parameters, and we will see more later.

float4x4 World;
float4x4 View;
float4x4 Projection;

The next few lines are where we define the structures used in the shaders. In this case, the two structs are VertexShaderInput and VertexShaderOutput. As you might guess, these two structs are used to send input into the vertex shader and retrieve the output from it. The data in the VertexShaderOutput struct is then interpolated between vertices and sent to the pixel shader. This way, when we access the Position value in the pixel shader for a pixel that sits between two vertices, we will get the actual position of that location instead of the position of one of the two vertices. In this case, the input and output are very simple: just the position of the vertex before and after it has been transformed using the world, view, and projection matrices:
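The struct definitions themselves were lost here; given the description (position only, before and after transformation), they would plausibly be:

```hlsl
struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};
```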

A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter.

Basically, we need to specify what we intend to do with each member of our structs so that the graphics card can correctly map the vertex shader's outputs with the pixel shader's inputs. For example, in the previous code, we use the POSITION0 semantics to tell the graphics card that this value is the one that holds the position at which to draw the vertex.

The next few lines are the vertex shader itself. Basically, we are just multiplying the input (object space or untransformed) vertex position by the world, view, and projection matrices (the mul function is part of HLSL and is used to multiply matrices and vertices) and returning that value in a new instance of the VertexShaderOutput struct:
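A sketch of that vertex shader, matching the description above:

```hlsl
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    // Object space -> world space -> view space -> screen space
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    return output;
}
```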

The next bit of code makes up the pixel shader. It accepts a VertexShaderOutput struct as its input (passed from the vertex shader), and returns a float4—equivalent to XNA's Vector4 class, in that it is basically a set of four floating point (decimal) numbers. We use the COLOR0 semantic for our return value to let the pipeline know that this function is returning the final pixel color. In this case, those numbers represent the red, green, blue, and transparency values, respectively, of the pixel that we are shading. In this extremely simple pixel shader, we just return the color gray (.5, .5, .5), so any pixel covered by the model we are drawing will be gray (as in the previous screenshot).
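That pixel shader, reconstructed from the description:

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Red, green, blue, and alpha: solid gray, fully opaque
    return float4(.5, .5, .5, 1);
}
```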

The last part of the shader is the shader definition. Here, we tell the graphics card which vertex and pixel shader versions to use (every graphics card supports a different set, but in this case we are using vertex shader 1.1 and pixel shader 2.0) and which functions in our code make up the vertex and pixel shaders:
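A sketch of that shader definition, using the vertex shader 1.1 and pixel shader 2.0 profiles mentioned above (the technique and pass names are the conventional defaults):

```hlsl
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
```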

Texture mapping

Let's now improve our shader by allowing it to render the textures each ModelMeshPart has assigned. As you may recall, the SetModelEffect function in the CModel class attempts to set the texture of each ModelMeshPart to its respective effect. However, it attempts to do so only if it finds the BasicTexture parameter on the effect. Let's add this parameter to our effect now (under the world, view, and projection properties):

texture BasicTexture;

We need one more parameter in order to draw textures on our models: an instance of a sampler. The sampler is used by HLSL to retrieve the color of a texture at a given position. We will use it in our pixel shader to retrieve the texture color corresponding to the point on the model we are shading:
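The sampler declaration was lost here; a minimal version tied to the BasicTexture parameter, plus the TextureEnabled flag that SetModelEffect looks for, might look like this (the TextureEnabled parameter is assumed from the surrounding text):

```hlsl
sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
};

// Lets the C# side disable texturing for parts without a texture
bool TextureEnabled = true;
```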

Every model that has a texture should also have what are called texture coordinates. The texture coordinates are basically two-dimensional coordinates called UV coordinates that range from (0, 0) to (1, 1) and that are assigned to every vertex in the model. These coordinates correspond to the point on the texture that should be drawn onto that vertex. A UV coordinate of (0, 0) corresponds to the top-left of the texture and (1, 1) corresponds to the bottom-right. The texture coordinates allow us to wrap two-dimensional textures onto the three-dimensional surfaces of our models. We need to include the texture coordinates in the input and output of the vertex shader, and add the code to pass the UV coordinates through the vertex shader to the pixel shader:
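A sketch of those struct additions and the vertex shader pass-through (added to the structs shown earlier):

```hlsl
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

// In the vertex shader, the UV coordinates pass through unchanged:
// output.UV = input.UV;
```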

Finally, we can use the texture sampler, the texture coordinates (also called UV coordinates), and HLSL's tex2D function to retrieve the texture color corresponding to the pixel we are drawing on the model:
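The updated pixel shader might look like this (starting from white and multiplying in the sampled texture color, with the assumed TextureEnabled flag guarding the lookup):

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 output = float3(1, 1, 1);

    if (TextureEnabled)
        output *= tex2D(BasicTextureSampler, input.UV);

    return float4(output, 1);
}
```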

If you run the game now, you will see that the textures are properly drawn onto the models:

Texture sampling

The problem with texture sampling is that we are rarely able to simply copy each pixel from a texture directly onto the screen because our models bend and distort the texture due to their shape. Textures are distorted further by the transformations we apply to our models—rotation and other transformations. This means that we almost always have to calculate the approximate position in a texture to sample from and return that value, which is what HLSL's sampler2D does for us. There are a number of considerations to make when sampling.

How we sample from our textures can have a big impact on both our game's appearance and performance. More advanced sampling (or filtering) algorithms look better but slow down the game. Mip mapping refers to the use of multiple sizes of the same texture. These multiple sizes are calculated before the game is run and stored in the same texture, and the graphics card will swap them out on the fly, using a smaller version of the texture for objects in the distance, and so on. Finally, the address mode that we use when sampling will affect how the graphics card handles UV coordinates outside the (0, 1) range. For example, if the address mode is set to "clamp", the UV coordinates will be clamped to (0, 1). If the address mode is set to "wrap," the coordinates will be wrapped through the texture repeatedly. This can be used to create a tiling effect on terrain, for example.

For now, because we are drawing so few models, we will use anisotropic filtering. We will also enable mip mapping and set the address mode to "wrap".
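Those settings would be applied in the sampler_state block; a sketch with anisotropic filtering, linear mip filtering, and wrap addressing:

```hlsl
sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
    MinFilter = Anisotropic;  // Minification filter
    MagFilter = Anisotropic;  // Magnification filter
    MipFilter = Linear;       // Blend between mip map levels
    AddressU = Wrap;          // Wrap UVs outside the (0, 1) range
    AddressV = Wrap;
};
```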

Diffuse colors

Looking at the screenshot that appears immediately before the Texture sampling section, there is a problem with our render: some parts of the model are completely white. This is because this particular model does not have textures assigned to those pieces. Right now, if we don't have a texture assigned, our effect simply defaults to white. However, this model also specifies what are called diffuse colors. These are basic color values assigned to each ModelMeshPart. In this case, drawing the diffuse colors will fix our problem. We are already loading the diffuse colors into the MeshTag class, so all we need to do is add a parameter for them to our effect:

float3 DiffuseColor = float3(1, 1, 1);

Now we can make a small change to our pixel shader to use the diffuse color values instead of white:

float3 output = DiffuseColor;

Ambient lighting

Our model is now textured correctly and is using the correct diffuse color values, but it still looks flat and uninteresting. With the groundwork out of the way, we can start recreating some of the lighting effects provided by the BasicEffect shader. Let's start with ambient lighting. Ambient lighting is an attempt at simulating all the light that bounces off of other objects, the ground, and so on, which would be found in the real world. If you look at an object that doesn't have light shining directly onto it, you can still see it, because light has bounced off of other objects nearby and lit it somewhat. As we can't possibly simulate all the bounced rays of light (technically we can, with a technique called ray tracing, but this is very slow), we instead simplify them into a constant color value. To add an ambient light value, we simply add another effect parameter:

float3 AmbientLightColor = float3(.1, .1, .1);

Now, we once again need only a small modification to the pixel shader:

float3 output = DiffuseColor + AmbientLightColor;

This will produce the following output (if you're following along with the code files, I've changed the model to a teapot at this point, as it will demonstrate lighting better due to its shape). The object should now look darker, as this light is mainly meant to fill in darker areas as though light were being bounced onto the object from its surroundings.

Lambertian directional lighting

Our next lighting type, directional lighting, is meant to provide some definition to our objects. The formula used for this lighting type is called Lambertian lighting, and is very simple:

kdiff = max(l • n, 0)

This equation simply means that the lighting amount on a given face is the dot product of the light vector and the face's normal. The dot product is defined as:

X • Y = |X| |Y| cos θ

In the previous equation, θ is the angle between the two vectors, and |X| means the length of vector X. Because our vectors are normalized (their length is 1), we can disregard the length terms. The result is that the dot product is simply cos θ: 1 if the two vectors are parallel, and 0 if they are perpendicular. This is perfect for lighting, as the dot product of the normal and light direction will return 1 if the light is shining directly onto the surface (parallel to the normal vector), and 0 if it is perpendicular to the surface. In the Lambertian lighting equation, we clamp the light value to 0 to avoid negative light values.

To perform this lighting calculation, we need to retrieve the normals in the vertex shader so that they can be passed to the pixel shader where the calculation is done. We first need to update our vertex shader's input and output structs. (Note that the output uses the TEXCOORD1 semantic to store the normals. There are numerous texture coordinate channels, and they can be used to store data that is not strictly texture coordinates or of type float2.):
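A sketch of those updated structs, with the normal stored in TEXCOORD1 on the output as described:

```hlsl
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};
```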

We next need to update the vertex shader to transform the normals passed in (which are in object space) with the world matrix to move them into world space. The following line does just this, and should be inserted before the VertexShaderOutput is returned from the vertex shader. Note that the value is "normalized" with the normalize() function. This resizes the normal vector to length 1 as it may have been scaled by the world matrix, which would cause the lighting to be incorrect later on. As discussed earlier, keeping our vectors at length 1 keeps dot products simple.

output.Normal = normalize(mul(input.Normal, World));

We also need to add a parameter at the beginning of the effect for the light direction. While we're at it, we will also add a parameter for the light color:
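A plausible pair of parameters (the default values here are assumptions):

```hlsl
float3 LightDirection = float3(1, 1, 1);
float3 LightColor = float3(.9, .9, .9);
```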

Finally, we can update the pixel shader to perform the lighting calculation. Note that once again we use the normalize() function, this time to ensure that the user-provided light direction vector has a length of 1. The dot product of the vertex's normal and the light direction is multiplied by the light color and added to the total amount of light. Note that the saturate() function clamps a value to the 0–1 range, avoiding negative light amounts:
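A sketch of that pixel shader, assuming the parameter names used earlier in the article (DiffuseColor, AmbientLightColor, LightDirection, LightColor, TextureEnabled, and the BasicTexture sampler):

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 color = DiffuseColor;

    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV);

    // Start with the ambient term, then add the Lambertian term
    float3 lighting = AmbientLightColor;

    float3 lightDir = normalize(LightDirection);
    float3 normal = normalize(input.Normal);

    lighting += saturate(dot(lightDir, normal)) * LightColor;

    return float4(saturate(lighting) * color, 1);
}
```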

Phong specular highlights

Our object is now looking very defined, but it is still missing one thing: specular highlights. Specular highlights are the formal term for the "shininess" you see when looking at the reflection of a light source on an object's surface. Think of the highlights as if you were looking at a light bulb through a mirror—you can clearly see the light source. Now, fog the mirror. You can't see the light source, but you can see the circular gradient effect where the light source would appear on a shinier surface. It should be no surprise, then, that the formula for calculating specular highlights (the Phong shading model) is as follows:

kspec = max(r • v, 0)^n

Translate this to a mirror: r is the ray of light from the light source reflected off the mirror (a mirror image of the last equation's l), and v is the view direction. Given the behavior of the last equation and the dot product, it makes sense that the closer v—the direction your eye is facing—is to the reflected light vector, the brighter the specular highlight will appear. If you were to move your eye far enough across the mirror, you would lose sight of the light entirely. In that case, there would be no specular highlight, as light would no longer be reflecting off the mirror into your eye.

The previous equation also includes an exponent, n. Unlike the last equation, this is not the normal of the polygon, but rather the "shininess" of the object. As the result of the dot product is between 0 and 1, the higher the exponent n, the closer to 0 the result of raising the dot product to it will be, and thus the smaller and sharper the highlight will appear. In order to implement this, we will need three more shader parameters:

float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);

float3 CameraPosition;

The camera position is set automatically by the CModel class, so we can now update the vertex shader to calculate the direction between each vertex and the camera position—the view direction. First, we need to be sure to pass this value out of the vertex shader:
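The remaining specular code was lost in this copy; a sketch of the additions, assuming the variable names from the earlier lighting code (lightDir, normal, lighting):

```hlsl
// Added to VertexShaderOutput:
//     float3 ViewDirection : TEXCOORD2;

// In the vertex shader, after worldPosition has been calculated:
//     output.ViewDirection = worldPosition.xyz - CameraPosition;

// In the pixel shader, after the Lambertian term has been added:
float3 refl = reflect(lightDir, normal);
float3 view = normalize(input.ViewDirection);

lighting += pow(saturate(dot(refl, view)), SpecularPower) * SpecularColor;
```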

Creating a Material class to store effect parameters

By now we have accumulated a large number of effect parameters in our lighting effect. It would be great if we had an easy way to set and change them from our C# code, so we will add a class called Material that does solely this. Each Model will have its own material that will store surface properties such as specularity, diffuse color, and so on. This class will then handle setting those properties to an instance of the Effect class. As we may have many different types of material, we will also set up a base class:
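A sketch of the base class and one concrete material for our lighting effect (the LightingMaterial name, its properties, and its default values are assumptions based on the parameters accumulated above):

```csharp
public class Material
{
    // The simplest material makes no changes to the effect
    public virtual void SetEffectParameters(Effect effect)
    {
    }
}

public class LightingMaterial : Material
{
    public Vector3 AmbientColor { get; set; }
    public Vector3 LightDirection { get; set; }
    public Vector3 LightColor { get; set; }
    public Vector3 SpecularColor { get; set; }

    public LightingMaterial()
    {
        AmbientColor = new Vector3(.1f, .1f, .1f);
        LightDirection = new Vector3(1, 1, 1);
        LightColor = new Vector3(.9f, .9f, .9f);
        SpecularColor = new Vector3(1, 1, 1);
    }

    public override void SetEffectParameters(Effect effect)
    {
        // Only set a parameter if the effect actually defines it
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientColor);
        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["SpecularColor"] != null)
            effect.Parameters["SpecularColor"].SetValue(SpecularColor);
    }
}
```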

We will now update the model class to use the Material class. First, we need an instance of the Material class:

public Material Material { get; set; }

We initialize it to a new Material in the constructor. As this is the simplest material, it makes no changes to the effect when drawing. This is good because, by default, that effect is a BasicEffect, which does not have our custom parameters.

this.Material = new Material();

Finally, after the world, view, and projection matrices have been set to the effect in the Draw() function, we will call SetEffectParameters() on our material:

Material.SetEffectParameters(effect);

We can finally update the Game1 class to set the material to the model in the LoadContent() method, immediately after setting the effect to the model:
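A sketch of that assignment (the models collection and variable name are assumptions about the starting code):

```csharp
Material modelMaterial = new Material();
models[0].Material = modelMaterial;
```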

So if we now, for example, wanted to light our pot blue with red ambient light, we could set the following options to the modelMaterial and they would be automatically reflected onto the shader when the model is drawn:
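For example, assuming the LightingMaterial subclass sketched earlier, the blue pot with red ambient light might be set up like this:

```csharp
LightingMaterial modelMaterial = new LightingMaterial();
modelMaterial.LightColor = Color.Blue.ToVector3();
modelMaterial.AmbientColor = Color.Red.ToVector3() * .15f;

models[0].Material = modelMaterial;
```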

Summary

Now that you've completed this article, you've learned the basics of shading—what the programmable pipeline is, what shaders and effects are, and how to write them in HLSL. You saw how to implement a number of lighting types with HLSL, as well as other effects such as diffuse colors and texturing. You upgraded the CModel class to support custom shaders and created a Material class to manage effect parameters.
