Tuesday, March 29, 2011

Normals stored in a texture depend on the orientation of the surface they're applied to: they live in what's called tangent space. All the other lighting components, such as the view direction, are supplied in world space. Since we can't compare the two directly, why not convert every lighting component we need to compare the normal with into tangent space? Why not compare apples to apples?

Changing coordinate systems requires a transformation. I'll skip the hardcore math, but what I do want to explain here is that we need a matrix to transform from world space to tangent space, just like we need a matrix to get from object space to world space. Remember this:

We need the surface orientation, because that's what the texture normals depend on.

We know everything about our surface (a triangle).

Any lighting component we need in the Pixel Shader (light direction, view direction, surface direction) needs to be multiplied by the resulting matrix.

We need to create a 3x3 matrix to be able to convert vectors to surface-relative (tangent-space) ones. This matrix is built by placing the three vectors in a matrix as rows and then transposing it in the Vertex Shader:

// tangentin, binormalin and normalin are 3D vectors supplied by the CPU
float3x3 tbnmatrix = transpose(float3x3(tangentin, binormalin, normalin));

// then multiply any vector we need in tangent space (the ones to be compared to
// the normal in the texture). For example, the light direction:
float3 lightdirtangent = mul(lightdir, tbnmatrix);
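The per-vertex tangent, binormal and normal the shader expects have to be computed on the CPU, and as noted above we know everything about our surface (a triangle). Here's a minimal sketch in Python rather than shader code — the function name and the example triangle are my own — solving edge1 = dU1*T + dV1*B and edge2 = dU2*T + dV2*B for the tangent T and binormal B:

```python
import math

def tangent_basis(p0, p1, p2, uv0, uv1, uv2):
    """Derive tangent, binormal and normal for one textured triangle."""
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    # solve e1 = du1*T + dv1*B and e2 = du2*T + dv2*B for T and B
    r = 1.0 / (du1 * dv2 - du2 * dv1)
    tangent  = [(dv2 * e1[i] - dv1 * e2[i]) * r for i in range(3)]
    binormal = [(du1 * e2[i] - du2 * e1[i]) * r for i in range(3)]
    # the normal is just the normalized cross product of the two edges
    normal = [e1[1] * e2[2] - e1[2] * e2[1],
              e1[2] * e2[0] - e1[0] * e2[2],
              e1[0] * e2[1] - e1[1] * e2[0]]
    length = math.sqrt(sum(c * c for c in normal))
    normal = [c / length for c in normal]
    return tangent, binormal, normal

# a flat triangle in the XY plane with an axis-aligned UV layout:
t, b, n = tangent_basis((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
print(t, b, n)  # -> [1.0, 0.0, 0.0] [0.0, 1.0, 0.0] [0.0, 0.0, 1.0]
```

For a flat, axis-aligned triangle the basis comes out as the world axes themselves, which is a handy sanity check.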

Then we're almost done. The only thing left to do is pass all the converted vectors to the Pixel Shader. Inside that Pixel Shader, retrieve the normal from the texture. You should now end up with, for example, the light direction in tangent space. Then do your lighting calculations as you always would, the only exception being the source of the normal:

// we're inside a Pixel Shader now
// texture coordinates are equal to the ones used for the diffuse color map
float3 normal = tex2D(normalmapsampler, coordin);

// color is stored in the [0,1] range (0 - 255), but we want our normals to be
// in the range of [-1,1].
// solution: multiply them by 2 (yields [0,2]) and subtract one (yields [-1,1]).
normal = 2.0f * normal - 1.0f;

// now that we've got our normal to work with, obtain (for example) lightdir
// for Phong shading
// lightdirtangentin is the same vector as lightdirtangent in the VS around
// 20 lines above
float3 lightdir = normalize(lightdirtangentin);

/* use the variables as you would always do with your favourite lighting model */
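To see those pixel-shader steps end to end (decode the stored color, normalize the tangent-space light direction, dot the two), here's a small Python illustration outside the GPU — the helper names are made up, and the lighting model is plain Lambert diffuse:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal_rgb, lightdir_tangent):
    # decode the stored [0,1] color into a [-1,1] normal, then take N dot L
    n = normalize(tuple(2.0 * c - 1.0 for c in normal_rgb))
    l = normalize(lightdir_tangent)
    return max(sum(a * b for a, b in zip(n, l)), 0.0)

# an unperturbed texel lit straight-on gives full intensity...
print(diffuse((0.5, 0.5, 1.0), (0.0, 0.0, 1.0)))  # -> 1.0
# ...and a light grazing along the surface gives none:
print(diffuse((0.5, 0.5, 1.0), (1.0, 0.0, 0.0)))  # -> 0.0
```

Because both vectors are in tangent space, the dot product is meaningful — exactly the apples-to-apples comparison from the start of this post.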

You might've been wondering why you see lots of people doing their vector calculations in what they call texture or tangent space. Why can't they just do them in, say, world space?

Well, because they want their special texture maps to be portable.

Let's take normals for example: 3D vectors which can be stored in textures to provide detail normals for every pixel on a texture (texel). Textures which store normals are called normal or bump maps. Here's an example of a normal map of a brick wall:

This image has three channels: R, G and B. They store the X, Y and Z components of the normal that's supposed to be at that given texel. This normal can be used to change surface lighting without having to add geometry. The only downside is that the object will still look flat from the side: it's just a trick to change the lighting response, not to add actual depth. The lighting changes because these normals are used in the color calculations instead of the interpolated per-vertex normals passed on by the Vertex Shader.

Now about the colors: red corresponds to normals pointing along the surface (the tangent direction), green to normals pointing left along the surface (the bitangent direction) and blue to normals pointing up, away from the surface.
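As a quick sanity check on that color convention, the "flat blue" texel you see all over such maps decodes to a normal pointing straight away from the surface. In Python (hypothetical helper name, using the same 2*c - 1 decode as the shader above):

```python
def unpack_normal(rgb):
    # expand a [0,1] color into a [-1,1] normal: n = 2*c - 1
    return tuple(2.0 * c - 1.0 for c in rgb)

# the typical "flat blue" texel, (128, 128, 255) sampled as (0.5, 0.5, 1.0):
print(unpack_normal((0.5, 0.5, 1.0)))  # -> (0.0, 0.0, 1.0)
```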

Let's say we were using these vectors as world space normals. That would work perfectly fine for a surface lying flat on its back: the texture reader would read (0,0,255) and conclude the current normal points straight up in world space. Now imagine an object on its belly with this texture. The GPU reading the texture will still read (0,0,255), believing the normal still points up in world space. This is not the case, since the object itself has rotated. Oops.

Looks like we can't just pluck world-space normals out of a texture, because they are dependent on the object's orientation. The normals are relative to the surface the texture is applied to, so we need to know the surface orientation to be able to use them. There's no way to store portable world-space normals in a texture.
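That failure mode takes only a couple of lines of Python to demonstrate — names are my own, and the rotation stands in for the "object on its belly" above:

```python
def rotate_x_90(v):
    # rotate a vector 90 degrees about the X axis: (x, y, z) -> (x, -z, y)
    x, y, z = v
    return (x, -z, y)

stored_normal = (0.0, 0.0, 1.0)                # texel (0, 0, 255): "straight up"
surface_normal = rotate_x_90((0.0, 0.0, 1.0))  # the object tipped over
print(surface_normal)  # -> (0.0, -1.0, 0.0)
# The texture still answers (0, 0, 1), so a world-space interpretation is now
# wrong by 90 degrees: the stored normal no longer matches the surface.
```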
