Now it is time for me to ask.
Can someone describe or suggest a solution for achieving environment mapping as seen in Quake 3? It is (as I think) done with spherical or planar texture coordinate generation.
I am not interested in mixing textures, etc. - just want to know about the math behind it.
The problem is that generic spherical mapping depends on camera rotation (it uses eye coordinates), and it also distorts badly when applied to a plane, while in Quake 3 it does not. So it seems that we need to use planar mapping and/or play with the texture matrix.
Thanks.

Take your transformation matrix and make a transposed copy of it. Multiply your rotation matrix by the transposed one. Then transform your vertex normals with the resulting matrix. The UV coords of each vertex can then be computed with:

The normal tells you in what direction the face is pointing, and that direction alone determines what you will see in the reflection (I mean what you will see, not how it will look because of material properties). This is why you use the normal.

And you also need to know how it is oriented relative to the viewer, i.e. in eye space. This is why you need to transform it.

Well, OK, I got that, but why do we need to transpose the transformation matrix and multiply it with the rotation matrix? It seems to me that using only the rotation matrix would be fine.
Can we just use OpenGL's modelview matrix to get the matrix needed, or do we have to store the transformations and rotations separately for each object?

I don’t know the actual transformation needed to create this kind of environment mapping, but multiplying by the modelview alone isn’t enough. If you do some maths and work out what the transformation matrix for the reflections would look like, you will find that this matrix is the same as the modelview multiplied by its own transpose.

First you need to transform the normal so it’s relative to the viewpoint; this is why you multiply by the modelview matrix. But that only gives you the transformed normal; now you need to transform it again to get the reflection, and that second matrix happens to be the transposed modelview matrix.

But you can also let OpenGL perform the environment mapping. Dunno if it’s the same as in Q3A, but it’s quite nice anyway: use texture coordinate generation and generate sphere-map coordinates.