General Questions About Old Or Limited Graphics Cards

I am working on a game that will (hopefully) squeeze every ounce of potential out of less powerful graphics cards. This includes ATY Radeon (I know it's ATI, but that's what the System Profiler says), GeForce 4 MX, and PVR MBX (iPhone). And I have some (a lot (I lied)) of questions.

With regards to (1), it's because there are software fallbacks. For shaders you can call:

CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUFragmentProcessing, &fragmentGPUProcessing);
CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUVertexProcessing, &vertexGPUProcessing);

to determine whether they are running on the GPU or not.

1) Apple tries to support the same transformation-stage functionality across all hardware. That means ARB_vertex_shader, EXT_geometry_shader4, and EXT_transform_feedback are supported everywhere. They will fall back to software TCL on older hardware.

If this seems confusing, consider that even newer hardware can't support arbitrarily complicated shaders; it can also fall back to software. There are CGL queries to determine if a renderer can support any hardware acceleration of the vertex or fragment stages, as well as whether your current state is accelerated or not.

3) The SIGGRAPH '96 samples include discussion and code for projected self shadowing.

4) You could try stepping through a frame with OpenGL Profiler to see exactly how Dim3 draws shadows.

5) With vertex shaders, you have to put dummy vertices in the input mesh. See Nvidia's paper. With geometry shaders, you can do the silhouette determination and volume generation all in the shader. GPU Gems 3 has an article on this, with example code.

6) You can get the SGI '96 projected self-shadowing sample to run on hardware as old as a Rage 128, using only the texture matrix, two texture units, and an 8 bit alpha texture. No depth textures or shadow comparison hardware required. Obviously, performance/quality are compromised.

arekkusu Wrote:6) You can get the SGI '96 projected self-shadowing sample to run on hardware as old as a Rage 128, using only the texture matrix, two texture units, and an 8 bit alpha texture. No depth textures or shadow comparison hardware required. Obviously, performance/quality are compromised.

How? I looked at it and it only uses the depth texture and shadow extensions.

Quote:Shadow maps can almost be done with the OpenGL 1.0 implementation. What's missing is the ability to compare the texture's r component against the corresponding texel value.

The ARB_shadow extension provides the comparison on a depth texture lookup. Or, in a shader, you can do the comparison yourself by manually comparing the texture value to a reference (usually the texture R coordinate.)

But, if you think about it, you don't need shaders or ARB_shadow to compare a texture lookup with a reference value. You can do it by simply subtracting the two values, with ARB_texture_env_combine. And you don't need a depth texture to capture light Z. You can write distance from the light to a regular 8 bit color channel.

So, you can make a couple of modifications to that shadowmap sample, and run on old hardware (with 8 bit precision):

1) render scene from light's point of view.
1a) use regular depth testing to capture maxZ from the light's view.
1b) to put Z into a color channel, you need to transform Z into a color component. Fog can do this. Or, more flexibly, use the texture matrix to transform vertex Z into texture S coordinates. Use S to look up into a Z ramp texture.
1c) when drawing the scene, mirror any modelview matrix transforms in the texture matrix.

2) render the lit scene normally from the camera's point of view.
2a) this pass lays down the final depth buffer (and base color) that the shadow pass will modify.

3) render shadowed scene from camera's point of view.
3a) set up shadow projection in the usual way (light MVP * object linear scene positions.) TexGen can be replaced by feeding in the scene positions as texcoords directly, since object linear is a passthrough transform.
3b) sample maxZ on unit0. Sample Z ramp texture on unit1, using the same light matrix computed in 1b). Use texture combiners to subtract Z from maxZ. Resulting values <=0 are "in shadow".
3c) use alpha test to discard "in light" values > 0. Use depth test EQUAL to only write pixels exactly matching the scene from 2a).

Various improvements to this are possible, like attenuating the shadow falloff in the combiner based on light Z, or using multiple color channels for jittered light positions. Combine the jittered samples for cheesy shadow antialiasing.