When you have a new programming platform where you can render to a pixel buffer, there are two types of "Hello World" applications I usually do: either a Mandelbrot set, or a small raytracer of spheres. Pixel Shader 3.0 is such a platform, because it's the first shader model version that allows for conditional branching and other cool things. It's still not as flexible as regular CPU programming, but it's good enough for simple effects like raytracing of spheres and planes.

On the other hand, GPUs are so fast that even brute force implementations of raytracing run fast at high screen resolutions. This is perfect for tiny demos (like four kilobyte demos), where you don't really have room for acceleration structures or anything. So, I think it was in 2006 when I made that little demo called Kinderpainter as an experiment in GLSL coding. I decided to implement a simple Whitted raytracer: only local lighting plus one shadow and one perfectly specular reflection. I used two spheres, two cylinders and two planes as the scene, and I allowed them to move so I could build two or three different virtual scenes, and so I could synchronize the movements to the music too. The whole image was synthesized on a quad covering the complete screen, to which a pixel shader was attached. The shader was responsible for creating the image, and the CPU was just left with the code to create a desktop window, initialize OpenGL and the shader, move the objects with some trigonometric functions, and render the quad.

As a simple raytracer, what the code does is, for each pixel, cast a ray into the scene to find the closest intersected object. That's done in the calcinter() function in the pixel shader below. The implementation just calls the intersection functions for the six objects in the scene (two spheres, two cylinders and two planes), while it keeps track of the closest intersection at all times.

Then the code calls the shading function calcshade(). The first thing this one does is compute the normal of the object at the intersection point, by calling calcnor(). Depending on the primitive type, that function executes the necessary computations. Then calcshade() does some basic diffuse and specular lighting calculations. It also calls calcshadow() to decide whether the point being shaded is in shadow or not. This function is a simplified version of the regular intersection function calcinter() (it's simpler because we don't need to find the closest intersected object, we just need to know if any object was intersected at all).

Finally, the main() function, the one executed for each pixel, just calls calcinter() and calcshade() and returns the result to the hardware so the pixel on the screen is colored. Normally a Whitted raytracer would recursively call calcinter() and calcshade() from within the calcshade() function, up to a number of levels, say 4. However, since current GPU shader models don't support recursion yet, I used the trick of writing a nonrecursive version of calcshade() and calling calcinter() and calcshade() a second time from the main() function, with the right reflection ray as argument.
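The unrolled bounce boils down to the standard mirror formula plus one extra trace. A hedged C sketch, where the two shading results are passed in as plain floats standing in for the calcinter()+calcshade() pairs:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Mirror reflection of incoming direction d about unit normal n:
   r = d - 2*(d.n)*n.  This gives the ray for the second trace. */
static vec3 reflect3(vec3 d, vec3 n)
{
    float k = 2.0f * dot3(d, n);
    vec3 r = { d.x - k*n.x, d.y - k*n.y, d.z - k*n.z };
    return r;
}

/* One-bounce combination, no recursion: the primary shade plus the shade
   found along the reflected ray, weighted by a reflectivity factor.
   The 0..1 reflectivity parameter is an illustrative assumption. */
static float pixel_color(float primary, float secondary, float reflectivity)
{
    return primary + reflectivity * secondary;
}
```

Unrolling one level by hand like this is exactly equivalent to a recursion depth of 2, which is enough for a single perfect reflection.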

The complete pixel shader is below, and you have a live version online in Shadertoy. Of course, fpar00[] contains all the input to the shader. For example, fpar00[0] contains the information of the first sphere (a position and a radius), fpar00[1] the second sphere, fpar00[2] and fpar00[3] are the two cylinders, and fpar00[4] and fpar00[5] are the two planes. The colors of the objects are stored from fpar00[6] to fpar00[11]. Also, fpar00[12] contains the camera position, and fpar00[13], fpar00[14] and fpar00[15] contain the camera matrix. The ray direction is partially computed in the vertex shader for the corners of the fullscreen quad, interpolated down to the pixel shader by the rasterization hardware, and arrives at the pixel shader through the varying called raydir.
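For illustration, the corner ray directions could be built like this; the function name, the uv convention and the focal length are assumptions on my part, with the three camera basis vectors corresponding to fpar00[13..15]:

```c
/* Sketch of the ray direction construction: the vertex shader evaluates
   this at the four corners of the fullscreen quad, and the rasterizer
   interpolates the result down to every pixel as the varying "raydir". */

typedef struct { float x, y, z; } vec3;

static vec3 ray_dir(vec3 right, vec3 up, vec3 forward,
                    float u, float v, float focal)   /* u, v in [-1,1] */
{
    /* Linear combination of the camera basis: interpolating this across
       the quad is valid precisely because the expression is linear in
       u and v; normalization happens later, per pixel. */
    vec3 d;
    d.x = u*right.x + v*up.x + focal*forward.x;
    d.y = u*right.y + v*up.y + focal*forward.y;
    d.z = u*right.z + v*up.z + focal*forward.z;
    return d;
}
```

Only the (cheap, linear) combination runs per vertex; the pixel shader just has to normalize the interpolated vector before tracing.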