In his tutorial, Mr. Bikker had an interesting suggestion: rasterize the scene in coded colors, then use the buffer data to cast the primary rays. Does anyone use this technique? Is it suitable for real-time? My render process so far is 100% CPU — could I do this on the GPU in parallel with my ray casting? In OpenGL, would I use glReadPixels(), or is there a faster way? Has anyone seen examples of this being done?

After playing around with glReadPixels, it looks like it stalls the CPU far too long to get any buffer data back that way.

I think I'm going to go with my own pseudo-rasterization method instead. I'm rendering pixels in quadtree order, which lets me use a pixel-data-sharing scheme similar to the "coarse grid beam optimization" proposed by Laine. I'll try projecting each object's bounding sphere onto the viewing plane and approximating the resulting circle with pixel blocks of the quadtree. For each pixel block I'll store a bitfield, one bit per bounded object.