I am displaying quake3 levels using lightmaps. I am using deferred shading, so I end up needing to render the RGB components of the lightmap into the G-buffer. I am using a high-precision G-buffer (16 or 32 bit), and since the lightmap is 8 bits per component, it is a waste to use 3 slots of the G-buffer for the lightmap. At most I should need 24 bits, which is 1.5 of the 16-bit slots or 0.75 of a 32-bit slot.

So the question:
What is a good method to encode several floats into 1? It's sort of the space filling curve question...

It's pretty clear how to do this for integers using shifts, etc., but I need it to work in shaders...

Maybe something like this? It seems to preserve the precision reasonably well. Will have to see if there is a modf in GLSL...
Found the needed link at:
encode rgba to float (http://aras-p.info/blog/2009/07/30/encoding-floats-to-rgba-the-final/)
Basically the reverse procedure...
Why 65025???

This seems to give a maximum error of about 0.9 (on a 0-255 scale), which is probably fine... On the CPU, anyway...
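For reference, a rough CPU-side sketch in C of one way to do the packing (not necessarily what I tried above, and untested; the helper names are made up). It assumes 0-255 inputs and that the packed value lands in a full 32-bit float channel, since 24 bits fit exactly in a float's mantissa but not in a 16-bit half. In GLSL the same arithmetic would use floor() and mod(). Aside: the 65025 in that blog post is just 255*255, the scale factor for the third 8-bit slice of its encoding.

    #include <math.h>
    #include <stdio.h>

    /* Pack three 0-255 channels into one float: 24 bits total, which a
       32-bit float holds exactly (24-bit mantissa); a half float would not. */
    static float pack_rgb8(float r, float g, float b)
    {
        return r * 65536.0f + g * 256.0f + b;   /* R<<16 | G<<8 | B, in float math */
    }

    /* Reverse procedure: peel the channels back off with floor/mod. */
    static void unpack_rgb8(float packed, float *r, float *g, float *b)
    {
        *r = floorf(packed / 65536.0f);
        *g = fmodf(floorf(packed / 256.0f), 256.0f);
        *b = fmodf(packed, 256.0f);
    }

    int main(void)
    {
        float r, g, b;
        unpack_rgb8(pack_rgb8(200.0f, 17.0f, 3.0f), &r, &g, &b);
        printf("%g %g %g\n", r, g, b);           /* expect: 200 17 3 */
        return 0;
    }

If the target channel really is a 32-bit float, this particular round trip should be exact.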

Just found this as well, oops:
gl solution (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=268030)

kRogue

08-09-2010, 12:03 PM

Several bits:

1) If you are using GL3 hardware you can have multiple render targets, each target with a different format. For simplicity you can make another render target and store the lightmap data as GL_RGB8 (8-bit fixed point, clamped), no conversion needed, though AFAIK the hardware might implement GL_RGB8 as GL_RGBA8, so you "waste" one byte per pixel (a setup sketch follows after this post).

2) If you are already using "too many render targets" then consider GL_EXT_texture_integer (http://www.opengl.org/registry/specs/EXT/texture_integer.txt). GeForce 8/9/2xx/3xx can render up to 8 textures at the same time.

Even under Mac OS X (with a GeForce 8/9/2xx/3xx) both of these are possible [as Mac OS X exports both GL_EXT_texture_integer and GL_ARB_framebuffer_object].

On the other hand, if you are using a GeForce 6/7, neither of these is possible; those parts have a render-target limit of 4 buffers, a significant performance cliff at 3 (I think), and all buffer targets must be the same format.
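To make point 1) concrete, here is a minimal setup sketch in C (assumes a GL 3.x context or GL_ARB_framebuffer_object; width/height are placeholders, and texture filtering and error handling are mostly omitted): one FBO with a 16F G-buffer attachment plus a plain GL_RGB8 attachment for the lightmap.

    GLuint fbo, gbuf_tex, lightmap_tex;

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* Attachment 0: high-precision G-buffer. */
    glGenTextures(1, &gbuf_tex);
    glBindTexture(GL_TEXTURE_2D, gbuf_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, gbuf_tex, 0);

    /* Attachment 1: lightmap as 8-bit fixed point -- different format, same FBO. */
    glGenTextures(1, &lightmap_tex);
    glBindTexture(GL_TEXTURE_2D, lightmap_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, lightmap_tex, 0);

    /* Draw into both targets; the fragment shader writes two outputs. */
    GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        ; /* handle incomplete framebuffer */

The fragment shader then just writes the lightmap color as an ordinary normalized output to the second target; the driver handles the conversion to 8-bit.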

nickels

08-09-2010, 04:03 PM

Several bits:

1) If you are using GL3 hardware you can have multiple render targets, each target with a different format.

Thanks, I wasn't aware of that!

2) If you are already using "too many render targets" then consider GL_EXT_texture_integer (http://www.opengl.org/registry/specs/EXT/texture_integer.txt). GeForce 8/9/2xx/3xx can render up to 8 textures at the same time.

I will keep this in mind. My card (a GTX 260) is pretty advanced, but I sort of want to 'play in bounds' as I learn this stuff.
I'm not near the limit yet, but I can see how you could get there pretty quickly, especially if you use more bytes than the precision of your data actually requires.
Thanks!

Alfonse Reinheart

08-12-2010, 05:51 PM

If you are using GL3 hardware you can have multiple render targets, each target with a different format.

Actually, this is not restricted to GL 3.x hardware. Any card that supports ARB_framebuffer_object (as opposed to EXT_framebuffer_object) can handle targets with different formats. According to the OpenGL extension viewer database, this includes most GL 2.x hardware (even down to the Radeon 9550).

GeForce 8/9/2xx/3xx can render up to 8 textures at the same time.

So can any DX10-class hardware.
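As a side note, rather than baking in an assumed count, the actual limits can be queried at runtime (sketch; assumes a current context with framebuffer-object support):

    GLint max_attachments = 0, max_draw_buffers = 0;
    glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &max_attachments);  /* FBO attachment limit */
    glGetIntegerv(GL_MAX_DRAW_BUFFERS, &max_draw_buffers);      /* MRT output limit */
    /* The usable render-target count is the smaller of the two. */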

nickels

08-13-2010, 08:43 AM

So can any DX10-class hardware.

That's really good to know. With that I am ok to assume up to 8 textures, which should be way more than I ever need (oops, having a 64k moment there... :) )! Thanks. Of course, I will probably still do the compression just to avoid using unneeded bits and to reduce bandwidth... But as things progress I can definitely see needing more buffers.