HDR render target problem


Hi!
I'm using a 64-bit integer (A16B16G16R16) cubemap in my project, and rendering it to the framebuffer works just fine: I can adjust the exposure and everything looks neat.
However, when I render to an intermediate render target to do some bloom effects, I seem to lose the high dynamic range. It works if I use a 64-bit floating-point (A16B16G16R16F) render target, but not with the integer format. Why is that? My graphics card (ATI X700 Mobile) only supports nearest-neighbour filtering with floating-point textures, so I'd really like to use A16B16G16R16 instead. Has anyone else had this problem?
Thanks in advance for any thoughts on this.


Sorry if I wasn't clear on that. My intermediate render target(s) use A16B16G16R16, which seems to lose the high dynamic range I'm after. When I switch to the floating-point version, A16B16G16R16F, the high dynamic range is preserved, but I don't get linear texture filtering that way, which I need.


Integer formats like A16B16G16R16 are normalized, so anything you write to them is clamped to the 0-1 range. To store high values (>1.0) in integer formats, you can scale the data in the shaders: divide the input data by some constant like 256 when writing, and multiply by the same constant in the final output pass.

256 is a good example constant here because it reserves 8 bits of accuracy in the pipeline, the precision most often needed for the actual framebuffer output. If you need larger values than that, increase the constant at the cost of precision; if you're happy with smaller values, do the opposite and effectively increase the usable output bit depth.
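The divide/multiply trick can be sketched on the CPU like this. This is a minimal Python sketch simulating one channel of a 16-bit UNORM render target; the constant `SCALE` and the helper names are illustrative, not from any API:

```python
# Sketch of the scale trick for a 16-bit UNORM render target:
# hardware clamps written values to [0, 1] and quantizes to 65535 steps.
SCALE = 256.0  # reserves ~8 bits of headroom above 1.0 (illustrative choice)

def store_unorm16(value):
    """Simulate writing an HDR value to one channel of an A16B16G16R16 target."""
    scaled = min(max(value / SCALE, 0.0), 1.0)  # shader divides before output
    return round(scaled * 65535)                # hardware quantizes to 16 bits

def load_unorm16(texel):
    """Simulate sampling the channel back in the final pass."""
    return (texel / 65535) * SCALE              # shader multiplies to restore HDR

# An HDR intensity well above 1.0 survives the round trip,
# with worst-case error of about SCALE / 65535 per channel.
recovered = load_unorm16(store_unorm16(37.5))
```

Values above `SCALE` still saturate, so the constant sets your new white point: you trade the top of the range against the precision of each step.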

The SDK also has a sample in which an 8-bit-per-channel texture is used to store high dynamic range colors by encoding the exponent of the color intensity in the alpha channel. If you don't need alpha, this is a very good technique, and it's also fast due to the reduced graphics bandwidth demand.
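That shared-exponent idea is essentially the RGBE scheme from Greg Ward's Radiance format. A minimal CPU-side sketch (function names are illustrative; in a real renderer the encode/decode would happen in the pixel shaders):

```python
import math

def rgbe_encode(r, g, b):
    """Pack an HDR color into four 8-bit values, shared exponent in alpha."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    f, e = math.frexp(m)          # m = f * 2**e, with 0.5 <= f < 1
    scale = f * 256.0 / m         # equals 2**(8 - e)
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_decode(r8, g8, b8, e8):
    """Recover the HDR color from the packed bytes."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - 128 - 8)   # undo the 2**(8 - e) scale
    return (r8 * f, g8 * f, b8 * f)
```

The mantissas of all three channels share the brightest channel's exponent, so the absolute error per channel is bounded by about max(r, g, b) / 256; dim channels next to a bright one lose relative precision, which is usually acceptable for display purposes.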


Thanks a lot for clearing that up, guys! I've now got it working. I wasn't aware of the 0-1 range that you pointed out. I'll look into the 8-bit way of doing it; sounds smart. Also, are there graphics cards that support linear filtering on floating-point formats? Or are we still waiting for this?


Original post by wall:
are there graphics cards that support linear filtering on floating-point formats? Or are we still waiting for this?

GeForce 6 and 7 series cards, as well as Radeons from the X1000 series and above, all support blending and linear filtering of FP16 formats. No current hardware supports this for FP32, and I'm pretty sure it's still optional under D3D10, so there's no guarantee that we'll see it any time soon. ISTR that ATi stated they saw no need for FP32 filtering/blending in real-time graphics.


Original post by Demirug:
IIRC ATI does not support filtering of FP16 textures even on the newest models.

Interesting... I was sure that the last time I looked at the ATi specifications it listed blending plus filtering, but a quick glance at the X1900 specification says "64-bit floating point HDR rendering supported throughout the pipeline (Includes support for blending and multi-sample anti-aliasing)".

I've been doing a fair bit of work with FP16 lately, and having blending on my Gf6800 makes a very noticeable difference to the final image [headshake]