
Hi to all,
I need to introduce HDR textures into my DX9 engine. IMHO, the best texture format for HDR images is A16R16G16B16. The problem is that every texel occupies 8 bytes: a huge amount of memory.
Compression techniques are necessary. I looked at the DX9 demos and saw a nice compression method called RGBE8. Basically, it rewrites every channel (r, g, b) in the form m * 2^e, where m is the mantissa of the channel and e is the exponent (stored in the alpha channel). The exponent is shared by all three channels.
This method requires 4 bytes per texel, which is an improvement over the naive one.
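As a quick illustration of the RGBE8 scheme described above, here is a minimal Python sketch (names and rounding details are mine, not from any particular implementation): the largest channel picks the shared exponent, each channel's mantissa is quantized to 8 bits, and the exponent is biased into a byte.

```python
import math

def rgbe_encode(r, g, b):
    """Encode a linear HDR color as RGBE8: 8-bit mantissa per channel,
    one shared exponent biased by 128 and stored in the alpha byte."""
    m = max(r, g, b)
    if m <= 1e-9:
        return (0, 0, 0, 0)
    e = math.frexp(m)[1]                 # smallest e with m <= 2**e
    scale = 255.0 / (2.0 ** e)
    return (int(r * scale + 0.5), int(g * scale + 0.5),
            int(b * scale + 0.5), e + 128)

def rgbe_decode(r8, g8, b8, e8):
    """Invert the encoding; e8 == 0 is the reserved 'black' case."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = (2.0 ** (e8 - 128)) / 255.0
    return (r8 * f, g8 * f, b8 * f)
```

Round-tripping a color such as (2.0, 1.0, 0.5) shows the quantization error stays small relative to the channel magnitude, which is exactly what a shared-exponent format buys you.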
Actually, I can push this compression down to 1.5 bytes per texel in the following way: I compress the RGB channels with DXT1 (0.5 bytes per texel), then use a second texture to store the exponent uncompressed (this increases precision, and exponent precision is crucial). Total: 1.5 bytes per texel.
Quite a good compression.
However, there is a problem: this method works only with point filtering. Other filtering modes, like bilinear filtering, make things go wrong.
I have tried applying point filtering to the exponent texture (to fetch it correctly) and bilinear filtering to the RGB channels... but this still exposes too many artifacts.
I have another idea: split the 16-bit-per-channel texture into two 8-bit-per-channel textures (a low-byte texture and a high-byte texture) and apply DXT1 compression to both. Total: 1 byte per texel! Problem: an error in the low-byte texture is not very noticeable, but an error in the high-byte texture (DXT1 is a lossy compression, as you know) is quite devastating... HELL!
A solution to the problem is to store the low-byte texture compressed with DXT1 and the high-byte texture uncompressed. Total: 3.5 bytes per texel.
Bilinear filtering doesn't cause problems here.
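The byte-splitting idea above, and why the two halves are so unequal in sensitivity, can be sketched in a few lines (illustrative Python; function names are mine):

```python
def split16(v):
    """Split a 16-bit texel value into (low byte, high byte),
    one per 8-bit texture."""
    return v & 0xFF, (v >> 8) & 0xFF

def join16(lo, hi):
    """Reassemble the original 16-bit value from the two bytes."""
    return hi * 256 + lo
```

A one-unit compression error in the low byte changes the reconstructed value by 1/65535 of full scale, while the same error in the high byte changes it by 256/65535, which is why lossy-compressing the high-byte texture is devastating.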
Good quality, but it's not a great compression ratio... I would be happy if I could compress it a bit more (down to 2 bytes per texel or less).
I'd be very glad if you guys can suggest other methods.
Thanks in advance,
- AGPX

I found that it works better if I return exp * 16 (instead of exp), but on high-contrast edges some halo effects appear.

Anyway, I don't know whether it's practical to do the bilinear filtering manually in the pixel shader... that sounds nasty and slow...

The following image is a shot from my editor. I use RGBE normally (without multiplying by 16):

The following is with the exponent multiplication:

The last one shows halos and artifacts (again with the multiplication by 16):

The exponent is clamped to the range [-8, 7] before the multiplication by 16, so the result lies in [-128, 112]. I also add 128 to remove the sign ([0, 240]), so it fits in a single byte. Yes, the dynamic range is smaller than before, but the images look really better.
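The clamp/scale/bias described above is just a little fixed-point packing; a sketch of it (illustrative Python, my names):

```python
def pack_exponent(e):
    """Clamp e to [-8, 7], multiply by 16 and add a bias of 128,
    yielding a byte in [0, 240]."""
    e = max(-8, min(7, e))
    return e * 16 + 128

def unpack_exponent(b):
    """Invert the packing: remove the bias and the scale."""
    return (b - 128) / 16
```

The factor of 16 trades dynamic range (only 16 distinct integer exponents survive) for 4 extra fractional bits of exponent precision, which matches the observation that images look better at the cost of range.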

I don't understand the nature of the artifacts at all. The halos should be due to bilinear filtering. The problem is that the filtering should happen AFTER decoding, not before.

Thanks,

- AGPX


I have fixed the artifact problem: it was due to the DXT1 compression of the RGB channels (not the exponent). However, the halo still occurs... Currently I have two textures: one for RGB and the other for the exponent. I have tried applying bilinear filtering only to the RGB texture, disabling it for the exponent one. Here is the result:

The halo is more evident.

The problem is that mantissas tied to two quite different exponents cannot be interpolated linearly. :(
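A tiny numeric example makes the halo mechanism above concrete: interpolating (mantissa, exponent) pairs componentwise (which is effectively what the filtering hardware does when it blends the encoded texels) gives a very different answer from interpolating the decoded values. Illustrative Python, with made-up texel values:

```python
def decode(m, e):
    """Decode a (mantissa, exponent) pair back to a linear value."""
    return m * 2.0 ** e

a = (0.5, 1)   # decodes to 1.0
b = (0.5, 4)   # decodes to 8.0

# Correct: decode first, then blend (filtering AFTER decoding).
correct = 0.5 * (decode(*a) + decode(*b))          # 4.5

# What hardware filtering of the encoded texels effectively does:
# blend mantissas and exponents, then decode.
naive = decode(0.5 * (a[0] + b[0]),
               0.5 * (a[1] + b[1]))                # ~2.83, far too dark
```

The midpoint should be 4.5, but the blend-then-decode path yields roughly 2.83: on a high-contrast edge that mismatch shows up as a dark or bright fringe, i.e. the halo.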

My research still continues...

Help is greatly appreciated, thanks.

- AGPX


I have changed my method. I split the 16-bit integer texture (A16R16G16B16) into two 8-bit-per-channel textures (one for the low bytes, the other for the high bytes). The low-byte texture is compressed with DXT3; the other one is stored uncompressed (A8R8G8B8). According to my calculations, this method should be immune to the bilinear filtering issue. And it is... but only with the Reference Rasterizer! With the HAL device, it doesn't work. :blink: WHY??????

Take a look at the pixel shader I use to reassemble the two 8-bit textures into a 16-bit one:
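The shader itself is not reproduced in this thread; as a rough sketch of what such a reassembly has to compute (illustrative Python, not the author's HLSL; the name `reassemble` is mine), a D3D9 sampler returns each byte as a 0..1 float, so the shader rescales by 255 and weights the high byte by 256:

```python
def reassemble(lo_sample, hi_sample):
    """Recombine two normalized 0..1 texture samples (low and high
    byte planes) into the original 16-bit integer value, as a pixel
    shader would with something like hi * (255.0 * 256.0) + lo * 255.0."""
    return hi_sample * 255.0 * 256.0 + lo_sample * 255.0
```

Note how sensitive this sum is to the precision of the incoming samples: any interpolator or filtering error in `hi_sample` is amplified by 65280, which foreshadows the HAL-vs-refrast precision problem discussed below.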


I'm posting my solution here to help other people with a similar problem. Basically, the problem is due to the imprecision of the interpolators: the reference rasterizer is more precise than the HAL device. So, in the end, there is nothing else to do: you have to use POINT filtering and perform the bilinear filtering yourself in the pixel shader. Here is the code that performs the filtering (kindly posted to me by a guy on #flipcode. Thanks to you!):
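The #flipcode snippet itself is not reproduced in this thread; the idea, though, is standard: fetch the four neighboring texels with POINT filtering, decode each one, and only then do the bilinear blend with the fractional texel coordinates. A minimal sketch of that blend (illustrative Python, my names):

```python
def manual_bilinear(decoded_texels, fx, fy):
    """Bilinear blend of four ALREADY-DECODED texels
    (t00, t10, t01, t11) using fractional texel coords fx, fy,
    i.e. the filtering a shader does after four point fetches."""
    t00, t10, t01, t11 = decoded_texels
    top = t00 * (1.0 - fx) + t10 * fx
    bot = t01 * (1.0 - fx) + t11 * fx
    return top * (1.0 - fy) + bot * fy
```

Because the decode happens before the blend, the mantissa/exponent mismatch that caused the halos never enters the interpolation, at the cost of four texture fetches per sample instead of one.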

This way the RGBE encoding also works well, so I switched to it, since it requires less storage. Well, at least... only if I can compress it with DXT1-5 without introducing artifacts... I will investigate that tomorrow and, finally, I think I'll write a tutorial on HDR compression... it could be helpful until hardware manufacturers introduce compression for 16-bit textures! (To tell the whole truth, some 16-bit FourCC formats exist, but they are largely unimplemented.)