But the result looks as if the values in the pixel buffer are clamped to [0.0, 1.0], so the result is a binary image. I want to know if the float buffer must be scaled properly to [0.0, 1.0] before it can be passed to OpenGL. Because pixelBuf is reused by a later operation, does that mean I must allocate another buffer and scale the values of the original buffer into this new buffer? Does OpenGL support this scaling in hardware?
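Since OpenGL clamps incoming float pixel values to [0.0, 1.0] by default, one common approach is to normalize into a separate buffer so the original data stays intact for later operations. A minimal sketch (function and variable names here are illustrative, not from the original post):

```c
#include <stddef.h>
#include <float.h>

/* Normalize src (arbitrary float range) into dst, mapping the data
   to [0.0, 1.0]. dst must be a separate allocation so src stays
   untouched for later operations, as the question requires. */
void normalize_pixels(const float *src, float *dst, size_t n)
{
    float lo = FLT_MAX, hi = -FLT_MAX;
    for (size_t i = 0; i < n; ++i) {
        if (src[i] < lo) lo = src[i];
        if (src[i] > hi) hi = src[i];
    }
    float range = hi - lo;
    if (range == 0.0f) range = 1.0f;   /* avoid divide-by-zero on flat images */
    for (size_t i = 0; i < n; ++i)
        dst[i] = (src[i] - lo) / range;
}
```

As for hardware support: fixed-function OpenGL can apply a scale and bias during pixel transfer via `glPixelTransferf` (e.g. `GL_RED_SCALE`, `GL_RED_BIAS`, and the green/blue equivalents), which avoids the extra buffer, though the result is still clamped to [0, 1] after the transfer.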

Thanks for any help.

vladk

11-12-2002, 10:05 PM

Since you can get a maximum of only 256 values per color channel (8 bits per channel in the usual RGBA model), I strongly suggest that you convert your float texture data to integers. Not only does it improve speed, but you get better control.

Using float or double numbers will *not* give you higher color precision, since the framebuffer still stores only 8 bits per channel.
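Following that suggestion, the conversion from normalized floats to 8-bit integers (suitable for a `GL_UNSIGNED_BYTE` upload) could be sketched like this; the function name is illustrative and assumes the values are already in [0.0, 1.0]:

```c
#include <stddef.h>

/* Convert normalized floats in [0.0, 1.0] to 8-bit values in [0, 255],
   suitable for a GL_UNSIGNED_BYTE texture or glDrawPixels upload. */
void float_to_byte(const float *src, unsigned char *dst, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        float v = src[i];
        if (v < 0.0f) v = 0.0f;        /* clamp defensively */
        if (v > 1.0f) v = 1.0f;
        dst[i] = (unsigned char)(v * 255.0f + 0.5f);  /* round to nearest */
    }
}
```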