This http://www.arcsynthesis.org/gltut/Texturing/Tut16%20Mipmaps%20and%20Linearity.html is the tutorial I am rewriting using LWJGL; I suggest quickly reading it to understand my problem. Now I'm asking for any advice you can give me on using sRGB... Am I generating mipmaps the correct way? I think the problem is that I am not loading the texture correctly... Could the error be caused by a wrong ByteBuffer (image.getImageData())? I've checked it many times and it seems correct to me...

To summarize, here is what I get, and here is what the texture should be:

I heard that there is something like gamma-correct scaling of images. I've never used sRGB textures, but perhaps the genMipmaps function doesn't work well with linear textures. What do you get when you generate the mipmaps with an external tool, like the ones from NVIDIA or AMD?

glGenerateMipmap is not required to perform filtering in linear space, unless EXT_texture_sRGB_decode is supported. So that is most likely the problem.

The best option is to perform your own mipmap generation. This is advisable even for non-sRGB textures, because a) you get to control the filtering algorithm (can use bicubic or more advanced algorithms) and b) most artist-made textures are in sRGB space anyway and downsampling in linear space is the only proper way to generate mipmaps.

The process is simple: read sRGB -> convert to a linear fp format -> downsample using fp arithmetic -> convert back to sRGB. It can also be very easily translated to GLSL or OpenCL, so it's fast for both offline generation and in real-time for dynamic textures.
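As a sketch of that pipeline in Java (the class and method names here are made up for illustration; it assumes the exact sRGB transfer functions), here is a gamma-correct box filter for a single 2x2 block:

```java
public class GammaMipmap {
    // Decode an 8-bit sRGB code to linear [0,1] using the exact sRGB curve.
    static double srgbToLinear(int c) {
        double s = c / 255.0;
        return s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    // Encode a linear [0,1] value back to an 8-bit sRGB code.
    static int linearToSrgb(double l) {
        double s = l <= 0.0031308 ? l * 12.92
                                  : 1.055 * Math.pow(l, 1.0 / 2.4) - 0.055;
        return (int) Math.round(s * 255.0);
    }

    // Box-filter one 2x2 block: decode, average in linear space, re-encode.
    static int downsample(int a, int b, int c, int d) {
        double avg = (srgbToLinear(a) + srgbToLinear(b)
                    + srgbToLinear(c) + srgbToLinear(d)) / 4.0;
        return linearToSrgb(avg);
    }

    public static void main(String[] args) {
        // A black/white checker block: naive byte averaging would give 128
        // (too dark); linear-space averaging gives a visibly brighter 188.
        System.out.println(downsample(0, 255, 255, 0));
    }
}
```

Applying `downsample` over every 2x2 block of a level produces the next mip level; the same decode/average/encode structure carries over directly to a GLSL or OpenCL kernel.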

Standard RGB, as in pre-corrected for the basic standard "monitor gamma". I don't think it makes a lick of difference when it comes to scaling, though, as the above checkerboard demonstrates. Anisotropic filtering should help somewhat, though.

Thanks for the responses. I'm new to programming OpenGL, and this is my first code involving mipmaps, sRGB and so on... sorry if I say something wrong.

@Danny02, Spasi: In the original tutorial (which is written in C++) there isn't a glGenerateMipmap() call; I had to add it, otherwise the textures would be black. Perhaps this is the problem..? How can I check if the mipmaps were generated with other tools? Are they stored in the texture data?

@theagentd, sproingie: The problem isn't getting gray, but getting a 'darker' gray than the expected one. And yes, with anisotropic filtering the texture 'propagates' correctly, but this is not my problem. For sRGB textures see the link I wrote in the first post; basically it is a texture with colors different from 'normal' RGB.

Quote

In the original tutorial (which is written in C++) there isn't a glGenerateMipmap() call; I had to add it, otherwise the textures would be black. Perhaps this is the problem..? How can I check if the mipmaps were generated with other tools? Are they stored in the texture data?

I haven't seen the code, but most likely the mipmaps come from the texture data, which was generated offline. The tutorial even suggests a couple of tools for that purpose:

Quote

The DDS plugin for GIMP is a good, free tool that is aware of linear colorspaces. NVIDIA's command-line texture tools, also free, are as well.

Quote

Standard RGB, as in pre-corrected for the basic standard "monitor gamma". I don't think it makes a lick of difference when it comes to scaling, though, as the above checkerboard demonstrates. Anisotropic filtering should help somewhat, though.

It actually makes a big difference. Try comparing a texture in Photoshop, as the artist designed it, with how it looks in a game without gamma-correct rendering: it won't match. The result gets worse depending on how many linear filtering operations have been performed on the non-linear texture data. The issue is not that hard to solve, but you'd be surprised how many games get this wrong.

The 3 most common sources of filtering error are: 1) mipmap generation, 2) texture sampling, and 3) lighting calculations. Every time you perform an addition between a texel value and something else, the texel has to be in linear space. You can fix 1) with custom mipmap generation, 2) with sRGB textures, and 3) with sRGB textures or simple pow() functions in the shader.

There's no way around 2); the texture sampling hardware has to know that it's dealing with non-linear data. It first has to convert from gamma space to linear, perform the linear/mipmap/anisotropic filtering, and then return the texture color to the shader. Older hardware cannot do this, but this tends to be the weakest source of error.

edit: Some games use a 2.0 gamma exponent instead of 2.2, an approximation that allows them to use sqrt(x) and x*x in the shader, instead of pow(x, 1.0/2.2) and pow(x, 2.2) which are more expensive. I wouldn't recommend it these days.
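The gap between the two exponents is easy to check numerically; a quick sketch (not from the tutorial) comparing the two encode curves for a mid-gray value:

```java
public class GammaApprox {
    public static void main(String[] args) {
        // Encode a mid-gray linear value with the gamma-2.2 curve and with
        // the cheaper gamma-2.0 approximation (sqrt instead of pow).
        double x = 0.5;
        double exact  = Math.pow(x, 1.0 / 2.2); // ~0.7297
        double approx = Math.sqrt(x);           // ~0.7071
        System.out.printf("gamma 2.2: %.4f, gamma 2.0: %.4f%n", exact, approx);
        // The ~0.02 difference is visible in midtones, which is why the
        // approximation is no longer recommended on modern hardware.
    }
}
```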

Half-way down in the second post, there's a comparison of two spheres, one with gamma-correct rendering and one without. That yellow-ish ring around the specular in the left image is the most obvious clue you can find in games that don't do gamma-correct rendering.

@Spasi, thanks again for your attention, and for the links, which I will read tomorrow. Your words lead me to other questions:

I have used the same resources (.dds) that the original author uses in his project, and the compiled C++ runs perfectly on my PC, so it's not a hardware problem... So why did I have to add glGenerateMipmap() to get the texture displayed? Are there any differences between C++ OpenGL and LWJGL?

Also, I opened the .dds with GIMP + the plugin, and I see only the texture, not texture + mipmaps (like the image in the "Gamma and Mipmapping" post you linked me to), so I think the mipmaps are generated on the fly; am I right? How are they generated without glGenerateMipmap()?

These two points are not clear to me.

One last thing I want to highlight: the tutorial separates the 'sRGB textures' part from the 'gamma correction' part:
- the 'G' key switches between a linear-RGB checkerboard and the sRGB one (which is 'brighter', but not in my LWJGL application; that is my problem);
- the 'A' key switches between the 'no gamma' shader (which does nothing) and the 'gamma' shader, which performs this correction:

If this is the source you're porting, then all mipmap levels should come from the texture. I'll check the .dds file tomorrow. There's no difference between C++ OpenGL and LWJGL, so it's either a problem in your DDS loader or you have the wrong .dds file.

The only difference between a normal RGB and an sRGB texture is in the texture sampling. When you sample an RGB texture, you get the raw data unchanged. When you sample an sRGB texture, you get the texture data in linear space. That means, you get the result of pow(texture(tex, coord), 2.2). Well, not exactly, hopefully the pow is done before linear/mipmap filtering, but in any case the end result is in linear space. So, you use that texture sample however you like (add lighting etc), then you need to output the final color. The problem is that, unless you're doing HDR rendering, you need to go back to sRGB space. This can be achieved in two ways:

- Do an explicit pow(color, 1.0 / 2.2) in the shader and write the result to the output color. This is what the shader code you posted does.
- Use an sRGB framebuffer. That way you can output the linear-space color from the shader and the GPU will do the gamma correction for you. An sRGB framebuffer basically provides the inverse functionality of an sRGB texture.
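The two output paths can be compared numerically in plain Java (a sketch; the class and method names are made up for illustration). The piecewise curve below is the encode an sRGB framebuffer applies in hardware, while the simple pow() matches the explicit shader correction; over the midtones they agree closely, which is why they are interchangeable for LDR rendering:

```java
public class SrgbOutput {
    // Piecewise sRGB encode, i.e. what the GPU applies when writing to
    // an sRGB framebuffer.
    static double srgbEncode(double l) {
        return l <= 0.0031308 ? l * 12.92
                              : 1.055 * Math.pow(l, 1.0 / 2.4) - 0.055;
    }

    // The explicit shader-style correction: pow(color, 1.0 / 2.2).
    static double gamma22Encode(double l) {
        return Math.pow(l, 1.0 / 2.2);
    }

    public static void main(String[] args) {
        // Print both encodings for a few linear midtone values; the
        // difference stays below about 0.01.
        for (double l = 0.1; l <= 1.0; l += 0.3) {
            System.out.printf("%.2f -> sRGB %.4f vs pow %.4f%n",
                    l, srgbEncode(l), gamma22Encode(l));
        }
    }
}
```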

Yes, that is the source. I thought the fault could be my DDS loader (which is itself part of the port of the same project), but I have debugged the Java and C++ code side by side, and all the variables I could check had the same values... and the textures work for the previous tutorials (but always with glGenerateMipmap()), so I had no more ideas and decided to ask for help here. I'll be waiting for your reply on the .dds file.

OK, I checked the files. Both checker_linear.dds and checker_gamma.dds in the gltut/Tut 16 Gamma and Textures/data folder contain mipmaps. The checker_gamma one has been generated with gamma correction. So, you shouldn't need to use glGenerateMipmap, and using checker_gamma.dds + the above shader should result in gamma-correct rendering.

@princec, Roquen: I googled for some time before posting here and hadn't found many resources on this topic... so thanks for the link.

@Spasi, I downloaded the NVIDIA texture tool and checked the textures, and it says they have mipmaps. So I re-debugged the DDS loader and found the error. I was confused because GIMP didn't show the mipmaps to me, so I thought it was some sort of sRGB problem. Now the mipmaps are correct, and they are loaded from the texture data, as you told me. Thank you for your time and your explanations; now I understand both mipmaps and sRGB.

I'm sorry if I sound like a complete idiot here, but I completely fail to see the point in hacking around a user-specific hardware problem in software. Doesn't gamma correction effectively reduce the quality of the texture, since the non-linear gamma color is put in a byte? Doesn't this screw up (additive only?) blending badly? Isn't it better to let the user apply gamma correction on his monitor or in his graphics drivers, since if he wants gamma correction, wouldn't he want it on everything? Why why why???

You might wonder why the monitor actually performs this gamma correction. Well, if you render a gradient from black to white (or any other pair of colors) in RGB space, only after the gamma correction does it look like a linear gradient. This compensates for the non-linear perceived luminance of the human eye. The same effect can be found in analogue photographs, where increased exposure to light yields a non-linear decrease in remaining pigment in the picture. The non-linearity allows humans to view scenes where light intensities vary wildly (by up to a factor of 10,000) without adjusting the diameter of the pupil.

These pages can help you adjust the gamma of your monitor: http://www.lagom.nl/lcd-test/ (I'm watching this on a dual-monitor setup, and one monitor is surprisingly perfect while the other is horribly off)


Gosh, and you young-uns have it easy. Monitor responses are much more uniform than they used to be. But you still see radical differences between, say, LCD camera/phone displays and your average computer/TV screen (other than just luminance). To slightly derail the thread: RGB is always (even properly gamma-corrected) non-uniform. Which means that if you think of a color as a vector in 3D space, then moving some fixed distance does not produce a uniform change in perceived color. You have to move further in some directions than in others to be just noticeably different. Specifically, the eye is more sensitive to luminance changes than to chromatic ones.

But what the hell is the point of doing that IN A PROGRAM, let alone precalculated in a texture?! It's a limitation of the hardware, so it should be solved either by the hardware, or more realistically by drivers! I know how to adjust gamma correction, I just don't see any reason at all for doing it myself!

I'm on a laptop, and the monitor sucks balls. The top of the monitor needs one gamma setting and the bottom needs another due to the viewing angle, so I can't even get a good image with gamma correction! I did increase the gamma slightly since it made gradients look better, specifically anti-aliased geometry in motion, but this caused INSANE banding for darker colors which was simply ridiculous, so I immediately disabled it again. Driver gamma correction for antialiasing gradients also looked like shit in motion, but that might be mainly because you can't tweak it at all. So WHY?! Just tell me a single reason for only gamma-correcting a single texture instead of the whole screen.

If it's a device made by humans for humans and it's not doing what it should (display linear colors when fed linear colors) it sure is a limitation in my book... >_>

No, it's a way to convert a linear gradient (black to white in RGB space) into a non-linear gradient (on the monitor) to get the retina to make chemical reactions that the brain interprets as a linear gradient.

(If you had a monitor that emitted a specific amount of light for '128,128,128' and half of those photons when displaying '64,64,64', it wouldn't look half as bright to the human eye.)


It's not a limitation of the hardware, it's a feature to adjust output for humans.

Quote

If it's a device made by humans for humans and it's not doing what it should (display linear colors when fed linear colors) it sure is a limitation in my book... >_>

You fail to understand that this has nothing to do with hardware. Unless you're using floating point textures, there's no way you could use linear-space with an 8-bit per channel texture without a severe impact on quality. You can think of sRGB as a lossy compression method for packing more "dark" info into 8-bit RGB textures. That's because the human eye is more sensitive to dark details than bright details. And this compression comes for free, you don't even have to do anything to make it happen. When an artist works on a texture inside Photoshop, they make it so that it looks pretty on their monitor. But that monitor works in gamma 2.2 space, so the RGB texture they just made is in gamma space by definition. There's nothing you can do about it and there's no need to. The same is true for most other kinds of images, photographs you take with your phone for example.
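One way to see that "compression" effect is to count how many of the 256 sRGB codes fall in the darkest part of the linear range (a sketch; the class name is made up for illustration):

```java
public class DarkCodes {
    // Exact sRGB decode for an 8-bit code.
    static double srgbToLinear(int c) {
        double s = c / 255.0;
        return s <= 0.04045 ? s / 12.92
                            : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    public static void main(String[] args) {
        // Count how many of the 256 sRGB codes land in the darkest 10% of
        // linear light. A linear 8-bit encoding would spend only ~26 codes
        // there; sRGB spends about 90, i.e. much finer dark detail for free.
        int dark = 0;
        for (int c = 0; c < 256; c++) {
            if (srgbToLinear(c) < 0.1) dark++;
        }
        System.out.println(dark + " of 256 codes cover the darkest 10% of linear light");
    }
}
```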

Now, when you want to manipulate that texture, specifically when that data participates in additions, the math fails unless it's first converted to linear-space. It's math, it's not a hardware problem.

I also want to clarify the gamma setting you see in your graphics driver. That's the gamma function applied to the incoming RGB values before they are displayed on the monitor. It's the inverse of the texture's gamma encoding. Specifically, the texture looks nice in Photoshop because:

The final observation that should also be interesting to you is that I said "no matter what". This means that even if you use floating point textures exclusively and you perform linear HDR rendering all the way, you STILL have to do gamma-correction during tone-mapping. Your tone-mapping operator needs to be gamma-aware or you simply go from linear to gamma-space as the last step in the tone-mapping shader.

- Artists create images, but since they are making them with a monitor they will make it look good with the gamma correction.
- When we, the game makers, load those images we need to undo the gamma correction from the textures to be able to do correct calculations.

And I still don't get it. Why are the artists making incompatible textures? Why are we doing this correction when loading the texture instead of preprocessing the texture?

And why the f*ck do monitors expect inverse gamma data? That makes just as much sense as saying that you have to add Pi to each color channel, or multiply each channel by 10 or something, just because "the monitor expects it". Maybe I'm just being stupid... ._.

It's because the human eye has massive sensitivity in low light intensities compared to high light intensities. The scale is logarithmic, and you'd need to store a huge range of numbers if you wanted to store enough intensities of light such that you could get smooth darker gradients and also see black and white. Unfortunately when you've got 8 bits you've only got 256 values to play with. If you want to see smooth gradients in the darks, you'd need all 256 values just to cover, say, the first third of the available range in the monitor. So the output from the computer is encoded using this power scale thing, which basically gives you exponentially more as you get higher, just so you can get to white by the time you reach 255, but also see a consistent difference on the screen between each 1 point of difference.

The end result is, all the data is usually stored in RAM as this exponentially encoded RGB stuff, which means that when you come to mipmap it using simple linear maths, e.g. (a + b) / 2, you get completely the wrong answer. You need to convert the log scale into a linear scale first, then do the sum, then turn it back again. This goes for pretty much all blending operations in OpenGL. That's why there are a bunch of extensions for dealing with this stuff, why computer graphics are suddenly getting more realistic looking (because GPUs finally have the power to do this in real time), and also why mostly nobody outside of hardcore engine coding knows about it. Or that's my guess anyway.
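The (a + b) / 2 failure is easy to demonstrate with a couple of lines of Java, using a plain 2.2 power curve for clarity (the real sRGB curve is slightly different):

```java
public class BlendError {
    public static void main(String[] args) {
        // Two neighbouring texels stored in gamma space: black and white.
        double a = 0.0, b = 1.0;

        // Naive average in gamma space: 0.5, i.e. code 128 -- too dark.
        double naive = (a + b) / 2.0;

        // Gamma-correct: decode to linear, average, re-encode: ~0.73.
        double linear = (Math.pow(a, 2.2) + Math.pow(b, 2.2)) / 2.0;
        double correct = Math.pow(linear, 1.0 / 2.2);

        System.out.printf("naive %.3f vs gamma-correct %.3f%n", naive, correct);
    }
}
```

That ~0.23 gap is exactly the "darker gray than expected" checkerboard from the start of the thread.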
