Hi all,
I have been playing a bit with osgPPU, and I have a couple of questions/issues. Note that I am on the latest stable release, i.e. 0.4, so I don't know if there are more recent changes in the trunk that might fix these things.
The main problem I have is with the HDR example, which I believe does not behave correctly. I'm referring to the osgppu_hdr example for my tests. The tone-mapping part has some problems and after some analysis I think it comes from the average luminance computation.
To check that the problem is there, try disabling the temporal adaptation in the luminance_adapted_fp.glsl shader (just change it at the end so that "lum = current"). This way the instantaneous average luminance will be used for the tone-mapping operations. Then run the program, display the average luminance result with F5, and zoom out so that the teapot becomes a bit smaller. Then pan around so the teapot moves to different parts of the image, and you will see sudden changes in the luminance values, which are not correct.
The shader tries to read the last level of the mipmapped texture of the sceneLuminance unit. But from what I see, it looks like this does not work and it actually reads level zero, i.e. the full texture. In fact, when you move the teapot around, keep an eye on what's in the center of the image: the shader samples at (0.5, 0.5)... When nothing is there you'll get a black luminance (well, a dark gray; there is a lower clamp at 0.2 in the shader).
An additional test you can do to verify the problem is to change the sampling point of the texture from (0.5, 0.5) to anything else. If you are really reading the last mipmap level (a 1x1 texture) this should always retrieve the same value, but it doesn't.
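For reference, the failing read pattern boils down to something like this (a sketch, not the exact shader source; the sampler name is an assumption):

```glsl
// Sketch of the average-luminance fetch in luminance_adapted_fp.glsl.
// The third argument of texture2D is an LOD *bias*; the intent is to
// push sampling to the last (1x1) mipmap level holding the average.
uniform sampler2D texLuminance;

void main()
{
    float avgLum = texture2D(texLuminance, vec2(0.5, 0.5), 100.0).x;
    // If the bias is ignored, this reads mipmap level 0 at the image
    // center instead, so the value changes as the teapot moves around.
    gl_FragColor = vec4(avgLum);
}
```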

Does anybody know if this is really the problem I see? If it is, any solution or any idea about how to address it?
Thanks,

I'm working with the average value and I've found some problems, just like you. For me the two problems are:
- for each new mipmap level, where you have to compute the mean of the four pixels of the previous level, the mipmap shader uses values from the first mipmap level of the texture, and not the log10 values of the previous mipmap level;
- the value of the last mipmap level is wrong.

The way I found is:
1. compute the log10 values of the entire texture in a first shader
2. then perform the mipmap with:

osgPPU::UnitInMipmapOut* sceneLuminance = new osgPPU::UnitInMipmapOut();
sceneLuminance->setName("ComputeSceneLuminance");
sceneLuminance->generateMipmapForInputTexture(0);

3. get the 5th or 6th level of the mipmapped texture, and compute the mean of this texture in a shader. You'll get your average luminance.
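On the CPU, the reduction those steps implement boils down to taking log10 of each pixel once and then averaging 2x2 blocks level by level, exponentiating at the end. A minimal sketch (hypothetical helper, not osgPPU code):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Average-luminance reduction: log10 each pixel once, then repeatedly
// average 2x2 blocks (one mipmap level per pass) down to a single value.
// 'lum' is a size x size luminance image in row-major order.
double averageLogLuminance(std::vector<double> lum, std::size_t size)
{
    const double epsilon = 1e-4;                 // avoid log10(0)
    for (auto& v : lum) v = std::log10(v + epsilon);

    while (size > 1) {                           // one pass per mipmap level
        std::size_t half = size / 2;
        std::vector<double> next(half * half);
        for (std::size_t y = 0; y < half; ++y)
            for (std::size_t x = 0; x < half; ++x)
                next[y * half + x] =
                    0.25 * (lum[(2 * y)     * size + 2 * x]     +
                            lum[(2 * y)     * size + 2 * x + 1] +
                            lum[(2 * y + 1) * size + 2 * x]     +
                            lum[(2 * y + 1) * size + 2 * x + 1]);
        lum.swap(next);
        size = half;
    }
    return std::pow(10.0, lum[0]);               // back from log space
}
```

The key point is that each pass averages the log values of the previous pass, not the base-level texture.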

Hope this helps,
Josselin.


I myself have been working with the blur effect in the hdr example, as well as the glow example. I have not encountered issues with the hdr example, except that it crashes for unknown reasons while running.

This info is good to know if ever I must implement the average luminance. I wish I had a solution but I have not worked with average luminance at all.


Hi Josselin,
I don't think your solution is correct; the log should be computed only on the first level. And in general I think the whole example is fine from an algorithmic standpoint (it basically matches 1:1 the HDRLighting sample you get with the DirectX SDK, with some minor changes).
The problem is really in accessing the last mipmap level of that luminance texture. I have no idea whether this goes wrong because the mipmaps are actually not there or for some other reason.

Also, for Allen: the error is not so evident thanks to the temporal adaptation, which tends to mask it. But if you disable it, the problem will be clearly visible, I think.

As Luca said, from the algorithmic point of view the computation is correct. However, it might be that there is a problem accessing the last mipmap level of the texture.

Luca, could you please try to change the texture2D() operations in the shaders which access the last mipmap level (i.e. level 100) to a texture2DLod function? This might help. Indeed, in earlier versions I had texture2DLod functions instead of texture2D. However, per the GLSL specification texture2DLod is only valid in a vertex shader, although it also worked well in fragment shaders on nVidia hardware. I then changed to texture2D to match the specification, but never checked whether this introduced the errors you were talking about.


Hi Art,
No luck... just tried it and it does not help. Actually the lod function does not produce any error message even if used in the fragment code, but it seems to retrieve wrong values (I always get a white pixel now).
I think the problem is really in the mipmaps then; I remember I had something similar in another situation (accessing any texture mipmap always returning the base level), can't remember how I fixed it or what caused it... I'll dig in some older code to see if I can retrieve it.

Ok, I will then try to investigate what is going wrong there. One first has to check whether there is at least some data in the mipmap structure. For this purpose glslDevil is a good debugger, which I have used several times to test OpenGL applications. If the data is there, then one needs to check why wrong values are read from the mipmaps. If there is no data, then it might be that osgPPU's mipmap units don't render to the mipmap levels correctly.

If you like, you could try yourself to find the issue and post a patch for this.

Ok, I found the bug. As I said, it is not really a bug; it is more a slightly unclear part of the GLSL definition. The problem was using the texture2D method instead of texture2DLod to get values from the mipmaps. In the GLSL definition, texture2DLod is meant to be used only in vertex shaders. For whatever reason texture2D does not really accept the last parameter, so one always reads the first mipmap level instead of the last one. Another problem was not clamping the resulting luminance to 0 on negative values. Negative values happen when everything is black, since log(0+epsilon) becomes negative.

On nVidia hardware texture2DLod can also be used in fragment shaders. Unfortunately I have no clue about ATI hardware. I placed some extra GLSL extensions into the shader code to activate this behaviour on ATI cards as well; I found this info on a forum about game development. So, maybe this helps.

I have now debugged the HDR example and it seems to work perfectly. However, I am not sure how it will work on ATI cards; I hope it will work there as well.

The new changes are submitted to the osgPPU-0.4.1 repository; the svn trunk is not patched with these changes for now. I will do this in the next days.

Hi,
cool, I'll give it a try as soon as I can.
Out of curiosity, can you post here the syntax of the call to texture2DLod you used? I'm on 0.4 for now and I'd like to test on it, later I'll switch to the trunk.
Thanks!

Oh, just so you know: I have moved 0.4.1 to 0.4.2-rc1, to better reflect the current version state.

Lucca:
The syntax is exactly the same as the usual texture2D call: texture2DLod(sampler, coordinate, lod). So in both shaders, luminance_adapted and luminance_mipmap, just change the texture2D calls to texture2DLod where a mipmap level is accessed.

Also add the following two lines at the beginning of both shaders:
#extension GL_EXT_gpu_shader4 : enable
#extension GL_ATI_shader_texture_lod : enable
This should force the correct behaviour even on ATI cards, I hope.
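Putting the pieces together, the patched fetch would look roughly like this (a sketch of the change, not the exact 0.4.2-rc1 source; the sampler name is an assumption):

```glsl
#extension GL_EXT_gpu_shader4 : enable
#extension GL_ATI_shader_texture_lod : enable

uniform sampler2D texLuminance; // mipmapped scene-luminance texture

void main()
{
    // Explicit-LOD fetch: level 100 is clamped to the last (1x1)
    // mipmap level, which holds the scene's average luminance.
    float avgLum = texture2DLod(texLuminance, vec2(0.5, 0.5), 100.0).x;
    gl_FragColor = vec4(avgLum);
}
```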

It is better if you just get the shaders and the hdr example (osgppu.cpp and hdrppu.h) from 0.4.2-rc1, it should then work.

Josselin:
Yeah, I am not sure if it makes everything faster, but at least it makes it more or less correct. However, as I said, I cannot guarantee that this also works on GPU vendors other than nVidia.


Hi Art,
Just retried the 0.4.2-rc1 tag and still no luck... I still don't get what I expect as the average luminance. I'm on Windows 7 right now, so I'd like to give it a try on WinXP as well. If I get a different behavior I'll let you know.

By the way, I don't think that clamping the log values to zero is correct. Even if you keep the negative values, at the last level you'll exponentiate them and this will certainly bring them back to the positive realm. So the clamp is actually introducing an error, I think.

Luca

PS
Note that the specs for OpenGL 3.x say that texture2D is deprecated syntax; you should just drop the "2D" (also for the Lod version). However, that does not solve the issue either...
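In GLSL 1.30+ syntax the same fetch would be written like this (a sketch; the sampler and output names are assumptions):

```glsl
#version 130

uniform sampler2D texLuminance;
out vec4 fragColor;

void main()
{
    // texture()/textureLod() replace the deprecated texture2D* calls;
    // textureLod is usable in fragment shaders from GLSL 1.30 on, and
    // the large LOD is clamped to the last (1x1) mipmap level.
    float avgLum = textureLod(texLuminance, vec2(0.5, 0.5), 100.0).x;
    fragColor = vec4(avgLum);
}
```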

Ok, if you think that this is not correct, what do you consider the correct scene luminance value, then?

I've corrected the luminance computation back to the previous one, without clamping the log values. You are right, the final exponentiation should bring them back to the right value; I just overlooked this fact.
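A quick numeric check of this point (a sketch; logAverage is a hypothetical helper, not osgPPU code): for a uniformly dark image the per-pixel logs are negative, yet the final exponentiation still recovers a small positive luminance, while clamping the logs to zero forces the average luminance up to 1.

```cpp
#include <cassert>
#include <cmath>

// Log-average luminance of a uniform image with luminance 'pixelLum',
// computed with and without clamping the per-pixel log values to zero.
double logAverage(double pixelLum, bool clampLogs)
{
    const double epsilon = 1e-4;
    double lg = std::log10(pixelLum + epsilon);  // negative for dark pixels
    if (clampLogs && lg < 0.0) lg = 0.0;         // the clamp in question
    // Averaging N identical log values gives that value back;
    // exponentiating returns to linear luminance.
    return std::pow(10.0, lg);
}
```

For pixelLum = 0.01, the unclamped version returns about 0.01 as expected, while the clamped one returns 1.0, badly overestimating a dark scene's luminance.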
