Is the profiler lying/can GL use previous binds/will...

Sorry for creating so many new topics recently... But I'm combining several topics in one this time.

I'm trying to add (what I think is) a cool feature to my terrain (as in heightmaps/mountains/etc...) class. This is a feature every game engine/3D toolkit should have if it supports terrains. I say this because when I used to use Dark Basic the lack of said feature drove me crazy.

It basically works like this: the user can pass several texture units (another class, in this case) to the terrain, along with a list of heights matching the number of textures. The terrain then uses a 3D OpenGL texture to blend all those texture units into one, and samples different depths of that texture as it renders the terrain. This creates a smooth blend between, say, the grass and rock on a mountainside.
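As a sketch of the height-to-depth mapping described above (all names here are hypothetical, not the poster's actual code): with GL_LINEAR filtering along the r axis, a fractional depth between two layers blends them, which is what gives the smooth grass-to-rock transition.

```c
/* Hypothetical sketch: map a vertex height to the r (depth) coordinate of a
 * 3D blend texture. layer_heights holds the height assigned to each of the
 * layer_count texture layers, in ascending order. With GL_LINEAR filtering
 * on the r axis, a fractional depth blends two adjacent layers smoothly. */
float depth_for_height(float height, const float *layer_heights, int layer_count)
{
    if (height <= layer_heights[0])
        return 0.0f;                         /* below the first layer   */
    if (height >= layer_heights[layer_count - 1])
        return 1.0f;                         /* above the last layer    */
    for (int i = 0; i < layer_count - 1; ++i) {
        if (height <= layer_heights[i + 1]) {
            /* fraction of the way between layer i and layer i+1 */
            float t = (height - layer_heights[i]) /
                      (layer_heights[i + 1] - layer_heights[i]);
            /* normalize layer index + fraction into [0, 1] */
            return ((float)i + t) / (float)(layer_count - 1);
        }
    }
    return 1.0f;
}
```

For example, with three layers at heights 0, 50, and 100, a vertex at height 25 lands halfway between layer 0 and layer 1, i.e. depth 0.25.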

This brings me to my questions.

a) Can the OpenGL profiler lie?
b) If OpenGL receives a corrupt/broken texture at glBindTexture, will it use the previously bound texture?
c) What *might* be the problem if OpenGL is only using one layer of a 3D texture?

I ask 'a' because in the profiler I see my 3D texture perfectly fine, but OpenGL is only using the last layer loaded.

This brings me to 'b' and more explaining. Here's what the coder might do:

// Pass the arrays with texture data, the heights to use textures, and a number
// that represents the number of textures.

Now, in its current state, my code will draw the terrain using only the last loaded texture (so in the example, "rock.tga"). The order I place them in the array has no effect; however, if OpenGL were only using depth 0, then whichever was *first in the array* would be drawn.

If I've confused you by explaining poorly I'll try to clarify a bit here...

I could swap the 0 and 1 in the above code (in the array) and the rock texture would still be drawn, despite the fact that 'ApplyTextures' would add the rock data to the final image data *last*.

Anyways, this leads me to believe that the rock is being drawn (or grass, if it is called last) because the GL texture unit for those two textures was the last valid one called. Meaning that the internal GL texture for the terrain is being rejected for being broken somehow, and the last valid bind is being used instead. Make sense?

Thirdly, 'c'. If 'b' is a no (in other words, OpenGL is *not* behaving that way), then what *might* be wrong? This (only one layer of a 3D texture being used) has happened to me before, and I fixed it. I just can't remember how. Has this kind of thing happened to you before? How did *you* fix it? (Slap my brain until I remember... )

Yes, the OpenGL profiler can lie, however it sounds like it's not in this case. The last successful call to glBindTexture will always win. If you think something is failing, use the OpenGL profiler's "break on error" feature to catch OpenGL errors as you make them.

OneSadCookie Wrote:Yes, the OpenGL profiler can lie, however it sounds like it's not in this case. The last successful call to glBindTexture will always win. If you think something is failing, use the OpenGL profiler's "break on error" feature to catch OpenGL errors as you make them.

Thanks, I'll try that.

EDIT: Where might I find this "break on error" feature? I can't seem to find it...
EDIT 2: Found it.

for a) I think that Profiler is (more or less) doing glGetTexImage on your texture and then drawing it as a bunch of 2D quads. So it will show whatever data is there, regardless of the "completeness" state. Note that this is different from runtime OpenGL. Also, there is no reason why Profiler should work like runtime OpenGL: it is a tool, and the ability to see your texture data is useful. A "texture is incomplete!" warning might be a nice feature to add, though.

for b), what is supposed to happen is that if you bind an incomplete texture to a unit:
* in fixed function, texturing "acts as if it is disabled" for that unit.
* in programs/shaders, you should get black (0,0,0,1) for that unit.
So depending on how you are texturing, you could see "the previously bound texture", if that means "the texture bound to the previous unit". But you shouldn't get the last "complete" texture you bound to the current unit; glBindTexture does not no-op if the texture is incomplete ("completeness" can't be known until you actually try to draw).

Definitely check to see if any of the currently bound textures are incomplete.
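One very common way a texture ends up incomplete (an assumption about this particular bug, but worth checking): the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, so a texture with only level 0 uploaded is incomplete until you either change the filter or supply all mipmap levels. The rule can be sketched like this (the constants are the real GL enum values; the checker itself is just an illustration, not a GL call):

```c
#define GL_NEAREST                0x2600
#define GL_LINEAR                 0x2601
#define GL_NEAREST_MIPMAP_NEAREST 0x2700
#define GL_LINEAR_MIPMAP_NEAREST  0x2701
#define GL_NEAREST_MIPMAP_LINEAR  0x2702  /* the default MIN filter! */
#define GL_LINEAR_MIPMAP_LINEAR   0x2703

/* A texture whose MIN filter is one of the four mipmap modes is incomplete
 * unless every mipmap level has been uploaded. Uploading only level 0 and
 * leaving the default filter is the classic "texture acts disabled" bug. */
int needs_mipmaps(int min_filter)
{
    return min_filter != GL_NEAREST && min_filter != GL_LINEAR;
}
```

So if you only call glTexImage3D once per texture, make sure the MIN filter has been set to GL_NEAREST or GL_LINEAR.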

Also, passing a texture name of 0 into glBindTexture is perfectly OK, as long as that is what you intended to do. There is a default (and non-deletable) texture at name 0. This oddity is an ancient artifact from OpenGL 1.0, when texture objects didn't exist. There was no glGenTextures/glBindTexture. If you wanted to change which texture you were drawing with, you called glTexImage2D.

I'm not sure what you meant in your last post. The pointer you pass to glGenTextures must already point to valid memory. OpenGL will not allocate it for you.
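To illustrate the point (with a hypothetical stub standing in for glGenTextures, so the pattern runs without a GL context): the caller owns the storage; GL only fills it in, it never allocates it.

```c
typedef unsigned int GLuint;

/* Hypothetical stub standing in for glGenTextures -- the real function lives
 * in the GL library. It writes n generated names into caller-provided memory. */
static void fake_glGenTextures(int n, GLuint *names)
{
    for (int i = 0; i < n; ++i)
        names[i] = (GLuint)(i + 1);  /* real GL hands out nonzero names */
}

/* Correct pattern: the pointer passed in must refer to storage the caller
 * already owns (here, a stack variable). */
GLuint generate_one_name(void)
{
    GLuint tex = 0;               /* valid storage on the caller's stack */
    fake_glGenTextures(1, &tex);  /* pass its address; "GL" fills it in  */
    return tex;                   /* nonzero name written by the stub    */
}
```

Declaring `GLuint *tex;` and passing the uninitialized pointer straight in would hand GL a garbage address to write through.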

// The minification filter is user-selectable; the magnification filter is
// fixed, because mag filtering has no mipmap modes to choose from.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, RAYNE_ITL_TEX_FILTER_GL_TRANS[t_filt]);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, RAYNE_ITL_TEX_FILTER_GL_TRANS[1]);

When you bind the texture, don't cast the pointer to GLuint; you need to dereference it (with *). Just FYI, whenever you have a pointer and a function takes the type it points to, you always dereference it. I have yet to see an instance where you need to cast a pointer into the type it's pointing to. The closest you will ever get is casting a pointer to an integer type to hold it for a while, then casting it back to the pointer type to get the pointer back. (Why you'd ever do that I don't know, but at least it's valid.)

BTW, is target actually pointing to anything? Do you pass in the address of a GLuint that you stored somewhere else when calling that function, or did you just declare a pointer and pass that in? That, too, could cause a problem, but I know for a fact your call to glBindTexture is causing a major problem.

No, it was an actual GLuint, not a pointer. Thanks for the tip on the dereferencing though. I'll try that.

...

OK, deref'd the pointer. The program still loads and displays with only one texture. When I run it through OpenGL Profiler, *all* the glBindTexture calls seem to receive 0 as the target, and it always crashes when run in the profiler.

I'm still not sure what's happening here. It's not my texture depth generation code; I've dropped all the array values into Excel and they all line up the way they should. OpenGL just *does not* use the 3D texture.

I don't know, this looks an awful lot like a pointer to me:
GLuint *target
Either way, glGenTextures must be given a pointer, so something will be wrong regardless, even if you just mistyped the function declaration here. You need to pass in a pointer to use glGenTextures, but you then need to dereference it whenever you use something like glBindTexture.
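Here's the whole pattern in one place, with hypothetical stubs standing in for the two GL entry points so it compiles and runs without a context. glGenTextures wants the pointer itself; glBindTexture wants the value it points to.

```c
typedef unsigned int GLuint;
#define GL_TEXTURE_3D 0x806F  /* real GL enum value */

/* Hypothetical stubs for the two GL calls being discussed. */
static GLuint g_bound = 0;
static void stub_glGenTextures(int n, GLuint *names)
{
    for (int i = 0; i < n; ++i)
        names[i] = (GLuint)(i + 1);  /* hand out nonzero names */
}
static void stub_glBindTexture(unsigned int target, GLuint name)
{
    (void)target;
    g_bound = name;                  /* remember what got bound */
}

/* With a GLuint *target parameter: pass the pointer to "glGenTextures",
 * but dereference it for "glBindTexture" -- *target, never (GLuint)target. */
GLuint setup_texture(GLuint *target)
{
    stub_glGenTextures(1, target);               /* writes the new name   */
    stub_glBindTexture(GL_TEXTURE_3D, *target);  /* binds by value        */
    return g_bound;                              /* name now "bound"      */
}
```

Casting the pointer instead of dereferencing it would bind whatever the pointer's address value happens to be, which is why Profiler would show garbage (or 0) names.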

The values come in sets of identical twos, and they all match up. Next I'm gonna write out all the code for making the 3D texture manually. I've actually done this for the same two textures in a demo project and it worked.

I'm betting it's the pointer stuff. Perhaps instead I shall re-write the function itself to not use pointers.