Using 32-bit bitmaps with an alpha channel as a texture - transparency problem

Hello, World!
I'm developing a small game with my friend and we have a small but important problem.
We are trying to get transparency working with 32-bit bitmap textures, but without success. We think the problem is in the depth test or in blending, but we are beginners in OpenGL, so we can't solve it ourselves. Hope you can ;-)
Here is the code of our draw function:

Originally Posted by ProXicT

We are trying to make transparency with 32-bit bitmap textures but not successfully. We think that the problem is in depth test or in blending, but we are beginners in OpenGL, so we can't solve this problem.

In your draw function, you disable the depth test when using blending, but then re-enable it almost immediately.

The problem with depth testing is that the test returns yes or no: either the fragment is obscured or it isn't. If it's obscured, it won't be drawn; if it isn't, it will be. There's no "draw it, but partially obscured by some other fragment with an intermediate alpha value".

If you want to render translucent surfaces correctly, you basically have three options:

Render from back to front, with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).

Render from front to back with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), using pre-multiplied alpha and a framebuffer with an alpha channel.

Use "depth peeling", a technique involving rendering in multiple passes using two depth buffers. On each pass, the nearest fragment which hasn't already been drawn is rendered. This is similar to the second option, but it depth-sorts individual fragments rather than polygons.

The advantage of depth peeling is that you don't have to depth sort the polygons. The disadvantage is that it requires shaders, framebuffer objects and depth textures (i.e. OpenGL 3.x), and requires multiple passes.
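To make the second option concrete: "pre-multiplied alpha" means each texel's RGB is multiplied by its alpha before the texture is uploaded. A minimal conversion sketch (the function name is illustrative, not from this thread):

```c
#include <stdint.h>

/* Convert one RGBA8 texel to pre-multiplied alpha: scale each colour
   channel by alpha/255, with rounding. Run this over every texel of
   the texture before glTexImage2D. */
static void premultiply_texel(uint8_t px[4])
{
    unsigned a = px[3];
    px[0] = (uint8_t)((px[0] * a + 127) / 255);
    px[1] = (uint8_t)((px[1] * a + 127) / 255);
    px[2] = (uint8_t)((px[2] * a + 127) / 255);
    /* alpha itself is left unchanged */
}
```

With the texture converted this way, you then blend with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) as described above.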

The practicality of the first two options depends upon the geometry. If the data is in a BSP tree or similar structure, then sorting the polygons is straightforward. Otherwise, you may get away with sorting polygons by their nearest/farthest/average Z coordinate, but doing it correctly requires a topological sort, which in turn may require splitting polygons in order to break cycles in the dependency graph.
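The approximate average-Z sort can be sketched like this (the Triangle struct and function names are hypothetical, and this deliberately ignores the topological-sort and polygon-splitting issues just mentioned):

```c
#include <stdlib.h>

/* One translucent triangle; avg_z is assumed to be the mean of its
   three vertices' camera-space Z (OpenGL looks down -Z, so more
   negative = farther from the camera). */
typedef struct {
    float avg_z;
    /* ... vertex positions, texture coords, etc. ... */
} Triangle;

/* Comparator: farthest (most negative Z) first, so blending happens
   back to front. */
static int farthest_first(const void *a, const void *b)
{
    float za = ((const Triangle *)a)->avg_z;
    float zb = ((const Triangle *)b)->avg_z;
    return (za > zb) - (za < zb);   /* ascending Z = farthest first */
}

void sort_back_to_front(Triangle *tris, size_t count)
{
    qsort(tris, count, sizeof *tris, farthest_first);
}
```

You would re-sort (or at least re-check) whenever the camera moves, then draw the triangles in the sorted order with blending enabled.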

If you have to render arbitrary meshes and can't afford the expense or complexity of depth-sorting the polygons, you can disable depth testing and use a blending mode which doesn't depend upon order (e.g. GL_ONE, GL_ONE for additive blending). Or if you only need the front-most polygons, you can render into a separate buffer, with depth-testing and without blending, then composite the result with blending.
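The order-independent additive case is just a few lines of state setup; a sketch, assuming a current OpenGL context (this is state configuration only, not a complete program):

```c
/* Additive blending is commutative, so draw order doesn't matter.
   With the depth test off, every fragment simply accumulates. */
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   /* result = src + dst */
/* ... draw the translucent geometry ... */
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
```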

Could you send us some example code for one of the options? As I said, we are fairly new to OpenGL, so we don't understand it well... We have tried a lot to get it to work, but without much success. At first we wanted to use masking, but that didn't work well either. Which would you prefer to use: masking or a 32-bit bitmap? I exported the model you can see in the screenshots from Cinema 4D, where the transparency works using a black-and-white mask... It may sound like we are lazy and don't want to do it ourselves, but we really have tried a lot and couldn't get it to work... Please, could you send us an example of how to use it?
Thank you very much for your reply.

None of the three numbered options are simple, and I don't have code which could realistically be used as an example (either it's part of a much larger program from which it can't reasonably be extracted, or it's code which I don't have the right to distribute, or both).

Originally Posted by ProXicT

First time we wanted to use masking but it didn't work well too. What would you prefer to use? Masking or 32-bit bitmap?

If masking (i.e. alpha-test) is sufficient (i.e. you don't have large areas which are supposed to be translucent), I'd use that. It's a great deal simpler than any of the approaches which are required to make general-case translucency work.
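A minimal alpha-test sketch, assuming the fixed-function pipeline (OpenGL 1.x/2.x) and a bound RGBA texture; the 0.5 threshold is a typical choice, not something from this thread:

```c
/* Fragments whose alpha is at or below the threshold are discarded
   outright, so they never write depth and ordinary depth testing
   keeps working - no sorting needed. */
glEnable(GL_DEPTH_TEST);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);   /* keep fragment only if alpha > 0.5 */
/* ... draw with your 32-bit (RGBA) texture bound ... */
glDisable(GL_ALPHA_TEST);
```

In a shader-based pipeline the equivalent is `if (texColor.a < 0.5) discard;` in the fragment shader.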

If you need general-case translucency and you can assume OpenGL 3 support, search the web for "depth peeling"; the first result should be nVidia's original paper. If you don't understand it, you just need to spend more time learning OpenGL. Contrary to what some books might promise, you can't actually learn 3D graphics programming (to any reasonable level) in 28 days.

Failing that, simply disabling depth tests/writes will produce results which aren't as blatantly wrong as you'll get with them enabled.
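A common compromise along those lines is to draw opaque geometry normally, then draw the translucent geometry with depth writes off but the depth test still on, so it stays hidden behind opaque objects even though translucent surfaces may blend among themselves in the wrong order. A sketch (the two draw helpers are hypothetical placeholders for your own drawing code):

```c
drawOpaqueGeometry();              /* hypothetical: normal depth-tested pass */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);             /* keep testing depth, stop writing it */
drawTranslucentGeometry();         /* hypothetical: all translucent surfaces */
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```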