Something I can't understand about Alpha Blend limitations

I can't understand why we are forced to use AlphaTest in order to display proper overlapping semi-transparencies at different depth values, when the Unity layer system can manage that kind of trick very efficiently with cameras...

AlphaTest is very hungry: it eats up to 15 fps in some of my scenes compared to a simple Blend SrcAlpha, since I render alpha depth with two passes, each containing an AlphaTest (as the docs explain, one above a cutoff value, one below).
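For reference, here's roughly what that two-pass setup looks like in fixed-function ShaderLab, along the lines of the vegetation example in the docs (a sketch; the shader name, property names, and 0.5 default cutoff are my own):

Shader "Sketch/TwoPassAlphaTest" {
    Properties {
        _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
        _Cutoff ("Alpha cutoff", Range(0,1)) = 0.5
    }
    SubShader {
        Tags { "Queue" = "Transparent" }
        // Pass 1: texels above the cutoff, drawn with depth writes
        // so they sort correctly against the rest of the scene.
        Pass {
            AlphaTest Greater [_Cutoff]
            SetTexture [_MainTex] { combine texture }
        }
        // Pass 2: the soft, semi-transparent texels at or below the
        // cutoff, alpha blended without depth writes.
        Pass {
            AlphaTest LEqual [_Cutoff]
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            SetTexture [_MainTex] { combine texture }
        }
    }
}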

To make it short: is there a trick to properly render overlapping semi-transparent textures without AlphaTest?

Unity Technologies

This is simply a limitation of the way OpenGL/D3D render transparent triangles. Everything has to be drawn back-to-front, which is difficult to do if two models intersect. By default, Unity only orders transparent meshes by their origins' distance from the camera. Perfect ordering would require not only ordering every transparent triangle (usually an unreasonably processor- and draw call-intensive approach), but splitting intersecting triangles such that every fragment was drawn in the correct order.

Even the technique at the end of the Alpha Testing page doesn't solve the problem, as it uses a combination of two approaches: a pass with ZWrite enabled that doesn't blend at all, and a pass that does alpha blending. Each part still suffers from its limitations: the ZWrite pass is not transparent, and the alpha blended pass can appear out-of-order when there is intersecting geometry.

Unity Technologies

It's not hopeless, it's just a difficult problem to solve in the general case. If you provide some detail about the situation in which you'd like to get geometry drawn correctly, there might be a simple solution for your case.

I have a full environment using a texture with semi- or fully transparent alpha portions. I just want these alpha portions to be rendered with the alpha transparency set in the PNG: alpha at 50% would be half transparent against the meshes behind it, 100% totally opaque.

Actually I can do it with a shader that uses two passes: one for the opaque pixels, and one for the semi/fully transparent pixels. It works, but I'm forced to use AlphaTest, which consumes a lot of horsepower compared to a simple "Blend SrcAlpha OneMinusSrcAlpha".

What I'm targeting is to use one pass with no AlphaTest.

It is targeted at the iPhone, so no Cg fragment shaders are allowed :roll:

I'm still researching a solution.
Speaking of which, I found a way to remove one AlphaTest from the Vegetation shader in the Unity docs:

Ah OK, the geometry varies; it can be anything from simple quads to hemispheres. Textures with semi-transparent parts are placed on them.
Unfortunately I'm limited in terms of triangle budget, and cannot change this geometry.

Plus some textures are unreproducible with meshes because they are too complex, like dozens of humans, rain, or destroyed buildings.

Well, here is a simple example that would be more explicit than a screenshot:

1) a quad with a circle PNG texture on it. Everything outside the circle is alpha zero.
2) a cube with another texture on it. No alpha (we don't need it for this example).
3) the quad is in front of the cube.

I would just want the final render to display a circle in front of a cube:

a) without alpha testing,
b) possibly in one single pass (see the sketch below).
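For illustration, a minimal single-pass sketch of the kind of shader I'm after (untested; the shader and property names are placeholders):

Shader "Sketch/OnePassBlend" {
    Properties {
        _MainTex ("Texture (RGBA)", 2D) = "white" {}
    }
    SubShader {
        Tags { "Queue" = "Transparent" }
        Pass {
            // A single alpha-blended pass, no AlphaTest. ZWrite is
            // off so fully transparent texels don't occlude geometry
            // drawn later in the frame.
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            SetTexture [_MainTex] { combine texture }
        }
    }
}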

_____________

That aside, using this example, even if I understand the hardware limitations you specified above, I still can't understand why it would be impossible for the lighting buffer (the "primary" combiner in the texture block) to be faded by the texture's alpha.

Can't we hack that basic lighting render at all, the way we can modify the texture's?

It would boost every semi-transparency render by x1.5 (at least)... And which game doesn't use semi-transparency nowadays? It's such a primary feature of graphics that I can't understand why it's so complicated to do properly.
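To illustrate what I mean, this is the kind of fixed-function combiner setup I'd expect to work, fading the lit result by the texture's alpha (a sketch, untested; property name assumed):

Pass {
    Material { Diffuse (1,1,1,1) }
    Lighting On
    ZWrite Off
    Blend SrcAlpha OneMinusSrcAlpha
    SetTexture [_MainTex] {
        // RGB: the texture modulated by the vertex lighting
        // ("primary"); A: taken straight from the texture, so the
        // blend fades the whole lit result by the texture's alpha.
        combine texture * primary, texture
    }
}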

edit: I found this article interesting.
I don't know whether Unity takes that front-to-back uselessness into account.

edit 2: Finally, another article that confirms we shouldn't use AlphaTest.

Avoid Alpha Test and Discard

If your application uses an alpha test in OpenGL ES 1.1 or the discard instruction in an OpenGL ES 2.0 fragment shader, some hardware depth-buffer optimizations must be disabled. In particular, this may require a fragment’s color to be calculated completely before being discarded.

An alternative to using alpha test or discard to kill pixels is to use alpha blending with alpha forced to zero. This can be implemented by looking up an alpha value in a texture. This effectively eliminates any contribution to the framebuffer color while retaining the Z-buffer optimizations. This does change the value stored in the depth buffer.

If you need to use alpha testing or a discard instruction, you should draw these objects separately in the scene after processing any geometry that does not require it. Place the discard instruction early in the fragment shader to avoid performing calculations whose results are unused.


This truly means we can replace AlphaTest by Blend, with the same result.

Unity Technologies

1) a quad with a circle PNG texture on it. Everything outside the circle is alpha zero.
2) a cube with another texture on it. No alpha (we don't need it for this example).
3) the quad is in front of the cube.

Is your geometry actually intersecting? If not, you might just need to use Material.renderQueue to force the drawing order of your objects.
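For reference, the drawing order can also be forced per-shader with an offset on the Queue tag rather than from script (a sketch; the +1 offset is just an example):

SubShader {
    // "Transparent" is queue 3000; "Transparent+1" makes this shader
    // draw after everything in the plain Transparent queue.
    Tags { "Queue" = "Transparent+1" }
    Pass {
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        SetTexture [_MainTex] { combine texture }
    }
}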

This truly means we can replace AlphaTest by Blend, with the same result.


The advantage that alpha test has is that it won't write anything if the test fails. If you're using the Z buffer for sorting, alpha testing will look right. Alpha blending will write to the Z buffer for every fragment, meaning that even transparent pixels will stop geometry behind them from being rendered later.
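In render-state terms, the difference looks like this (an illustrative sketch, not a complete shader):

// Alpha testing: depth is written only where the test passes, so
// fully transparent texels never occlude later geometry.
Pass {
    AlphaTest Greater 0.5
    SetTexture [_MainTex] { combine texture }
}

// Alpha blending with depth writes left on: every covered fragment
// writes Z, so even texels blended to full transparency will block
// geometry drawn behind them later in the frame.
Pass {
    ZWrite On
    Blend SrcAlpha OneMinusSrcAlpha
    SetTexture [_MainTex] { combine texture }
}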

Geometry is not intersecting. Material.renderQueue would be an awesome solution, but on the iPhone we have to use the smallest possible number of materials. For example, a whole level will often have only one UV-mapped material, making this solution ineffective.

But I'll keep your advice in mind; it could be really useful in certain situations.

15.080 How can I make part of my texture maps transparent or translucent?

It depends on the effect you're trying to achieve.

If you want blending to occur after the texture has been applied, then use the OpenGL blending feature. Try this:

glEnable (GL_BLEND);
glBlendFunc (GL_ONE, GL_ONE);

You might want to use the alpha values that result from texture mapping in the blend function. If so, (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) is always a good function to start with.

However, if you want blending to occur when the primitive is texture mapped (i.e., you want parts of the texture map to allow the underlying color of the primitive to show through), then don't use OpenGL blending. Instead, use glTexEnv(), and set the texture environment mode to GL_BLEND. In this case, you'd want to leave the texture environment color at its default value of (0,0,0,0).

After a small amount of research, I found that polygon depth sorting is not possible (something that wasn't specified in Apple's OpenGL docs).

So in order to save performance, it would be better to use one pass, no Z writing, "Queue"="Transparent", and detach all the translucent polygons into separate pieces, making them concave instead of convex, to avoid bad depth sorting based on their container object (a huge cube containing the camera and another object would be displayed behind that object, for example).

I will keep the thread updated with any performance delta between this method and classic 2-pass AlphaTest.

Now another question, but one that doesn't have to do with shaders anymore (lol, there should be a "Performance Tweaking" forum):

Would it be even faster to split those translucent polygons into separate objects, to activate Dynamic Batching?
(considering Dynamic Batching was not activated before)


Hi there!
I've been having the same hard time in my project, ugh.
Could you elaborate a bit on this part?
"
So in order to save performance, it would be better to use one pass, no Z writing, "Queue"="Transparent", and detach all the translucent polygons into separate pieces, making them concave instead of convex, to avoid bad depth sorting based on their container object (a huge cube containing the camera and another object would be displayed behind that object, for example).
"

This is a very old topic! Right now I don't have Z sorting problems anymore, as I'm using surface shaders instead of raw ShaderLab passes. They seem to manage far better on that front (plus years of Unity engine improvements, btw). If you're still experiencing Z fighting, try using a builtin shader, and avoid overlapping transparent objects (or put them at different Z depths).
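For reference, the kind of minimal transparent surface shader I moved to looks roughly like this (a sketch; the shader and property names are placeholders):

Shader "Sketch/TransparentSurface" {
    Properties {
        _MainTex ("Base (RGB) Alpha (A)", 2D) = "white" {}
    }
    SubShader {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        CGPROGRAM
        // The "alpha" directive makes the generated passes use
        // traditional alpha blending; Unity sets up the render state.
        #pragma surface surf Lambert alpha
        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };
        void surf (Input IN, inout SurfaceOutput o) {
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Transparent/Diffuse"
}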


Hey, good to know. I'm still having some trouble with that... Is your entire hair one mesh, or have you separated each module to get a batch? That might also help the ordering.
I've even tried changing the index ordering of the vertices, but it's quite hard to manage in a complex mesh.

Digital Ape (Moderator)

It's also beneficial to split your mesh up into smaller parts if you don't want to fiddle too much, as the origin point of the mesh is used for sorting transparency, so big things will obviously glitch. Splitting them up or using a clever design is an acceptable compromise in a lot of cases.


Yep, doing some tests right now.
Can you guys confirm whether the hair mesh's bounding box also has anything to do with the Z depth calculation? I've read somewhere that people were adding far-away vertices to get a bigger bounding volume.