In my system there are currently objects and "helper" objects. Objects react to light and all other kinds of effects. "Helper" objects are lines that represent an object's normals or axes, and these objects should not be affected by the render effects (lines have no normals, so they can't be lit anyway). How should this be handled: should I have two different shaders, one that handles the usual stuff and one without any effects, or should I put a boolean on the objects that says "no effects" and branch the shader accordingly?

Do not use dynamic branching (if statements) in your shader; it will only slow the retail version down.

This is not strictly correct, as it depends on the hardware generation, the coherency of the branching, and the code being branched around.

You have to think in terms of groups of threads. If all the threads in a group take the 'if' branch, or all take the 'else' branch, then your overhead will be minimal, as the other code will not be run. If some of the threads take one branch and some take the other, then you'll end up executing both paths, and the hardware masks the results/execution lanes to give you the correct result.
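A toy model of this cost behaviour (the group size and instruction costs are made up for illustration; real GPUs batch threads in warps/wavefronts of e.g. 32 or 64 lanes):

```python
# Simplified model of SIMD branch divergence: a thread group executes in
# lockstep, so a divergent branch pays for both paths, while a coherent
# group only pays for the path it actually takes.
# All numbers here are illustrative, not tied to any real GPU.

def group_cost(lane_conditions, if_cost, else_cost):
    """Cost for one thread group, given each lane's branch condition."""
    if all(lane_conditions):          # fully coherent: only the 'if' path runs
        return if_cost
    if not any(lane_conditions):      # fully coherent: only the 'else' path runs
        return else_cost
    return if_cost + else_cost        # divergent: both paths run, lanes masked

print(group_cost([True] * 8, if_cost=20, else_cost=1))   # coherent: 20
print(group_cost([False] * 8, if_cost=20, else_cost=1))  # coherent: 1
print(group_cost([True] * 4 + [False] * 4, 20, 1))       # divergent: 21
```

Note that a divergent group is slightly *worse* than having no branch at all, which is why branch coherency, not just the branch itself, is what matters.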

HOWEVER, this can still result in a speed-up if used correctly, even on DX9 hardware, if you take into account how threads are batched. For example, on a console game we had a system which used a textured mesh to define a road; the road surface had bump maps etc. on it and used a reasonably costly shader. The texture itself was made up of three areas:
- solid colour where alpha = 1
- a wavy boundary area (some pixels alpha-blended between 0 and 1)
- solid colour where alpha = 0

Because of this, some groups of pixels had all alpha = 0, some all alpha = 1, and some a mixture. As the boundary was a thin section and the 'else' path was one instruction, introducing a branch on the diffuse alpha value let large numbers of pixels skip all the complicated processing, and the overall effect was a large speed-up.

Now, while in the OP's case I wouldn't use a branch, there is no reason to avoid them completely - you just have to think about how they are going to branch and whether the overhead is acceptable.

Super thx folks. This helps a lot and opens my eyes to the complexity of shader development.

A short follow-up question: if I have one shader which I compile multiple times with different define paths each time, is it possible to automate this process in VS2012? I have set it up so VS compiles the shader when it builds everything else.

If you use straightforward HLSL code (that you compile using D3DCompile), it is a bit more work, as you need to create a small program or script (which you again call from a custom build event). This program/script compiles the shader based on its arguments and writes the result as a binary blob to a file.
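As a sketch of such a script (the file names, entry point, and define names here are made up for illustration; the `/T`, `/E`, `/Fo` and `/D` flags are the standard ones for fxc.exe, the offline HLSL compiler shipped with the DirectX/Windows SDK):

```python
import subprocess  # only needed if you actually invoke the compiler

# Hypothetical variant list: output file -> preprocessor defines.
# Define and file names are made up for this example.
VARIANTS = {
    "standard.cso":   [],
    "no_effects.cso": ["NO_EFFECTS=1"],
}

def fxc_command(source, output, defines, profile="ps_5_0", entry="main"):
    """Build an fxc.exe command line that compiles one shader variant."""
    cmd = ["fxc", "/T", profile, "/E", entry, "/Fo", output]
    for define in defines:
        cmd += ["/D", define]
    cmd.append(source)
    return cmd

for output, defines in VARIANTS.items():
    cmd = fxc_command("shader.hlsl", output, defines)
    print(" ".join(cmd))
    # On a machine with the DirectX tooling installed you would run:
    # subprocess.check_call(cmd)
```

You would register this script (or the equivalent fxc command lines directly) as a custom build event in VS2012, so every build regenerates all variants.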

I use HLSL code, so I will try it out.

It would be interesting to know how the big houses solve this problem. Do they have one mega-shader that is compiled with different options, or do they use a more branch-based approach? The mega-shader would be very large and would have to be compiled with many, many options (the combinations would be almost endless: point light with X function, spot light with X function, and so on). This would be a major overhead in the development process, I guess.
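The variant count grows multiplicatively with each independent option, which is why this is often called the shader permutation explosion. A quick illustration with made-up feature axes:

```python
import itertools
from math import prod

# Made-up feature axes; each independent option multiplies the variant count.
FEATURES = {
    "light_type":  ["POINT", "SPOT", "DIRECTIONAL"],
    "shadows":     ["SHADOWS_ON", "SHADOWS_OFF"],
    "normal_maps": ["BUMP_ON", "BUMP_OFF"],
    "effects":     ["EFFECTS_ON", "EFFECTS_OFF"],
}

variant_count = prod(len(values) for values in FEATURES.values())
print(variant_count)  # 3 * 2 * 2 * 2 = 24 variants from just four options

for combo in itertools.product(*FEATURES.values()):
    defines = " ".join("/D %s=1" % name for name in combo)
    # each 'defines' string would drive one compile of the mega-shader
```

With a dozen or so axes this quickly reaches thousands of permutations, which is exactly the build-time overhead being asked about.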