Hi,
If you're zooming in and performance gets worse, then most probably the problem is an overly complex pixel shader and/or overdraw.
Based on the pixel shader you posted, you can apply the following optimizations:
- Reduce the shadow map resolution (a high-resolution shadow map will make you bandwidth bound)
- Get rid of the following conditional (treat conditionals as expensive unless you're sure branch coherence is high):
if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
- I'd also calculate the light intensity first and then do the shadow sampling only inside if (lightIntensity > 0.0f)
- Sort all objects by distance from the camera and draw the nearest objects first (front to back) to reduce overdraw
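A minimal sketch of what the shadow-related bullets look like together (only projectTexCoord comes from your snippet; lightDirection, the sampler, and the rest are assumed, and the depth comparison is omitted for brevity). The border-sampler trick replaces the saturate() range check entirely:

```hlsl
// Diffuse term first; anything facing away from the light can skip
// the shadow map read entirely.
float lightIntensity = saturate(dot(input.normal, -lightDirection));
float shadowFactor = 1.0f;

if (lightIntensity > 0.0f)
{
    // Instead of the saturate(x) == x range check, use a sampler with
    // BORDER addressing and a border color of 1.0 ("fully lit"), so
    // coordinates outside [0,1] simply come back unshadowed.
    shadowFactor = shadowMapTexture.Sample(shadowBorderSampler, projectTexCoord.xy).r;
}

color += diffuseColor * lightIntensity * shadowFactor;
```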

What graphics card are you using? Some graphics cards implement parts of the pipeline in software instead of hardware. It would be good to profile it with AMD CodeXL (http://gpuopen.com/compute-product/codexl/) and see where exactly the problem is (I had the same problem with AMD FirePro cards and cubemap sampling).

Hello Gamedev
I'm currently working on my master's thesis, which compares various ways to filter shadow maps, and I've stumbled upon a problem with Variance Shadow Maps. Is it possible to omit rendering the shadow receiver into the shadow map? From what I've read in the original paper, it's not, due to the possibly high variance between shadow caster and shadow receiver. But it's quite common in games to skip some objects when rendering into the shadow map (for example, terrain that gets its shadows from a raycasting technique, explicit per-object shadow maps, etc.).
Is there any kind of "trick" for these situations? Or is it common to just use plain old shadow maps in this case?

I'm not really sure that dropping support for exceptions in the current era is a good idea.
Exceptions make code cleaner (for example, you don't need to pass error codes via return values or arguments), but they need to be reserved for exceptional situations (a lot of people abuse exceptions and try to hide high-level logic inside catch blocks).
Regarding performance: on x64 you don't pay any cost for calling a function that may throw; all the cost is deferred to the case where the function actually throws and you need to handle the exception. But as I said, this should only happen in exceptional situations (e.g. a missing file that should be there, a connection problem, etc.), and at that point you mostly don't care about performance.
I can't speak about consoles (I've never worked on one), but I bet they differ between architectures; in x86 land, for example, you pay a small cost for every function call because the compiler generates code for stack unwinding.

IMHO, it should be the whole image, not just the bright parts. In the versions of bloom that appeared before HDR, you had to use some kind of threshold value to extract only the bright parts, but that makes no physical sense. Bloom is light being blurred by imperfections in the lens (either your eye, or smudges/etc. on the camera lens...), and it's impossible to construct a lens that lets through X number of photons perfectly and then blurs all the other photons. Natural lighting effects are additive and multiplicative, but thresholding is a subtractive (unnatural) effect. In my HDR pipeline, I just multiply the scene by a small value, such as 1%, instead of thresholding -- e.g. 1% of the light refracts through the smudges, taking blurry paths to the sensor, while 99% takes a direct path. Changing that multiplier changes how smudged or imperfect your lens is.
However, after doing this, the end result is similar - you only notice the bloom effect on bright areas :D
That's quite interesting. What puzzles me is: should the result of this multiplication and the subsequent blurring be stored in an HDR format (e.g. 16F) and then composited with the HDR result (to not lose information), or is it correct to tonemap after the multiplication and store the result in an SDR format (to improve performance)?

Hello,
I've got strange artifacts on the borders of triangles when rendering the scene to a texture and then applying it to another mesh.
As a picture is worth a thousand words, here is a screenshot (better quality here: https://www.dropbox.com/s/qaqptxeojft3ggf/artifacts.png?dl=0):
[attachment=34607:artifacts.png]
What I learned from debugging is that the problem occurs when rendering the scene to a texture; in the main pass it doesn't occur. I thought about NaNs in the shader, but the artifacts appear even when I'm outputting a solid color (they're less visible, but still there).
It only happens on the borders of triangles (the artifact layout matches the wireframe).
Do you have any idea what could produce these weird results?

Removing the sampler2D from the struct and making it an external uniform was the way to fix the crash on Intel hardware! Thanks!
I wonder if removing the indexing on function variables would improve performance (based on the docs, it should!), thanks again!
I think this is a way to squeeze some performance out of low-end hardware, thanks!
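For reference, the change that fixed it looks roughly like this (Material, tint, and the uniform names are made up; only the sampler placement is the point):

```glsl
// Before (crashed glLinkProgram on the Intel driver):
// struct Material {
//     sampler2D diffuseMap;
//     vec4 tint;
// };
// uniform Material material;

// After: keep plain data in the struct, hoist the sampler to its own uniform.
struct Material {
    vec4 tint;
};
uniform Material material;
uniform sampler2D materialDiffuseMap;

varying vec2 vTexCoord;

void main() {
    // texture2D/gl_FragColor assume a legacy GLSL version for brevity.
    gl_FragColor = texture2D(materialDiffuseMap, vTexCoord) * material.tint;
}
```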

According to https://www.opengl.org/wiki/Data_Type_%28GLSL%29#Opaque_types, they can be part of a struct in GLSL.
But I'll try moving the sampler out of the struct and checking whether that fixes the issue.
Both debug and release. As I mentioned previously, it compiles without errors and crashes on the line glLinkProgram(ProgramID);

But how is it possible that it works correctly on two AMD GPUs (HD 7770 on Windows 10; HD 6470 on Windows 7)? And both produce correct images?
If there were any mismatch between the vertex and pixel shaders, there would be an error on the AMD GPUs, or at least they would produce an incorrect image.

The problem is that this shader is quite big and consists of several files with many #ifdefs, but here is a link to it: https://www.dropbox.com/sh/9vafm0neagtnavs/AADI3BY8XWV9om-PK3j6p6oma?dl=0.
The snippet that I posted in the first post is the part of the shader that gets compiled (with the dead code removed) by #define NUM_DIRECTIONAL_SHADOW 1, as all other lighting calculations are stripped from the shader code because the other defines are set to:
#define NUM_DIRECTIONAL 0
#define NUM_POINT 0
etc.
So I suppose the problematic code is the snippet that I posted.

I remember playing Counter-Strike 1.6 over a mobile network (tethered through a mobile phone). I had a ping of around 70 ms on HSPA (signal strength around 2/5), so it's totally doable in big cities with an HSPA+/LTE connection.