"Accidentally used a matrix that assumed a [0, 1] depth range (which is
what D3D
uses) instead of OpenGL's [-1, 1] range. This effectively cut the depth
precision in half."
This is just a general technical question: would the [0, 1] range actually make a difference? I always thought [0, 1] was the output range required from fragment shaders, in which case any precision gained from [-1, 1] would be lost anyway.
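For context, here's a quick numeric sketch of my mental model in C (assuming GLM-style perspectiveRH_NO vs. perspectiveRH_ZO depth conventions; the constants and names are mine, just for illustration, so this may well be where I'm going wrong):

#include <stdio.h>

int main(void) {
    const float n = 0.1f, f = 100.0f;          /* near/far planes */
    const float samples[] = { -n, -50.0f, -f };/* view-space depths */

    for (int i = 0; i < 3; ++i) {
        float z = samples[i];
        float w = -z; /* w_clip = -z_eye for both matrix styles */

        /* OpenGL-style matrix (NO): maps near -> -1, far -> +1 in NDC */
        float ndc_gl = ((f + n) / (n - f) * z + 2.0f * f * n / (n - f)) / w;

        /* D3D-style matrix (ZO): maps near -> 0, far -> 1 in NDC */
        float ndc_zo = (f / (n - f) * z + f * n / (n - f)) / w;

        /* OpenGL's viewport transform (default glDepthRange(0, 1)) always
           remaps NDC z from [-1, 1] to a [0, 1] window depth: */
        printf("z_eye %8.2f: GL depth %.5f | ZO-matrix-in-GL depth %.5f\n",
               z, 0.5f * ndc_gl + 0.5f, 0.5f * ndc_zo + 0.5f);
    }
    return 0;
}

If I've set that up correctly, the [0, 1] matrix pushed through OpenGL's fixed [-1, 1] to [0, 1] remap only ever produces window depths in [0.5, 1], which I'm guessing is what "cut the depth precision in half" means, but I'd like to confirm.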
Thanks :)