
A normal is a unit vector. You can imagine a unit vector starting at the origin of a sphere with a radius of one: no matter which direction the vector points, it always ends at the surface of the sphere, so it has a length of one.
The same holds in 2D, where the sphere becomes a circle: the radius is one, so the length (or 'magnitude') of the vector must be one too.
By summing up various face normals, you get a vector of arbitrary length, and you want it to have a length of exactly one while keeping its direction as is.
Let's say we end up with the vector (3,4,0). We now want to calculate its length, and then divide by that length so the final length is one.
Code to do this usually looks like one of these:
n.Normalize();
n /= sqrt (n.dot(n));
n *= 1 / sqrt (n.dot(n));
... all the same. But let's do it ourselves. Notice the right-angled triangle you get if you draw the 2D vector (3,4) in the coordinate system: a = 3 (x axis), b = 4 (y axis), and c is the unknown length of the diagonal from the origin (0,0) to the vector's tip.
So we can use Pythagoras' rule for right-angled triangles:
a^2 + b^2 = c^2
c = sqrt (a^2 + b^2)
c = sqrt (3*3 + 4*4)
c = 5
So the length of the vector is five, and we divide by that to make it unit length: (3/5, 4/5)
This works for any number of dimensions.
3D: (3,4,0) normalized = (3,4,0) / sqrt (3*3 +4*4 + 0*0) = (3,4,0) / 5
even for 1D: (3) normalized = (3) / sqrt (3*3) = 3/3 = 1
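Put together, a minimal sketch of the whole walkthrough (hypothetical standalone Vec3 type; any vector library's Normalize does the same thing):

```cpp
#include <cmath>
#include <cassert>

// Minimal 3D vector, just enough for the normalization example above.
struct Vec3 { float x, y, z; };

// c = sqrt(a^2 + b^2 + ...) -- Pythagoras extended to three components.
float Length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Divide each component by the length so the result has length one.
Vec3 Normalized(const Vec3& v)
{
    float len = Length(v);
    return { v.x / len, v.y / len, v.z / len };
}
```

For the example vector (3,4,0) this returns (0.6, 0.8, 0), i.e. (3/5, 4/5, 0).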

Harder than I thought, and probably too expensive for the inner loop of a solver for UV maps.
I ended up at the same point a few days ago, but noticed there was a simple geometric solution to get around it. I hope it works in a similar way this time too...

Instead of dividing by the number of normals, you need to divide the resulting normal by its length to normalize it. Otherwise you end up with too-short normals on curved geometry.
Also, you should consider using angle-weighted normals, which are much more correct than a simple average. For this you multiply each face normal by the angle the face spans at the vertex before accumulating.
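The accumulation step could look like this (a sketch with a standalone Vec3 and hypothetical helper names, not any particular math library):

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3  Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Scale(Vec3 v, float s){ return { v.x * s, v.y * s, v.z * s }; }
Vec3  Normalized(Vec3 v)    { return Scale(v, 1.0f / std::sqrt(Dot(v, v))); }

// Weight a face normal by the corner angle the face spans at the vertex.
// 'vertex' is the shared vertex; 'prev' and 'next' are its neighbours on this face.
// Accumulate these contributions over all faces, then normalize the sum.
Vec3 AngleWeightedContribution(Vec3 faceNormal, Vec3 vertex, Vec3 prev, Vec3 next)
{
    Vec3 e0 = Normalized(Sub(prev, vertex));
    Vec3 e1 = Normalized(Sub(next, vertex));
    float d = std::fmax(-1.0f, std::fmin(1.0f, Dot(e0, e1))); // clamp for acos
    float angle = std::acos(d);
    return Scale(faceNormal, angle);
}
```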

sin(a*t) * d = sin(a*(1-t)) * e
I want to solve for t; t has to be in the range [0,1], and all other values are known.
Math tools only spit out special and disallowed cases (I don't know how to use them properly for periodic stuff).
Graphing both sides of the equation, each side is a simple sine wave, and any intersection would be my solution. But so far I only know how to add two waves, not how to find intersections. Maybe somebody has a quick answer while I try from there...
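In case it helps, expanding sin(a*(1-t)) with the angle-difference identity seems to give a closed form. A sketch, assuming a in (0, pi) and d, e > 0 (hypothetical SolveT helper; check the result against [0,1] yourself):

```cpp
#include <cmath>
#include <cassert>

// sin(a*(1-t)) = sin(a)*cos(a*t) - cos(a)*sin(a*t), so the equation
//   d*sin(a*t) = e*sin(a*(1-t))
// rearranges to
//   sin(a*t) * (d + e*cos(a)) = e*sin(a) * cos(a*t)
//   tan(a*t) = e*sin(a) / (d + e*cos(a))
float SolveT(float a, float d, float e)
{
    return std::atan2(e * std::sin(a), d + e * std::cos(a)) / a;
}
```

Sanity check: with d == e the setup is symmetric, so t should come out as 0.5.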

I have tested on NV 670 and 1070, and on AMD 7850 and FuryX. No issues, but I have only simple debug visuals and mainly compute. Validation is really helpful - it's harder to make it happy than the hardware. (Hardware is more forgiving than validation.)
To me it seems much better than OpenGL, where I often had issues. I never want to use GL again after the move. But as said, I have not yet started work on the renderer, so I can't say much.
Yes. I did not figure out the details on anything other than AMD, but without doubt that graph has to be adjusted for various hardware - likely even different configurations for large and small AMD GPUs.

As you often mention your joy in implementing things like skinning or soft shadows, I repeatedly get the impression you develop your own engine mainly because, honestly, you just enjoy it?
And that could be wrong from a business perspective. Unity / UE4 already have all this tech, likely even better or faster, so why replicate exactly the same thing again? Couldn't you spend this time better working on the game?
My point is: if you had new skinning or shadowing tech that would be hard to integrate into existing engines, then I would agree on a custom engine. But remembering the screenshot from your space station game, it seems like standard tech, readily available in those engines as well, in combination with all those tools and so on...
I'm really the last one to be against custom engines, but if there is no difference or reason, then it seems wrong for small indies to go that route nowadays.

Haha, no, but I think it's worth mentioning. Everybody knows some people dislike chromatic aberration, DOF, blur from TAA, motion blur... but I assume few are aware that cartoon strokes in textures hurt someone's eyes too. (I also hate rim lighting and highlighting interactive objects, for example, while I like CA and blur.)
Sure it is. Artists paint strokes at the silhouette, at shallow angles, and in shadow, so it depends on normal and lighting. Strokes everywhere ignore this the same way that baked lighting does.
I agree it's the only option for the shown art style, but simply fading out certain strokes where the normal faces the eye would already make me happy (if it's worth the additional stroke textures).

Yes, but there is also another reason: retro gaming. Some people just want old-school games or art. We now have what has existed in music for a long time: some people prefer Led Zeppelin over Nickelback.
I think the root of all evil is that production costs have exploded. Indies can't compete with AAA, and AAA can't afford the risks needed to drive innovation in gameplay (so they say).
It's a death spiral. But there are some solutions: better software for content creation (the options are really infinite in this field), and... the asset store?
Yeah, 95% of the games I like are indie titles. If I try to be objective, still all the gameplay progress of the last decade comes from there. I really believe in the idea of developing games with very small teams and costs (even by a single person).
Why bother with such details if all your environment probes and screen-space fakery provide wrong / incomplete data?

Agree, but removing constraints gives an even larger boost to creativity, especially after those constraints have existed for decades.
But it takes time to utilize the new possibilities and to see their potential. While working on new tech I often think about what to do with it, what becomes possible, how it could change games... but those are difficult questions. There is a need for new ideas, and they likely won't pop up in one day.
I don't think that will get worse. Currently games use pretty similar graphics tech. Years ago you could often even tell the engine a game was using - 'no baked GI - CryEngine', 'hard shadows - Doom 3 engine', 'a bit of everything - UE3'... that's gone. But many games now manage to have a unique art style, and that's worth much more. Further, we now see many titles with pixel, low-poly or minimalistic art - more variety than ever. This proves that progress towards realism did not hinder any alternatives.
In general I'm very happy with art in current games - it's the field that has improved the most over the last two decades.
I'm also fine with progress in tech.
But I'm totally bored with gameplay, and narratives / characters are mostly ridiculous. Maybe I've just grown out of it, and I would not know how to fix that either.

Notice that if dir = (0,+/-1,0), the normalization would fail (but this is not what causes your problems).
This is a problem because the result can change widely in an instant. You can prevent this with a simple temporal filter:
static vec target_vec(0,1,0);
target_vec = target_vec * 0.98 + hit.normal * 0.02;
target_vec.Normalize();
so the vector changes smoothly. You should get rid of all discontinuities by using this practice everywhere.
But be warned that this simple lerp behaviour depends on the timestep. If you have variable timesteps, you need something more advanced.
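One common way to handle variable timesteps is to treat the blend as exponential decay (a sketch, assuming the 0.98 filter above was tuned for a fixed 60 Hz step):

```cpp
#include <cmath>
#include <cassert>

// Frame-rate-independent blend factor for the temporal filter above.
// At dt = 1/60 s this reproduces the original 0.98 / 0.02 split exactly;
// for other timesteps it blends the equivalent amount.
float SmoothingFactor(float dt)
{
    const float k = -std::log(0.98f) * 60.0f; // decay rate calibrated to 0.98 per 60 Hz frame
    return 1.0f - std::exp(-k * dt);          // fraction of the target blended in this step
}
// usage: float f = SmoothingFactor(dt);
//        target_vec = target_vec * (1.0f - f) + hit.normal * f;
//        target_vec.Normalize();
```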
To get an overall better alignment of the player, I'd try to keep the body more upright even on the ramp, and I would make it lean against acceleration. (It's not velocity that matters here, but acceleration.)
This would look more like natural balancing. So you would use this for the upper-body target, and the legs should be placed by IK.

No, I mean it's possible to utilize realism, for example like Pixar does.
Now you can argue we already do that - look at Kingdom Hearts or something - but I answer it would still look MUCH better with infinite indirect bounces, correct reflections, proper DOF, etc.
To further add variation and style, one could add subsurface scattering to every material to generate an overall soft look, or desaturate just the indirect light for film noir... just to list some options we do not have yet, but which offline rendering already has.
So I think it's primarily the word 'realism' that's wrong here. Usually I use the term 'realtime GI', which I hope sounds less restrictive.
BTW, Borderlands / Telltale games do not look great IMO. They bake the cartoon strokes into the textures, which is exactly the same as baking lighting into textures like we did decades ago when there was no per-pixel lighting. To me this looks totally terrible. I can't play those games because it makes me constantly upset, while I'm fine with 'proper' cartoon rendering that adds the strokes at the silhouette.

Agree, but we are not even close to photorealism. PBR and photogrammetry do not help much here - actually they just increase the gap between 'that trailer / screenshot looks awesome, almost real!' and 'meh - playing the game, everything often looks off'.
I see what you mean: a true classic may manage to keep looking fine even after a decade by dealing well with its actual limitations. Thanks.

vec curDir = a1.Unit();
vec targetDir = a2.Unit();
float dot = clamp (curDir.Dot(targetDir), -1.f, 1.f); // clamp so acos never sees out-of-range input
float angle = acos(dot);
if (angle > maxAngle)
{
// axis of the rotation between the two directions
vec axis = (curDir.Cross(targetDir)).Unit();
// rotate curDir towards targetDir, but only by the allowed maxAngle
quat rot; rot.FromAxisAndAngle (axis, maxAngle);
targetDir = rot.Rotate(curDir);
}
vec result = targetDir;
Instead of creating a quaternion to rotate the initial direction, you could just construct the result from
cos(maxAngle) and sin(maxAngle) times the basis vectors 'curDir' and 'axis.Cross(curDir)'.
But I would need trial and error to figure out which trig function goes with which vector, and which signs give the expected result, so I lazily used the quat.
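Worked out, that trial-and-error construction might look like this (a sketch with a standalone Vec3 and a hypothetical ClampDirection helper; the in-plane basis vector comes from Gram-Schmidt instead of a cross product, which avoids the sign guessing):

```cpp
#include <cmath>
#include <cassert>

struct Vec3 { float x, y, z; };

Vec3  Add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  Scale(Vec3 v, float s){ return { v.x * s, v.y * s, v.z * s }; }
float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Normalized(Vec3 v)    { return Scale(v, 1.0f / std::sqrt(Dot(v, v))); }

// Clamp targetDir to at most maxAngle away from curDir, without quaternions.
// Assumes unit-length inputs that are neither parallel nor opposite.
Vec3 ClampDirection(Vec3 curDir, Vec3 targetDir, float maxAngle)
{
    float dot = std::fmax(-1.0f, std::fmin(1.0f, Dot(curDir, targetDir)));
    if (std::acos(dot) <= maxAngle)
        return targetDir; // already within the limit

    // unit component of targetDir orthogonal to curDir (points towards targetDir)
    Vec3 perp = Normalized(Add(targetDir, Scale(curDir, -dot)));
    return Add(Scale(curDir, std::cos(maxAngle)), Scale(perp, std::sin(maxAngle)));
}
```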

Style has nothing to do with realism. Even if you had total realism you could still apply any art style you want, with the additional option to utilize the beauty of multi-bounce diffuse interreflection, for example. But you can still do low-poly or cartoonish graphics.
So my point is that realism only adds options and variation - it does not remove them. What you say somehow sounds like 'the technical limitations of realtime graphics help me find a unique art style by accident'. But that can't be true if you think about it, because others have the same limitations, and so get the same 'ideas' from them. With total realism the artist has far fewer limitations to deal with and can focus more on just creating art. Win-win.
(The same would apply to things like gameplay mechanics if we were talking about better physics, but let's focus on just graphics.)
Would this kind of argument convince you? (I'm curious what you think, because I'm working on photorealism myself.)