How can terrain physics be handled when a lot of modern terrain rendering is done via the GPU?

Normally, for a brute-force approach with small heightmaps, you could create the vertices on the CPU and load them into a physics engine as a collision mesh. With GPU approaches, however, there isn't always a 1:1 mapping between the vertex data on the CPU and what's displayed on screen (e.g. hardware tessellation or vertex morphing). One idea is to generate a low-to-medium resolution version on the CPU to form the collision mesh, but I'm not sure how practical this is, as it may cause visual artifacts with physics objects sinking into the terrain or perhaps even floating.
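For reference, the brute-force CPU path described here can be sketched as follows. `heightmap_to_collision_mesh` and its `step` parameter (for the lower-resolution variant) are illustrative names, not any particular engine's API:

```python
def heightmap_to_collision_mesh(heights, cell_size=1.0, step=1):
    """Build vertices and triangle indices from a 2D height grid.

    step > 1 skips samples to produce a lower-resolution collision
    mesh (the "low-to-medium resolution" idea from the post).
    """
    rows = list(range(0, len(heights), step))
    cols = list(range(0, len(heights[0]), step))
    # World-space vertices: x/z from the grid position, y from the heightmap.
    verts = [(c * cell_size, heights[r][c], r * cell_size)
             for r in rows for c in cols]
    ncols = len(cols)
    tris = []
    for i in range(len(rows) - 1):
        for j in range(ncols - 1):
            a = i * ncols + j        # top-left of the cell
            b = a + 1                # top-right
            c = a + ncols            # bottom-left
            d = c + 1                # bottom-right
            tris.append((a, c, b))   # two triangles per quad
            tris.append((b, c, d))
    return verts, tris
```

The resulting vertex and index lists are what you would hand to the physics engine as a static triangle-mesh (or heightfield) collision shape.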

It's been standard practice forever for physics and graphics to use different representations of the world, with different levels of detail.
E.g. in a car racing game, the renderer might have a mesh of 100k triangles per car, but the physics just has half a dozen simple convex polytopes per car.

You've also got to consider that object physics may be dictated by some graphic-less server, but will be viewed by two different clients -- one with uber-detail mode with 1M triangles for the terrain, and one with low-detail mode with 10k triangles for the terrain. In that case, the server can't generate a result that looks perfect for both clients, so it's got to make some kind of trade-off, such as using the medium-detail terrain mesh for physics.

If this is the case, where physics and graphics are necessarily somewhat out of sync, then there are a few tricks that can be done on the graphics side to patch up obvious problems. E.g. after being told where a humanoid is standing, the graphics could trace rays down from the character's feet against its actual (current LOD) terrain mesh, and then use IK to properly anchor the character to the ground on the client-side.
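A minimal sketch of that anchoring idea, assuming the client's rendered terrain can be sampled as a heightfield (standing in for the downward ray cast) and that the IK step reduces to a single vertical offset; `sample_height` and `anchor_offset` are hypothetical helpers:

```python
def sample_height(heights, x, z):
    """Bilinearly sample a height grid at a fractional (x, z) position,
    standing in for a ray cast straight down onto the rendered mesh."""
    x0, z0 = int(x), int(z)
    fx, fz = x - x0, z - z0
    h00 = heights[z0][x0]
    h10 = heights[z0][x0 + 1]
    h01 = heights[z0 + 1][x0]
    h11 = heights[z0 + 1][x0 + 1]
    top = h00 * (1 - fx) + h10 * fx
    bot = h01 * (1 - fx) + h11 * fx
    return top * (1 - fz) + bot * fz

def anchor_offset(heights, foot_positions):
    """Vertical offset to feed into IK so the lowest-hanging foot rests
    on the currently rendered terrain; taking the max means no foot is
    pushed below the surface. Negative when the feet hover above it."""
    return max(sample_height(heights, x, z) - y
               for (x, y, z) in foot_positions)
```

A real implementation would run a proper per-foot IK solve rather than a single rigid offset, but the height query against the client's own LOD mesh is the core of the trick.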

So my initial thoughts were somewhat correct in terms of using lower-resolution meshes. Do you know of any demos that show this sort of thing? Most of the demos I've found on GPU terrain rendering don't do any physics simulation.

When you play some AAA games you will notice that some characters do sink into the floor (or ground), or float above some platforms, so I personally wouldn't worry too much about it.
What you don't want is for character collision to get the player stuck in a wall or bush, with the only recourse being to restart from a checkpoint.
(This has happened to me in Uncharted 2 in a swimming pool level, and in Uncharted 3 when I sent the boy jumping from the roof into a bush on the adjacent balcony.)
Fixing those will take your collision detection in the right direction.

In the case of tessellation, you can have the physics be roughly correct against your terrain physics mesh, and then render a sporadically updated, orthographically projected view of the terrain's displacement map around your characters. Sample that map when you do your skinning and offset the verts up or down by the displacement at that point, fading the offset out as the distance between the terrain and the verts being processed grows. (Credit to the clever guys at NV for that.)
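The fade part of that idea can be sketched per vertex as below; the linear falloff and the `fade_distance` parameter are my assumptions for illustration, as the exact falloff used isn't stated:

```python
def displacement_offset(displacement, dist_to_terrain, fade_distance=1.0):
    """Offset applied to a skinned vertex: the full terrain displacement
    right at the surface, fading (linearly, as an assumed falloff) to
    zero once the vertex is fade_distance above the terrain."""
    fade = max(0.0, 1.0 - dist_to_terrain / fade_distance)
    return displacement * fade
```

In practice this runs in the vertex shader after skinning, sampling the projected displacement map at the vertex's XZ position.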

In case of tessellation, ... render a sporadically updated orthographically projected view of the displacement map for the terrain ... and offset the verts up or down depending on the displacement at that point...

Maybe PhysX allows that. I'm afraid (albeit not sure) it would wreak havoc in Bullet, as all the contact points would get invalidated by the collision shape changing. The implication is that collision against heightfields would have to be bypassed... or is it just me?

It appears to me that it's just a better solution to use standard signal-analysis techniques. The heightmap points represent the original signal with an adequate margin of error, and interpolation must stay within this tolerable margin. Therefore, tessellation does not change the physics representation, as it is already "accurate enough" for simulation purposes, leaving tessellation as a purely graphical effect.
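That "within the margin" claim can be expressed as a simple check. `tessellation_within_margin` and the midpoint-only refinement are simplifying assumptions for illustration, not how a real tessellator is evaluated:

```python
def tessellation_within_margin(heights, tessellate, tolerance):
    """Treat the heightmap as the physics representation and check that
    tessellate(h0, h1) -- a midpoint refinement standing in for GPU
    tessellation -- stays within tolerance of linear interpolation, so
    physics never needs to see the refined surface."""
    for row in heights:
        for h0, h1 in zip(row, row[1:]):
            linear_mid = 0.5 * (h0 + h1)
            if abs(tessellate(h0, h1) - linear_mid) > tolerance:
                return False
    return True
```

If this holds for your displacement amplitude, the simulation error from ignoring tessellation is bounded by the same tolerance you already accepted when sampling the terrain.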

Vertex morphing is also introduced to avoid visual popping or other visual artifacts. There's little reason to propagate those changes to the physical representation IMHO.

The technique (which they called displacement-aware skinning) is visual-only, so it is completely orthogonal to your physics computation. The idea is to still compute collision, IK, or whatever else using your physics solution, whatever it might be. Then, when rendering the mesh itself, you offset vertices based on the displacement used for tessellation (of the rendered meshes, not the physics meshes), so that the vert positions previously computed from the bone positions -- which came from your physics computation -- match the actual displaced geometry rather than the simpler collision geometry.

I don't understand. Graphics is not a vacuum. There will always be implications.

So we now get two systems which must be kept in sync, with a serious possibility of them diverging, plus a GPU read-back -- for what? To fix a couple of pixels of encroaching? I still think this technique takes for granted that the displacement will be very large. That might be the case for generic tessellation, but in general it's not the case for terrain (the whole point of terrain is that it tolerates small errors).
This is what Hodgman is saying... live with approximations. And I agree with him.

Displace this:
Please explain how they would fix it.
The implications of the "big influence" scenario are massive. If we have to use displacement like that, then in theory bounding boxes are, by extension of the same reasoning, no longer meaningful at that level of detail. Occlusion queries would have to be performed on the real geometry... so I don't understand what the starting point is. Perhaps it's just me.

Here is another way graphics influences gameplay, or at least player reactions.
Please propose a solution. We can go to great lengths in saying how this is a purely graphical effect.

An extreme problem: consider the steps of a stair. This configuration would cut a ragdoll to pieces, unless we write code smart enough to figure out what to do in each case (such as computing a global rotation -- at which point this is no longer plain displacement-map rendering and usage).
Artistic issues: tell all your artists to deal with an additional texture fetch for something they don't really care about, or write a system which injects the proper code. Neither is free of consequences.

This is what Hodgman is saying... live with approximations. And I agree with him.

I also described something analogous to the above "displacement-aware skinning" -- allowing your visual representation to move slightly further out of sync with the physics representation in order to become visually plausible. With the usual method, the physics and graphics are slightly out of sync (with e.g. the player's feet hovering slightly, or penetrating the ground slightly). The above hack of moving the player's feet to match the graphics representation is just a method to cover up the slight visual errors arising from the necessary approximations. It's not a general solution that you should implement in every possible circumstance.

I don't understand why you're having a problem with this -- you're describing and agreeing that graphics and physics are necessarily slightly out of sync in a typical game, and the above suggestion is a hack that makes the results visually plausible, while introducing out-of-sync problems elsewhere instead. It's just a shifting around of our approximations. It's just another option to add to your toolbox, to be evaluated on a case-by-case basis.

e.g. if the graphics engine moves a character's feet using IK so they don't penetrate the floor, then their visual representation is slightly out of sync with the game's hit-boxes, so if you shoot a character in the foot, you might actually miss. Any hack like this can only be evaluated on a case-by-case basis. It might be of the greatest importance for some games (e.g. where realistic graphics without clipping is important), and it might have terrible consequences for other games (e.g. where perfect hit-detection is important).

As another example: Bungie has particle systems that perform collision detection against the z-buffer, which has serious implications: if a surface isn't visible, then it's not collidable; if a particle is off-screen, it can't collide with anything; no read-back system is mentioned, so the particles can't interact with gameplay; etc. So you don't simply throw this idea out because it's useless for most cases... you add it to your toolbox in case you ever need to simulate a large number of short-lived, non-gameplay particles that only need visually plausible collisions while on-screen -- such as sparks from gunshots.
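A toy version of that screen-space collision test, with the limitations from the example falling out naturally; `zbuffer_collide` is a made-up helper (real implementations do this in a shader against projected clip-space depth):

```python
def zbuffer_collide(depth_buffer, px, py, particle_depth, eps=0.01):
    """Screen-space particle collision: a particle 'hits' a surface when
    its projected depth reaches the depth stored in the z-buffer at its
    screen pixel. Off-screen particles simply never collide, and hidden
    surfaces never exist -- the trade-offs described above."""
    h, w = len(depth_buffer), len(depth_buffer[0])
    if not (0 <= px < w and 0 <= py < h):
        return False  # off-screen: no collision possible
    return particle_depth >= depth_buffer[py][px] - eps
```

The `eps` slop is an assumed fudge factor against depth precision; on the GPU the same comparison runs per particle with no CPU read-back, which is exactly why the results can't feed back into gameplay.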

I don't understand why you're having a problem with this -- you're describing and agreeing that graphics and physics are necessarily slightly out of sync in a typical game, and the above suggestion is a hack that makes the results visually plausible, while introducing out-of-sync problems elsewhere instead. It's just a shifting around of our approximations. It's just another option to add to your toolbox, to be evaluated on a case-by-case basis.

Thank you very much for stressing this for me. I think this is very important and was exactly the goal of my previous message.
