Naive version: measure distance from the camera. This works fine if your field of view is constant, but our cameras were all over the place. In particular the TV replay cameras used a very narrow field of view (which is the same thing as a telephoto lens) so distant bikes would appear large on the screen.

Better version: project the extents of each object into screen pixels. Anything larger than 80 pixels gets high detail, objects between 20 and 80 pixels get medium, and we use the low detail model for objects smaller than 20 pixels.
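A minimal sketch of this screen-size approach, using the 80/20 pixel thresholds from above and the projection formula quoted later in the comments (screen size proportional to 1 / (distance * tan(fov / 2))). The function and parameter names are my own, not from MotoGP:

```python
import math

def projected_height_pixels(object_height, distance, fov_radians, screen_height_pixels):
    """Approximate on-screen height of an object, in pixels.

    Uses: projected size is proportional to 1 / (distance * tan(fov / 2)).
    An object filling the half-angle of the view frustum covers half the screen.
    """
    return (object_height / (distance * math.tan(fov_radians / 2))) * (screen_height_pixels / 2)

def select_lod(pixels):
    """Pick a detail level from projected screen size (80 / 20 pixel cutoffs)."""
    if pixels > 80:
        return "high"
    elif pixels > 20:
        return "medium"
    else:
        return "low"
```

Note how this automatically handles the telephoto problem: narrowing the field of view shrinks tan(fov / 2), which grows the projected size, so a distant bike seen through a TV replay camera still lands in the high detail bucket.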

This provides consistent visual quality (you never see a low detail model drawn large enough to look ugly), but not consistent performance. If a TV camera with a telephoto lens looks down a straight section of track, all 20 bikes could end up a similar size on screen, but the framerate would plummet if we drew them all at high detail!

MotoGP version: sort objects by distance from camera, then allocate LOD on a first come, first served basis. The closest 4 track sectors get high detail, the next 6 get medium, and the remainder get low detail.
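The budgeted version might look like the following sketch. The 4-high / 6-medium budgets are the track-sector numbers from the paragraph above; the data layout and names are assumptions for illustration:

```python
def allocate_lods(objects, camera_pos, high_budget=4, medium_budget=6):
    """Sort objects nearest-first, then hand out detail levels until each budget runs out.

    `objects` is a list of (name, position) pairs; positions are (x, y, z) tuples.
    Squared distance is enough for sorting, so we skip the square root.
    """
    def dist_sq(pos):
        return sum((a - b) ** 2 for a, b in zip(pos, camera_pos))

    ordered = sorted(objects, key=lambda obj: dist_sq(obj[1]))
    lods = {}
    for i, (name, _) in enumerate(ordered):
        if i < high_budget:
            lods[name] = "high"
        elif i < high_budget + medium_budget:
            lods[name] = "medium"
        else:
            lods[name] = "low"
    return lods
```

However many objects are visible, at most four ever render at high detail, which is what caps the worst case.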

This provides consistent performance (we never draw more high detail models than we can afford) but not consistent visual quality. When the TV camera sees all the bikes coming down that straight, the ones at the back end up with low detail models drawn so big that you can see all the flaws. But hey. Framerate was higher priority, so this was a good tradeoff for us.

Note that when all the bikes are far away, this algorithm chooses high detail for some even though medium could suffice. We didn't care about that, because the goal was to improve worst case performance in order to maintain a steady 60 fps. There are no prizes for going faster than 60, so no point optimizing scenarios that are already the best case.

LOD selection is a classic case where hysteresis is useful to avoid popping, but I can't remember whether we implemented that in MotoGP.

Wouldn’t it be possible to do a variation of the Naive version that takes into account the field of view? I would think it would be possible to create some sort of weighting for objects depending on distance and the field of view, so that if the field of view was narrowed, object ‘weights’ would increase and thus still be rendered in appropriately high detail.

Joel: that’s the same thing as what I called the "better version". When you weight distance by field of view (assuming you use the right units for this weighting to give consistent results at any field of view), you end up computing the projected screen space size of each object.

Shawn, my bad. I guess I misunderstood what you were doing, or didn’t think things through properly. I guess this goes along with what you previously said:

"Moral of this story: optimize for your worst case, not the average. A technique that performs consistently ok is better than one that performs superbly most of the time, but then occasionally spikes and drops frames."

Just want to say that I really enjoyed your MotoGP series. From fog to particles, you really give some great concepts on implementing short-cuts while still preserving the visual realism.

Oh, and of course this timeless treasure nestled just on the outskirts… "Math note: the screen size of a 3D object is proportional to (1 / DistanceFromCamera / tan(FieldOfView / 2))"

I don't usually come on to your blog for specifics, but rather to get a more overall sense of how I can work around my own concepts, and as always, I was not disappointed. Thank you Shawn. One of these days, I'm going to have to pick up a copy of MotoGP (I'm still curious to get a closer look at your cube map technique for the muffler… a picture of your keyboard!? lol)