Over the years, I've changed Quetoo's network protocol many times -- sometimes for the better, sometimes not. It's often the case that I don't discover the true nature of my changes until I try the game over the Internet. Things will look fine on localhost, and go to hell in a hand basket once 80ms of latency and 15ms of jitter come into play.

To make this easier to test and debug, I came up with a quick little hack. My engine was based on Quake2, which had a separate network channel reserved for loopback communications. This channel had a fixed-size buffer of 4 messages. The trick requires increasing that buffer enough to let, say, 1000ms worth of packets accumulate. So if your engine runs at 10hz, like Quake2, you could get away with a packet buffer of 16 messages. Mine runs at 60hz, so my loop message buffer is 64 messages large.
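The sizing math is simple enough to sketch; the helper name below is made up, but the arithmetic matches the figures above:

```c
#include <assert.h>

/* Worst-case number of loopback messages that can pile up: tick_hz
 * packets per second for max_delay_ms milliseconds, rounded up to a
 * power of two so the ring buffer index can be masked rather than
 * taken modulo. */
static int Loop_BufferSize(int tick_hz, int max_delay_ms) {
    const int needed = tick_hz * max_delay_ms / 1000;
    int size = 4; /* Quake2's original loopback buffer size */
    while (size < needed)
        size <<= 1;
    return size;
}
```

Plugging in the numbers: 10hz with a 1000ms cushion gives 16 messages, and 60hz gives 64.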

I introduced two new cvars: net_loop_latency and net_loop_jitter. The former adds constant latency to your loopback channel, and the latter adds randomized jitter. Both are specified in milliseconds. And with that, here is my new Net_ReceiveDatagram_Loop:
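A sketch of what such a function can look like, modeled on a Quake2-style loopback ring buffer. The struct layout is an assumption, and the cvars are reduced to plain floats for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LOOP_MESSAGES 64 /* ~1000ms of packets at 60hz */
#define MAX_MSG_SIZE 1400

/* Stand-ins for the real cvars, in milliseconds. */
static float net_loop_latency = 0.0f;
static float net_loop_jitter = 0.0f;

typedef struct {
    uint8_t data[MAX_MSG_SIZE];
    size_t size;
    uint32_t timestamp; /* when the message was queued, in ms */
} loop_message_t;

typedef struct {
    loop_message_t messages[MAX_LOOP_MESSAGES];
    uint32_t get, send; /* ring buffer cursors */
} loop_t;

/* Returns true and copies out the oldest queued message once its
 * delivery deadline has passed. Both client and server run this, so
 * each end contributes only half of net_loop_latency. Jitter is
 * resampled on every poll here; a real implementation might roll it
 * once per message instead. */
static bool Net_ReceiveDatagram_Loop(loop_t *loop, uint32_t now_ms,
                                     uint8_t *out, size_t *out_size) {
    if (loop->get == loop->send)
        return false; /* nothing queued */

    loop_message_t *msg = &loop->messages[loop->get & (MAX_LOOP_MESSAGES - 1)];

    const float delay = net_loop_latency * 0.5f +
        net_loop_jitter * ((float) rand() / (float) RAND_MAX);

    if (now_ms < msg->timestamp + (uint32_t) delay)
        return false; /* hasn't "arrived" yet */

    memcpy(out, msg->data, msg->size);
    *out_size = msg->size;
    loop->get++;
    return true;
}
```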

Pretty simple. Until the configured latency threshold is met, the next message is not returned. I multiply the latency by 0.5 because this function is applied by both the client and the server, so if you specify 60ms of latency, 30ms is added on each end. So far, this has proven very useful in debugging stair prediction and other network-dependent physics interactions. And it's kinda cool to be able to shape your own netgraph.

One thing to look out for is when the ISP effectively only ever delivers packets in pairs (multiplexing in bursts to different clients in turn can be more efficient, at the cost of extra lag). This of course screws up naive interpolation based upon packet arrival times (vanilla quakeworld packets have no timestamps within them, so it has no choice, while other protocols still need to deal with stalls and inaccurate clocks).

I think it's quite obvious that you could add another cvar and some logic to omit packets at random here, if you were inclined to.
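For completeness, that could look something like the sketch below, where net_loop_loss is a hypothetical cvar giving the fraction of packets to drop:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical cvar: fraction of loopback packets to drop, 0.0 to 1.0. */
static float net_loop_loss = 0.0f;

/* Roll the dice once per dequeued packet; the caller simply advances
 * past the message without parsing it when this returns true. */
static bool Net_DropPacket_Loop(void) {
    if (net_loop_loss <= 0.0f)
        return false;
    /* rand() / (RAND_MAX + 1) lies in [0, 1), so a loss of 1.0 drops everything */
    return (float) rand() / ((float) RAND_MAX + 1.0f) < net_loop_loss;
}
```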

Spike wrote: One thing to look out for is when the ISP effectively only ever delivers packets in pairs (multiplexing in bursts to different clients in turn can be more efficient, at the cost of extra lag). This of course screws up naive interpolation based upon packet arrival times (vanilla quakeworld packets have no timestamps within them, so it has no choice, while other protocols still need to deal with stalls and inaccurate clocks).

This is precisely what happens when you increase net_loop_jitter. On some Cl_Frame's, you'll get zero packets, and on others you might get two. That's exactly one of the situations I'm wrestling with right now in Quetoo. Take the ISP out of the equation for a minute. With a 30hz server tick, which is what Quetoo has had for years, it was rare to see two server updates in a single client frame, because clients typically ran at at least twice that framerate. But with a new 60hz server tick and a 60hz vsync client, it's quite often the case that the client will receive two server frames in a single Cl_Frame. The shorter interpolation interval of 16ms (i.e. a single video frame) compounds the problem, because there's less time to soak up any discrepancy before the next packet arrives. So dealing with prediction errors and lerp in this case becomes much trickier, indeed. Honestly, if you know of any open source engines that use a 60hz Quake-derived protocol, I'd love to see how they handle this!

quakeworld goes with 72hz by default. There's a few servers that crank it up to 150hz... but then vanilla quakeworld also had no prediction, so dupes were not even noticeable, and the server only responds if the client actually sent a packet, which reduces dupes too...

FTE handles it the Q3 way (if you set cl_lerp_smooth 1, anyway). That is, instead of only tracking the two latest snapshots, keep a whole series of snapshots, 64 or whatever. Add a serverside timestamp, and you will now know the exact timings instead of depending on arrival time. Then your client's time can run somewhat independently from the server: just pick the two snapshots with a timestamp on each side of your local simulation time. If you've got a missing packet then you stall your simulation time; otherwise you drift your client's time to try to retain a frame's worth of buffer or so. If you've got antilag and prediction and stuff, no one will really notice.

The result should completely negate normal jitter, as well as help cover up consistent packetloss - hurrah for staying in the past. If you have heavy-but-consistent packetloss you can just drift another frame into the past. Obviously nothing will help complete stalls, and trying to deal with packetloss bursts is just kinda futile. But jitter or consistent packetloss induced by misordered packets should be covered up quite nicely.

The catch is that sounds and particles might appear 'before' they ought to. And of course antilag can have its own issues in a similar vein, but that's true even if the client isn't simulating its own past. To be fair, ALL interpolation is a simulation of the past; you're just nudging it a bit further back.
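The snapshot-pair selection described above can be sketched roughly as follows; names like snapshot_t and Cl_LerpSnapshots are invented, and only a single origin field is interpolated for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SNAPSHOT_BACKUP 64 /* power of two, as in Quake3 */

/* Hypothetical snapshot record; a real one holds whole entity states. */
typedef struct {
    uint32_t server_time; /* serverside timestamp, ms */
    float origin[3];
    bool valid;
} snapshot_t;

static snapshot_t cl_snapshots[SNAPSHOT_BACKUP];
static int cl_snapshot_count; /* total snapshots received so far */

/* Walk back from the newest snapshot looking for the pair whose
 * timestamps bracket sim_time, then interpolate between them.
 * Returns false when no bracketing pair exists (e.g. sim_time has
 * outrun the newest snapshot), in which case the caller should
 * stall its simulation time, or extrapolate. */
static bool Cl_LerpSnapshots(uint32_t sim_time, float out[3]) {
    for (int i = cl_snapshot_count - 1;
         i > 0 && i > cl_snapshot_count - SNAPSHOT_BACKUP; i--) {
        const snapshot_t *to = &cl_snapshots[i & (SNAPSHOT_BACKUP - 1)];
        const snapshot_t *from = &cl_snapshots[(i - 1) & (SNAPSHOT_BACKUP - 1)];
        if (!to->valid || !from->valid)
            break;
        if (from->server_time <= sim_time && sim_time <= to->server_time) {
            const float frac = (float) (sim_time - from->server_time) /
                               (float) (to->server_time - from->server_time);
            for (int j = 0; j < 3; j++)
                out[j] = from->origin[j] +
                         frac * (to->origin[j] - from->origin[j]);
            return true;
        }
    }
    return false;
}
```

The drift logic (nudging client time to keep a frame's worth of buffer) would sit in the caller, adjusting sim_time each frame based on how far it trails the newest snapshot.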

Alternatively, if you're JUST after a solution to de-bunch packets, just buffer the entire packet if it would be bunched, then parse it the next frame and fake the arrival time slightly so that lerping still happens. It's lame, but should mostly work.
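A sketch of that idea with a one-slot holdover buffer (all names here are invented): a second packet arriving in the same client frame is stashed, then replayed at the start of the next frame with its arrival time nudged half a frame forward.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_MSG_SIZE 1400

/* One slot is enough to split a pair of bunched packets. */
static struct {
    uint8_t data[MAX_MSG_SIZE];
    size_t size;
    uint32_t fake_arrival; /* nudged timestamp handed to the lerp code */
    bool held;
} cl_holdover;

/* Called per arriving packet. Returns true if the packet should be
 * parsed this frame; a second packet in the same frame is stashed
 * instead, with an arrival time nudged half a frame forward. */
static bool Cl_DebunchPacket(const uint8_t *data, size_t size,
                             uint32_t arrival_ms, uint32_t frame_ms,
                             bool parsed_one_this_frame) {
    if (parsed_one_this_frame && !cl_holdover.held) {
        memcpy(cl_holdover.data, data, size);
        cl_holdover.size = size;
        cl_holdover.fake_arrival = arrival_ms + frame_ms / 2;
        cl_holdover.held = true;
        return false;
    }
    return true;
}

/* Called at the start of each client frame; returns true if a held
 * packet was copied out for parsing. */
static bool Cl_TakeHeldPacket(uint8_t *out, size_t *out_size,
                              uint32_t *out_arrival) {
    if (!cl_holdover.held)
        return false;
    memcpy(out, cl_holdover.data, cl_holdover.size);
    *out_size = cl_holdover.size;
    *out_arrival = cl_holdover.fake_arrival;
    cl_holdover.held = false;
    return true;
}
```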