Enemy Territory provides a small graph with some network information which can be enabled by setting cg_lagometer to 1. Although this is not a feature specific to ET Pro, it has never been well-documented.

The lagometer consists of two graphs--the upper is the frame graph, and the lower is the snapshot graph. New data is added to the frame graph every time the client renders a frame (so this graph will move faster at higher framerates), and new data is added to the snapshot graph every time a snapshot packet is received from the server.

The snapshot graph is the easiest to understand--it is essentially a graph of the latency (ping) between you and the server. The graph's colors provide some additional information: green is normal, yellow indicates that the snapshot was delayed so that the server would not exceed the rate value (either rate on your client, or sv_maxRate on the server, whichever is lower), and red indicates that a snapshot was lost entirely (i.e. a snapshot sent by the server never made it to the client, probably due to network problems.)

Illustration:

In the above image, arrow 1 points to a normal snapshot with a ping of about 130ms, arrow 2 points to 700ms of packet loss, and arrow 3 points to 2 snapshots that were delayed to stay under the rate limit.

The frame graph is a little more complicated, and requires some background information about how Enemy Territory's network code works for a full understanding. In Enemy Territory, each client runs at a non-fixed frame rate (depending on configuration, system performance, etc.) while the server runs at a fixed rate of 20 frames per second. After running each frame, the server will send a snapshot to every client (so long as the client's snaps and rate settings allow it) describing everything that has changed since the previous snapshot. When the client receives this snapshot, it interpolates the values between the old snapshot and the snapshot that was just received--that is, it smooths out the movement so that things in the map don't appear to be jumping around when the client is drawing frames faster than the server is sending them. If the client doesn't receive a snapshot, it may have to guess where an object is going to be in order to keep things looking normal.

The client and the server each keep track of the current game time, and the difference between these times is reflected both in the frame graph and in object movement. For a simple example of this, imagine that the server has sent you two snapshots, snapshot a at time 12350, and snapshot b at time 12400¹, and then imagine that the client's current time is 12375. The client will interpolate² between the position sent in snapshot a and the position sent in snapshot b to determine that the object in question should be drawn in the middle of the two points sent (because 12375 (the time our client is drawing for) is exactly halfway between 12350 and 12400.)

In the situation presented in the previous paragraph, imagine the client's time is 12425 and that no additional snapshots have been received yet. In order to maintain fluid motion, the client will need to guess where objects will be (using the last known position, angle, and velocity.) The calculation for this is not as easy to explain, so an example has been omitted for brevity.

The frame graph in the lagometer represents how far the time used for the currently drawn image is from the most recently received snapshot, and whether the client interpolated or extrapolated to obtain the positions used. Normally, the client interpolates positions, which is represented in blue on the lagometer (the graph height will spike downwards when a new snapshot is received, then crawl upwards towards the baseline), although it will sometimes be yellow to indicate extrapolation (and the graph will crawl upwards away from the baseline as the client extrapolates farther and farther from the last known data.) Since the cl_timenudge cvar causes the world to be rendered a certain number of milliseconds behind or ahead of the client's internal time, negative values may cause extrapolation during normal gameplay. For example, in the first situation described above (where the client's time is 12375), a cl_timenudge value of -30 would end up pushing the time used for rendering up to 12405 (which is newer than the data we have), so a small amount of extrapolation (5ms) would be required here.

Illustration:

In the above image, arrow 1 points to an interpolated frame immediately after a new snapshot was received, and arrow 2 points to a frame where a fair bit of extrapolation was performed.

Footnotes:
¹ Note the increase of exactly 50 -- remember that the server is running at 20 frames per second, so:

1 frame × (1000 ms / 20 frames) = 50 ms

² This is technically called linear interpolation (the simplest interpolation method), which is sometimes abbreviated to lerping.

_________________
Rain

Last edited by Rain on Tue May 04, 2004 1:11 pm; edited 1 time in total

I've had an ongoing issue with a constant extrapolation cycle which repeats about every 0.75 seconds - it appears that a single frame is extrapolated:

You can't see the cycle on the top line (although you can see one yellow spike, but the cycle is clear on the lower one).

I've tried shutting down everything else I can think of on my machine (mIRC, MSN, ASE, firewall, etc.) but with no success. Is there an obvious reason for it? I wouldn't worry, but it becomes noticeable at times, and with such frequency it's a pain in the ass.

Excellent piece of work. You might want to add a link to the Lagometer explanation on Gameadmins.com, which contains a number of lagometer screens for different error conditions (although the text is not entirely accurate). Perhaps adding some more screenshots to this article will help explain things better.

... and red indicates that a snapshot was lost entirely (i.e. a snapshot sent by the server never made it to the client, probably due to network problems.)

I just noticed that quote. If I'm not mistaken, that's wrong. A red line indicates lost updates from the client to the server, which is then indicated by the server to the client and shown as a red spike/block on the snapshot graph. If updates from the server to the client get lost, the snapshot graph just stalls (i.e. doesn't display anything until the next packet is received).

You can test this by connecting to a win32 server. Clicking on the title bar of the server console stalls the server, effectively simulating 100% packet loss from the server to the client. You'll see that the snapshot graph on the client just stops at that point (it doesn't show a red block).

The upper graph (blue/yellow) slides one pixel for every rendered frame. Blue lines below the baseline mean that the frame is interpolating between two valid snapshots. Yellow lines above the baseline mean the frame is extrapolating beyond the latest valid time. The length of the line is proportional to the time.

The lower graph (green/yellow/red) slides one pixel for every received snapshot. By default, snapshots come 20 times a second, so if you are running >20 fps, the top graph will move faster, and vice versa. A red bar means the snapshot was dropped by the network. Green and yellow bars are properly received snapshots, with the height of the bar proportional to the ping. A yellow bar indicates that the previous snapshot was intentionally suppressed to stay under the rate limit.

I just noticed that quote. If I'm not mistaken, that's wrong. A red line indicates lost updates from the client to the server, which is then indicated by the server to the client and shown as a red spike/block on the snapshot graph. If updates from the server to the client get lost, the snapshot graph just stalls (i.e. doesn't display anything until the next packet is received).

This is incorrect, AFAIK. Snapshots are from server to client. The bottom line represents snapshots. Hence, the red lines must represent server -> client loss; client -> server loss is not reported at all.

Re: scrolling the server console on win32, the difference is that the server didn't send the packets, which is not the same thing as sending them and having them lost.

_________________
send lawyers, guns and money

This is incorrect, AFAIK. Snapshots are from server to client. The bottom line represents snapshots. Hence, the red lines must represent server -> client loss; client -> server loss is not reported at all.

Re: scrolling the server console on win32, the difference is that the server didn't send the packets, which is not the same thing as sending them and having them lost.

Well, it is the same to the client, which is where the lagometer is running. The client has no way of knowing the difference between a packet getting lost in transit from the server to the client or a packet that was never sent by the server at all. Hence, the measured effect of both situations must be the same. As The Carmack said, the snapshot line displays one pixel for every received snapshot. If no snapshot is received, it doesn't scroll.

Every packet sent by the server covers a certain span of time... If a packet is lost, you will receive snapshots 12400 to 12450 and then 12500 to 12550. There you go: now you know you lost a packet (12450-12500).

From what is written above, that's what I understood.

_________________
nZ/IdNotFound
NaZGūL TeaM Leader
SAWL Tech Staff

Well, it is the same to the client, which is where the lagometer is running.

Not necessarily. Snapshots have times and numbers. If the number doesn't advance, then the server simply isn't sending them.

The fact remains that the bottom portion shows snapshots, which are strictly server->client. There is no way it can show client -> server loss. Even if client -> server loss were to be shown (which it isn't), it would make no sense to do it in the same graph as snapshots.

Quote:

As The Carmack said, the snapshot line displays one pixel for every received snapshot. If no snapshot is received, it doesn't scroll.

I suspect you have misinterpreted that...

Carmack wrote:

A red bar means the snapshot was dropped by the network

Emphasis added.

The client system knows the order of snapshots, thus, it retroactively knows when they have been dropped (e.g. if the last valid snapshot was number 8 and the next valid snapshot was number 10, it is safe to say number 9 was dropped), and puts red lines in the lagometer to represent this. See void CG_ProcessSnapshots( void ) in src/cgame/cg_snapshot.c.

This is exactly what happens when you scroll the server console: the top portion of the client's lagometer goes yellow (because there are no new snapshots to interpolate) and the bottom stops advancing.

This probably doesn't belong in the documentation thread...

_________________
send lawyers, guns and money

Yes, you must be right. I made the logic error of thinking that a stalled server yields the same effect as packet loss. The timestamps in the snapshots make that assumption incorrect, obviously.

Well, all this might have a place in the documentation thread, because it's about reading the lagometer. Reading is one thing, understanding another. So maybe Rain could add some things from this discussion to the documentation, to further clarify the workings of the lagometer (and Q3A client-server communication in general).