So I am going to make a live-action networked game (or at least a template for one), and I would like some guidance on how to implement it, at least in theory.

Here's what I've gathered from what I've read so far:

You have a server holding an authoritative game state from some time in the past, Si. At set intervals, for any period for which it has inputs from all of its clients, it simulates the game state forward to a new authoritative state at time Sf, and then sends this state to all of its clients.

The clients then revert their game states to Sf, discard all cached inputs from before Sf, and replay the remaining ones to re-simulate the game state up to time N, the actual current game time. To simulate the other players, they simply extrapolate from the last inputs those players sent.
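The revert-and-replay step described above can be sketched roughly as below. This is a minimal, hypothetical sketch assuming a 1-D position state moved directly by inputs and integer ticks; the class and field names (`ReconcilingClient`, `Input`, `pending`) are my own, not from any real library.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of client-side reconciliation: on receiving an
// authoritative state for tick Sf, drop cached inputs at or before Sf
// and replay the rest to re-predict the state up to the current tick N.
class ReconcilingClient {
    // Minimal 1-D state: position moves by the input's amount each tick.
    static final class Input {
        final int tick; final double move;
        Input(int tick, double move) { this.tick = tick; this.move = move; }
    }

    double position;                                  // predicted local state
    final Deque<Input> pending = new ArrayDeque<>();  // cached, unacknowledged inputs

    void applyLocalInput(Input in) {   // called every client tick
        pending.addLast(in);
        position += in.move;           // predict immediately
    }

    void onServerState(int serverTick, double authoritativePosition) {
        position = authoritativePosition;               // revert to Sf
        pending.removeIf(in -> in.tick <= serverTick);  // discard covered inputs
        for (Input in : pending) position += in.move;   // replay up to N
    }
}
```

A real game would store full input structs and run the actual simulation step in the replay loop, but the bookkeeping is the same shape.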

Whenever the clients update, they continue to cache their inputs for that period, and also send those inputs to the server.

So does that sound about right? Is there anything I can do to improve how a client guesses what the other clients are doing? Or just improve it in general?

Additionally, I am not sure whether I should use UDP or TCP. I know UDP is generally faster. However, the server always needs a guarantee that it receives every client input, so I do not think UDP by itself is reliable enough. Would it be worth implementing some sort of reliability layer on top of UDP, or should I just use TCP instead? If the former, what would be the best way to go about it? Is there a good Java library that already handles this kind of thing?

UDP is good for peer-to-peer, as you can use UDP punch-through to get through the NAT in routers. For real-time games, my choice is for each client to maintain a database of objects, each with a position, a velocity, and a last-update time. Positions are updated locally based on velocity. When a UDP packet arrives for an object, the incoming position is adjusted forward using the incoming velocity and the time difference between server and client; the local record is then updated with the corrected position and the (unchanged) incoming velocity.
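The per-object record described above might look something like the following. This is a sketch, not the poster's actual code; the class name `TrackedObject` and its fields are assumptions, and clocks are taken to be already converted to a common timebase.

```java
// Hypothetical sketch of a per-object dead-reckoning record: positions
// advance locally by velocity, and an incoming update is projected
// forward by the packet's age before replacing the local state.
class TrackedObject {
    double x, y;             // last known position
    double vx, vy;           // last known velocity (units per second)
    long lastUpdateMillis;   // local clock time of last update

    // Advance the local estimate to 'nowMillis' using the stored velocity.
    void extrapolate(long nowMillis) {
        double dt = (nowMillis - lastUpdateMillis) / 1000.0;
        x += vx * dt;
        y += vy * dt;
        lastUpdateMillis = nowMillis;
    }

    // Apply an incoming update stamped at 'sentMillis' (already converted
    // to local clock time): project the sent position forward by the
    // packet's age, then adopt the incoming velocity unchanged.
    void applyUpdate(double px, double py, double pvx, double pvy,
                     long sentMillis, long nowMillis) {
        double age = (nowMillis - sentMillis) / 1000.0;
        x = px + pvx * age;
        y = py + pvy * age;
        vx = pvx;
        vy = pvy;
        lastUpdateMillis = nowMillis;
    }
}
```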

Measuring lag is a problem, and real-time games require it for lag compensation. It is possible to measure the round-trip delay to a server and use that to lock local time to a common time source. This really needs incoming packets to be time-stamped immediately on receipt, which is a problem if the OS sticks them in a FIFO with a large random delay before the user process gets them. My mileage has varied a bit here. It also presupposes that lag is symmetrical upstream and downstream, which is unlikely, particularly if you are receiving via a satellite dish. Anyone know how to do this better?
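The round-trip measurement described above reduces to a small calculation: the client timestamps a ping, the server echoes its own clock, and the client assumes one-way delay is half the RTT. A sketch (the `ClockSync` name is mine, and the symmetry assumption is exactly the weak point noted above):

```java
// Hypothetical sketch of NTP-style clock offset estimation from a single
// ping/pong exchange, assuming symmetric upstream/downstream delay.
final class ClockSync {
    // Returns the estimated offset (serverClock - clientClock) in millis.
    static long estimateOffset(long clientSendMillis,
                               long serverMillis,
                               long clientRecvMillis) {
        long rtt = clientRecvMillis - clientSendMillis;
        // Server time at the moment the reply arrives is approximately
        // serverMillis + rtt/2, under the symmetry assumption.
        return (serverMillis + rtt / 2) - clientRecvMillis;
    }
}
```

In practice you would take several samples and keep the one with the smallest RTT, since that exchange is the least likely to have been queued.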

The problem with UDP comes when you have one-off event data to transmit. The choice is either to use TCP or to implement your own guaranteed-delivery service over UDP. For a client-server implementation, TCP works well, and it would be my choice for turn-based (non-real-time) games. For anything peer-to-peer, it's a case of growing your own on top of UDP, so as to be able to use UDP punch-through.

Incidentally, peer-to-peer sucks for security. To reduce hacking, all decision logic must be on a server, which tends to mandate a thin-client approach. Of course you have to pay for the server bandwidth, so this is more appropriate for pay-to-play games. My hobby efforts have focused more on peer-to-peer with distributed decision making, limiting the server's role to a time server and a NAT punch-through introducer, which uses far less server bandwidth. However, a regular keep-alive ping is still required to keep track of game joiners/leavers.

I also had a go at LAN play, using multicasting. That works peer-to-peer, and you don't have to worry about NAT as you are all on the same segment. No need for lag compensation either. However, in these internet gaming days, no one can be arsed to take their computers over to a friend's and set up an impromptu net for LAN play, so really it's more of historical interest.

Riven and I have been having a very interesting time making a networked multiplayer game. We're about 2.5-3 months into it, although Riven only started full time maybe 6-7 weeks ago, when we really started solving the problems. I say "problems" but really most of it is "design"; "problems" are what comes when "design" isn't right.

For the first few weeks I was just mostly prototyping and experimenting with designs on my own with a lot of input from Riven which I kinda ignored so that I could, well, experiment. It turns out though that he was basically right on a number of things.

Firstly, UDP is hard to do, especially if you're going to do it the way you're meant to, which generally involves sending a delta against some known state that you understand the client has. It turns out that keeping track of a client's known state is quite tricky if you're never sure exactly what data has been received and when it was received. I eventually solved this problem using a system very much like the "Quake 3 networking model", but I have to say, the code was so complex and nasty that I did not like the look of it one bit, and it was very hard to maintain and refactor.
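To make the difficulty concrete, the core bookkeeping of that delta scheme is roughly the following. This is a heavily simplified sketch under my own naming (`SnapshotServer`, snapshots as plain strings); the real Quake 3 model deltas entity fields, not whole-state strings.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the bookkeeping that makes delta encoding hard:
// the server keeps recent snapshots and, per client, the sequence number
// of the newest snapshot that client has acknowledged. A delta is only
// valid against a baseline the client is known to hold.
class SnapshotServer {
    final Map<Integer, String> snapshots = new HashMap<>(); // seq -> full state
    int latestSeq = 0;
    int lastAckedByClient = -1;   // -1 means "no baseline: send a full snapshot"

    void storeSnapshot(String state) { snapshots.put(++latestSeq, state); }

    void onClientAck(int seq) {
        if (seq > lastAckedByClient) lastAckedByClient = seq;
        // Snapshots older than the acked baseline can now be dropped.
        snapshots.keySet().removeIf(seqNo -> seqNo < lastAckedByClient);
    }

    // The baseline to delta against, or null if a full snapshot is needed.
    String baselineForNextPacket() {
        return snapshots.getOrDefault(lastAckedByClient, null);
    }
}
```

Multiply this by one ack state per client, plus retransmission timing and wrap-around sequence arithmetic, and the complexity the poster complains about becomes apparent.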

So eventually we switched to TCP on Riven's advice, and it has worked out much simpler for us: we no longer have to worry about out-of-order packets or doing our own resending; TCP takes care of that, in the actually rare circumstances in which packets get lost these days. All we really have to be careful about is not sending data faster than the connection can drain, so that a backlog never builds up between server and client. The whole design of the system got a lot simpler from this point onwards as a result of the net code being so much easier. We've implemented something that looks a bit like half-duplex RMI, but very efficiently implemented.
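The "don't let a backlog grow" rule can be enforced with non-blocking NIO writes: if the kernel send buffer will not accept a message, treat the client as congested and skip or coalesce state updates rather than queue them unboundedly. A sketch under my own naming (`BackpressureWriter` is hypothetical, not the posters' actual code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch: a non-blocking write that returns fewer bytes than
// offered means the kernel send buffer is full, so we stop handing over
// fresh state for this client instead of letting messages pile up forever.
final class BackpressureWriter {
    private final WritableByteChannel channel; // must be in non-blocking mode
    private ByteBuffer pendingBytes;           // unfinished previous write, if any

    BackpressureWriter(WritableByteChannel channel) { this.channel = channel; }

    // Returns true if the message was fully handed to the kernel; false
    // means the client is congested and the caller should skip or
    // coalesce this state update.
    boolean tryWrite(ByteBuffer message) throws IOException {
        if (pendingBytes != null) {
            channel.write(pendingBytes);       // finish the old write first
            if (pendingBytes.hasRemaining()) return false;
            pendingBytes = null;
        }
        channel.write(message);
        if (message.hasRemaining()) {          // kernel buffer full: stash the rest
            pendingBytes = message;
            return false;
        }
        return true;
    }
}
```

Note that message boundaries must still be framed (e.g. a length prefix), since TCP is a byte stream; that part is omitted here.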

These days, over broadband, the actual overhead of a TCP packet is almost irrelevant compared to the overall efficiency of the protocol and design itself. The only reason we had for using UDP was that we could implicitly discard stale state and just keep sending deltas against some known state; however, since we discovered that actually making deltas is a massive problem in itself, solved completely by simply using reliable data delivery, TCP was the way to go. Latency is excellent, barely noticeably different from UDP, and we're also free to send "big data" without confusion, e.g. map chunks.

One thing to remember is that if you use TCP/IP and send 10 packets, of which packet 1 is lost, then packets 2 through 10 are held up in the receiver's stack while the missing packet 1 is re-requested and re-transmitted (head-of-line blocking). Over the internet a packet passes through a number of hops, any of which can drop it, so a single loss can delay a whole series of packets. It's really a horses-for-courses thing; neither UDP nor TCP/IP is better overall.

Packet loss in TCP starts to become an issue under severe congestion, but it should also be noted that under those same conditions, packet loss for UDP tends to be catastrophic.

I was trying to say that for a real-time game, those delayed TCP/IP packets may as well be dropped, because they arrive too late to be useful. Provided each UDP packet is useful as a stand-alone update, it is possible to design the game to withstand packet loss. When a packet is lost, the on-screen object simply has to coast a little longer on dead reckoning.

Edit: You can see this in my 2011 Mage Wars entry for Java4k (Source code available), although being a 4k game, it's not very well structured.

My philosophy here: I will need reliable data transfer somewhere in my game, and I don't think my lone brain is superior to the sum of all brains that have worked on TCP, so reimplementing TCP over UDP sounds pretty stupid when there's a perfectly fine solution just sitting there that's easier to use and has been improved/perfected since before I was born.

What about reliable unordered data transfer? I had an article around here that explained how to do it.....

Thanks, great article. I didn't know the bit about parallel TCP streams increasing UDP packet loss and the section on reliable UDP communication is gold dust. I had been assuming that the best solution would be to use parallel UDP and TCP/IP streams so as to get the best of both worlds. I'm bookmarking this for a more thorough read.

Edit: The TCP/IP Sync causing UDP packet loss is interesting. However I think it might not matter so much if a parallel TCP/IP connection was only used for low bandwidth reliable data (e.g. inventory changes). It looks like using TCP/IP for dynamically downloading map segments, whilst simultaneously using UDP for real-time play, is a poor idea. It's also interesting to see that UDP packet loss increases massively once the packet size reaches 160 bytes. There is clearly a trade off between packet overhead and packet loss.

Edit 2: The comments on the article make an interesting read too. Several people note that MMORPGs often use TCP/IP. That does make sense considering they are heavily event-driven. However, I used to play WoW several years ago, and the lag was something awful when an area got busy.

Firstly, I want to briefly talk about a protocol on top of UDP I've created that I think is useful. Secondly, I made a simple networked game (using UDP) all in a few hours last night with interpolation and extrapolation - I'll post some code.

A few concepts:

Multiple "channels" can be setup to differentiate between reliability and order.

A channel is marked as unreliable or reliable, and ordered or unordered.

A command is sent through a channel, has a retry count and a priority, has a determinable size in bytes, and can be read from/written to a ByteBuffer.

A packet is formatted [protocolId][sequence][ack][ackuence][command id+channel][command data][command id+channel][command data]..., where sequence is this packet's sequence number, ack is the last packet sequence received from the remote address, and ackuence is a 32-bit field that stores which of the past 32 packets have been received from the remote address.

The strength of this system lies in the amount of data you send that you don't care about. The more of that there is, the better this system performs over TCP (plus TCP acks one packet at a time, while this acks 32 at once). This is a library I've developed for my game engine; I'm going to make it available separately, however.
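The receive-side tracking behind the ack/"ackuence" fields described above can be sketched as below. This is my own illustrative version, not the poster's library; it ignores sequence-number wrap-around for simplicity.

```java
// Hypothetical sketch of the 32-packet ack bitfield ("ackuence"): 'latestReceived'
// is the newest sequence seen, and bit i of the bitfield records whether
// sequence (latestReceived - 1 - i) was also received. The pair is echoed
// back to the sender so it knows which of its last 33 packets arrived.
final class AckTracker {
    int latestReceived = -1;  // highest sequence number seen so far
    int bitfield = 0;         // bit i set => (latestReceived - 1 - i) received

    void onPacketReceived(int sequence) {
        if (sequence > latestReceived) {
            int shift = sequence - latestReceived;
            bitfield = (shift >= 32 ? 0 : bitfield << shift);
            if (latestReceived >= 0 && shift <= 32)
                bitfield |= 1 << (shift - 1);   // old latest becomes a bit
            latestReceived = sequence;
        } else if (sequence < latestReceived) {  // out-of-order arrival
            int bit = latestReceived - 1 - sequence;
            if (bit < 32) bitfield |= 1 << bit;
        }
    }

    boolean wasReceived(int sequence) {
        if (sequence == latestReceived) return true;
        int bit = latestReceived - 1 - sequence;
        return bit >= 0 && bit < 32 && (bitfield & (1 << bit)) != 0;
    }
}
```

The sender compares the echoed pair against its send history: any packet older than 33 sequences that was never acked is presumed lost and, on a reliable channel, is queued for retry.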

Actually it is harder than that. And worse, it is not a good idea; in fact it's a terrible idea, except in some very narrow scientific computations which won't be using primitive types anyway. Bit-exact FP = higher errors & slower performance (over primitives).
