Isn't a vonJacobson optimized TCP header on a dialup line going down to 3 bytes? Anyway, this doesn't matter...

I've always been a TCP advocate here and I feel very comfortable with blahblahblahh's statements.

Besides the technical bandwidth/latency issues rooted in the respective protocol itself (header size, vonJacobsen compression, handshake, ...), I'd like to emphasize the algorithmic issues. And those matter for performance, latency tolerance and beauty of design!

If you hammer out 25 msg/sec anyway, that's certainly better done with UDP. I'd call it a brute-force approach: the information transmitted is highly redundant, and it has to be, because you cannot make assumptions about what the other side received.

OTOH, the 1-2 msg/sec I talked about are possible ONLY under the assumption that delivery is guaranteed. The messages are not redundant, thanks to an algorithmic approach (dead reckoning up to the second derivative in time).

Although the messages are quite rich in information and therefore quite big, the saving in bandwidth over the brute-force approach is enormous. Combined with suitable gameplay (this is important when deciding to make a network game!! Avoid Tron.) and smart handling of a distributed timebase, the approach is also latency tolerant.
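
To make the dead-reckoning idea concrete, here is a minimal sketch (class and method names are my own invention, not from any shipped game): each update carries position, velocity and acceleration, every peer extrapolates between updates, and the sender only transmits a new message when reality diverges from the shared prediction.

```java
// Hypothetical sketch of dead reckoning up to the second time derivative.
// Between updates, every peer extrapolates the last known state, so a new
// message is needed only when the real motion diverges from the prediction.
public class DeadReckoning {
    // Extrapolate one axis: p(t) = p0 + v0*t + 0.5*a0*t^2
    public static double extrapolate(double p0, double v0, double a0, double dtSeconds) {
        return p0 + v0 * dtSeconds + 0.5 * a0 * dtSeconds * dtSeconds;
    }

    // The sender only transmits when the prediction error exceeds a threshold.
    public static boolean needsUpdate(double predicted, double actual, double threshold) {
        return Math.abs(predicted - actual) > threshold;
    }
}
```

This only makes sense when delivery is guaranteed: if the state message carrying the new velocity/acceleration could be silently lost, every peer's extrapolation would drift apart with no correction.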

Quote

As for the debate itself - I think it is easiest to look at successful games. Unreal, Quake, Tribes - all these engines need close to real-time updates and all use UDP.

Can you give me an example of a really fast-paced, successful game which has used TCP/IP for network communication?

Eat shit!! Millions of flies....

And it depends. Unreal, Quake ... all FPS, very fast, very inexact ... and they all suck to play on the internet if the line is even a bit less than 'excellent++'. They are LAN games. Brute force. CS is highly latency dependent and asymmetric - very annoying.

WarBirds, one of the online action games pioneers, goes for TCP AFAIK. 200 players on a server - no problem. And no need for a 30ms ping.

Success of a game does not say very much about the quality of the underlying technology. And that's what we are talking about here, isn't it?

I myself like to use the millions-of-flies argument - but I use it mostly to note that it is not important what _common_ people do. But are you suggesting that Carmack/GarageGames/the Unreal team etc. are just misguided flies?

I will not agree with you that an excellent line is needed. I often play America's Army (it is based on Unreal). Unfortunately most servers are in the US, so I have around 300-400 ms ping and 10-20% packet loss. And I'm perfectly able to compete with other people in most situations (except the jump-from-behind-corner-you-are-dead-in-200ms case).

I'm not able to comment on the WarBirds case - I have not played it - but I believe you that they were able to manage it decently with TCP/IP. The more 'inertia' an avatar has, the more acceptable TCP/IP is.

TCP/IP is a lot simpler. Fewer places to get things wrong. You can focus on the logic of the game instead of trying to reimplement reliable messages on top of UDP. But I still claim there is a certain subset of games which are playable over the internet with UDP and would not be playable with TCP.

As for the Tron game - I think it will be laggy regardless of protocol. You need the exact path of each player (not only the latest position, which you could interpolate to), and latencies of even a tiny fraction of a second would literally kill players running head to head.

When considering the tron game I did think about doing stuff where the bike keeps going in one direction until the server tells it otherwise.

However, the game was really a test platform for trying out networking code (and Java3D at the time). I was really more concerned about the MMORPG I've been playing with. In the case of someone running around a map, I'm not really sure how else to implement the comms. I'd actually be quite interested in any ideas.

At the moment the only way I can think of to get any sort of decent response is the "brute force" approach: just send a message every so many millis if the particular player/AI has moved.

I did consider:

Sending a message when the client starts moving, stops moving or changes direction. But the system would still have to support people running in circles (as people on MMORPGs so often do).

My main concern is the response time between one player acting and another player seeing that action.

Quote

Sending a message when the client starts moving, stops moving or changes direction. But the system would still have to support people running in circles (as people on MMORPGs so often do).

It depends on how you plan to control avatars. If we are talking about mouse-click-on-target, then just transmit current position + destination on each click. If you want some kind of full control like Tomb Raider, the situation is a bit more complicated - but with a classic MMORPG you are in a very good position: there is almost no arcade skill involved. This means that one person seeing another person one meter from its real position will not cause you any problems.

For a non-arcade MMORPG, I would certainly suggest you use TCP/IP. Most data has to go through a reliable route (all events, trades, chat messages) - most of it even in the correct order. You can consider adding a second UDP connection for some other data - but I really doubt it will gain you much.

Now, as far as the actual data to send is concerned... I think this is one of the last places where you count every byte, like in programming old Atari games. With guaranteed TCP/IP you have more room for being smart - you can, for example, transmit a delta of the position, encoded in some way that makes small numbers shorter.
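
One way to make small numbers shorter is ZigZag plus a base-128 varint, the scheme several serialization formats use. This is only an illustrative sketch (the class name is made up); it is viable over TCP precisely because delivery and ordering are guaranteed, so a stream of deltas never desynchronizes:

```java
import java.io.ByteArrayOutputStream;

// Illustrative sketch: encode signed position deltas so small moves cost
// one byte. ZigZag maps signed values to unsigned (-1 -> 1, 1 -> 2, ...),
// then a base-128 varint drops the leading zero bytes.
public class DeltaCodec {
    public static byte[] encode(int delta) {
        int z = (delta << 1) ^ (delta >> 31);       // ZigZag encode
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((z & ~0x7F) != 0) {                  // emit 7 payload bits per byte
            out.write((z & 0x7F) | 0x80);           // high bit = "more bytes follow"
            z >>>= 7;
        }
        out.write(z);
        return out.toByteArray();
    }

    public static int decode(byte[] buf) {
        int z = 0, shift = 0, i = 0, b;
        do {
            b = buf[i++] & 0xFF;
            z |= (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return (z >>> 1) ^ -(z & 1);                // undo ZigZag
    }
}
```

A one-unit move costs a single byte instead of a full four-byte coordinate.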

I know that you would like updates as close to real time as possible. But as long as the life of a player does not depend on a few degrees of aiming line, or on fraction-of-a-second timing when throwing a cooked grenade, sticking to TCP/IP will save you many worries.

Some name dropping here (please forgive, I feel I must make sure those who made it happen get proper credit) --

The previous posting's 'vonJacobson' is really "Van Jacobson", who did enable PPP links to use TCP header compression. I worked at LBNL shortly after I worked in the game biz - not in the network technologies group, but I did get to know them reasonably well (http://www-nrg.ee.lbl.gov/). They were incredibly influential in the network world: the folks who made 'traceroute', 'tcpdump', and 'libpcap', plus hundreds of low-level optimizations to IP stacks and TCP tools in the 90's. Van Jacobson is an incredibly brilliant man who did a lot of amazing and unimaginable things, and he led that group (I cannot give him the credit he deserves). However, his TCP header compression was intended mostly for very slow point-to-point links, which are disappearing every day; those that remain are out-optimizing his compression (e.g. everyone's compressing streams these days).

Back into the UDP v. TCP argument -- In the late 80's (I'll call this the pre-Quake era) everything was IPX (Novell), but there was some effort to figure out what could be converted to TCP/IP. I was fortunate enough to work with a very brave, ingenious, and adventurous lot who were trying to make games playable on the Internet -- everything was TCP then. Back in those days, when one person lost the connection, even in a game of backgammon, the whole game crashed (lockstep).

Later, a few innovative and ingenious souls started to realize that the problem was not the nature of the network but how they were looking at the network. I personally think of Bill Lipa, who was the Architect at TEN, but Quake came out a few weeks after TEN did Duke Nukem over UDP, so I'm not sure whether it was Bill Lipa or John Carmack who thought of it first (I know I'm biased since I worked at TEN in those days, but Bill Lipa was a network programming guru, while John Carmack was a god mostly in graphics and non-networked game stuff).

I have to simultaneously agree and disagree with 'blahblahblahh'. I've not read the X-Wing-versus-TIE-Fighter article, but I have to state quite bluntly: use TCP until you understand whether to use UDP in place of TCP. There is quite simply no reason to blindly assume you will get better performance out of UDP than TCP, and using UDP will make things much more complicated.

1. "Guaranteed Delivery"

Agreed, it is generally *NOT* worth doing on your own - if you don't understand this, please accept it as fact! Many, many really smart people have spent half of their lives on this and made only tiny advancements! If it's critical that your packet gets there, then use TCP, and focus on your game design.

2. "Connection negotiation and maintenance is not trivial."

Agreed. Again, people have spent half of their lives on this and made tiny advancements. Save yourself the grief and accept this as fact! Unless you want to dedicate your life to a network protocol stack, this is again a reason to use TCP.

3. "TCP is normally as fast or faster than UDP. "

Here's where I disagree: it depends largely on whether you measure 'fast' as latency or as bandwidth.

"Normally, UDP is very inefficient and wastes bandwidth",

True, UDP wastes bandwidth, but it's a trade for latency. If raw packet delivery is more important than sequence or state, and the game itself can deal with these factors on a frame-by-frame basis, then that's a great place to be. The fact is that in some games it's important; in others it's not.

4. "The vast majority of games developers only have one problem with TCP (but often mistakenly believe they need more!). They need to remove the "in-order arrival". Whenever you hear "TCP is sloooow" or "TCP has high latencies", you are listening to someone who is 99% likely to have been bitten by this problem but not understood it. The problem is that if you send 30 packets, and the fifth one gets dropped, but packets 6-10 arrive OK, your network stack will NOT allow your application to see those last 5 packets UNTIL it has received a re-transmitted packet 5."

Agreed, a single retransmitted packet can seem tragic for a developer trying to make their game playable across the internet, which often makes developers inappropriately reach for UDP, thinking it will fix their problem - and usually it will not. But there is something appealing about the idea of being able to drop packet '5' and proceed.
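
To make the head-of-line blocking concrete, here is a toy model (not a real stack, all names invented) of a TCP-style receive buffer: packets arriving after a gap are held back from the application until the missing sequence number finally arrives via retransmission.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of TCP-style in-order delivery: data that arrives
// after a gap is buffered, not delivered, until the gap is filled.
public class InOrderBuffer {
    private final Map<Integer, String> pending = new HashMap<>();
    private int nextSeq = 0;

    // Returns everything deliverable to the application after this arrival.
    public List<String> receive(int seq, String payload) {
        pending.put(seq, payload);
        List<String> delivered = new ArrayList<>();
        while (pending.containsKey(nextSeq)) {      // drain the contiguous prefix
            delivered.add(pending.remove(nextSeq));
            nextSeq++;
        }
        return delivered;
    }
}
```

If packet 1 is lost, packets 2-9 all sit in `pending` for a full round trip even though they arrived perfectly well; that stall is the latency people blame on TCP.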

5. "Lastly, there ARE alternatives to TCP and UDP. Not surprisingly, since almost every game finds that neither is really good enough (the games that just go TCP only suffer weird stuttering, the ones that are UDP only often get players freezing, or their guns not firing because of lost packets). The last time I looked, ENet seems to be the best widely/freely available implementation around, but people have suggested several others to me, including: RAKnet (sp?), RDP (covered by an official internet RFC) "

OK, here I fully disagree. Random replacements for TCP on top of IP invoke the same arguments you were posing earlier - working against your initial premise. I would say quite simply: use TCP for reliable delivery, use UDP where latency is at a premium.

Quote

I would say quite simply: use TCP for reliable delivery, use UDP where latency is at a premium.

Nail on the head here, as far as it goes - but what about when you need reliable delivery in a situation where latency is at a premium?

Quote

1. "Guaranteed Delivery"

Agreed, it is *NOT* worth doing on your own - if you don't understand this, please accept it as fact! Many, many really smart people have spent half of their lives on this and made only tiny advancements! If it's critical that your packet gets there, then use TCP, and focus on your game design.

I've got to disagree here. I agree that the generic case might be non-trivial, but if you are writing something specific on top of UDP you can make it as complex or as trivial as you like. At its most trivial: you send the message with a sequence id, and if you don't get an ack back, you resend it.
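
A sketch of that most-trivial scheme, with the actual DatagramSocket I/O left out so the bookkeeping is visible (all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal "send with a sequence id, resend if no ack" bookkeeping.
// The transport layer (e.g. a DatagramSocket) is deliberately omitted.
public class ReliableSender {
    private static class Entry {
        final String msg; long sentAt;
        Entry(String m, long t) { msg = m; sentAt = t; }
    }

    private final Map<Integer, Entry> unacked = new LinkedHashMap<>();
    private int nextSeq = 0;

    // Assign a sequence id and remember the message until it is acked.
    public int send(String msg, long nowMillis) {
        int seq = nextSeq++;
        unacked.put(seq, new Entry(msg, nowMillis));
        return seq;
    }

    public void onAck(int seq) { unacked.remove(seq); }

    // Call periodically: returns the sequence ids due for retransmission.
    public List<Integer> dueForResend(long nowMillis, long timeoutMillis) {
        List<Integer> due = new ArrayList<>();
        for (Map.Entry<Integer, Entry> e : unacked.entrySet()) {
            if (nowMillis - e.getValue().sentAt >= timeoutMillis) {
                e.getValue().sentAt = nowMillis;    // restart the timeout
                due.add(e.getKey());
            }
        }
        return due;
    }
}
```

Note this trivial version has a fixed timeout and no congestion awareness; that simplicity is exactly the trade being argued about in this thread.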

Quote

2. "Connection negotiation and maintenance is not trivial."

Agreed. Again, people have spent half of their lives on this and made tiny advancements. Save yourself the grief and accept this as fact! Unless you want to dedicate your life to a network protocol stack, this is again a reason to use TCP.

Again, if you're writing something specific you probably don't need all of the connection states - maybe four: 'I'm constructing the connect-me message', 'I've sent the connect-me message', 'I'm connected' (received the server response), and some kind of error state to say it all went horribly wrong.
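
Those four states could be sketched as a tiny state machine (names invented for illustration), with the legal transitions made explicit:

```java
// Hypothetical sketch of the four connection states described above.
public class HandshakeStateMachine {
    public enum State { CONSTRUCTING, SENT_CONNECT, CONNECTED, ERROR }

    private State state = State.CONSTRUCTING;

    public State state() { return state; }

    public void connectSent()     { require(State.CONSTRUCTING); state = State.SENT_CONNECT; }
    public void serverResponded() { require(State.SENT_CONNECT);  state = State.CONNECTED; }
    public void failed()          { state = State.ERROR; } // allowed from anywhere

    private void require(State expected) {
        if (state != expected) {
            throw new IllegalStateException("expected " + expected + " but was " + state);
        }
    }
}
```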

Completely agree about UDP being a bandwidth waste and TCP having latency problems.

Quote

The problem is that if you send 30 packets, and the fifth one gets dropped, but packets 6-10 arrive OK, your network stack will NOT allow your application to see those last 5 packets UNTIL it has received a re-transmitted packet 5."

The more often hit issue is more likely a buffer-size one: a game update message is, for example, 100 bytes, my TCP send buffer is 8k, so the message won't get sent right away; likewise on the receiving end.

There is also the point that TCP is presented as a stream of data (what it's designed for), whereas UDP is datagrams. So if what I need is reliable datagrams, then I should be using UDP plus a bit of roll-your-own for the reliability side.

These comments are my observations, and if there are any reasons why they don't hold up - other than 'this is the way it's always been done' or 'because I told you' - then I would be interested to hear them.

Generally I live by the philosophy of 'not reinventing the wheel'. If you really understand TCP, have done all the proper tuning for how you are using TCP, and really *REALLY* understand what you're doing, then by all means make a UDP implementation. It's no walk in the park, you *WILL* spend a great amount of time trying to debug it and make it work properly, and you could still wind up with something only as effective as well-tuned TCP.

Quote

The more often hit issue is more likely a buffer-size one: a game update message is, for example, 100 bytes, my TCP send buffer is 8k, so the message won't get sent right away; likewise on the receiving end.

There is also the point that TCP is presented as a stream of data (what it's designed for), whereas UDP is datagrams. So if what I need is reliable datagrams, then I should be using UDP plus a bit of roll-your-own for the reliability side.

This is quite simply not true. There are mechanisms in most TCP stacks, such as the Nagle algorithm, and it *IS* the default behavior to use them because they reduce overall bandwidth utilization. Those algorithms are easily turned off.

Let me say straightforwardly and bluntly: your statements are a clear indication that you do not understand TCP well enough, and you would likely spend an inordinate amount of time reinventing the wheel and still not get any better performance.

Quote

It's no walk in the park, you *WILL* spend a great amount of time trying to debug it and make it work properly, and you could still wind up with something only as effective as well-tuned TCP.

We wrote a number of test cases over the period of a couple of days using different message sizes and TCP settings, and couldn't get decent performance. Our first stab at implementing some UDP add-ons (connections and delivery options) took a little over a day. We have just discovered a couple of bugs, but even now we have something in UDP that far outperforms the TCP options we tried. Using a 100mbit switched network we sometimes had messages that took 800ms+ to travel; we now rarely get messages that take more than 16ms. We haven't sped the network up by 5000%, just the worst case, and the 'normal' case has a faster response time.

Quote

Let me say straight forward and bluntly your statements are a clear indication that you do not understand TCP enough

The fact that we wrote a working UDP wrapper in less time than it took us to investigate some TCP options suggests the UDP option, in our case, isn't that complex to write.

I'm not trying to say that UDP is the best way, just put forward our observations.

Quote

We wrote a number of test cases over the period of a couple of days using different message sizes and TCP settings, and couldn't get decent performance. Our first stab at implementing some UDP add-ons (connections and delivery options) took a little over a day. We have just discovered a couple of bugs, but even now we have something in UDP that far outperforms the TCP options we tried. Using a 100mbit switched network we sometimes had messages that took 800ms+ to travel; we now rarely get messages that take more than 16ms. We haven't sped the network up by 5000%, just the worst case, and the 'normal' case has a faster response time.

The fact that we wrote a working UDP wrapper in less time than it took us to investigate some TCP options suggests the UDP option, in our case, isn't that complex to write.

I'm not trying to say that UDP is the best way, just put forward our observations.

Did you say it's faster for you to spend "a little over a day" making your own than to find out what options you can use to tune TCP? Hey, by that logic you should write your own database - configuring and using a canned database like MySQL can be difficult. I don't know about you, but I can find dozens of articles that talk about optimizing TCP performance by spending 5 minutes googling.

Also expect to spend a bunch more time when you use that thing in a real situation (like inside your app, across the internet cloud).

I'm genuinely interested to learn some of these TCP optimisations if they are as great as you say, but what we tried didn't work for us. Can you give us some technical explanations rather than 'use google'?

Probably the most important is turning off Nagle's algorithm, which is designed to gain bandwidth at the expense of latency:

socket.setTcpNoDelay(true); // TCP_NODELAY

There are other things to work on, but the time difference you were seeing doesn't make sense, considering you should get almost no packet loss on that switch. Of course your packets will still be a bit bigger with TCP (more header info), but with no packet loss you should get nearly the same latency from TCP and UDP. If you are not within a millisecond or two, something is very wrong.
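
For reference, the Java-level knobs discussed in this thread can all be applied to a socket before connecting. This is just a sketch of the settings mentioned, not a recommended configuration; the buffer sizes are assumptions, and in any case only hints that the OS may round or clamp:

```java
import java.net.Socket;
import java.net.SocketException;

// Sketch: the TCP options visible from Java that this thread discusses.
public class TcpTuning {
    public static Socket tune(Socket s) {
        try {
            s.setTcpNoDelay(true);              // disable Nagle's algorithm (TCP_NODELAY)
            s.setSendBufferSize(64 * 1024);     // hint only; the kernel may adjust it
            s.setReceiveBufferSize(64 * 1024);
            return s;
        } catch (SocketException e) {
            throw new RuntimeException("could not set socket options", e);
        }
    }
}
```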

Quote

Probably the most important is turning off Nagle's algorithm, which is designed to gain bandwidth at the expense of latency:

socket.setTcpNoDelay(true); // TCP_NODELAY

Yup, that's what we thought too, but we still saw huge latency.

Quote

Of course your packets will still be a bit bigger with TCP (more header info)

No biggy - it's not *that* much more info, and we have bandwidth to spare atm.

Quote

but with no packet loss you should get nearly the same latency from TCP and UDP. If you are not within a millisecond or two something is very wrong.

Again, this is what we thought. We tried changing buffer sizes too; they all helped some, but nothing got us to the point of 'hey, we've cracked it'. The send/receive buffers and TcpNoDelay seem to be the only settings we have to play with under Java. We also tried a completely remote network to check if that helped, and it didn't seem to either. Kevin went away and tried using NIO channels too; that again helped some, but not enough. So we looked at UDP, and it seemed to beat the latency. We drew some conclusions from this that may not be 100% accurate, but seem to explain our situation.

In the interest of thoroughness I was going to dig out my old test code and do some more playing, but I couldn't find it, so I rewrote it - and damn if I didn't get good ping times (average 2 millis, worst case about 4 millis, with TcpNoDelay set; I'm sure we tried that before, but ho hum). Our UDP code uses message objects, so Kevin knocked together a TCP version of the endpoints and we are still at sensible latency. I guess we did something wrong in our tests, but whatever it was, we both did it independently, and without the code I can't check. So: a large slice of humble pie, and a bald spot from all the head scratching.

Of course the real problem with TCP comes in with packet loss (and the subsequent retransmits and re-sequencing), which is when you start having to get into really interesting experiments with what works best for your particular game.

Quote

I myself like to use the millions-of-flies argument - but I use it mostly to note that it is not important what _common_ people do. But are you suggesting that Carmack/GarageGames/the Unreal team etc. are just misguided flies?

There is no doubt that initially most of them had no real idea what they were doing. Quake 1 and 2 both had rubbish network code (and Q2 even came late enough, IIRC, to benefit from seeing the PlanetQuake improvements - however, my knowledge of Q2 is very sparse; I didn't play it much and haven't studied it, whereas OTOH I've spent thousands of hours on Q1 and Q3).

But that was THEN...see below for a note on Q3.

Quote

I will not agree with you that an excellent line is needed. I often play America's Army (it is based on Unreal). Unfortunately most servers are in the US, so I have around 300-400 ms ping and 10-20% packet loss. And I'm perfectly able to compete with other people in most situations (except the jump-from-behind-corner-you-are-dead-in-200ms case).

OK, but that's NOTHING! Quake 3 can cope with 40%-50% packet loss, no problem (well enough, at least, to still play the game effectively, modulo the difficulties/differences that may ALSO be caused by the high ping time that usually goes hand-in-hand with high packet loss).

Quote

As for the Tron game - I think it will be laggy regardless of protocol. You need the exact path of each player (not only the latest position, which you could interpolate to), and latencies of even a tiny fraction of a second would literally kill players running head to head.


Indeed; it is a sufficiently difficult problem with Tron to merit some serious academic study - comparing players' effectiveness against the same bots on simulated, artificially-delayed systems. E.g. run Tron on a 10 Mb LAN and run ten sets of tests, from "no additional latency" up to "+100ms latency" or so. This could be compared to the results of similar tests that IIRC have already been performed for generic "twitch" games a la Quake/Doom.

Certainly, Tron on TCP is pretty much pointless without the guarantee of LAN-only games. One dropped packet can screw the synchronization (between players' differing views of the game state) for the next 100-3000 milliseconds.

Quick side note for endolf - who by the sound of things was trying sensible things to solve a weird problem:

This may be a red herring, but in future also investigate your hardware. I've generally stuck to one or two top-tier branded NICs since first discovering, many years ago, how much seriously, terrifyingly screwed hardware was around. It's probably not worth naming them, because I've not re-analysed who's best recently, and I now know the quirks so well I don't need to change. However, I can assure you that even the most expensive NICs from the most famous manufacturers have some SERIOUS problems - e.g. I've seen 3COM 100Mbit cards that can "accidentally" get stuck at approx 1Mbit (yes, really!); 3COM in particular also had a problem for a long while with people selling things that looked like a 3COM, smelled like a 3COM (!), but weren't really - carefully branded clones.

It's crazy the first time it happens to you, but sometimes you find that a factor-of-a-hundred (or more) performance problem on your LAN can be attributed to one piece of dodgy hardware - from a reputable manufacturer.

Quote

Yup, that's what we thought too, but we still saw huge latency.

No biggy - it's not *that* much more info, and we have bandwidth to spare atm.

Again, this is what we thought. We tried changing buffer sizes too; they all helped some, but nothing got us to the point of 'hey, we've cracked it'. The send/receive buffers and TcpNoDelay seem to be the only settings we have to play with under Java. We also tried a completely remote network to check if that helped, and it didn't seem to either. Kevin went away and tried using NIO channels too; that again helped some, but not enough. So we looked at UDP, and it seemed to beat the latency. We drew some conclusions from this that may not be 100% accurate, but seem to explain our situation.

As my initial, inordinately long post has now kicked up quite a few interesting comments and ideas, I'm weighing back in with a generalised response.

FYI, I come from a strong academic background (Cambridge University, where being/becoming familiar with all the prior art is heavily encouraged) and now work on the Grexengine (an MMOG technology - grexengine.com), although only tangentially on the networking side (I choose protocols, not implement them). But I've never been a pro network engineer, and most of my knowledge is a combination of talking to real net engineers, detailed academic study of the systems and protocols, and my own experiences over 9 years. So you can take what I say with a pinch of salt...

What I'd like people to take home as the core points:

1. TCP vs UDP: It's not a simple argument; be careful about making any decisions that will cost you significant effort to implement.

2. UDP + (some parts of TCP): is VERY difficult to get right; it's one of the areas of programming that is still "hard", as opposed to merely time-consuming.

3. Of the three options covered in those two points, each is suitable for a significant percentage of games. I say "suitable", not "perfect" - the perfect option is the one that is technologically best for the game; a suitable one is affordable, not too risky, and provides acceptably good performance.

4. A depressingly large number of people who offer advice on these topics are naive or ignorant at best (i.e. they have gaps in their knowledge), or just plain wrong at worst. Be careful what advice you take! (and as endolf discovered, it's often not as simple as just experimenting with the alternatives people suggested and benchmarking them - there are nasty non-obvious subtleties in accidentally mis-implementing many of the protocols).

To illustrate each of these a bit further (some I covered initially):

1: Well, I guess the length of this discussion already makes that clear enough. A nod to all those who've contributed examples and counter-examples and shown how non-trivial a question this really is. I'd quote more, but this board makes it difficult to quote lots of posts in one, sigh.

2:

Quote

Again people have spent half of their lives on this and gotten tiny advancements. Save yourself the grief and accept this as fact! Unless you want to dedicate your life to a network protocol stack, again a reason to use TCP.

Coilcore gave a good explanation of how hard it is to replicate even just some of the excellent work of Jacobson (sp?) - and many others. Not by any means impossible, and most of it is documented in research papers - but it wasn't done by "ordinary" developers: much was done by hard-core specialists.

3:

Quote

(in response to my quote: "Lastly, there ARE alternatives to TCP and UDP. Not surprisingly")

OK, here I fully disagree. Random replacements for TCP on top of IP invoke the same arguments you were posing earlier - working against your initial premise. I would say quite simply: use TCP for reliable delivery, use UDP where latency is at a premium.

Most of coilcore's expansions on what I said originally are extra details that I might myself have come up with - we concur strongly.

However, there seems to be one point coilcore doesn't appreciate: there are games where neither UDP nor TCP - nor both together - "work". Some absolutely require simultaneous low latency and guaranteed delivery - and e.g. even using TCP as a control channel to police the UDP (a relatively easy "first draft" way of implementing this) is itself not "fast" enough (in the cases I've seen, it's too high latency, because a dropped TCP packet can delay the realisation that a UDP packet got dropped, delaying the resend).

(but note: this is NOT the case for the majority of games - for most games, I agree with coilcore, and my initial arguments for not doing so come to the fore)

For some of those I've looked at, separating one stream into TCP bits and UDP bits (i.e. sending the data that needed reliability down TCP, the rest over UDP, etc.) was highly undesirable - but in the overall scheme of things it saved enough implementation time and hassle that it made sense financially. Others just HAD to have the best of both worlds. As someone else mentioned, TCP was designed as a generic excellent-average-case protocol; some games sadly pay the price for that (if they use TCP at all).

So, people shouldn't dismiss "roll your own" (RYO) out of hand - but OTOH, there are SO MANY people who assume they should RYO in the first place that perhaps I should say as you do, just to combat the huge weight of current opinion and even the scales!

As I said before, think several times (lots) before RYO'ing - but don't completely dismiss it if it looks like it might be necessary. You can probably solve what you thought needed RYO by e.g. using a clever higher-level algo like in Quake3 - but maybe you can't.

4:

Quote

...programming guru while John Carmack was a god, but mostly in graphics and non-networked game stuff).

Indeed, a very good point: it's very important in the games industry to be wary of the fact that most of us are brilliant coders, but only highly experienced in our own specialist areas - and can be quite naive in others.

I've read Carmack's long explanation of Q3's networking technology, which Brian Hook requested from him and posted to the MUD-DEV mailing list. A search for "Hook, Carmack, Quake3, MUD-DEV" on Google should find the post in the MUD-DEV archives. Nicely explained, and a reasonably good approach - although some people consider it a bit more ground-breaking than is perhaps fair.

In summary, by the time id got to Q3, they'd come up with a good algorithm for network play. As you explained, possibly the biggest problem has not been the technology but the higher-level algorithms/protocols/etc. and the decisions on HOW to use the available tech; Q3's networking is a good solution for a Q-like game.
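
The core of the approach discussed here - each client receives a delta of the gamestate against the last snapshot it acknowledged, sent over an unreliable channel - can be sketched roughly like this. This is my own toy reconstruction, not id's code; the gamestate is reduced to a flat int array for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of delta-against-last-acked-snapshot networking: a lost
// delta packet is harmless, because the next delta is computed against
// the same (still unacknowledged-beyond) baseline the client already has.
public class SnapshotDelta {
    // Server side: list the (index, newValue) pairs changed since the
    // snapshot the client last acknowledged.
    public static List<int[]> delta(int[] ackedBaseline, int[] current) {
        List<int[]> changes = new ArrayList<>();
        for (int i = 0; i < current.length; i++) {
            if (ackedBaseline[i] != current[i]) {
                changes.add(new int[]{i, current[i]});
            }
        }
        return changes;
    }

    // Client side: rebuild the new snapshot from the acked baseline.
    public static int[] apply(int[] baseline, List<int[]> changes) {
        int[] next = baseline.clone();
        for (int[] c : changes) next[c[0]] = c[1];
        return next;
    }
}
```

The elegance is that reliability falls out of the acknowledgement scheme itself: nothing ever needs retransmitting, because stale deltas are simply recomputed against an older baseline.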

Quote

I've read Carmack's long explanation of Q3's networking technology, which Brian Hook requested from him and posted to the MUD-DEV mailing list. A search for "Hook, Carmack, Quake3, MUD-DEV" on Google should find the post in the MUD-DEV archives. Nicely explained, and a reasonably good approach - although some people consider it a bit more ground-breaking than is perhaps fair.

Looked it up and read it. I think the solution is a typical Carmack solution and reminds me of the early published Doom source code. This is very typical of game coding, in contrast to application coding: the solution is absolutely focused on the current topic - no abstraction, no layering, as simple as possible. Maybe that is one of Carmack's biggest talents. KISS - keep it simple, stupid.

When I start doing this kind of construction, as a not-so-talented game coder, I think of optional clients that only need parts of the information, think of scalability, of how to hide networking at all, of setting up distributed databases of several kinds, of caring for minimal bandwidth usage... I just cannot manage to focus on transmitting the gamestate of one specific FPS. I cannot even develop an idea of what a 'gamestate' could be...

Just totally different from a Carmack approach. I'm afraid my system for merely identifying things is more complex than the whole Q3 network logic...

One thing I don't remember being mentioned in this thread: there is a desired middle state between UDP and TCP that, as far as I know, doesn't exist.

UDP is a bunch of atomic datagrams. TCP is a reliable stream of bytes.

I think many developers *want* a reliable and atomic protocol but don't need the stream concept. Something more than UDP but less than TCP. A protocol where packet #5 doesn't prevent packet #6 from arriving, but #5 will be retransmitted ASAP.
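That "reliable but unordered" middle ground can be sketched quite compactly. The following is my own illustration, not anything from the thread: each datagram carries a sequence number, the receiver delivers whatever arrives immediately (so there is no head-of-line blocking), and the sender keeps unacknowledged packets around for retransmission. The transport itself is simulated in memory for brevity; all names here are invented for the example.

```java
import java.util.*;

public class ReliableUnordered {
    record Packet(int seq, String payload) {}

    static class Sender {
        private int nextSeq = 0;
        final Map<Integer, Packet> unacked = new HashMap<>();

        Packet send(String payload) {
            Packet p = new Packet(nextSeq++, payload);
            unacked.put(p.seq(), p);                 // keep until acked
            return p;
        }
        void onAck(int seq) { unacked.remove(seq); }
        Collection<Packet> retransmit() { return unacked.values(); } // resend ASAP
    }

    static class Receiver {
        final Set<Integer> seen = new HashSet<>();
        final List<String> delivered = new ArrayList<>();

        // Deliver immediately, in whatever order packets arrive; drop duplicates.
        int receive(Packet p) {
            if (seen.add(p.seq())) delivered.add(p.payload());
            return p.seq();                          // the ack to send back
        }
    }

    public static void main(String[] args) {
        Sender tx = new Sender();
        Receiver rx = new Receiver();
        tx.send("state #5");                         // pretend this one is lost in transit
        Packet p6 = tx.send("state #6");
        tx.onAck(rx.receive(p6));                    // #6 arrives first, delivered at once
        for (Packet p : List.copyOf(tx.retransmit()))
            tx.onAck(rx.receive(p));                 // #5 catches up later
        System.out.println(rx.delivered);            // [state #6, state #5]
    }
}
```

The point is exactly the one made above: packet #6 is handed to the game the moment it arrives, while #5 is quietly retransmitted, which TCP's in-order stream cannot do.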

Personally I feel that this desire stems from the developer's desire for a quick/easy solution instead of doing the Right Thing(TM) and writing their game protocol such that current updates are not dependent on past state.

IP itself provides checksums, fragmentation, QOS (sometimes), and TTL management. (TCP and UDP typically both run on IP, but don't have to.)

Some consider that TCP offers so much that it's never worth not using it.

Note that TCP/IP is also:
- poor at QOS
- very poor at congestion control
- very poor at encryption

Other protocols do these better, but some things (congestion control, for example) only work well when everyone on the network is using them. We could stop most DDOS attacks easily, if only the world were willing to dump IP and switch to ATM. Another example: IPv6 is good at encryption.

TCP is not perfect, and it doesn't do the kitchen sink - but perhaps it does everything else.

Quote

Personally I feel that this desire stems from the developer's desire for a quick/easy solution instead of doing the Right Thing(TM) and writing their game protocol such that current updates are not dependent on past state.

A very interesting idea. I've spoken to some within the games industry who are really pissed off with "the idiots who try to do everything with objects all the time" (they are themselves usually expert OO developers). We at least agree that too many educational establishments teach how to do OO without teaching why, and there's a huge number of people around with the age-old problem: "when all you have is a hammer, everything in the world starts looking like a nail".

I can give an interesting example from another industry, same problem: someone I know was managing the development of radar systems. There was a team working on the software to draw the display. The old version was C, with little reusability - everything was so tightly integrated that it was hard/impossible to separate and encapsulate functionality. The new version that the new team proudly delivered had a class for each of:
- the planes
- the previous positions of each plane (so you could draw trails of arbitrary length)
- the predicted positions of each plane
- the graphic for the plane (methods to change colours, change size, rotate, etc)

Unfortunately, on the first beta test, they discovered that it took 30 seconds to draw EACH FRAME of the radar - i.e. a frame rate of 0.03 fps. This is because each real-world plane had approximately 30 objects associated with it, and there were a LOT of planes (and the redraw was heavily OO'd, with separate layers drawn separately, so that it took IIRC 5 or 6 passes to draw the whole frame).

Ultimately, you should never use a tool unless you know what it's meant for; OO was invented to solve a small set of recurring problems - it's great for solving them, but it has soooooo many disadvantages that it can be as lethal as it can be a lifesaver.

I'm not implying that you in particular shouldn't use OO, but it's a sword that cuts both ways. In particular (to come back on topic), when you're writing network code, you CANNOT speed up the internet, and you probably cannot speed up the hardware (you only control the server hardware, not the client stuff). So, all that's left for you to squeeze optimizations out of is the software. Performance is typically a heck of a lot more important than whether you can easily add new features later on (one of the primary advantages of OO). Reusability for network code often is no more complex than "is it a single API that I can remember how to use?". It's quite hard to write a network-API that isn't reusable (you have to write pretty hard-core complex algorithms and protocols to make it that way).

Several other parts of game development are similar. I suggest using thingy's equation (hopefully someone else can remember the name!) that's the yardstick for VLSI (cpu) developers:

To compare two possible ways of doing a new function (or choosing one of two functions to implement in silicon when you only have room for one), you rank each using the formula:

score = %age speed improvement × %age of the time that the speed improvement can be used.

It's that simple - yet, for instance, I wouldn't bother with abstraction if I were writing a new 3D engine except at a very high level. E.g. I'd want to be able to swap in and out different renderers, and be able to reproduce "sketched quake" (where the renderer is replaced with one that draws everything like pencil sketches; it's pretty cool) or "fisheye quake" (where everything is rendered through a super-wide-angle "fisheye" lens).

Many (most?) low-level decisions in game development are 100% all or nothing: if I write a game whose engine uses BSP's, there's no point in being able to swap that out and use a non-BSP solution - the use of BSP's contaminates so much other code in so many places, that you never need a fine grain of replaceability.

These examples are perhaps a little contrived; BSP is a particularly nasty (contaminating) technology that it would be nice to never use at all - but it is very effective and was miles ahead of anything else (for performance) invented for a long while.
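The half-remembered ranking formula above sounds like Amdahl's law - that naming is an assumption on my part, since the thread never recovers the name. Its usual form weights an improvement by the fraction of time it applies: speeding up a fraction f of total execution time by a factor s gives an overall speedup of 1 / ((1 - f) + f / s). A tiny sketch:

```java
public class Amdahl {
    // Overall speedup from improving a fraction f of execution time by factor s.
    static double overallSpeedup(double f, double s) {
        return 1.0 / ((1.0 - f) + f / s);
    }

    public static void main(String[] args) {
        // Doubling the speed of code that runs half the time: only ~1.33x overall.
        System.out.println(overallSpeedup(0.5, 2.0));
        // Doubling something used 10% of the time barely helps: ~1.05x overall.
        System.out.println(overallSpeedup(0.1, 2.0));
    }
}
```

Which is exactly the "only optimize where it pays off often" yardstick being described.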

While you're right about OO being heavily misused, I don't think that it's anything inherent in OO designs. Take Java 1.1 vs. many of the later APIs for example. With the exception of the AWT, the 1.1 APIs generally consisted of 1 class per function with as little "tying" as possible. This open design allowed each class to perform independently and performance was handled internal to the class. Now take a Java 2 API such as JavaMail. JavaMail has an object to describe every little item in the system, from body text to email addresses to attachments.

In the case of JavaMail it ended up making sense due to the flexibility it gave. But what about Graphics2D? Did we really need objects to describe transforms instead of constants, especially in performance-critical code? The error that I think designers make is that they look for perfect OO instead of pragmatic OO. In theoretical "perfect" OO, every part of the system should be described by objects, down to the slightest detail of using an object instead of a primitive. This works fine, *in theory*. In reality, you could continue churning out objects ad infinitum, because you'll never be able to stop deciding what should be described by your system and what should be taken as an axiom. Thus you end up with object after object being created just to run the simplest of programs.

At the same time, where to draw the line to get the best balance can be a difficult question to answer. Just be aware of what your code is doing and you should be able to deliver both maintainable OO code and high performance code in the same package.
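To make the Graphics2D point above concrete: Java 2D really does model every transform as a full object (`java.awt.geom.AffineTransform`) rather than, say, an int constant for "rotate a quarter turn" - flexible, but each one is a heap allocation in what may be per-frame code. A minimal sketch of that API, offered only as an illustration of the granularity being discussed:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class TransformObjects {
    public static void main(String[] args) {
        // A whole object just to say "rotate 90 degrees"...
        AffineTransform quarterTurn = AffineTransform.getQuadrantRotateInstance(1);

        // ...applied to a point, which is itself another object.
        Point2D p = quarterTurn.transform(new Point2D.Double(1, 0), null);
        System.out.printf("(%.0f, %.0f)%n", p.getX(), p.getY()); // (0, 1)
    }
}
```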

Quote

While you're right about OO being heavily misused, I don't think that it's anything inherent in OO designs.

Yes, there are situations where OO is abused without it being OO's fault. There are also many many ways in which OO is just plain rubbish and ruinous for a project - you're onto the right track by saying that "sometimes it can be taken too far", but it goes much further than that.

OO was invented largely to provide: encapsulation, data-hiding, versioning-independence, explicit-interfaces, ADT's, etc. Encapsulation is probably the most important of those, depending upon whom you talk to ("an object is defined as Data + the methods that act upon that data").

E.g.: "Global" methods and data are fundamentally non-OO. There was a time when it was preached that "You should NEVER use global anything - there's always a better way". There were good reasons for this - e.g. some compiler-optimizations are impossible if the source contains any globals. But as PC's get more powerful, compilers can take more time to compile - which enables new transforms which are more "brute force" than the original ones.

So nowadays at least one major reason not to use globals is evaporating; but I've worked on at least one game where every method had to have access to every set of data - the problem domain was really hard to solve if you used any encapsulation. Which is a bitch to solve if you're working in a programming language which really only supports imperative code and OO - and doesn't also do, e.g., aspects or similar out-of-the-box (IIRC, Aspect-oriented programming was invented largely to circumvent problems like this)

There are mathematical theories around, along the lines of "no programming paradigm will ever be the best paradigm in the majority of situations" - but no-one has yet proved them AFAIAA. They are based on work to do with search-spaces, and the concept that the ordered set of searches of an algorithm is equivalent to a description of the algorithm. You can probably find more info anywhere that discusses the Church-Turing thesis (roughly, that all Turing-complete programming languages are equivalent in computational power).

(side note: until someone "disproves" C-T, everything can be written in the lambda calculus, which is a programming language with only two operations, and no other symbols - no numbers, no constants, no characters etc - but it can still represent any program you can think of).
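The "no numbers, no constants" claim can be illustrated even in Java, using its lambdas: Church numerals encode the number n as "apply a function n times", so arithmetic falls out of function application alone. This is my own sketch, not from the thread; the built-in int appears only at the very edge, to decode the result:

```java
import java.util.function.Function;

public class Church {
    // A numeral takes a function f and returns f composed with itself n times.
    interface Numeral extends Function<Function<Integer, Integer>, Function<Integer, Integer>> {}

    static final Numeral ZERO = f -> x -> x;      // apply f zero times

    static Numeral succ(Numeral n) {              // one more application of f
        return f -> x -> f.apply(n.apply(f).apply(x));
    }

    static int toInt(Numeral n) {                 // decode by counting with +1
        return n.apply(k -> k + 1).apply(0);
    }

    public static void main(String[] args) {
        Numeral three = succ(succ(succ(ZERO)));
        System.out.println(toInt(three));         // prints 3
    }
}
```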

...this also happens to underlie most compiler-development theory. But now I'm guilty of taking this thread completely OT...

Quote

until someone "disproves" C-T, everything can be written in the lambda calculus, which is a programming language with only two operations, and no other symbols - no numbers, no constants, no characters etc - but it can still represent any program you can think of

Well, after re-reading the thread and having learnt my lesson, I'd have to recommend carrier pigeon as the most flexible transport. That, possibly in conjunction with the use of two coffee cups and a piece of string.

The problem with the usual "UDP is faster" contention is that it is a first-blush response which gets most of its "speed" from lack of reliability. However, some form of reliability almost always ends up being something the game programmer then has to layer on top of UDP.

At that point you have many of the same problems WITHOUT the saving grace of the infrastructure being pre-tuned for your protocol.

Similarly, UDP looks at first blush to be cheaper in bandwidth, BUT the only place where bandwidth is really an issue any more is the last mile of an analog connection. As that is invariably a PPP link, TCP is actually much LOWER bandwidth across the choke point, because PPP optimizes the headers. (8 bytes for a TCP packet across a PPP link, 30 for a UDP packet AIR.)

My suggestion at first blush is to turn Nagle and keep-alive off and use TCP. (You may not even need to turn keep-alive off if you aren't really filling the channel with data.)
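In Java that suggestion comes down to two standard `java.net.Socket` options. A minimal sketch (the socket here is deliberately left unconnected, since the point is only where the flags live):

```java
import java.net.Socket;
import java.net.SocketException;

public class LowLatencyTcp {
    // Configure a TCP socket as suggested: Nagle off, keep-alive off.
    public static Socket configure(Socket s) throws SocketException {
        s.setTcpNoDelay(true);   // disable Nagle: small writes go out immediately
        s.setKeepAlive(false);   // no TCP-level keep-alive probes
        return s;
    }

    public static void main(String[] args) throws Exception {
        // Options can be set before connect(); connection details omitted here.
        Socket s = configure(new Socket());
        System.out.println(s.getTcpNoDelay());
    }
}
```

Disabling Nagle matters most for games because the algorithm otherwise buffers small packets waiting to coalesce them, adding exactly the kind of latency a real-time update loop cannot afford.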

If/when you run into individual situations where occasional latency spikes are really hurting you, ask yourself if that part of the communication can be done out-of-band in an unreliable manner.

AND lastly, remember that if you are on an analog last mile, your big spikes are going to be losses of communication of up to 6 seconds for modem retrains, and it doesn't matter WHAT your protocol is then.

Got a question about Java and game programming? Just new to the Java Game Development Community? Try my FAQ. Its likely you'll learn something!
