OTOH, "punch through" isn't needed, because you can simply map a TCP port on your NAT router, and as long as you have either a fixed IP or a dynamic DNS entry the other user can find it.

Quote

- In certain situations (typically on narrow channels like modems or a slow DSL uplink), TCP can interfere with the UDP protocol layer,

But on a dial-up you really don't want to use UDP at all, because TCP has much better bandwidth characteristics. (UDP has a 30-byte header over that critical last mile of PPP; TCP has 1 byte.)

By doing TCP and a side channel of batched UDP, we got much better bandwidth usage than a stream of individual UDP packets would have.
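For anyone wondering what the receive side of such a scheme looks like: if every message carries a sequence number, the TCP copy and the UDP copy can be merged by delivering whichever arrives first and dropping the other. A minimal sketch (class and method names are my own, not from the actual Bullet code):

```java
import java.util.HashSet;
import java.util.Set;

// Merges a reliable (TCP) stream with redundant (UDP) copies of the same
// messages: each message carries a sequence number, and only the first
// copy to arrive is delivered to the game.
public class DedupReceiver {
    private final Set<Integer> delivered = new HashSet<>();

    /** Returns true if this sequence number has not been delivered yet. */
    public boolean accept(int seq) {
        return delivered.add(seq); // Set.add returns false for duplicates
    }

    public static void main(String[] args) {
        DedupReceiver rx = new DedupReceiver();
        System.out.println(rx.accept(1)); // UDP copy arrives first -> true
        System.out.println(rx.accept(1)); // TCP copy arrives later -> false, dropped
    }
}
```

In a real game you would also forget old sequence numbers after a while so the set doesn't grow without bound.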

Quote

- Using two protocols forces you to make two different outgoing/incoming connections and handle two different types of incoming messages, which probably means you'll have to design your game to take this into account. This point is more of a design question though.

Hide it in your comm library. That's what we did with Bullet.

Got a question about Java and game programming? Just new to the Java Game Development Community? Try my FAQ. Its likely you'll learn something!

I like the idea of BULLET... the only thing I'm a bit skeptical about is (as pointed out before) that it's based on stochastics. Do you have any experience with how well this works over the internet?

When TCP needs to resend a lot of packets, there would be a big chance those in-between UDP packets will get dropped too, I guess? That would render the use of these UDP backups kind of useless... as it would only waste bandwidth?

Well, most of the time you will be receiving twice as many packets as you need. So long as you have the bandwidth to support it, it shouldn't be a problem. You gain the benefit of having a backup: in case the TCP message you're waiting for gets lagged, hopefully the UDP backup will make it through a lot sooner, so you can continue and then just drop the TCP packet when it finally arrives.

Quote

Well, most of the time you will be receiving twice as many packets as you need. So long as you have the bandwidth to support it, it shouldn't be a problem. You gain the benefit of having a backup: in case the TCP message you're waiting for gets lagged, hopefully the UDP backup will make it through a lot sooner, so you can continue and then just drop the TCP packet when it finally arrives.

Yeah, I got that. Also, bandwidth is supposedly used better than with pure UDP (e.g. a reliable layer over a UDP protocol), because TCP headers have much less overhead on PPP connections (2 bytes vs. 30 or something). Even with backup UDP packets sent periodically, it's still much more efficient than with UDP only.

But how well does this UDP backup perform in reality? In the scenarios where the UDP backup is useful (when a TCP packet is dropped and resent, other TCP packets are stalled because of the resend), this UDP backup could fill in the gap and prevent the stalling from happening. But whenever TCP has problems with dropped packets, UDP will probably have them too, and hence these UDP backups might not get the desired effect in such an environment. I'm curious how this would play out, and whether anyone has any experience with this way of sending data on the internet...

I'm planning on spending some time looking into this further. I'll post on here if I end up adding it to JGN.

BTW, how's your networking API coming along, thijs? I would be very interested in seeing it. Have you checked back at JGN recently? I just released beta 5, and it has support for P2P, NIO UDP, and several other new features.

Quote

BTW, how's your networking API coming along, thijs? I would be very interested in seeing it. Have you checked back at JGN recently? I just released beta 5, and it has support for P2P, NIO UDP, and several other new features.

Well, I haven't done much work on it lately... (other projects with higher priority interfered), but I'll pick up work on it again soon. The problem is that the project/company I'm doing this for prohibits me from sharing it with the rest of the world (closed source). That might change in the near future though...

Quote

I like the idea of BULLET... the only thing I'm a bit skeptical about is (as pointed out before) that it's based on stochastics. Do you have any experience with how well this works over the internet?

Haven't I said this before? Twice?

Yes.

It's what we used at TEN for Quake2 internet play. It worked extremely well for us. YMMV.


I've been messing around with UDP and TCP and am currently working on my own implementation of the BULLET technique...

However, in BULLET, UDP is sent as a window of packets in order to cover the lag spikes in TCP.

Why? I've been testing, and I've found that UDP is actually quite reliable: I'm getting around 80+ lost packets per 1000, and around 200 packets arriving out of order. Of course, TCP has 0 packet loss and 0 out-of-order packets, but UDP is much faster...

On average UDP is showing latencies of 10 at best and 100 at worst, with the densest cluster around 30. In contrast, TCP is pretty inconsistent: anywhere between 30 and 260.
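For what it's worth, numbers like those are easy to collect by stamping every outgoing test packet with a sequence number; a rough sketch of the counting (my own code, not from any particular library):

```java
// Counts lost and out-of-order UDP packets from their sequence numbers.
// A packet is "out of order" if its number is lower than one already seen;
// anything never received by the end of the run counts as lost.
public class PacketStats {
    private int highestSeen = -1;
    private int received = 0;
    private int outOfOrder = 0;

    public void onPacket(int seq) {
        received++;
        if (seq < highestSeen) outOfOrder++;
        else highestSeen = seq;
    }

    public int lost(int totalSent) { return totalSent - received; }
    public int outOfOrder() { return outOfOrder; }

    public static void main(String[] args) {
        PacketStats stats = new PacketStats();
        // 6 packets sent; packet 4 was dropped and packet 2 arrived late
        for (int seq : new int[] {0, 1, 3, 2, 5}) stats.onPacket(seq);
        System.out.println(stats.lost(6));       // 1
        System.out.println(stats.outOfOrder());  // 1
    }
}
```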

So why not use UDP as the main protocol, and instead have TCP cover UDP's lost packets rather than the other way around? I'm implementing this right now, so I'll let you know how it turns out.

At the moment my plan is:

have a constant flow of both TCP and UDP, and whenever a TCP packet is received before the corresponding UDP packet, you can assume that the UDP packet was dropped.
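That plan could look something like this (hypothetical names, not code from any real library; note that a UDP packet that is merely late looks exactly like one that was dropped):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the plan above: every message goes out on both TCP and UDP with
// the same sequence number. If the TCP copy arrives first, the UDP copy is
// presumed dropped.
public class ChannelRace {
    private final Set<Integer> seenUdp = new HashSet<>();
    private int presumedUdpDrops = 0;

    public void onUdp(int seq) {
        seenUdp.add(seq);
    }

    public void onTcp(int seq) {
        if (!seenUdp.contains(seq)) {
            presumedUdpDrops++; // TCP won the race: assume the UDP copy was lost
        }
    }

    public int presumedUdpDrops() { return presumedUdpDrops; }

    public static void main(String[] args) {
        ChannelRace race = new ChannelRace();
        race.onUdp(1); race.onTcp(1); // UDP arrived first: nothing lost
        race.onTcp(2); race.onUdp(2); // TCP arrived first: UDP copy presumed lost
        System.out.println(race.presumedUdpDrops()); // 1
    }
}
```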

Well, you can't guarantee how reliable UDP will be. On my local network I send 10,000 messages and receive 10,000 messages. On the internet, distance, hops, latency issues per hop, etc. all factor into the packet loss. Just because you can consistently get a specific reliability in your environment doesn't mean it will hold across the board.

I'm just saying that UDP should be used as the primary because it's always faster to send/receive than TCP, so surely for performance reasons the slower protocol should be used as a backup when the faster one fails.

This is all assuming that out-of-order packets don't occur very often in UDP, and that the re-ordering process is good. I guess all I can do is experiment with different approaches and see what works best in my situation.

Ya know, I'm a big fan of UDP for its "fire and forget" advantages over TCP, but there really is a lot of TCP bashing from people who obviously don't know the actual performance benchmarking between TCP and UDP.

I think realistically games need both. Use UDP for messages that are not absolutely necessary to reach their destination and do not require a specific order. Then use TCP for aspects that need guaranteed delivery. The implementation of such an idea may be something very much like BULLET, or it may be something completely different, but saying that one is always better than the other completely ignores the fact that both still exist and are still commonly used. There's a real need for both.

I'm not targeting this specifically at you, phi6; I'm really also mentioning myself as one of the original TCP bashers in favor of UDP. As I've looked into it more deeply, I've realized the necessity of both protocols.

I've been experimenting, and UDP is faster... unless that just means TCP spikes more than UDP fails.

That would depend on the specifics of your network and its operation.

It also depends on what you mean by "faster". The word has many definitions... are you referring to net latencies? Bandwidth? Do you have any idea if you are saturating bandwidth? Because then bandwidth turns into latency. What size are your test packets? Are you close to the MTU? If so, then packet overhead could have a major effect by causing sub-division of one kind of packet and not the other.

Also, what kind of link to the net do you have? Analog? Digital? All of these affect the results of a ping-type test.

Fundamentally, however, TCP uses IP to transfer packets. UDP is just a user interface to IP that layers an EDC on it to detect bad packets.

So a TCP packet and a UDP packet move across the net at *exactly* the same rate, because underneath they are the same thing.

After that, it's a matter of how the packets are processed.


Quote

The speed of TCP and UDP across the net is equal until a packet is lost.

That's not really true either, is it? I mean, if you had a high-latency connection (say from the UK to the US) and a small window size, TCP may have to wait for an acknowledgement from the receiver before sending any more data on the window.

For instance, say we want to send 128k of data with a window size of 64k.

UDP just spits out a bunch of packets for the 128k, and assuming a latency of N seconds, the 128k gets to the end point N seconds later (assuming no packet loss).

In TCP the 64k can be sent (the window size) and then the sender must wait for an ACK before proceeding. Assuming symmetric lag, the 64k reaches the end point in N seconds, the ACK comes back in N more seconds, and the next 64k reaches the end point N seconds after that. So 3×N time before the whole 128k gets there. From what I understand reading the RFC (for the 400th time), if the receiver only ACKed 32k of the 64k (say some was still in transit when it was time to send the ACK), the sender still isn't allowed to send any more until it has received an ACK for the full 64k window.
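Under that idealised stop-per-window model (fixed window, symmetric one-way latency N, no slow start, no loss, transmit time ignored), the total time generalises to (2 × windows − 1) × N; a little helper to check the arithmetic:

```java
// Idealised per-window transfer time: each full window takes one one-way
// latency N to arrive, and the sender then waits N more for the ACK before
// sending the next window. Ignores slow start, loss, and transmit time.
public class WindowMath {
    public static double transferTime(int totalKb, int windowKb, double latency) {
        int windows = (totalKb + windowKb - 1) / windowKb; // ceiling division
        return (2 * windows - 1) * latency;
    }

    public static void main(String[] args) {
        // The 128k / 64k-window example from above: 3 x N.
        System.out.println(transferTime(128, 64, 1.0));  // 3.0
        // With a window as big as the data, it is just N, same as raw UDP.
        System.out.println(transferTime(128, 128, 1.0)); // 1.0
    }
}
```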

This TCP behaviour is of course completely justified in preventing network congestion, and that in itself may speed things up (low congestion = low packet loss, at least some of the time).

Now, I'm sure Jeff already knows this, and he's going to respond by saying you can just reduce the send/receive buffers. I think the minimum they'll go down to is 8k (which should give you an 8k window). Or turn Nagle's algorithm off or something (not sure that actually affects the window size, rather than how often a packet is sent).
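For reference, those knobs are all exposed on java.net.Socket; the 8k figure below is just the number mentioned above, and the OS treats buffer sizes as hints it may round or ignore:

```java
import java.net.Socket;
import java.net.SocketException;

// The tuning knobs mentioned above, as exposed by java.net.Socket.
// Setting them on an unconnected socket works; whether the OS honours
// the buffer-size hints is platform-dependent.
public class TcpTuning {
    public static Socket tune() throws SocketException {
        Socket s = new Socket();
        s.setTcpNoDelay(true);             // turn Nagle's algorithm off
        s.setSendBufferSize(8 * 1024);     // hint: small send buffer
        s.setReceiveBufferSize(8 * 1024);  // hint: small receive buffer/window
        return s;
    }

    public static void main(String[] args) throws Exception {
        Socket s = tune();
        System.out.println(s.getTcpNoDelay()); // true
        s.close();
    }
}
```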

Either way, I wonder if these details are where the confusion of "TCP being slow" comes from.

Quote

The speed of TCP and UDP across the net is equal until a packet is lost.

That's not really true either, is it? I mean, if you had a high-latency connection (say from the UK to the US) and a small window size, TCP may have to wait for an acknowledgement from the receiver before sending any more data on the window.

Packet propagation is identical. That much is a given.

If your flow control is well tuned, it should not affect your latencies.

You're going to hit the MTU long before you hit the typical window size (which, as I recall, is adjustable on the fly), and that's likely to screw up any measurement attempts anyhow.

Look at it this way: if you are spewing packets fast enough to cause a slowdown through flow control, then WITHOUT flow control you are going to start getting massive packet loss. So either way it's going to hose you, and flow control is likely to be the better of the two options.


Quote

If your flow control is well tuned, it should not affect your latencies.

Sorry, I still don't see how you're getting to this. I assume I'm just being dense. In TCP the sender _has_ to wait for the ACK of a window before sending any part of the next window. However you look at it, that still means that the second window in a stream won't reach the destination as fast as just sending it straight away over a non-lossy link.

But now imagine a window of two packets:

Sender --> PKT1 --> Receiver

While that packet is travelling to the receiver:

Sender --> PKT2 --> Receiver

Now the sender waits... At (the latency of the first packet):

Receiver --> ACK1 --> Sender

The receiver immediately receives the next packet, and as ACK1 is going back:

Receiver --> ACK2 --> Sender

I'm not drawing that in the ideal pictorial manner, but I hope what you can see is that, during the latency time for one packet above plus the time to send one additional packet, two packets have been transferred.

The bigger the window, the bigger the overlap, until at the ideal window size both sides are continuously pumping data, responding to what the other side did one latency ago.

At that point you are never waiting, and your transfer time is identical to an ACK-less transfer.
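That ideal window where nobody ever waits is, as far as I know, the bandwidth-delay product of the link: bandwidth times round-trip time. A trivial helper (my own, with made-up example numbers):

```java
// The "ideal window" at which the pipe never drains is the bandwidth-delay
// product: link bandwidth multiplied by the round-trip time.
public class Bdp {
    /** bytesPerSec: link bandwidth; rttSec: round-trip time in seconds. */
    public static long idealWindowBytes(long bytesPerSec, double rttSec) {
        return (long) (bytesPerSec * rttSec);
    }

    public static void main(String[] args) {
        // e.g. a 1 MB/s link with a 100 ms round trip needs ~100 KB in flight
        System.out.println(idealWindowBytes(1_000_000, 0.1)); // 100000
    }
}
```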

As I said, a variation of this technique was in use as far back as ZMODEM.


Yep, that makes sense to me. That means the correct thing to say is that TCP latency will be the same as UDP assuming you're not trying to send more data than would fit in one TCP window at any single instant. In that case TCP would send some of the data and then wait for the ACK, then send the rest, where a naive UDP implementation would just send all the data at once, in theory allowing it to arrive at the other end sooner.

I don't think the above "non-window" case is very clear (I realise it's hard in ASCII), but it's not equivalent to a naive UDP implementation, which would be.

Quote

Yep, that makes sense to me. That means the correct thing to say is that TCP latency will be the same as UDP assuming you're not trying to send more data than would fit in one TCP window at any single instant.

Well, I believe the windows are adaptive, though I might be wrong. As I say, I'd have to check Tanenbaum to be sure.

A more problematic case for TCP, I think, would be a sudden extreme latency that is bigger than the buffer space provided by the window, such as a modem retrain, which might put a hole in the flow. On the other hand, it would stop the UDP flow too, so I'd really need to sit down and think about how much it impacted TCP and UDP to see how much difference it makes.

Of course, there ARE the latency spikes you get when a packet gets lost and needs to be resent and inserted into the flow. Again, with clever buffering you can do some of that without stopping the forward march of packets. This is why you generally see a TCP latency spike followed by a rush of packets. They are there; they just aren't being delivered to YOU until the missing one gets through the flow.

As I say, if what you need is reliable, in-order delivery of packets in a reasonably bandwidth-efficient manner, it's hard to beat all the time and effort that's gone into TCP. OTOH, if you can accept limitations in some areas, you might improve on it. Chief among these is trading redundancy, in one form or another, for bandwidth. TCP, being designed for pipes of unknown size, doesn't do that itself.

Speaking of alternate trade-offs, another interesting recent protocol a friend was just telling me about is SCTP. SCTP loosens in-order guarantees without eliminating them and gains some flow improvements. Data is grouped into large "bunches": the bunches are in order, but delivery inside a bunch is not. The canonical example of something that can use SCTP is a web browser, where you don't really care what order your images show up in as long as it all gets there. Another good example might be streaming 3D world data to a client...
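If I understand the bunch idea right, the delivery rule could be modelled like this (a toy of my own, not real SCTP code or wire format: a bunch is released only when complete, bunches release in order, and messages inside a bunch stay in arrival order):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of "ordered bunches, unordered within": bunch n+1 is held back
// until every message of bunch n has arrived, but the messages inside a
// bunch are delivered in whatever order they happened to arrive.
public class BunchDelivery {
    private final Map<Integer, List<String>> held = new HashMap<>();
    private final Map<Integer, Integer> sizes = new HashMap<>();
    private final List<String> delivered = new ArrayList<>();
    private int currentBunch = 0;

    public void onMessage(int bunch, String msg, int bunchSize) {
        sizes.put(bunch, bunchSize);
        held.computeIfAbsent(bunch, b -> new ArrayList<>()).add(msg);
        // Release complete bunches, strictly in bunch order.
        while (held.containsKey(currentBunch)
                && held.get(currentBunch).size() == sizes.get(currentBunch)) {
            delivered.addAll(held.remove(currentBunch));
            sizes.remove(currentBunch);
            currentBunch++;
        }
    }

    public List<String> delivered() { return delivered; }

    public static void main(String[] args) {
        BunchDelivery d = new BunchDelivery();
        d.onMessage(0, "a", 2);
        d.onMessage(1, "c", 1); // held back: bunch 0 is not complete yet
        d.onMessage(0, "b", 2); // completes bunch 0, which also releases bunch 1
        System.out.println(d.delivered()); // [a, b, c]
    }
}
```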


Quote

Yep, that makes sense to me. That means the correct thing to say is that TCP latency will be the same as UDP assuming you're not trying to send more data than would fit in one TCP window at any single instant. In that case TCP would send some of the data and then wait for the ACK, then send the rest, where a naive UDP implementation would just send all the data at once, in theory allowing it to arrive at the other end sooner.

I don't think the above "non-window" case is very clear (I realise it's hard in ASCII), but it's not equivalent to a naive UDP implementation, which would be.

Kev

Yup. If TCP required an ACK for each packet before proceeding, it *would* be horrendously slow.

But it doesn't.


There are limits on the window size, and TCP requires ACKs. That puts a limit on the throughput that becomes somewhat independent of bandwidth, for high bandwidths, and primarily dependent on latency.

That's where UDP-based algorithms can have an advantage. There are a few methods using UDP plus FEC or some other fancy error handling to get substantial benefits.

The simplest example is perhaps a file transfer that uses the entire file as the "window" in a data carousel; that way it allows ACKs without ever really waiting for one... the un-acknowledged packets are automatically re-sent after you have been through the entire file once, and this continues until finally there is an ACK for all the data in the file.
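A sketch of that carousel idea, with the ACK tracking reduced to a bit set (my own illustration, not any particular protocol's format):

```java
import java.util.BitSet;

// Data-carousel sketch: keep cycling through the file's blocks, skipping
// acknowledged ones, until every block has been ACKed. The whole file acts
// as the "window", so the sender never stalls waiting for an ACK.
public class Carousel {
    private final int blocks;
    private final BitSet acked;

    public Carousel(int blocks) {
        this.blocks = blocks;
        this.acked = new BitSet(blocks);
    }

    public void onAck(int block) { acked.set(block); }

    public boolean done() { return acked.cardinality() == blocks; }

    /** One full pass over the file: returns how many blocks were (re)sent. */
    public int sendPass() {
        int sent = 0;
        for (int b = 0; b < blocks; b++) {
            if (!acked.get(b)) sent++; // would actually transmit block b here
        }
        return sent;
    }

    public static void main(String[] args) {
        Carousel c = new Carousel(4);
        System.out.println(c.sendPass()); // 4: first pass sends every block
        c.onAck(0); c.onAck(2);           // ACKs trickle in asynchronously
        System.out.println(c.sendPass()); // 2: only blocks 1 and 3 are resent
        System.out.println(c.done());     // false
    }
}
```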

Block-based FEC algorithms perform better in real life... but this is primarily a benefit over TCP for bandwidth-sucking bulk transfers. The steady trickle or reasonable flow of a real-time networked game is hopefully not saturating the bandwidth or requiring such high-speed links that latency affects things this way. Generally, I suspect latency issues will affect gameplay in much more drastic ways while the bandwidth requirements are still quite low.
