Hello, I had a problem with TCP. My networking code works fine when I run the server and client on my own PC, but when my friend tries to join the game, or I join his, the data gets corrupted and the received length is wrong when a 22 KB packet is sent in one go.

I changed it so I send the TCP data one byte at a time, flushing the output stream after each byte. I guess this is quite inefficient and slow? The main thing is that now it works.

So, what is optimal? Right now I am waiting in an infinite loop for the next byte (because I know how many bytes to expect; that count was sent earlier). This is bad, right? Is there a better way? (read() blocks anyway, but I had to put it in a loop because each time flush is called I get a -1 on the receiving end.)

First of all, TCP is stream based, not packet based. At least at the application layer, which is what you care about.

Second, after having worked on a lot of network code over the years in many settings, including Java's TCP sockets, I can assure you (99.99% sure) that there is a problem in your code somewhere causing the corruption. The remaining 0.01% is some kind of crap router or wireless thing... but that seems unlikely, since TCP sits on top of that and will resend and reorder data as required.

Flushing every byte is really, really slow, because TCP only looks stream based to you, the coder. Underneath it is packet based, of course, and each flush can result in a full IP packet, whose headers alone are IIRC 20+ bytes. So you are expanding everything by a lot. However, your code does not look like it reads one byte at a time, but up to `length` bytes.
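For example, buffering a whole message and flushing once per message, rather than per byte, could look like this (just a sketch; the class and method names are made up for illustration):

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class MessageSender {
    // Write a length-prefixed message and flush exactly once, so the
    // whole thing can go out in as few packets as possible.
    public static void sendMessage(OutputStream raw, byte[] payload) throws IOException {
        DataOutputStream out = new DataOutputStream(new BufferedOutputStream(raw));
        out.writeInt(payload.length); // 4-byte length prefix (big-endian)
        out.write(payload);           // whole payload in one go
        out.flush();                  // one flush per message, not per byte
    }
}
```

In a real program you would wrap the socket's stream once and keep it, rather than re-wrapping it on every call as this sketch does.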

So I would suggest you first post your old code... we could see if there is an issue. Something does not seem quite right. I don't get -1 from InputStream.read when I flush the other end, for example. I may get a zero-byte read, but not a -1 (EOF/stream closed, IIRC).

I have no special talents. I am only passionately curious.--Albert Einstein

As delt0r says, TCP is stream based. (At least at the application layer, which is what you care about.)

This means that you first establish a connection between two computers, and once you've done that you simply send and receive bytes via the output and input streams of the socket connection. I.e., you don't have to worry about the other computer's address after that; the two computers are essentially "linked" together.

UDP, on the other hand, is packet based, or in other words "connectionless". This means that you need to put the destination address and port in every packet you send. (TCP fundamentally works like this as well, but it is abstracted away from the application layer. TCP also comes with other neat things, like making sure packets arrive in the order they were sent and re-sending lost packets.)
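To illustrate the "address in every packet" point, here is a minimal sketch of building a UDP packet in Java; the host and port here are placeholders:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class UdpExample {
    // Every UDP send names its destination explicitly: the packet itself
    // carries the address and port, unlike a connected TCP stream.
    public static DatagramPacket makePacket(byte[] data, String host, int port)
            throws UnknownHostException {
        return new DatagramPacket(data, data.length, InetAddress.getByName(host), port);
    }
}
```

You would then hand the packet to a DatagramSocket's send() method; the socket itself stays unconnected.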

If you do that sort of length-prefixed encoding, I recommend using some kind of framing sequence so that you can check for it and make sure you haven't gotten out of sync and started reading arbitrary bytes of messages and interpreting them as lengths. If you stick a null byte before the length and after the message, you more or less have the websockets protocol.
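A sketch of what such sentinel-framed encoding and decoding could look like. This exact layout (a zero byte, a 4-byte length, the payload, a zero byte) is made up for illustration in the spirit of the suggestion above, not any real protocol:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class SentinelFrame {
    // Frame layout: 0x00, 4-byte length, payload, 0x00.
    public static byte[] encode(byte[] payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeByte(0);             // start sentinel
        out.writeInt(payload.length); // length prefix
        out.write(payload);
        out.writeByte(0);             // end sentinel
        return bos.toByteArray();
    }

    // If either sentinel is wrong, we know we have lost sync instead of
    // silently interpreting payload bytes as a length.
    public static byte[] decode(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        if (in.readByte() != 0) throw new IOException("lost sync: bad start sentinel");
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        if (in.readByte() != 0) throw new IOException("lost sync: bad end sentinel");
        return payload;
    }
}
```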

I don't quite agree with this. Encoding and decoding are so trivially simple that there is no way it will ever go out of sync.

It's an extra byte. Next time you make an off-by-one error, your blood pressure will thank you.

If you can't get 2-byte-length + payload right, and have to guard against off-by-one errors, you might as well stop coding. Don't get me wrong, I code extremely defensively, testing everything against everything, but I'm not second-guessing my own code to this extent.

Quote: "If you do that sort of length-prefixed encoding, I recommend using some kind of framing sequence so that you can check for it and make sure you haven't gotten out of sync and started reading arbitrary bytes of messages and interpreting them as lengths. If you stick a null byte before the length and after the message, you more or less have the websockets protocol."

It's easy to make sure you will not get out of sync when encapsulating a message. It's not like I will manually calculate the length of the message every time I send one. And since TCP guarantees the correctness of the stream, I have to agree with Riven that there is no way it will ever go out of sync.

I'm not talking about the stream being corrupted, I'm talking about reads going wrong, such as a short read not being handled, or some other programmer error that causes you to desync. But I'm not going to die on this hill, so if that extra byte doth offend thee, pluck it out. Hopefully your next layer up will at least detect the fault.

Quote: "I'm not talking about the stream being corrupted, I'm talking about reads going wrong, such as a short read not being handled, or some other programmer error that causes you to desync. But I'm not going to die on this hill, so if that extra byte doth offend thee, pluck it out. Hopefully your next layer up will at least detect the fault."

If a mis-read of an int/short/float is allowed to corrupt the stream, then you're setting yourself up for disaster indeed. You have to guard against that; I do it like this:
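One way to realize that idea is to read the whole frame into a buffer first and decode from the copy, so a bad decode can never desync the underlying stream. A sketch of that approach (the names and the 1 MB sanity cap are illustrative, not necessarily what Riven used):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class GuardedReader {
    // Read a length-prefixed frame fully into a byte[] before decoding.
    // Any mis-read of an int/short/float then only corrupts this one
    // message's view, never the position in the socket stream itself.
    public static DataInputStream readFrame(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int length = in.readInt();
        if (length < 0 || length > (1 << 20)) // arbitrary 1 MB sanity bound
            throw new IOException("implausible frame length: " + length);
        byte[] frame = new byte[length];
        in.readFully(frame); // blocks until the whole frame arrives
        return new DataInputStream(new ByteArrayInputStream(frame));
    }
}
```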

While we're on the subject, TCP/IP's built-in error checking is not particularly robust; I've even had data corruption on a LAN. Adding a CRC check to your data will make it near-as-dammit 100% robust.
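For instance, java.util.zip.CRC32 makes such a check straightforward: the sender appends a CRC of the payload and the receiver recomputes and compares. A sketch (the 8-byte trailer layout is just one possible choice):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class CrcCheck {
    // Append a CRC32 of the payload as an 8-byte trailer.
    public static byte[] withCrc(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return ByteBuffer.allocate(payload.length + 8)
                .put(payload)
                .putLong(crc.getValue())
                .array();
    }

    // Recompute the CRC over the payload and compare with the trailer.
    public static boolean verify(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        byte[] payload = new byte[framed.length - 8];
        buf.get(payload);
        long stored = buf.getLong();
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == stored;
    }
}
```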

Well, IP packets have a 16-bit header checksum IIRC. So that is roughly one undetected bad packet per ~65K corrupted packets *if* every packet is getting corrupted randomly. Only that is not really at the right layer...

The lower layers (physical/frame, whatever) do a good job of not handing off bad packets in the first place. IIRC Ethernet uses a 32-bit CRC checksum. Even with a lot of corrupted data, that is still a lot of packets between missed corrupted packets.

It may have been useful to say that I am also using UDP on a different port. I know there are conflicts, so I should try to minimize UDP sending while downloading data over TCP. (When the player joins the server, TCP is only used to download the server data; then UDP is used for everything else.)

Thanks for the information about flushing. I really want to make sure there is no chance of corruption, since TCP is supposed to be reliable... I read up that the maximum packet size for Ethernet is 1500 bytes; do I have to worry about this, or does TCP take care of it? E.g., should I write 1024 bytes at a time and flush after each?

Quote from Roland: "I read up that the maximum packet size for Ethernet is 1500 bytes; do I have to worry about this, or does TCP take care of it? E.g., should I write 1024 bytes at a time and flush after each?"

TCP takes care of it. Just make sure you read all the bytes you're expecting before stopping reading.

I recommend the approach from my last message: read an int (the size), then readFully(size). The first int will always be consistent, as will the readFully (as long as your size is correct!), and with that you only need to flush once in the sender, after the data is sent.

Thanks, that works perfectly. Woot, no more corrupted packets! I can't actually use a short/char because my packets will be larger than 65536 bytes, but this is only from server to client. If the client is sending, I'll use this precaution.

Well, there is your problem. is.read(byte[] data) is *not* is.readFully(). read(byte[] data) will block and read at least one byte, but does not block until the array is filled, so you would often get partial fills; the rest would be old data in the array.
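If you don't want to switch to DataInputStream.readFully, the fix on the receiving side is to loop until the buffer is full. A sketch of the equivalent loop over a plain InputStream:

```java
import java.io.IOException;
import java.io.InputStream;

public class FillBuffer {
    // Equivalent of DataInputStream.readFully for a plain InputStream:
    // keep calling read() until the buffer is full, since each call may
    // return fewer bytes than requested.
    public static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) throw new IOException("stream closed before buffer was filled");
            off += n;
        }
    }
}
```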

TCP really does take care of everything. It's just streams. But you still must use the streams correctly.
