Hi. Having written network code several times, including a few NIO attempts, I thought I would look at how others have done it. I'm reading blahblahblah's docs on NIO and came across something that has got me thinking (doesn't happen very often, mind). In the section where he discusses the old way of networking, he comments that if you have a latency of 250ms, then iterating over 4 players and sending them the same message (assuming the same latency for each) will take 1000ms. I was under the impression that the old IO dumped the bytes down to the TCP stack and let it deal with things, returning the thread of execution to the caller. The impression I get from the doc is that you have to wait for ACKs to all packets before it returns. Am I reading it wrong, is it written wrong, or is this really the case? It might explain a few of the weird things I've seen.

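(For what it's worth, here's a quick loopback sketch of what "dumping the bytes down to the TCP stack" looks like in practice; this is my own test, not code from the article, and the class name is mine. It shows write() returning before the peer has read anything.)

```java
import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class WriteReturnsEarly {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();

            // The write returns as soon as the bytes are in the OS send
            // buffer, not when the peer has read (or even acked) them.
            OutputStream out = client.getOutputStream();
            out.write("hello".getBytes("US-ASCII"));
            out.flush();
            System.out.println("write returned before any read on the other side");

            // Only now does the receiver actually read the data.
            byte[] buf = new byte[5];
            new DataInputStream(accepted.getInputStream()).readFully(buf);
            System.out.println(new String(buf, "US-ASCII"));

            client.close();
            accepted.close();
        }
    }
}
```

So for small messages the per-client cost in the broadcast loop is just a buffer copy, not a round trip.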

Good question. However, you are almost perfectly describing an "asynchronous I/O system". Perhaps this makes it clearer why so many people hate synch I/O?

A true synchronous I/O system ought to block by definition until "all the data has been confirmed received"; this would imply you have to wait the full latency.

Non-Java applications that use synch TCP communications do exactly that, as you will know if you have a 56k (or slower) modem - especially when the re-dial starts! I only mention this because it is an example most people have real experience of - although there is potential for confusion between this and "waiting for DNS timeouts", which has identical symptoms.

Anyway, think about exceptions: if control passes back to the program *before the entire TCP send has completed, been acked, etc*, then how the heck do you throw an IOException when someone unplugs the network cable during transfer of the very last byte? One of the reasons synch I/O is easier for people like Sun to implement for us to use is that they don't have to do any of the "delayed notification" etc. implicit in asynch I/O: all possible outcomes are contained within one method call.

In summary, this is why synchronous I/O mandates thread-per-client in most situations.

Quote

Anyway, think about exceptions: if control passes back to the program *before the entire TCP send has completed, been acked, etc*, then how the heck do you throw an IOException when someone unplugs the network cable during transfer of the very last byte?

Duh, I should have spotted that one a mile off. At this particular moment in time I feel like I shouldn't be a software engineer for a living, and should maybe refocus to refuse management or private transportation consultant (bin man/taxi driver).

"Do you mean DatagramSocket.send()? or do you mean Socket.getOutputStream().write()?

Either way, the data is buffered in the OS and the call is returned as soon as the data is written to the buffer."

To answer your hypothetical, you DON'T necessarily get an exception for any given packet, you get an exception when the OS has registered a failure. That may not be registered until multiple packets have been queued up.
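(To illustrate that point, here's a loopback sketch of my own, not from anyone's post, that aborts the connection from the receiving side and then writes twice. On most OSes the IOException only surfaces on a later write, not the one whose data was actually lost; the exact timing is OS-dependent, so the output is deliberately hedged.)

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DelayedWriteError {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            Socket accepted = server.accept();

            // Abort the connection from the receiving side: SO_LINGER with
            // timeout 0 makes close() send a TCP reset instead of a FIN.
            accepted.setSoLinger(true, 0);
            accepted.close();

            OutputStream out = client.getOutputStream();
            try {
                // This write often still "succeeds": the bytes only reach
                // the local OS buffer before the reset is noticed.
                out.write(1);
                Thread.sleep(100); // give the RST time to arrive
                out.write(2);      // the error usually surfaces here instead
                System.out.println("no exception (OS-dependent timing)");
            } catch (IOException e) {
                System.out.println("IOException on a later write: " + e.getMessage());
            }
            client.close();
        }
    }
}
```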

Got a question about Java and game programming? Just new to the Java Game Development Community? Try my FAQ. It's likely you'll learn something!

Quote

"Do you mean DatagramSocket.send()? or do you mean Socket.getOutputStream().write()?

Either way, the data is buffered in the OS and the call is returned as soon as the data is written to the buffer."

To answer your hypothetical, you DON'T necessarily get an exception for any given packet, you get an exception when the OS has registered a failure. That may not be registered until multiple packets have been queued up.

Cool, thanks. When endolf originally asked, I had to go away and think about it and try to work out what was actually happening. I had been taught a long time ago precisely what this engineer has stated, but spurred on by endolf's question to really think about this, I'm now a little disturbed.

(N.B. in the article, I was attempting to illustrate some of the theoretical differences between IO and NIO; because of variance from JVM to JVM, anything that's not mandated by Sun (e.g. within the API docs) is not guaranteed to hold anywhere, though it may be true somewhere. I've recently been revising the article to add more examples of such theoretical differences, and also to make clearer that I'm only highlighting how your VMs *may* be expected to work; you can never be sure which JVMs will observe which differences. I'll be glad to also make clear the current state of Sun's VMs on this issue...)

Anyway... semantically within java, can you get an exception except inside the method call to trigger the send (I suggest let's just concentrate on the write() method of the output stream for now, since it's the most commonly used...)?

I'm sure I've got the wrong end of the stick here, but...The response you've been given makes it sound like there are exceptions that will only be thrown if you make a SUBSEQUENT call to the same method - simply because they were waiting in limbo for the opportunity to happen (i.e. even if the OS signalled java, java has moved on and it's now illegal for java to throw the exception)?

Another thing that has me confused now is the observed behaviour that java can (and does!) hang indefinitely on attempts to synchronously send a few hundred bytes of data until the receiver actually starts reading it from the receiver's local buffers. Just guessing here, perhaps this observation is due to java peeking at the TCP state, observing that although it's dumped the data into the send buffer, the remote host is not reading it, and blocking anyway?

The most recent examples I've seen of this are simple HTTP communication - if you set up a simple IO-based HTTP Server, listen to a serversocket, accept the incoming connection, but don't make the read() call on the socket, then the sender (in a different class, connecting to the serversocket, and calling write()) blocks indefinitely. I haven't done this for quite a long time (I use NIO exclusively these days ), so if this differs from other people's experience I'll have to try and dig out old code and see what I'm misremembering. Assuming my memory's correct, it sounds like my interpretation of this symptom has been wrong all along?

Well, this is a bit of a guess on your HTTP behavior. I'd have to dig into a test case to have a really intelligent opinion, BUT...

My guess, if this effect is real, is that there is some part of the connection handshake that the OS isn't completing until the first read. Nasty, but quite possible. Have you tested to see what happens if you do the initial read and then block on subsequent reads on the same connection? My guess would be that it would allow you to pump data until the buffer was filled, then block.

The OTHER possibility, I'd think, is that your system has a very tiny networking buffer and that you, or some other code running at the same time as you, has managed to fill it up. It certainly IS true that if you are sending data at a faster rate than it can be read, eventually you WILL indeed back up and block.
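(A rough test of the buffer-filling theory; this is my own sketch, not a reproduction of the HTTP case. It pumps far more data than the socket buffers should hold while the reader is deliberately delayed, so the later write() calls stall until the reader starts draining. Note the buffer size set here is only a hint; the OS may use a different, possibly much larger, size, in which case the writes finish without blocking.)

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class WriteBlocksWhenBuffersFill {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", server.getLocalPort());
            client.setSendBufferSize(8 * 1024); // hint only; OS may override
            Socket accepted = server.accept();

            // Reader that drains everything, but only after a short delay.
            Thread reader = new Thread(() -> {
                try {
                    Thread.sleep(500);
                    InputStream in = accepted.getInputStream();
                    byte[] buf = new byte[64 * 1024];
                    while (in.read(buf) != -1) { /* drain */ }
                } catch (Exception e) { /* sketch only: ignore */ }
            });
            reader.start();

            // Pump ~2 MB with no one reading yet: early writes return
            // instantly (buffered), later ones block until draining starts.
            OutputStream out = client.getOutputStream();
            byte[] chunk = new byte[8 * 1024];
            long start = System.nanoTime();
            for (int i = 0; i < 256; i++) out.write(chunk);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("all writes returned after ~" + elapsedMs + "ms");

            client.close();
            reader.join();
            accepted.close();
        }
    }
}
```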


While it IS possible for Java to throw an exception outside of a declared exception chain (using children of RuntimeException, one common example being a null pointer exception), in this case I don't think anything that complex is needed.

You are always going to call SOMETHING after a write, either another write or a close(). I think it's likely that close() DOES block until the last byte is acked, but again that's really up to the OS. We just report what it tells us.


Quote

You are always going to call SOMETHING after a write, either another write or a close(). I think it's likely that close() DOES block until the last byte is acked, but again that's really up to the OS. We just report what it tells us.

Doh. Of course, close() is the catch-all here.

However, surely that sucks for application development, because you may well send data and then do nothing with that client for minutes (yes, I know a lot of apps have to resort to heartbeats anyway to get around bugs in the IO design of Java, so perhaps it just gets absorbed by that). FPSs don't care, but there are quite a lot of games where a net connection may go silent for a number of seconds.

Hmmm. Thanks for replying to this one, jeff. It sounds like things work pretty much the way I had assumed before I read the article. Nice to know I'm not quite as mad as I thought I was.

Endolf

Sorry about that; I should obviously open my big mouth more often before thinking. (My initial reaction was "oops; that's only intended as a theoretical difference to contrast synch with asynch" before I started thinking about it.)

Quote

However, surely that sucks for application development, because you may well send data and then do nothing with that client for minutes (yes, I know a lot of apps have to resort to heartbeats anyway to get around bugs in the IO design of Java, so perhaps it just gets absorbed by that). FPSs don't care, but there are quite a lot of games where a net connection may go silent for a number of seconds.

Another nasty aspect of this is that if you do a send and get an exception, you assume it was that message that broke the connection, when in fact the exception might be left over from your previous write, and it was the previous message that failed. So you sit there thinking the other side got the last message but not this one; and if you are trying to re-establish the connection and its state to recover from the error, it means you have to rewind an unknown amount.

Quote

Sorry about that; I should obviously open my big mouth more often before thinking. (My initial reaction was "oops; that's only intended as a theoretical difference to contrast synch with asynch" before I started thinking about it.)

That's the problem of being known to be right so often: people read everything you write literally. If you're wrong as often as I am, people take everything you write with a pinch of salt, so it doesn't matter if it's not 100% accurate.

Still a nice article, although full source at the end for the NIO bits might be useful to NIO newbies (like how to create a selector in the first place).

LOL. I just wish I knew how that felt... normally I get accused of always *thinking* I'm right, and people feel less inclined to believe me because they're sure I must be wrong sooner or later. Ah, the joys of English cynicism!

Quote

Still a nice article,

Thanks. Just as long as it helps people...

Quote

although full source at the end for the NIO bits might be useful to NIO newbies (like how to create a selector in the first place)

Ah. Good point. I hate the modern tendency to write HTML articles that are mostly "simple vague overview; description of source code; downloadable source code". That's not a tutorial, it's an open-source project without the benefit of being able to update it.

So I accidentally veer too much away from any source code, unless it's short, concise, and can be fully understood with just one read-through. But... I'd been wanting to provide a framework for getting started, and with an NIO server that's a lot of code. A few complete class files at the end will probably be perfect for this. Thanks.

Sure (all in response to your post on the previous page, about what you learnt from the Sun / VM guy...)

1. What about... for a particular client, you may well send data and then do nothing with that client for minutes. FPSs don't care, but there are quite a lot of games where a net connection may go silent for a number of seconds. Are you saying the Exception for the "last byte" (worst-case scenario) doesn't get thrown until your next write, which could be seconds, or even minutes, later? Then your game (clients or servers) could take huge amounts of time (e.g. seconds) to notice a synchronization problem, and perhaps get kicked out of the game (the other side notices the problem much sooner, and times out waiting for a reconnect)...

EDIT: Or, are you saying that certain IO errors do NOT throw exceptions in Java - e.g. the termination of a TCP connection during the actual send from the NIC buffer? (this is the only obvious alternative explanation I can think of...)

But...with the described blocking IO impl, if there were a problem with the OS sending the data to C1 in a), you won't (*MIGHT NOT* - i.e. depends how "early" the problem occurs) find out until b), and then will resend the WRONG data to the WRONG backup client.

(note: this is NOT obviously connected to game servers, although I *think* I can think of examples that are, but this is the simplest version of this problem I could think of, for the sake of clarity...)

In conclusion, a "blocking I/O call" to me suggests it blocks until the I/O is performed! Writing to the OS buffer and then returning to the app is not blocking-IO (to me), it's "blocking method calls" - i.e. only the basic data transfer to another part of your local machine has been completed, there's been no "block" to wait for the actual I/O to complete!

However, I'm pretty sure I'm making some basic mistake of misunderstanding something here, so I'm probably assuming something I shouldn't be, or vice versa. Hopefully the example above illustrates this sufficiently. (PS I think I've included Endolf's queries too; if not, I'm misunderstanding you as well!)

I'll ask, but I believe the answer is that that is NOT a valid server design. That even at the native level you aren't assured of an immediate error return from send(). Which is to say, return from sending a packet to the OS does not guarantee that the connection will complete.

I believe that is a misunderstanding of the term "reliable".

JK


I'm not contesting you're right here, but IMHO java should either say "we don't offer blocking IO" or should actually make it block. I find it really hard to accept that you could do a TCP transfer, have it fail, and java not throw any exception. What's the frigging point of using TCP if your API is going to partially (and non-deterministically) disable the "guaranteed" aspect? How many seconds do you have to loop for, flushing, until you can feel safe you've received any IOExceptions that may have occurred? And why the heck isn't this documented?

As it stands, it seems that the apparent option in the java API's to do "non-blocking" or "blocking" I/O doesn't really exist. Instead you can choose "non-blocking" or "non-blocking, with some processes simplified, and some features missing (e.g. Buffers)".

PS Could you please kick whoever wrote the API docs for io.* for me - this is not mentioned anywhere AFAICS in the API, let alone in the obvious places where it should be - e.g. OutputStream (and if someone can find it somewhere else in the API, please shout). Implementing counter-intuitive API's is unfortunate but hey it's free - not bothering to document their behaviour is completely unacceptable.

Quote

Which is to say, return from sending a packet to the OS does not guarantee that the connection will complete.

I believe that is a misunderstanding of the term "reliable".

JK

It has been a very long time since I did any network programming in C, and I never did much. I could be completely wrong here, but I would have thought I'd remember if TCP's guaranteed delivery were not actually accessible with standard networking API's...I don't see that java has an excuse to be any different. Or am I misunderstanding what you're saying here?

If you are offering reliability as a feature (which, if you claim to be supplying TCP, you are), the application *needs to know* if that reliability has not been achieved. How can anyone write a serious network application where they never know if the data has been sent or not? That would be crazy! It would mean implementing your own ACK scheme on top of TCP (which is one of the things TCP is supposed to avoid!).

...Unless TCP doesn't *actually* offer reliability at all, and I've been misunderstanding it all these years.

Quote

I'm not contesting you're right here, but IMHO java should either say "we don't offer blocking IO" or should actually make it block.

Well, I think you are misinterpreting the meaning of blocking in this case. The API docs don't actually say "This whole API is blocking." They say that IF the underlying channel for the input stream is non-blocking then this API will throw an exception. It makes a similar statement about the output stream.

We cannot make promises about how your OS and TCP/IP stack handles things, only what we do with what they tell us.


Quote

They say that IF the underlying channel for the input stream is non-blocking then this API will throw an exception. It makes a similar statement about the output stream.

This whole thread has been about the non-NIO API's throughout, although it sounds like you *might* now be talking about blocking-mode NIO? Although, of course, the same questions need to be answered for NIO's blocking modes, they may already be answered in API docs etc - I don't know, I don't use that mode.

If you're still talking about IO, then please point out to me where in the API docs it says what you're referring to? As I said, I couldn't see it, having looked in the "obvious" places (e.g. java.io.OutputStream, 1.4.x docs; I may not have been looking at the absolute latest version, though...).

Quote

We cannot make promises about how your OS and TCP/IP stack handles things, only what we do with what they tell us.

Taking all your statements, and all the info you've provided, together, AFAICS there is no functionality that lets a java programmer write a real TCP-based network application using io.*, except by using some incredibly ugly, hacky workarounds that aren't even guaranteed to work (c.f. previous posts).

This is basic stuff - "were my packets transmitted or not?" - without which you cannot do TCP programming (it's part of the protocol, part of the spec, IIRC?).

Waving hands in the air and saying "well, it's all up to your OS; maybe they were, maybe they weren't" is not in any way an excuse for not providing this information / guarantee / option. At the very least the API should say "although this uses TCP, and looks just like TCP, it is in fact something slightly different that doesn't allow you, the common java programmer, the option of using TCP".

Your last statement just sounds exactly like the old fun in 1.0.x where we were only allowed access to one mouse button, allegedly because the OS would only guarantee one. Go that way for long, and java becomes a "toy" programming language useless for real work.

" Send(), sendto(), and sendmsg() are used to transmit a message to another socket. Send() may be used only when the socket is in a connected state, while sendto() and sendmsg() may be used at any time.

The address of the target is given by to with tolen specifying its size. The length of the message is given by len. If the message is too long to pass atomically through the underlying protocol, the error EMSGSIZE is returned, and the message is not transmitted.

No indication of failure to deliver is implicit in a send(). Locally detected errors are indicated by a return value of -1. "

java.net on Win32 or any flavor of Unix operates EXACTLY the way Berkeley sockets are defined, because that's what it is: a wrapper on them.

Next?


Quote

If you're still talking about IO, then please point out to me where in the API docs it says what you're referring to? As I said, I couldn't see it, having looked in the "obvious" places (e.g. java.io.OutputStream, 1.4.x docs; I may not have been looking at the absolute latest version, though...).

I am talking about the fundamentals of sockets. How they work. I gave you the Unix man page. That should settle it.

You seem to have come up with your own idea of what reliable "should mean". That's all well and good, but it's not TCP/IP.

The reliability guarantee of TCP/IP means that a packet will be delivered, in order, or you will eventually get an error. Requiring that error to be immediate on send, though, would turn the entire internet into one giant synchronous app and reduce the running of all the computers on it down to the slowest response time of any communication they do.

The guys who invented TCP/IP were a whole lot smarter then that.

The difference between NIO and java.net, which is what seems to have thoroughly confused you, is equally simple. When you read from a java.net socket, it blocks until data arrives, and that is the ONLY way to find out if there is data available. This is why it is called "blocking."

NIO gives you a select() call with which to find out which sockets have data BEFORE you try to read from them. That's the fundamental difference, and why NIO is called "non-blocking".

In EITHER case, when you write data out a socket, it first goes into a buffer in the sending computer. The only time this blocks is if the buffer is full. Doing anything else would again make the entire computer potentially stop and wait on the ethernet card, which you do not want.

This is all really, really basic operating system I/O stuff.
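(For the NIO newbies mentioned earlier, a minimal skeleton of creating a selector and registering channels; this is my own sketch of the standard select loop, not code from the article, and a real server would loop forever rather than doing the single pass shown here.)

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        // One selector tells us which channels are ready, instead of one
        // blocked thread per client.
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(0));
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One pass of the classic select loop.
        if (selector.selectNow() >= 0) {
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // Accept the new client and register it for reads.
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    if (ch != null) {
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    }
                }
            }
        }
        System.out.println("selector open: " + selector.isOpen());
        server.close();
        selector.close();
    }
}
```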


If that was meant to clarify the C situation, thanks. I happily admitted that I might have been using TCP for years without realising a fundamental problem with the API's I was using. However...

I've just re-read the TCP RFC 793, where it is "suggested" that OS developers writing a TCP API should offer precisely the information I've described (via an API function called "STATUS"), and which you say is not available on win32 nor unix. Because of the design of the RFC, *nothing* is mandated about the OS API.

(This would explain my "confusion"; I always use RFC's as my primary source for networking stuff, until/unless the implementation I'm using proves different from the RFC.)

So, as far as Java is concerned, developers have a reasonable expectation that java has access to a STATUS function - unless they happen to know platform-specific info AND they happen to know EXACTLY how the JVM implements networking.

Quote

Totally incorrect. From the man pages:

I'm sorry, but this is the kind of rubbish that makes java networking so unnecessarily hard: you can't say "it's not documented, but you should know how we implemented this by using your psychic powers". WHEN the API docs for java start to reference the man pages for unix, THEN I will care. Up until then, it's accurate but irrelevant.

Java presents an API. Java != Unix. If the API doc doesn't explicitly state "this uses Berkeley sockets, c.f. XXX for documentation" then the docs are wrong, period. Just because it *happens* to wrap that is irrelevant - java programmers don't KNOW this unless the API *states* it.

I sometimes wish Sun would make up its mind about API docs: either Sun intends to document, or it doesn't.

Quote

I am talking about the fundamentals of sockets. How they work. I gave you the Unix man page. That should settle it.

You seem to have come up with your own idea of what reliable "should mean". That's all well and good, but it's not TCP/IP.

Well, we disagree on the definition of TCP/IP - I work on the basis that RFC's are authoritative, I'm afraid, not the man pages of a particular OS's implementation.

As pointed out, the RFC does not mandate the availability of STATUS, and even states that it "could be excluded without adverse effect" (although, given the scenarios described in this thread, this is only true AFAICS if this functionality is offered elsewhere in the particular implementation).

Unfortunately, 793 seems to predate the use of well-defined MUST and SHOULD in RFC's; almost every RFC contains many "non-mandatory" parts, but usually every implementation is encouraged to implement the entire spec, unless there is a good reason not to do so.

There's enough room here to drive a bus through, and I still don't claim anything you've said is wrong, per se, but I don't agree that my assumptions are as unreasonable as you believe.
