Quick client/server socket question.

I've written a few client/server configurations in Java, and I just now realized I might be doing something a little too paranoid. Basically this:

1. Get an address.
2. Open a socket.
3. Get an input and output buffer.
4. START A THREAD and allow it to block on the output buffer, passing in a monitor.
5. Write something through the input buffer.
6. wait() on the monitor for the output buffer thread to read, and then notify() the original thread.
7. Wake up, get the result from the (no longer running) output buffer thread, and return the result.

Is number 4 unnecessary? Can I:

4. Write to the input buffer.
5. Block on the output buffer.

...and not drop bytes on the floor if I get the result before I block on the output buffer?

nameishere
Monday, October 23, 2006


Yes. If you're doing something like an HTTP request, it's perfectly fine to throw your request at the server, then start slurping input.

The socket has a buffer attached to it, and even if that backs up, TCP/IP implements flow control; your TCP/IP stack says to the other end "no! no! slow down!! In fact, stop for a bit!"

So you shouldn't lose any data. It'll all be waiting for you.

{As long as the server is correctly written, it'll just stop sending stuff. If it's badly written, it'll... oh... hang in a "write()" or something... }
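To make the answer concrete, here's a minimal sketch of the simpler pattern from the question (write the request, then block on the read) with no extra thread. The class and method names are mine, and the throwaway localhost echo server stands in for whatever server you're actually talking to:

```java
import java.io.*;
import java.net.*;

public class SimpleRequestResponse {
    // Toy server for the demo: reads one line, echoes one line back, then closes.
    static int startEchoServer() throws IOException {
        ServerSocket server = new ServerSocket(0); // port 0 = let the OS pick one
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());
            } catch (IOException ignored) {
            } finally {
                try { server.close(); } catch (IOException ignored) { }
            }
        });
        t.setDaemon(true);
        t.start();
        return server.getLocalPort();
    }

    // The single-threaded pattern: write the request, then block until the reply arrives.
    static String request(int port, String message) throws IOException {
        try (Socket socket = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(message);   // step 4: write to the socket
            return in.readLine();   // step 5: block on the read; nothing is dropped
        }
    }

    public static void main(String[] args) throws IOException {
        int port = startEchoServer();
        System.out.println(request(port, "hello")); // prints "echo: hello"
    }
}
```

Even if the reply lands before `readLine()` is called, it just sits in the socket's receive buffer until you read it.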

As others have said, you don't need a separate thread for receiving data from the server in this case, because it sounds like a typical request-response scenario: you send everything you need to send before reading back a full response.

You will want to set your socket timeout values appropriately in case you never get anything back from the server. This can and will happen now and again. The default behavior in Java is to block indefinitely, which is obviously not recommended.
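A quick sketch of what that looks like: `Socket.setSoTimeout` makes any blocking read throw `SocketTimeoutException` after the given interval instead of hanging forever. The helper names and the deliberately silent server are made up for the demo:

```java
import java.io.*;
import java.net.*;

public class TimeoutDemo {
    // A server that accepts the connection but never sends anything back.
    static ServerSocket silentServer() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try {
                Socket s = server.accept(); // hold the connection open, say nothing
                Thread.sleep(10_000);
                s.close();
            } catch (Exception ignored) { }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }

    // Returns true if the read timed out rather than blocking forever.
    static boolean timedOut(int port) throws IOException {
        try (Socket socket = new Socket("localhost", port)) {
            socket.setSoTimeout(250); // give up on a read after 250 ms
            try {
                socket.getInputStream().read();
                return false;
            } catch (SocketTimeoutException e) {
                return true; // this is where you'd retry, log, or bail out
            }
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = silentServer()) {
            System.out.println("timed out: " + timedOut(server.getLocalPort()));
        }
    }
}
```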

The hardest part about socket programming of any kind is coming up with a handshaking scheme (especially if you don't have access to the server code). You need to know when to stop reading data from the server. Simply reading to "end of file" doesn't usually work: the socket doesn't know where the end of the stream is, because in theory the server could send more data at any time. To get a clean disconnect you need to either use the shutdownOutput method on the server side or send some sort of sentinel in the data (length of the message at the beginning of the transmission, a special character, etc.). Note that shutdownOutput works well for HTTP data but does not work for HTTPS.
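The length-prefix approach mentioned above can be sketched in a few lines with `DataOutputStream`/`DataInputStream`. This is just one framing convention among many (the class and method names are mine), demonstrated over in-memory streams so it runs without a real socket, but the same two helpers work on a socket's streams:

```java
import java.io.*;

public class LengthPrefixed {
    // Sender side: write a 4-byte big-endian length, then the payload itself.
    static void writeMessage(OutputStream raw, byte[] payload) throws IOException {
        DataOutputStream out = new DataOutputStream(raw);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Receiver side: read the length, then exactly that many bytes -- no guessing
    // about where the message ends, and no need to see end-of-stream.
    static byte[] readMessage(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload); // loops internally until all 'length' bytes arrive
        return payload;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the wire: a byte array instead of a live socket.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        writeMessage(wire, "hello".getBytes("UTF-8"));
        byte[] got = readMessage(new ByteArrayInputStream(wire.toByteArray()));
        System.out.println(new String(got, "UTF-8")); // prints "hello"
    }
}
```

The nice property is that `readFully` blocks until the whole message is in, so partial TCP reads are handled for you.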