Over the last few months, I made some major improvements to my 'NIO Wrapper'. I believe this is as simple as it gets. You get notified of I/O events via a callback mechanism, so you can act on 'client connected' or 'data received from client' events. The API is single-threaded (and immediately throws exceptions if used otherwise), regardless of how many servers are listening and how many clients are connected (as this is the point of NIO).

The API does not force any protocol on you, so the received byte[]s can be of any size, just as the OS received them. You can register ReadCallbacks, which notify you when a specified number of bytes has been received:
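The wrapper's actual ReadCallback type isn't shown in this post; a minimal sketch of the idea — buffering incoming fragments until the requested number of bytes has arrived, whatever the OS-level fragmentation — might look like this (class and method names are illustrative, not the real API):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

// Illustrative only: fires once exactly 'expected' bytes have accumulated,
// regardless of how the OS fragmented the incoming stream.
class FixedSizeReadCallback {
    private final int expected;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    FixedSizeReadCallback(int expected) {
        this.expected = expected;
    }

    // Feed a fragment as received from the socket; returns the complete
    // message once enough bytes are in, or null if still waiting.
    byte[] onFragment(byte[] fragment) {
        buffer.write(fragment, 0, fragment.length);
        if (buffer.size() < expected) {
            return null; // not enough data yet
        }
        byte[] all = buffer.toByteArray();
        buffer.reset();
        // keep any surplus bytes around for the next message
        buffer.write(all, expected, all.length - expected);
        return Arrays.copyOf(all, expected);
    }
}
```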

Of course, these are also provided for convenience: NioNetworkAdapter, NioNetworkLogger, NioNetworkExtendableHandler.

Executing tasks on the NetworkThread

All reads and writes must be performed on the NetworkThread. This is the thread on which the NioNetwork instance was constructed. Now you probably have more threads in your app, so what to do when you need to send a client a message at some random moment?
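One common answer to this (sketched here with illustrative names, not necessarily how this wrapper does it) is a thread-safe task queue that the network thread drains on every pass through its select loop; any other thread enqueues a Runnable and wakes the selector:

```java
import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the usual pattern: any thread may enqueue work, but the
// Runnable itself executes on the network thread, keeping the API
// single-threaded. Names here are illustrative, not the wrapper's API.
class NetworkTaskQueue {
    private final Queue<Runnable> tasks = new ConcurrentLinkedQueue<>();
    private final Selector selector;

    NetworkTaskQueue(Selector selector) {
        this.selector = selector;
    }

    // Called from any thread.
    void invokeLater(Runnable task) {
        tasks.add(task);
        selector.wakeup(); // interrupt a blocking select() so the task runs soon
    }

    // Called by the network thread on every pass through its select loop.
    void drain() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run();
        }
    }
}
```

The `selector.wakeup()` is the important bit: without it, a task submitted while the network thread is blocked in `select()` would sit in the queue until the next I/O event happens to arrive.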

The simplest task (like a Time Server/Client) may seem a bit complex, but keep in mind that NIO is a very powerful API, so to take advantage of it there is a *bit* of boilerplate code, even in a wrapper...

The ChatServer + ChatClient (see test.jawnae.net2.NioTcpChatServer/Client) are a much better example of how powerful the API is - I coded it within 10 minutes. It feels just like reading and writing lines in plain old java.io.* / java.net.*

The sample code is horrendous, but it should give you a good idea of how to use the API, and how to do better.

There are slightly less than 1000 lines of code in the core API handling the dirty bits of NIO, not counting interfaces, loggers and adapters. Keep this in mind when you want to roll your own: it is very close to the metal, and this simple wrapper will certainly save countless hours.


This looks very good, especially the Packet/PacketCallback stuff which waits for the whole message to be received. Almost no NIO framework handles this out-of-the-box, not even MINA! Unless streaming of audio or video is needed, I don't know why anyone would be interested in receiving half a message.

I just got rid of the networking part of my game, since it was holding me back from developing faster. When I refactor it back in, I'll use this API.

When f0 yields a value, f1 will be written sooner or later. Now bb0 might not be fully written, so you lose data when the data of bb1 gets appended. You'll have to code around this: wait for f0 to yield, check how many bytes are remaining, and either resubmit bb0 or initiate the f1 write - and you'll have to loop this, because bb0 might still have bytes remaining after N writes...

Update: now that I read the javadocs, it seems to be even worse. The second write will throw a WritePendingException - what's the use of having a 'multithreaded API' when you only allow 1 thread at a time, in an ASYNC API... I mean... even if you synchronize on your channel, it will throw exceptions at you for being 'concurrent', as the Future has not yet yielded. This basically SCREAMS for a queueing mechanism!
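The queueing layer being asked for here can be sketched against the real NIO2 API: buffers are queued and written strictly one at a time, with the next write only initiated from inside the completion handler, so a WritePendingException can never occur (this is one possible design, not library code):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: only one write is ever in flight. Partially-written buffers are
// resubmitted, and the next buffer starts from the completion handler.
class QueuedWriter {
    private final AsynchronousSocketChannel channel;
    private final Deque<ByteBuffer> queue = new ArrayDeque<>();
    private boolean writing = false;

    QueuedWriter(AsynchronousSocketChannel channel) {
        this.channel = channel;
    }

    // Safe to call from any thread, any number of times.
    synchronized void enqueue(ByteBuffer buf) {
        queue.addLast(buf);
        if (!writing) {
            writing = true;
            writeNext();
        }
    }

    private synchronized void writeNext() {
        final ByteBuffer head = queue.peekFirst();
        if (head == null) {
            writing = false; // queue drained
            return;
        }
        channel.write(head, null, new CompletionHandler<Integer, Void>() {
            @Override
            public void completed(Integer bytesWritten, Void attachment) {
                synchronized (QueuedWriter.this) {
                    if (!head.hasRemaining()) {
                        queue.removeFirst(); // fully flushed, move on
                    }
                    writeNext(); // resubmit partial buffer or start the next one
                }
            }

            @Override
            public void failed(Throwable exc, Void attachment) {
                exc.printStackTrace(); // real code would close/report the channel
            }
        });
    }
}
```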

You simply need yet another abstraction layer, again.

Further, the read/write-timeout mechanism is simply b0rked. If the read or write didn't happen in time, it will throw an exception (ok...) but it will also leave your channel in an UNDEFINED state! Totally worthless! Reconnect and try again, I guess. What's the point of specifying the timeout per read/write - if it destroys your channel - as opposed to per channel...

Further, in my wrapper API, all data is batched up for you. If you write/enqueue, nothing happens until network.select() is called, which groups all enqueued data as efficiently as possible, and sends it to the socket. Same for reading: the API reads as much as possible, and spreads the bytes over the ReadCallbacks.
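The batching described here can be approximated with NIO's gathering writes: everything enqueued since the last select() is handed to the channel in a single call. This is a sketch of the pattern under that assumption, not the wrapper's actual code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;
import java.util.ArrayList;
import java.util.List;

// Sketch: writes are only enqueued; flush() pushes everything to the
// channel in one gathering write, as the post's network.select() does.
class BatchingWriter {
    private final List<ByteBuffer> pending = new ArrayList<>();

    void enqueue(byte[] data) {
        pending.add(ByteBuffer.wrap(data)); // nothing hits the socket yet
    }

    long flush(GatheringByteChannel channel) throws IOException {
        ByteBuffer[] batch = pending.toArray(new ByteBuffer[0]);
        pending.clear();
        // NOTE: on a real non-blocking socket this write may be partial;
        // a real implementation must keep any buffers with remaining bytes.
        return channel.write(batch);
    }
}
```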

I guess my wrapper is just more convenient. (although NIO2 is a major improvement!).

Plus you get the Packet protocol (16 bit header) out of the box.
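The Packet protocol is only described as "16 bit header" in the post; a sketch of that kind of framing — 2 big-endian header bytes giving the payload length, with a decoder that tolerates arbitrary fragmentation — could look like this (class names are illustrative, not the wrapper's actual classes):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative 16-bit length-prefixed framing: 2 header bytes (big-endian,
// unsigned) give the payload length, so payloads may be up to 65535 bytes.
class PacketCodec {
    static byte[] encode(byte[] payload) {
        if (payload.length > 0xFFFF) {
            throw new IllegalArgumentException("payload too large for 16-bit header");
        }
        byte[] packet = new byte[2 + payload.length];
        packet[0] = (byte) (payload.length >>> 8);
        packet[1] = (byte) payload.length;
        System.arraycopy(payload, 0, packet, 2, payload.length);
        return packet;
    }

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Feed arbitrary fragments; returns every complete payload found so far.
    List<byte[]> decode(byte[] fragment) {
        buffer.write(fragment, 0, fragment.length);
        List<byte[]> packets = new ArrayList<>();
        byte[] data = buffer.toByteArray();
        int pos = 0;
        while (data.length - pos >= 2) {
            int len = ((data[pos] & 0xFF) << 8) | (data[pos + 1] & 0xFF);
            if (data.length - pos - 2 < len) break; // incomplete packet
            byte[] payload = new byte[len];
            System.arraycopy(data, pos + 2, payload, 0, len);
            packets.add(payload);
            pos += 2 + len;
        }
        buffer.reset();
        buffer.write(data, pos, data.length - pos); // keep the remainder
        return packets;
    }
}
```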


I was programming this singlethreaded proxy server, and noticed that in that situation my ReadCallbacks were a royal PITA, as they would only return once N bytes were fully read. If you don't know the number of bytes to expect - for example in a line-based protocol like HTTP, FTP, SMTP, POP3, etc. - then you need to be able to handle the incoming bytes instantly...

So!

You can now pass a NioReadStrategy (enum) for each NioClient: READ_CALLBACK and READ_DIRECT.

NioNetworkHandler (interface):

   ...
   public NioReadStrategy acceptedClient(NioClient client);
   public NioReadStrategy connectedClient(NioClient client);
   ...
   public void receivedBytes(NioClient client, int bytes);  // will be called in READ_CALLBACK mode
   public void receivedData(NioClient client, byte[] data); // will be called in READ_DIRECT mode
   ...

Now it gets very simple to dump data between two connections, so you can code a non-blocking proxy in very few lines:
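With READ_DIRECT this boils down to forwarding every received chunk to the other side unmodified. The wrapper's own types aren't reproduced here; an illustrative Peer interface stands in for NioClient, so the core of such a proxy is just:

```java
// Illustrative only: 'Peer' stands in for the wrapper's NioClient.
// A proxy in READ_DIRECT mode is little more than cross-wiring two peers.
interface Peer {
    void send(byte[] data);
}

class ProxyPump {
    private final Peer other;

    ProxyPump(Peer other) {
        this.other = other;
    }

    // Would be driven by receivedData(client, data) in READ_DIRECT mode:
    // every chunk passes through untouched, whatever size the OS delivered.
    void receivedData(byte[] data) {
        other.send(data);
    }
}
```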

As soon as the client is started, both parties throw an exception. A quick look indicates the server crashes due to a NioReadStrategy attribute being null. The same error applies to the ChatClient/Server example. Do you have a working example?

This is what I modified, but how can I read incoming reply packets from the server?

If you use READ_DIRECT however, all data ends up at NioNetworkListener.receivedData(NioClient client, byte[] data), but in that case your packets won't have a guaranteed length; you receive the data just as the OS received it.


Before I read your reply I managed to implement this version. I will however download the updated code and continue experimenting with your NIO wrapper. I like the packet approach; it's so much easier to create application logic, such as sending short XML messages.

What is this loop(client) call? I am now a bit lost as to whether to use the .register(client) call once, or loop(client). Any suggestions?

edit: And I think the previous fix does not fix the original problem; my very limited understanding of the code logic indicates that the NioClient.registerReadCallback method should use a default strategy. Or maybe, better, the default strategy should be initialized somewhere earlier.

Network data is copied 1 time now when using the HANDLER strategy. Network data is copied 2 times when using the CALLBACK strategy, due to the ByteBuffer->byte[] conversion.

It used to depend on the (mis)match between incoming fragments and ReadCallback size...

Everything is handled internally using ByteBuffers, created by a {Direct|Heap}ByteBufferFactory. The copies are efficient - no more occasional byte-by-byte loops - and when using direct ByteBuffers you get superior performance.


So I totally revamped the API. It's a lot more gentle on the user now.

You don't have to squeeze everything into one NioNetworkHandler.

You can simply call network.addHandler(NioNetworkHandler) and it will receive events.
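A multi-handler dispatch like addHandler typically just broadcasts each event to every registered handler. A sketch of that fan-out, with an illustrative handler type standing in for the wrapper's interface:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the addHandler idea: events fan out to all registered handlers,
// so you no longer have to squeeze everything into one handler object.
// 'ConnectHandler' is illustrative, not the wrapper's actual interface.
interface ConnectHandler {
    void connectedClient(String client);
}

class HandlerRegistry {
    private final List<ConnectHandler> handlers = new CopyOnWriteArrayList<>();

    void addHandler(ConnectHandler h) {
        handlers.add(h);
    }

    // Called by the network loop for each event; every handler sees it.
    void fireConnected(String client) {
        for (ConnectHandler h : handlers) {
            h.connectedClient(client);
        }
    }
}
```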

You can now also, for ultimate performance, mess with the 'transfer buffer' for zero-copy transfers. Further, there is a new NioClient.flushOrDie() method, so you can attempt a non-blocking flush, which will throw an exception if it fails.

The classes are now nicely spread over 2 packages, and the dependency on my custom Logger is now gone.


I just hope my user base will increase from 0 to 1, that'd make my day.

Besides that, it really shows how incredibly horrific NIO is. My wrapper is (becoming) quite chunky, yet it basically does nothing more than what you (you, Abuse!) would code up in a few minutes in java.io.* + java.net.*

And then, Linux threads are getting so darn efficient that with enough memory (!) they seriously outperform fine-tuned Selectors, with a one-thread-per-connection model. NIO is only really useful on memory limited systems these days (that means indies, yay!).

