I'm preparing to make the conversion from Serialization to my own lightweight "protocol," which will initially run on top of TCP (it may move to datagrams at some point if there seems to be a need). To facilitate this changeover, I'm going to have all of my classes that currently implement the Serializable interface implement this:


public interface ActiveSerializable {
    public void fillByteBuffer(ByteBuffer in);
    public void readByteBuffer(ByteBuffer in);
}

Basically, the class uses fillByteBuffer to call in.putXXX() for each of its members and conversely calls in.getXXX() in readByteBuffer(). Assuming that I have an active TCP connection to my client, what's the most efficient way to get this data through the Socket? I'm assuming that I should just call ByteBuffer.array() and then pop that through my OutputStream, but how do I delimit the messages? Anything else I need to watch out for?
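One common answer to the delimiting question is length-prefix framing: write a fixed-size length header before each message so the reader knows exactly how many bytes to pull off the stream. Here's a minimal sketch of that idea over plain streams; the class and method names (Framing, writeMessage, readMessage) are mine, not from the thread, and it assumes the ByteBuffer is array-backed:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class Framing {
    // Write one message: a 4-byte big-endian length prefix, then the payload.
    public static void writeMessage(DataOutputStream out, ByteBuffer buf) throws IOException {
        out.writeInt(buf.remaining());
        // Assumes an array-backed buffer (e.g. from ByteBuffer.wrap or allocate).
        out.write(buf.array(), buf.position(), buf.remaining());
        out.flush();
    }

    // Read one message: block until the entire payload has arrived.
    public static ByteBuffer readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // a plain read() may return fewer bytes; readFully loops
        return ByteBuffer.wrap(payload);
    }
}
```

The main gotcha this sidesteps is that TCP is a byte stream, not a message stream: a single read can return half a message or two messages glued together, so the reader must loop until the advertised length has arrived.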

TLVs are a very convenient and efficient method of putting data into an organized format, especially variable-length strings. TLV literally stands for "Type, Length, Value", and that's exactly what it is: a 16-bit type code, a 16-bit length for the Value field, and then the actual data in the Value field (variable length).
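The TLV layout described above maps directly onto ByteBuffer's putShort/getShort calls. A rough sketch (the Tlv class name and the type-code value in the comments are my own illustration, not part of any standard):

```java
import java.nio.ByteBuffer;

public class Tlv {
    // Append one TLV record: 16-bit type, 16-bit length, then the value bytes.
    public static void put(ByteBuffer buf, short type, byte[] value) {
        buf.putShort(type);
        buf.putShort((short) value.length);
        buf.put(value);
    }

    // Read the next TLV record from the buffer, advancing its position.
    public static byte[] get(ByteBuffer buf, short expectedType) {
        short type = buf.getShort();
        if (type != expectedType) throw new IllegalStateException("unexpected type " + type);
        int len = buf.getShort() & 0xFFFF; // treat the length as unsigned
        byte[] value = new byte[len];
        buf.get(value);
        return value;
    }
}
```

A nice property of TLV is that a reader can skip records whose type code it doesn't recognize, since the length field tells it how far to jump.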

If you're already working with ByteBuffers, I'd recommend going all the way and using channels (java.nio) for your TCP connections. That way you avoid buffer.array() and the like; nio code might even save you a buffer copy.
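With channels, the buffer goes straight to the socket without the array() detour. One detail worth showing: a single write() call is not guaranteed to drain the buffer, so you loop. A small sketch (ChannelSend and writeFully are my names for illustration):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public class ChannelSend {
    // Write the whole buffer; one write() may accept only part of it,
    // so keep writing until nothing remains.
    public static void writeFully(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
    }
}
```

This works for any WritableByteChannel, so the same helper serves a SocketChannel in real code and a Pipe or FileChannel in tests.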

Say that my intended design is this: each incoming Socket from my ServerSocket is encapsulated in a Session class, which has a process() method that is invoked during every run loop of my networking thread. This process function checks to see if there is anything available() in the stream and, if so, categorizes it, checks out a GameEvent instance from my event pool, and puts it in a queue for the game logic thread.

Other than performance and working with the most natural network API when you're already dealing with ByteBuffers, you get to drop your explicit network thread. By using non-blocking networking you can check for incoming bytes in your game loop without worrying about blocking.
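The "check from the game loop" pattern above comes down to a Selector polled with selectNow(), which returns immediately instead of blocking. A sketch of how the Session/process() design could look on top of nio (class and field names here are mine; the bytesReceived counter stands in for handing data to the game-logic queue):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class GameLoopNet {
    private final Selector selector;
    private final ServerSocketChannel server;
    public int bytesReceived; // stand-in for queuing GameEvents to the logic thread

    public GameLoopNet(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false); // never block the game loop
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() { return server.socket().getLocalPort(); }

    // Called once per frame from the game loop; returns at once if nothing is ready.
    public void pollNetwork() throws IOException {
        if (selector.selectNow() == 0) return; // non-blocking readiness check
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {
                SocketChannel client = server.accept();
                client.configureBlocking(false);
                client.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(1024);
                int n = ((SocketChannel) key.channel()).read(buf);
                if (n < 0) key.channel().close(); // peer disconnected
                else bytesReceived += n;          // real code would enqueue a GameEvent
            }
        }
    }
}
```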

Having used both the old stream interface and the new nio stuff, I actually think nio is easier to work with.

Is there a reason you're avoiding Externalizable objects as a lightweight protocol? It's essentially Serializable where you define the raw byte order used to disassemble and reassemble the object yourself. It's much, much lighter and faster than Serializable, and built right into the Java language.
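For reference, here's what an Externalizable class looks like: you write and read each field yourself, in an order you control, instead of letting the serializer reflect over the class. The PlayerState class and its fields are a made-up example:

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class PlayerState implements Externalizable {
    public int x, y, health;

    public PlayerState() {} // Externalizable requires a public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // We choose the exact byte layout -- no per-field metadata is written.
        out.writeInt(x);
        out.writeInt(y);
        out.writeShort(health);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        // Must read back in exactly the order written above.
        x = in.readInt();
        y = in.readInt();
        health = in.readShort();
    }
}
```

Note the no-arg constructor: the deserializer creates the object first and then calls readExternal() to fill it in.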

Externalizable is lighter than regular Serializable, but I believe the stream still includes:

* the class of the object
* the class signature

In many cases that can be bigger than your object data, making the overhead still annoyingly high. Often a message- or command-passing system is a faster/cleaner/more efficient way to get "events" around the network than object serialization.

Object serialization is great for "saving" objects or object state, but might not be the best way to communicate game state changes in a networked game. (IMHO)

If performance is at a premium, it's definitely worth writing your own. Externalizable does still include the class header, which typically adds about 20-30 bytes; the rest can easily be done as pure raw bytes. You could write your own data-handling methods using a coded version of the packet type and save some of those bytes per packet over the Externalized packets.
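The "coded version of the packet type" idea means replacing the 20-30 byte class header with a single byte of your own. A sketch of what that looks like (MovePacket, its fields, and the type-code value are all hypothetical):

```java
import java.nio.ByteBuffer;

public class MovePacket {
    // One byte identifies the packet type instead of a 20-30 byte class header.
    public static final byte TYPE_MOVE = 0x01; // hypothetical type code

    public int entityId;
    public short x, y;

    public ByteBuffer encode() {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + 2 + 2);
        buf.put(TYPE_MOVE).putInt(entityId).putShort(x).putShort(y);
        buf.flip(); // make the buffer readable from the start
        return buf;
    }

    public static MovePacket decode(ByteBuffer buf) {
        byte type = buf.get();
        if (type != TYPE_MOVE) throw new IllegalArgumentException("wrong type: " + type);
        MovePacket p = new MovePacket();
        p.entityId = buf.getInt();
        p.x = buf.getShort();
        p.y = buf.getShort();
        return p;
    }
}
```

In a real dispatcher you'd peek the type byte first and route the rest of the buffer to the matching decoder.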

However, converting from Serializable to Externalizable is usually a fairly easy middle point before going to your own byte handlers and often gets the performance needed with a fraction of the work.

In my experience Externalizable usually nets a significant improvement in processing time (in the range of 300-400% improvement in decomposing and rebuilding) and significant reduction in raw data (approximately 20-30% of the Serializable size) being pushed through the wire as compared to Serialization.

Have you used java.util.zip.GZIPInputStream and java.util.zip.GZIPOutputStream in any of your networking? I'm curious what effect it has on performance.

I've used java.util.zip.GZIPOutputStream to compress HTML data, which was a totally off-topic project. Since many browsers can decompress that data, it made a significant difference in throughput for slower connections, though it got tricky when exposed to various browsers and content types. But again, that's off topic.

In terms of game data, I'm not sure you would see the same level of results, because it tends to be bytes with much less recurrence than strings. Still an interesting idea, and one I'm looking forward to trying.

It also depends entirely on what your performance testing shows. Compression of packets also means that the server has to decompress packets (unless it's just a traffic cop routing messages blindly to everyone else). If you have a very chatty protocol, you will quickly discover that you spend an obscene amount of time compressing traffic on the client and decompressing it on the server so you can generate a response, compress it, and send it to your clients, who must then decompress it.

I've found that compression didn't help me much because I had gone out of my way to keep my packets relatively small in order to avoid them being split into multiple packets and potentially lost or damaged due to UDP transmissions or waiting for TCP to get the larger packets back together again.

Latency would probably be a big problem as well. GZip compression works by scanning blocks of data for similar data chunks. The compression algo is most likely to hold onto your messages until it has enough to create an entire block of data to transmit. Even if it does transmit immediately (via poor compression methods), the receiving end will still need to wait for the translation table to be written out in order to begin decompression.

That being said, compression can possibly help with relatively high-bandwidth data such as real-time voice communications. The one gotcha there is that you'll need as much control over the compression process as possible. Since you generally have between one and two milliseconds for each frame of animation, you'll most likely need a progressive compressor that can be preempted at any time. (A concession to the real-time nature of video games)

The compression algo is most likely to hold onto your messages until it has enough to create an entire block of data to transmit.

Yea, I assumed that too, but then I saw the GZIPOutputStream.finish() method in addition to the OutputStream.flush() method, which I thought might solve the buffering problems. Maybe I should just use them and see what happens.
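For what it's worth, the two methods do different things: finish() ends the gzip stream entirely, while flush() only forces out buffered data if the stream was built with the syncFlush constructor flag (available since Java 7). A small round-trip sketch, assuming the GzipFlush class name and sample strings are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipFlush {
    public static String roundTrip() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        // syncFlush = true (Java 7+): flush() pushes all pending compressed bytes out
        GZIPOutputStream gz = new GZIPOutputStream(bos, true);
        gz.write("hello ".getBytes("UTF-8"));
        gz.flush();  // message boundary: the receiver can decode everything so far
        gz.write("world".getBytes("UTF-8"));
        gz.finish(); // ends the gzip stream; no further writes are allowed

        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(bos.toByteArray()));
        ByteArrayOutputStream decoded = new ByteArrayOutputStream();
        byte[] tmp = new byte[64];
        int n;
        while ((n = in.read(tmp)) > 0) decoded.write(tmp, 0, n);
        return decoded.toString("UTF-8");
    }
}
```

Without syncFlush, flush() on a GZIPOutputStream may leave bytes buffered inside the deflater, which is exactly the latency problem described above.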

Unless you have higher-bandwidth data, you're probably going to *increase* the size of the packets from having to send the translation table. Of course, if your game works by sending complete game-data snapshots every half-second, you might get a decent bang. (Apparently this method does work for RTS-type games. Never tried it myself, tho.) The problem, of course, is that you're probably already tuning your data to be as small as possible. Given that, there's most likely little that the compressor can do.
