This is a somewhat general question, but I'm new at this. I have a client applet and a server application set up. They use TCP sockets and work pretty well.

This is an outline of the server architecture:

The main class creates a server thread that loops on a ServerSocket and accepts new clients, making an object out of each and adding them to a ConcurrentLinkedQueue of waiting players. The main class also creates a thread that continually checks the queue of waiting players and, once two are connected, creates a game object out of the two players.

Each client is a new thread that loops on the in-stream of that client.

Each game object is also a thread that handles the protocol back and forth between the clients during gameplay.
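In code, the matchmaking part looks roughly like this (Player and Game here are simplified stand-ins for my real classes, and the pairing logic is pulled into a testable method):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class Matchmaker {
    // Simplified stand-in for the real per-connection player object.
    public static class Player {
        public final String name;
        public Player(String name) { this.name = name; }
    }

    private final Queue<Player> waiting = new ConcurrentLinkedQueue<>();

    // Called by the accept loop when a new client connects.
    public void add(Player p) { waiting.add(p); }

    // Try to take two waiting players off the queue; if only one is
    // available, put them back and report no match yet.
    public Player[] tryMatch() {
        Player a = waiting.poll();
        if (a == null) return null;
        Player b = waiting.poll();
        if (b == null) { waiting.add(a); return null; }
        return new Player[] { a, b };
    }

    // The matchmaking thread just loops on tryMatch() and starts a
    // game thread for each pair it finds.
    public void runLoop() throws InterruptedException {
        while (true) {
            Player[] pair = tryMatch();
            if (pair == null) { Thread.sleep(100); continue; }
            // new Thread(new Game(pair[0], pair[1])).start();
        }
    }
}
```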

As I said before, it works pretty well as is, but I have two questions:

1. I'm worried that I am creating too many threads and the server may crash if it has to handle a lot of players. Is the architecture I have set up a solid one, or is there a better approach?

2. What's a good way to track whether or not a client is still connected? Clients close their browsers, reload, etc. From a server back-end standpoint I don't need to be manipulating client objects that no longer exist.

Hopefully I explained myself pretty well, but essentially I just need some guidance on the general setup of a server designed to handle many clients and many games going on at one time.

1. You've described the acceptor/reactor pattern; you can google that for more info. Your server shouldn't crash from too many players, but it may slow down. Unless you have thousands of players, you're probably fine. You can use NIO or an NIO library (see KryoNet in my signature). This can simplify threading by using only one thread for network stuff, but using NIO is a major pain. Note you don't need NIO to scale; you can scale with the one-thread-per-client approach you are using, especially with NPTL.

That said, it doesn't sound ideal to use a thread solely to see if two clients are connected. You should know that the moment the second client connects. Though if you have this working it probably isn't worth rewriting.

You may also be able to eliminate using a thread per game. Maybe you can share thread safe objects/code between the two client threads.

2. TCP is a stateful connection. It does magic to detect when the other end has been smoked and throws an exception on your next read/write, or if you are blocked reading/writing.
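As a rough sketch (class and method names invented), the per-client read loop sees a clean close as readLine() returning null and an unclean one as an IOException:

```java
import java.io.BufferedReader;
import java.io.IOException;

public class DisconnectDetector {
    // Drain the stream and report how the connection ended:
    // "clean" if the peer closed the stream normally (readLine
    // returned null), "error" if an IOException was thrown mid-read.
    public static String drain(BufferedReader in) {
        try {
            while (in.readLine() != null) {
                // normally each line would be dispatched to the game here
            }
            return "clean";
        } catch (IOException e) {
            return "error";
        }
    }
}
```

Either way, this is the point where the server should remove the client object from any waiting queues or running games.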

So, just as an example, what exception would the server "see" if an applet was closed? The game is 1 vs. 1, so in the event that a client has left or lost its connection, the game should not continue.

@ddyer

In essence, my server is just multiplexing the I/O. All the game thread does is continuously check to see if either player has sent anything and, in the event that they have, send it to the other player.

I think I may be missing something, though. I did not know there was such a thing as non-blocking I/O; thus my need for making a thread for each player. The player thread is blocked on in.readLine(), and in the event that something is sent, it takes the message and puts it in a queue. The game thread then checks each player's queue for messages sent. How do you implement non-blocking I/O?
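The player-thread-plus-queue arrangement I described looks roughly like this (names simplified):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One reader thread per player: block on readLine() and push each
// message into a queue that the game thread polls.
public class PlayerReader implements Runnable {
    private final BufferedReader in;
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    public PlayerReader(BufferedReader in) { this.in = in; }

    // The game thread polls this queue for the player's messages.
    public BlockingQueue<String> inbox() { return inbox; }

    @Override
    public void run() {
        try {
            String line;
            while ((line = in.readLine()) != null) {
                inbox.add(line);
            }
        } catch (IOException e) {
            // stream broke; the game thread will notice the player is gone
        }
    }
}
```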

So, just as an example, what exception would the server "see" if an applet was closed?

You will see an IOException; specifically, the JDK does: throw new SocketException("socket closed");

Actually, sometimes the connection simply times out, if a timeout is set. Otherwise it will be in limbo. Local connections will definitely throw an IOException, but once you send data over the interwebs it's a lot less predictable.


If the connection is dropped (no clean disconnect) during a read(), and no timeout is set, your call will block indefinitely. I mean, how would you be able to tell the difference between a really long read delay (like hours) and a lost connection where nothing is received? There is no keep-alive I/O.


Check out the JavaDoc for Socket. You can set the timeout with Socket.setSoTimeout(milliseconds). Waiting 2x the turn time doesn't sound too bad; another way is to have the client send a stay-alive message every so often to reset the timeout.
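A minimal sketch of that timeout behavior (the helper name is made up); note that a SocketTimeoutException leaves the socket open and usable:

```java
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    // Returns true if a blocking read timed out rather than receiving
    // data. The socket remains valid after a SocketTimeoutException.
    public static boolean timedOut(Socket s, int millis) {
        try {
            s.setSoTimeout(millis);        // note the method name: setSoTimeout
            s.getInputStream().read();     // blocks for at most `millis` ms
            return false;                  // a byte actually arrived
        } catch (SocketTimeoutException e) {
            return true;                   // timeout fired; socket still usable
        } catch (IOException e) {
            return false;                  // the connection actually broke
        }
    }
}
```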

Another question, kind of off topic: if I wanted to put my game out on the web, are there any hosting sites y'all would suggest? Also, what all do I need? I have an HTML page with an applet that would need to be served up to all the clients, and then a Java application that would need to be running on the server. I know most hosting sites will allow the HTML page, but for the server application do I need root access or anything specific?

Also, is Google App Engine worth looking into? Will it work for what I'm trying to do?

If the connection is dropped (no clean disconnect) during a read(), and no timeout is set, your call will block indefinitely. I mean, how would you be able to tell the difference between a really long read delay (like hours) and a lost connection where nothing is received? There is no keep-alive I/O.

That doesn't sound right... TCP should know the connection is smoked and throw an exception even if there is no timeout on the read.


Welcome to reality. TCP can only figure out the connection is smoked on a write(), not on a read().

Again, how would you know whether the server intentionally isn't sending bytes for a long time, or the server suddenly lost power? Or a switch blew up... If the remote end doesn't send a FIN packet, you'll never know the TCP connection is lost (unless you write(), which requires an ACK).

I'm surprised you don't know about this... Have you never had TCP timeouts where the server was up and running, but according to its socket state the connection was lost, while the client thought it was still connected (or vice versa)? Of course, when testing locally, these things never happen, but in the real world it's a real problem.


Hmm. Nope, never had the pleasure of losing my connection that spectacularly. Even when a process gets killed, the socket is closed properly. You'd need to pull a plug or otherwise have some catastrophe, as you mentioned.

KryoNet does send keep-alive messages. The default interval for TCP is 59000 ms and for UDP is 19000 ms.

Create a TCP channel that handles both your traffic *and* ping/pong behind the scenes (interleave the streams). If the ping fails or the pong times out, reconnect. Again, this is all behind an abstraction layer. At the game-code level, you're simply sending and receiving bytes (or high-level game packets), regardless of all the mess that occurs behind the scenes.

It's fairly easy. The only thing you need to do upon reconnect is determine which packets got lost, or renegotiate some basic state.



Oh, pity, and here I was thinking it was super easy and the client and server always got interruptions if the other side disappeared (I only tested my socket on my home intranet). I guess I'll have to put some ping/pong mechanism in there every minute or so.


Thanks for the heads up!

I'd do a ping/pong every (few) second(s). Not to keep the TCP session active, but to be notified immediately if something breaks.


How often does it actually break in a real-life scenario? Does anyone have an idea? Is it once every few hours/days, or once every 20 minutes? If it is the former, it might be okay to reload most data from the server to get common state, but if it is the latter, I'd need to figure out what was missing and recommunicate only that part.


I too would like to know this. And does physical distance between client and server have anything to do with it? As I expressed before, are my clients even going to be able to finish playing a game (~10-20 min) without having to reconnect?

You can have really stable connections from Germany to the USA, while Germany to Spain can be kinda crap. It's all about the quality of the local infrastructure. Naturally the farther away, the higher the odds you get routed through crappy routers.

For 'most of us' it's safe to assume TCP is reliable. If you want to provide high quality to customers, either for business or for a commercial game, you have to keep in mind that TCP is far from reliable when you have long running sessions. If you can afford the time/work: prepare to recover.


I would add to Riven's advice: if you *can* have long sessions, then you should be able to deal with them and recover when things go pear-shaped.

I am in Austria and have 8 Mbit unlimited, and I really do get that too; I ran it at 7 Mbit for almost a month. However, my internet IP address changes 3 times a day, and that causes all my TCP connections to go dark. They don't get any kind of notification.

If you expect to keep a single TCP connection live for more than an hour, I would recommend that you assume this problem is commonplace.



Thanks Riven,

That sounds good. If it is quite stable, I'll put up a system of syncing up with the server again in a less fancy way than if it happened every 10 minutes.

Create a TCP channel that handles both your traffic *and* ping/pong behind the scenes (interleave the streams). If the ping fails or the pong times out, reconnect. Again, this is all behind an abstraction layer. At the game-code level, you're simply sending and receiving bytes (or high-level game packets), regardless of all the mess that occurs behind the scenes.

Sorry to bring up an old thread, but I need some advice again. How do you set up this ping/pong? Is the socket timeout set on both the client and the server?

For the game data, I send strings over the socket and then parse the strings upon arrival. In order to set up this ping/pong, would I create another socket and have the client and server constantly sending data back and forth?

Upon a timeout, exactly nothing will happen to the underlying connection: it will remain open and valid, it's just that the read() call will throw a SocketTimeoutException.

You set a timeout to make sure your app doesn't wait endlessly if the TCP connection gets screwed up (unclean disconnect). You have to set the timeout on both the server and the client.

I understand all of this. It seems to me like it would be best to somehow have two streams going: one that has a socket timeout and one that doesn't. The one with the timeout is constantly sending data back and forth, while the one without the timeout sends game data. In the event of a timeout, both streams are killed or reconnected. Is this how it is done, or are they merged together?

Edit: never mind, I guess it would be really easy to merge them; two streams aren't necessary.
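Since my game data is newline-delimited strings anyway, merging could be as simple as reserving a prefix for control messages (the "PING"/"PONG" tags here are made up):

```java
// Interleaving pings with game data on one stream: reserve dedicated
// message tags for control traffic and dispatch on them.
public class MessageMux {
    public enum Kind { PING, PONG, GAME }

    // Decide whether an incoming line is control traffic or game data.
    public static Kind classify(String line) {
        if (line.equals("PING")) return Kind.PING;
        if (line.equals("PONG")) return Kind.PONG;
        return Kind.GAME;
    }

    // A ping is answered immediately; anything else produces no reply here.
    public static String respond(String line) {
        switch (classify(line)) {
            case PING: return "PONG"; // answer the keep-alive
            case PONG: return null;   // arrival alone resets the timeout
            default:   return null;   // hand the line to game logic instead
        }
    }
}
```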


By two streams, do you mean two sockets? (You won't gain any benefit from using two.)

Something similar seems to have been mentioned above already, but:

Server side, you should keep a timestamp for the last packet received from each client, and if the last packet received from a client is too old, disconnect it and free its resources / kill its thread, etc.

Client side, you should send a ping and wait for a pong; if no pong is received within XX ms/seconds/minutes (depending on your game), you should try to reconnect to the server.

Both are done in a higher-level layer (the application/software layer); this is not dependent on what network protocol you use (you could even use SMTP (email)).
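The server-side timestamp check could be sketched like this (the class name and threshold are illustrative; touch() would be called from wherever packets arrive):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Server-side liveness tracking as described above: record the time of
// each client's last packet and flag clients whose silence exceeds a limit.
public class LivenessTracker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long maxSilenceMillis;

    public LivenessTracker(long maxSilenceMillis) {
        this.maxSilenceMillis = maxSilenceMillis;
    }

    // Call whenever any packet (game data or ping) arrives from a client.
    public void touch(String clientId, long nowMillis) {
        lastSeen.put(clientId, nowMillis);
    }

    // True if the client has been silent too long and should be dropped.
    public boolean isDead(String clientId, long nowMillis) {
        Long seen = lastSeen.get(clientId);
        return seen == null || nowMillis - seen > maxSilenceMillis;
    }
}
```

A reaper thread would periodically call isDead() for each connected client and free the resources of any that come back true.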
