The sprites server has smashed through the 250k barrier with 250,001 concurrent, active connections. In addition to the optimizations covered in my previous two blog entries, it took a few more tweaks.

1) Latest tagged revision of V8, which appears to perform a little better

The 250k limit is right on the fringe of what this server can pull off without violating the 1.4GB heap limitation in V8. It’s not clear to me why this limitation hasn’t bubbled up in priority enough to be taken care of yet. Just look at those free CPU cycles and unused memory! V8 is a complete bad-ass, and it’s just this one limitation that is holding it back from some really extreme capabilities.

I had tried 250k a few times, and couldn't get it stable until I upgraded the /deps/v8/ directory of Node.js to the latest version tagged in SVN. I was really hoping the 1.4GB limit had been removed, but alas. Instead, I had to settle for some significant improvements to garbage collection and performance in general.
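For reference, you can watch a process creep toward that ceiling from inside Node itself. Here is a minimal sketch (not from sprites.js); the `heapReport` helper and the 1400 MB threshold are my own illustrative assumptions:

```javascript
// Sketch: report how close the process is to V8's ~1.4GB old-space
// ceiling, using only the standard process.memoryUsage() API.
// LIMIT_MB is an assumption matching the limit discussed above.
var LIMIT_MB = 1400;

function heapReport() {
  function mb(n) { return Math.round(n / (1024 * 1024)); }
  var usage = process.memoryUsage();
  return {
    heapUsedMB: mb(usage.heapUsed),
    heapTotalMB: mb(usage.heapTotal),
    nearLimit: mb(usage.heapUsed) > LIMIT_MB * 0.9
  };
}

// In a long-running server, log every 10 seconds:
// setInterval(function () { console.log(heapReport()); }, 10000);
```

Logging this alongside connection counts makes it obvious when the GC pressure described below is about to start.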

2) Workers via “cluster” module

I figured the onslaught of HTTP GET requests the 100 EC2 servers were unleashing, at a rate of 100,000 JSON packets per second, was contributing to both CPU and memory consumption, agitating the garbage collector enough to keep the server short of the 250k mark.

So, to reduce the overhead of these transient requests, which come in addition to the 250k concurrent connections (leaving us at about 350k connections on average at any given second, 100k of which are transients), I decided to leverage the cluster module.

The master performs the exact same tasks it did before, except that now workers are spawned, one for each CPU on the system, and they listen on a separate port from the master process. The clients use this port for the transient requests; as they arrive, they are parsed and forwarded to the master using Node's process.send() function.

The only reason this was a critical adjustment is, again, the 1.4GB heap limitation in V8. Keeping all the resources associated with those requests off the master process saves just enough memory to get past the 250k milestone.

TL;DR – V8, you’re so good, but it’s time for you to support more memory!

It still apparently has a "soft" limit: it will let you go over 1.4GB, but it will become more aggressive about its garbage collection to try to stay near that value. I would love to be able to raise the soft ceiling, but haven't found a way yet.


That is not true. When a worker starts, it sends a request to the master for a server handle that is bound to the specified port. If a similar request has been received by the master before, the master sends back the same server handle. This way the workers share the underlying file descriptor, and the OS manages which worker gets a new TCP connection.

The other method you suggest would probably not perform as well, since the balancer would be in JavaScript and process.send is actually a synchronous method (I hope the core team will fix this soon).

I believe you've accurately described the typical use of the cluster module, but that is not what is happening in sprites.js. The workers are listening on a separate port number. The relative inefficiency of forwarding over send() (which is debatable, considering it's able to handle >250,000 clients in real time) was a deliberately chosen compromise, designed to pull as many sockets as possible out of the heap-starved master process. There was only one relevant factor: the server could not run without getting rid of a bunch of sockets and their associated heap-allocated memory.

Super-interesting, great to see Node and V8 keeping up.
BTW, did you try to analyze latency (average and avg. deviation, for instance) as you increase the number of clients? I think something along these lines would be interesting.
Thanks again for the great posts!

The only change that seems to happen around the 1.4GB point in the GC trace is a mention of going over the external memory allocation limit. I tried defining V8_MAX_SEMISPACE_SIZE in the SConstruct file for "all", but it still happens, forcing garbage collection.

I don't have numeric stats on latency, but what I did was join a few clients and toss sprites between their desktops (my PC and laptop). Since they are right next to each other, you can at least "see" the latency. Aside from the 1.4GB point, when garbage collection becomes more aggressive, there is no apparent latency slowdown between 2 users and 250k users. As you push the server to the point where V8 gets hungry for GC, latency only appears during the "stop the world" garbage collection pass.

Do you have an analogous “real world” application for this level of throughput? Sending large numbers of sprites is a good benchmark but how does it simulate usage in a real application…or is that just an example and what it’s really simulating is the transport of JSON packets?

[…] have to start a new thread for every incoming tcp connection. This allows node to service hundreds of thousands request concurrently , as long as you aren’t calculating the first 1000 prime numbers for each […]