The end goal wasn’t to find the fastest server per se, but to see how the various JRuby servers behave and how they compare against their MRI counterparts.

Mainly I wanted to see how Reel performs; I’m a fan of Erlang’s actor model, and Celluloid and Reel-Rack are pretty interesting to me.

While it wasn’t the fastest at serving simple content, it felt very robust to test with: it threw the fewest errors and recovered when it crashed.

Here are some of the results I got:
(The way to read the graphs is to look at the number of requests completed at a given time interval. A high number of requests completed at a low time interval means a very fast server, but don’t be misled: there are a lot of factors at play, and there’s also a cutoff on the time scale to keep the graphs readable.)

JRuby Simple Hello

Thick is very fast at simple content, so it blasted the other servers in terms of speed.
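For reference, the “simple hello” test is the kind of thing any of these servers can boot from a config.ru. Here’s a minimal sketch of what such an app looks like (my assumption of the shape of the test app, not the exact code used in the benchmark):

```ruby
# A minimal Rack "hello" app of the kind these benchmarks hit.
# All the servers here (Thin, Puma, Reel via reel-rack, etc.) speak
# Rack, so they can all serve an app like this from a config.ru.
hello_app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello World"]]
end

# Rack apps are just callables, so you can exercise one directly:
status, headers, body = hello_app.call({})
```

Because Rack apps are plain callables, the same app can be dropped into each server unchanged, which is what makes this kind of cross-server comparison possible at all.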

By comparison, MRI is pretty fast overall, and Webmachine here is screaming fast, much more so than Thin, which was surprising but not shocking, since Webmachine is more of a raw HTTP engine.

JRuby Simple DB Queries

Here Thick and Camping win out.
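For context, a “simple DB queries” test app would look something like the config.ru below. This is a sketch of my assumption about the workload, using the sqlite3 gem with an in-memory database; the actual benchmark’s schema and driver are unknown to me:

```ruby
# config.ru -- hypothetical sketch of a simple-DB-query benchmark app.
# Assumes the sqlite3 gem; the real test's database setup may differ.
require "sqlite3"

DB = SQLite3::Database.new(":memory:")
DB.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")
10.times { |i| DB.execute("INSERT INTO widgets (name) VALUES (?)", ["widget-#{i}"]) }

run lambda { |env|
  # One simple query per request, matching the spirit of the test above.
  rows = DB.execute("SELECT id, name FROM widgets ORDER BY RANDOM() LIMIT 1")
  [200, { "Content-Type" => "text/plain" }, [rows.inspect]]
}
```

The point of a test like this is to add a small amount of I/O wait per request, which changes the picture versus the pure hello-world case.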

MRI Simple DB Queries

Reel and Webmachine pop up here as well. While it took more time to serve on MRI than on JRuby, it’s completing nearly 3x to 4x the number of requests. I might have to do some head-to-head tests with Reel and Webmachine on MRI vs JRuby.

JRuby 20 number Fibonacci

On this one the Thick server wins out. Again, these numbers can be really misleading: the graphs don’t show the full time scale or give a clear picture of the total request times. But just to show that reading the spikes isn’t far off target, here’s the Apache benchmark output for the Thick server:

And you can indeed see Thick is doing well. To read the percentage values listed: for Thick, 50% of the requests had completed within 54ms, whereas for reel-rack, 50% had completed by 244ms.

You can see this on the graphs, the huge spike for Thick is below 50ms, and the one for reel-rack is closer to 210ms.
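For context, the CPU-bound handler in this test was presumably something like a naive Fibonacci computation. Here’s a sketch of the shape I’d assume such a test app takes (the names and the exact workload are my assumptions, not the benchmark’s actual code):

```ruby
# Naive recursive Fibonacci -- deliberately CPU-bound, which is the
# point of this benchmark: it measures how servers cope when each
# request burns CPU instead of writing a canned string.
def fib(n)
  n <= 2 ? 1 : fib(n - 1) + fib(n - 2)
end

# A Rack app computing the 20th Fibonacci number on every request
# (assumed; the benchmark's actual workload may differ).
fib_app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, [fib(20).to_s]]
end

status, _headers, body = fib_app.call({})
```

A CPU-bound handler like this is where JRuby’s real threads can pull ahead of MRI, since MRI’s GIL serializes Ruby-level computation across threads.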

MRI 20 number Fibonacci

Once again Reel and Webmachine blast through.

So what does all this mean? Really, nothing. It gives me some idea of how the servers act on my hardware, plus more info on how hard they are to work with and how stable they are, but I wouldn’t base a production choice on the data above.

It would, though, help me narrow down what to test on production hardware.

Since my end goal was around JRuby, it was nice to see how the various servers behave. Some gave me more trouble than others: Puma, for instance, gave me a few issues with the config.ru files, and some others weren’t very graceful on shutdown.

I will probably try to come up with a much more complex test to really see how Reel does; the actor model can make concurrent tasks very robust.
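To illustrate what I mean by the actor model: each actor owns its state and a mailbox, and processes one message at a time, so its internals never need locking. Here’s a bare-bones sketch in plain Ruby (Thread + Queue) approximating what Celluloid provides; the class and method names are illustrative, not from any library:

```ruby
# A minimal actor: one thread, one mailbox, sequential message handling.
# Because only the actor's own thread touches its state, no locks are
# needed -- this is the property that makes concurrent tasks robust.
class MiniActor
  def initialize
    @mailbox = Queue.new
    @thread  = Thread.new { run_loop }
  end

  # Fire-and-forget, like an async call; an optional reply queue
  # stands in for futures.
  def tell(message, reply_to = nil)
    @mailbox << [message, reply_to]
  end

  def stop
    @mailbox << [:__stop__, nil]
    @thread.join
  end

  private

  def run_loop
    loop do
      message, reply_to = @mailbox.pop
      break if message == :__stop__
      reply_to << handle(message) if reply_to
    end
  end
end

# Subclasses define how a single message is handled.
class Doubler < MiniActor
  def handle(n)
    n * 2
  end
end

replies = Queue.new
doubler = Doubler.new
doubler.tell(21, replies)
answer = replies.pop
doubler.stop
```

Celluloid builds the same idea into proxies and supervision (crashed actors can be restarted), which is a big part of why Reel felt so robust in these tests.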

Hopefully someone finds some of the info above useful the next time they need to look for a web server for their Ruby app.

Realistically, I would offload the static content handling to something like Nginx, though some servers, like Thin, can handle it themselves.