Do I really need nginx with my Meteor/Node.js app?

Quick results with the Meteor platform and amazing success stories with Node.js are slowly turning more developers into sysadmins. Unfortunately, being able to deploy something to the web quickly does not always mean you’re doing it right. One of the most frequently discussed topics around running Meteor on the web is whether or not to use a proxy such as nginx. As it happens, this topic came up again in Josh Owens’ Meteor Club over the weekend, so I finally got around to doing some benchmarking.

tl;dr: YES!

Let’s talk about proxies

A proxy is a middle-man, just like Postman Pat. It improves the process of delivering data over the web in one way or another. Typically it comes down to various flavors of caching frequently requested data.

nginx, haproxy, varnish and even Apache can be used to proxy requests to other servers. They each have their advantages and focus on different areas, but that is not our main topic for today.

How do I determine whether I need a proxy?

We’re going to focus on the Meteor platform for the sake of argument. Once deployed, a Meteor app is basically nothing more than a Node.js application, so everything said about Meteor here applies (more or less) to plain Node.js and io.js as well.

In order to determine whether you need a proxy you can rely on expert opinions or look at the numbers yourself. Let’s do the latter as there are no experts around right now. To generate some numbers we need to conduct a couple of tests. But first we need to pick our proxy of choice.

Why choose nginx?

For me, choosing nginx is a no-brainer: it does everything I need, and I have worked with it for so long (think PHP) that I know many of its pitfalls and capabilities. It brings certain qualities to the table, among them serving static files directly, compressing responses with gzip, and proxying requests to one or more backends, which are exactly the features exercised in the tests below.

Testing performance

Let’s focus on just one small aspect of running a Meteor application: serving static assets. That means we shall limit our tests to the root path and static images. This is certainly not a spot-on real-world scenario, but it gives a good overview of the differences in this specific use case.

Setup and Scenarios

We will use two machines: one is the system under test (let’s call it meteor), the other runs the tests against it (named comet). As our test tool we use the simple but proven Apache Bench, also known as ab. To simulate users we set the concurrency level to 100 and run a series of 100,000 requests against the server.
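Assuming ab is installed on comet and the app answers on meteor, the invocations looked roughly like this (hostnames, ports and file names here are illustrative; the exact commands are in the repo linked at the end):

```shell
# 100,000 requests, 100 concurrent, straight to Node.js on port 3000
ab -n 100000 -c 100 http://meteor:3000/

# the same load against nginx on port 80, for the static-image case
ab -n 100000 -c 100 http://meteor/image.jpg
```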

The systems have a single CPU core, 2 GB RAM and are KVM machines on the same host. They connect via a virtualized Intel E1000 NIC.

There are three environments to test:

Accessing Node.js directly at port 3000

Configuring nginx to pass all requests to Node.js, no additional optimizations but adding gzip compression at port 80

Configuring nginx to serve all static files directly at port 80
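For the second environment, a minimal proxy configuration looks roughly like this. This is a sketch, not the exact file from the test (the real configs are in the repo linked at the end); server name and upstream port are assumptions:

```nginx
# sketch: pass everything to the Meteor app on port 3000,
# enable gzip, and keep WebSocket upgrades working for DDP
server {
    listen 80;
    server_name meteor;

    gzip on;
    gzip_types text/css application/javascript application/json;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

The `Upgrade`/`Connection` headers matter for Meteor in particular, because without them the DDP WebSocket connection falls back to long polling.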

Two test cases shall be executed:

Request /

Request a 100 kB image.jpg file

Results

The results for the root path are not as clear as I’d hoped. With nginx in front, requests appear to simply be passed through to Node.js, whereas the requests that went straight to Node.js showed some additional side effects. I will have to re-run this test with proper monitoring in place to see whether Node.js got swamped or whether it really was capable of responding that much quicker (which I highly doubt).

That leaves us with the results from the static image.jpg tests. Here the results are quite clear:

Node.js alone takes 145 seconds to complete the test; nginx serves the same content within 62 seconds, i.e. roughly 690 versus roughly 1,600 requests per second. nginx is more than twice as fast at serving static content as Node.js is!

Let’s compare the results over time. This is how Node.js answers the requests:

And here’s nginx:

As you can see, both respond well within 200 ms, but (unfortunately it’s hard to see due to the two outliers) nginx does most of its work below 100 ms, serving 99% of requests in 98 ms or less, while Node.js serves only 50% of requests in 139 ms or less.

Many people put nginx in front of their Node.js app but don’t let nginx serve the static assets at all. Their configuration lacks a block like this:
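As a sketch (the exact block from this test is in the repo linked at the end; the root path below is hypothetical and must point at your deployed client bundle):

```nginx
# sketch: let nginx serve static assets itself instead of proxying them
# (the root path is an illustrative assumption; adjust to your bundle location)
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    root /opt/meteor/bundle/programs/web.browser;
    access_log off;
    expires 30d;
}
```

With this in place, requests for images and other static files never reach the Node.js process at all.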

Since this is where the magic happens for serving images, let’s look at the picture without this optimization:

Wow, these response times are all over the place! This is even worse than using Node.js on its own: the test duration was actually longer than before, and the mean times were significantly slower. Sure, for a low number of requests this does not matter (by the time Node.js was done, the proxy-only nginx was already 98% finished), but nonetheless these requests can (and will!) stack up and leave a tail in your performance reports.

Verdict

For serving images you gain impressive performance benefits by coupling Node.js with an optimized nginx configuration. Simply using nginx as a proxy without letting it serve images even incurs a penalty, one that is easily avoided by putting the static resources on the same machine as nginx and serving them directly.

The potential improvements on the root path look impressive at first glance as well, yet I am not sure they tell the whole story. Once I have proper monitoring for all components in place I shall repeat this test to verify that initial requests see a benefit comparable to that for images.

Raw results and config files

You can find all details and results for this test in my GitHub repo at https://github.com/yauh/meteor-benchapp. This includes more graphs and, most importantly, the configuration files used for nginx.