
A while back, I posted about some testing we were doing of various software load balancers for WordPress.com. We chose Pound and have been using it for the past two years or so. We started to run into some issues, however, so we began looking elsewhere. Some of these problems were:

Lack of true configuration reload support made managing our 20+ load balancers cumbersome. We had a solution (hack) in place, but it was getting to be a pain.

When something would break on the backend and cause 20-50k connections to pile up, the thread creation would cause huge load spikes and sometimes render the servers useless.

As we started to push 700-1000 requests per second per load balancer, things seemed to slow down. It is hard to get quantitative data on this because page load times depend on so many things.

So… a couple of weeks ago we finished converting all of our load balancers to Nginx. We have been using Nginx for Gravatar for a few months and have been impressed by its performance, so moving WordPress.com over was the obvious next step. Here is a graph that shows CPU usage before and after the switch. Pretty impressive!

Before choosing nginx, we looked at HAProxy, Perlbal, and LVS. Here are some of the reasons we chose Nginx:

Easy and flexible configuration (true config “reload” support has made my life easier)

Can also be used as a web server, which allows us to simplify our software stack (we are not using nginx as a web server currently, but may switch at some point).

The only software we tested that could handle 8000 (live traffic, not benchmark) requests/second on a single server

We are currently using Nginx 0.6.29 with the upstream hash module, which gives us the static hashing we need to proxy to Varnish. We are regularly serving about 8-9k requests/second and about 1.2 Gbit/sec through a few Nginx instances and have plenty of room to grow!
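For readers who haven't used the third-party upstream hash module, the setup described above looks roughly like this (a sketch only; the server addresses and names are made up, not our real configuration):

```nginx
# Illustrative upstream using the third-party upstream hash module:
# hashing on the URI means the same object always lands on the same
# Varnish instance, keeping its cache effective.
upstream varnish_pool {
    hash $request_uri;
    server 10.0.0.11:6081;
    server 10.0.0.12:6081;
}

server {
    listen 80;

    location / {
        proxy_pass http://varnish_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The static hash matters because, unlike round-robin, it ensures each cached object lives on exactly one Varnish box instead of being duplicated across all of them.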

Thanks for the rundown Barry. We're about to go live with nginx in a similar role; it's a really nice piece of software. We've got it behind ipvs / keepalived to handle simple layer 4 load balancing and failover, and the combination works well.
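For context, an ipvs/keepalived layer 4 setup like the one this commenter describes is usually driven by a keepalived config along these lines (a sketch only; the VIP, interface, and real-server addresses are invented for illustration):

```
# Illustrative keepalived fragment: a VRRP virtual IP fails over
# between two directors, and IPVS round-robins TCP port 80 across
# the real servers (the nginx boxes) behind it.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.100
    }
}

virtual_server 192.0.2.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 10.0.0.41 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.42 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```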

Have you seen any issues with ssl or ssl+gzip? This seems to be an area where 0.5 and 0.6 have both had a few bugs recently — and something that seems not too easy to exercise without real traffic. Thanks!

From everything I'm reading, there aren't many reasons *not* to switch to nginx. I'm building my network with it from the start, so I can use its various capabilities in the future. What kind of load balancing does it do? It has built-in round-robin, with a weight measurement, right? It doesn't have anything to check the upstream servers' health as far as I know. I'm especially interested in the static gzip module and passing things off to Varnish — can you explain more about how those tie together? I assume Varnish is upstream from the nginx load balancer?
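To the commenter's question about built-in balancing: yes, nginx's default upstream behavior is weighted round-robin, which looks like this (addresses and weights are illustrative):

```nginx
# nginx's built-in weighted round-robin: with these weights,
# the first server receives roughly three requests for every
# one sent to the second.
upstream web_backends {
    server 10.0.0.31 weight=3;
    server 10.0.0.32 weight=1;
}
```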

We have tested it up to about 10k req/sec. Memory footprint is minimal, and Nginx doesn’t use much CPU time. Where you end up with problems is in the TCP overhead and the time spent handling software interrupts. It gets much worse with iptables and connection tracking. Performance here is probably better on FreeBSD than Linux (we run Linux), but I haven’t tested it.
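The conntrack overhead mentioned above is usually addressed with kernel tuning along these lines on Linux (the values here are illustrative, not a recommendation for any particular workload):

```
# /etc/sysctl.conf — illustrative values for a busy Linux proxy.
# Raising the conntrack table size avoids dropped connections when
# connection tracking is enabled; avoiding iptables rules that need
# conntrack at all removes the overhead entirely.
net.netfilter.nf_conntrack_max = 1048576
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 8192
```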

Depending on the request type, some requests are passed to Varnish and others are sent directly to the web servers. We currently use Varnish only to serve static images and video content (as a reverse caching proxy in front of Amazon's S3).
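That split by request type can be sketched in nginx like this (upstream names, addresses, and the exact file-extension match are hypothetical, for illustration only):

```nginx
# Hypothetical routing: cacheable static media goes through Varnish,
# dynamic requests go straight to the web servers.
upstream varnish_cache { server 10.0.0.21:6081; }
upstream web_servers {
    server 10.0.0.31:80;
    server 10.0.0.32:80;
}

server {
    listen 80;

    # Images and video are cacheable: send them to Varnish.
    location ~* \.(gif|jpe?g|png|flv|mp4)$ {
        proxy_pass http://varnish_cache;
    }

    # Everything else is dynamic: send it to the web servers.
    location / {
        proxy_pass http://web_servers;
    }
}
```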

This is a follow up to Mike’s question on 4/28 about the failover configuration of nginx. We are specifically interested in understanding if and how nginx can be configured for a traditional active/active failover pair. We want to know if nginx supports state sharing between the failover pair so as to maintain continuation of service for such features as server affinity.
Any light and/or guidance you can share is greatly appreciated.

Hi,
we are planning a website with around 10,000-50,000 concurrent users.
We plan to use nginx as a load balancer, with the web servers on an internal IP network.
My question is: if the nginx LB has to route and NAT all the users to the internal web servers, how much load will that put on the nginx server? Is it possible?
Thank you very much for your help!

I can't believe your comment that nginx was the only solution that could reach 8000 connections/sec. I've had the latest HAProxy doing full cookie inserts at 27,000 connections/sec. A graph comparing connections/sec on the Kemp 1500 and the Loadbalancer.org R16, which are both based on LVS, is here: http://www.loadbalancer.org/whyr16.html (we also use Pound & HAProxy). Blatant commercial link, but still relevant.

Thanks. Though I found this post a bit late, it saved my job. We have decided to port our latest WordPress news site to nginx. We are already getting 10K hits per day, and expect around 50K once new features and channels are added.

I came to this post from the official nginx website; your post reads like a case study. You should make a small PDF and publish it as a white paper or case study. I'm sure many people would like to learn more.

So how does nginx handle massive 1000+ PPS DDoS attacks, especially the HTTP ones? In that case you would put a filtering device in front of it to stop the "bad" packets, but I'm curious how it deals with them by itself.

I know this post is very old, but I have a question.
I have two nginx load balancers with DNS spreading the load between them, but if one of the load balancer servers stops, what will happen? Will half of the users get a 404?

Nope, half of the users will get an error page from their browser saying the connection is not possible, because your nginx no longer answers.
That's the problem with DNS…
In this case you should set a very low TTL on your DNS records, in order to switch over quickly if needed ;-)
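For illustration, round-robin DNS across two load balancers with a short TTL might look like this in a BIND-style zone file (the names, addresses, and TTL value are made up):

```
; Illustrative zone fragment: two A records share the load,
; and the 60-second TTL lets clients stop resolving a dead
; balancer quickly once the record is removed.
www  60  IN  A  192.0.2.10
www  60  IN  A  192.0.2.11
```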

[…] more than a million sites; more than doubling in numbers. The WordPress blogging system recently converted all of its load balancers to nginx, using the upstream hash module to serve 8-9 thousand requests […]

[…] from Jamie’s pointer, in doing the initial research, what got me excited was reading that WordPress.com had switched to nginx for their load balancing (and might eventually switch for their web serving as well), and that Fastmail is using nginx for […]



[…] which were delivering static assets like CSS, JavaScript and (some) image files. Recently the WordPress.com load balancers were upgraded to nginx and since then nginx has been proving to be a very high performance piece of software, with some […]

[…] what you’re probably thinking of is WordPress.com recently switching to using Nginx as a frontend load-balancing HTTP proxy or to serve static images and files instead of Lighttpd. Those are both excellent use cases for […]

[…] Load Balancer Update « Barry on WordPress (tags: architecture wordpress varnish nginx) This entry was written by bairos, posted on December 23, 2008 at 1:30 am, filed under delicious-daily. […]

[…] nginx has been running for more than four years on many heavily loaded Russian sites including Rambler (RamblerMedia.com). In March 2007 about 20% of all Russian virtual hosts were served or proxied by nginx. According to the Google Online Security Blog, a year ago nginx served or proxied about 4% of all Internet virtual hosts, although Netcraft showed a much lower percentage. According to Netcraft, in March 2008 nginx served or proxied 1 million virtual hosts. The growth, in picture and colour! According to Netcraft, in December 2008 nginx served or proxied 3.5 million virtual hosts. And now it is in 3rd place, ahead of lighttpd. According to Netcraft, in March 2009 nginx served or proxied 3.06% of the busiest sites. 2 of the Alexa US Top 100 sites use nginx. Here are some of the success stories: FastMail.FM, WordPress.com. […]


[…] nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. It has been running for more than five years on many heavily loaded Russian sites including Rambler (RamblerMedia.com). According to Netcraft, nginx served or proxied 4.24% of the busiest sites in January 2010. Here are some of the success stories: FastMail.FM, WordPress.com. […]

[…] nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. It has been running for more than five years on many heavily loaded Russian sites including Rambler (RamblerMedia.com). According to Netcraft, nginx served or proxied 4.70% of the busiest sites in April 2010. Here are some of the success stories: FastMail.FM, WordPress.com. […]

[…] tend to modify the LAMP stack, for instance replacing Apache with the faster and newer nginx (eg wordpress.com) which is becoming increasingly popular and can substantially exceed Apache performance with more […]

[…] or proxied 4.70% of the busiest sites in April 2010. Here are some of the success stories: FastMail.FM, WordPress.com. The sources are licensed under a 2-clause BSD-like license. The source can be downloaded at […]

[…] nginx [engine x] is an HTTP and reverse proxy server, as well as a mail proxy server, written by Igor Sysoev. For a long time, it has been running on many heavily loaded Russian sites including Yandex, Mail.Ru, VKontakte, and Rambler. According to Netcraft, nginx served or proxied 7.84% of the busiest sites in October 2011. Here are some of the success stories: FastMail.FM, WordPress.com. […]