I've seen people recommend combining all of these in a flow, but they seem to have lots of overlapping features, so I'd like to dig into why you might want to pass through 3 different programs before hitting your actual web server.

nginx:

* ssl: yes
* compress: yes
* cache: yes
* backend pool: yes

varnish:

* ssl: no (stunnel?)
* compress: ?
* cache: yes (primary feature)
* backend pool: yes

haproxy:

* ssl: no (stunnel)
* compress: ?
* cache: no
* backend pool: yes (primary feature)

Is the intent of chaining all of these in front of your main web servers just to gain some of their primary feature benefits?

It seems quite fragile to have so many daemons chained together doing similar things.

Overall, this model fits a scalable, growing architecture (take haproxy out if you don't have multiple servers).

Hope this helps :D

Note: I actually introduce Pound for SSL queries as well :D
You can have a server dedicated to decrypting SSL requests and passing plain HTTP requests out to the backend stack :D (It makes the whole stack run quicker and simpler.)
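A minimal Pound listener for that kind of dedicated SSL-terminating node might look like the following; the addresses and certificate path are placeholders, not from the original setup:

```
# Hypothetical /etc/pound/pound.cfg sketch: terminate SSL, forward plain HTTP
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/example.pem"   # combined key + cert, placeholder path
    Service
        BackEnd
            Address 10.0.0.10          # the backend stack (placeholder IP)
            Port    80
        End
    End
End
```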

Why would you cache static and serve dynamic? Nginx is lightning fast at serving up static files. I prefer to use a stack like [HAProxy -> Nginx] for static and [HAProxy -> Nginx -> Varnish -> Apache] to implement a cache on the dynamic content, terminating SSL at the load balancer as you stated, with dedicated terminating nodes.
–
Steve Buzonas Dec 11 '13 at 18:26
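One way to sketch that two-stack split in haproxy (the paths, addresses, and ports below are placeholders for illustration, not from the comment) is a path-based ACL that sends static traffic straight to nginx and everything else toward the Varnish/Apache chain:

```
# Hypothetical haproxy.cfg fragment for the static/dynamic split
frontend http-in
    bind *:80
    acl is_static path_beg /static /images /css /js
    use_backend nginx_static if is_static
    default_backend varnish_dynamic

backend nginx_static
    server nginx1 10.0.0.10:80 check       # nginx serving files directly

backend varnish_dynamic
    server varnish1 10.0.0.20:6081 check   # Varnish sitting in front of Apache
```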

It's true that the 3 tools share common features. Most setups are fine with any combination of 2 of the 3. It depends on what the main purpose is. It's common to sacrifice some caching if you know your application server is fast on static files (e.g., nginx). It's common to sacrifice some load balancing features if you're going to install tens or hundreds of servers and don't care about getting the most out of them, nor about troubleshooting issues. It's common to sacrifice some web server features if you're intending to run a distributed application with many components everywhere. Still, some people build interesting farms with all of them.

You should keep in mind that you're talking about 3 solid products. Generally you won't need to load balance them. If you need front SSL, then nginx first as a reverse proxy is fine. If you don't need that, then varnish on the front is fine. Then you can put haproxy behind to load balance your apps. Sometimes you'll also want to switch to different server farms on the haproxy itself, depending on file types or paths.
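As a sketch of the "nginx first for SSL" arrangement, nginx can terminate TLS and proxy plain HTTP to Varnish (the certificate paths and port below are assumptions for illustration; 6081 is simply Varnish's common default listen port):

```nginx
# Hypothetical nginx server block: SSL termination in front of Varnish
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:6081;            # Varnish listening locally
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```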

Sometimes you'll have to protect against heavy DDoS attacks, and haproxy in front will be more suited than the other ones.
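For instance, a rough connection-rate limit in haproxy can reject abusive clients before they reach the backends. The thresholds and names here are arbitrary examples; check that your haproxy version supports stick tables before relying on this:

```
# Hypothetical haproxy fragment: drop clients opening connections too fast
frontend http-in
    bind *:80
    stick-table type ip size 200k expire 30s store conn_rate(10s)
    tcp-request connection track-sc1 src
    tcp-request connection reject if { sc1_conn_rate gt 50 }
    default_backend apps
```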

In general, you should not worry about what compromise to make between your choices. You should choose how to assemble them to get the best flexibility for your needs, now and to come. Even stacking several of them multiple times may sometimes be right, depending on your needs.

Simpler, being just a TCP proxy without an HTTP implementation, which makes it faster and less bug-prone.

So the best method seems to be implementing all of them in an appropriate order.

However, for general purposes, Nginx is best, as you get above-average performance across the board: caching, reverse proxying, and load balancing, with very little overhead in resource utilization. And then you get SSL and full web server features on top.

I would simply configure this with varnish + stunnel. If I needed nginx for some other reason, I would just use nginx + varnish. You can have nginx accept SSL connections and proxy them to varnish, then have varnish talk to nginx via http.
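A minimal VCL sketch of that sandwich (Varnish 2/3-era syntax; the backend port is an assumption) just points Varnish back at nginx's plain-HTTP listener:

```vcl
# Hypothetical VCL fragment: Varnish fetching from nginx over plain HTTP
backend nginx_http {
    .host = "127.0.0.1";
    .port = "8080";        # nginx's non-SSL listener (placeholder port)
}

sub vcl_recv {
    set req.backend = nginx_http;
}
```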

Some people may throw nginx (or Apache) into the mix because these are somewhat more general purpose tools than Varnish. For example, if you want to transform content (e.g., using XDV, apache filters, etc) at the proxy layer you would need one of these, because Varnish can't do that by itself. Some people may just be more familiar with the configuration of these tools, so it's easier to use Varnish as a simple cache and do the load balancing at another layer because they're already familiar with Apache/nginx/haproxy as a load balancer.

Right -- "backend pool" was meant to point out that all three of these have load balancing features. From my initial investigation it seems that HAProxy has the most tunable load balancing options.
–
Joel K Nov 19 '10 at 18:00

That sounds reasonable, since it's been purpose-built as a load balancing tool. On the other hand, Varnish's load balancing features are pretty good, and removing one process from the mix gets you a simpler configuration with less latency.
–
larsks Nov 19 '10 at 18:13
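For reference, Varnish's load balancing looks roughly like this in Varnish 2/3-era VCL (the backend addresses are placeholders):

```vcl
# Hypothetical round-robin director across two app servers
backend web1 { .host = "10.0.0.11"; .port = "80"; }
backend web2 { .host = "10.0.0.12"; .port = "80"; }

director app_pool round-robin {
    { .backend = web1; }
    { .backend = web2; }
}

sub vcl_recv {
    set req.backend = app_pool;
}
```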