Fronting Apache with Nginx

I’ve been exploring using Nginx to front our Apache websites. I found a fair amount of documentation online but most of it was for nginx on top of a backend application running on the same host – so, lots of examples of load balancing among various ports on 127.0.0.1. In my case I have name-based Apache virtual hosts running at different IP addresses. Also my goal is not to set up automatic load balancing – our session-based application will not work well with it – but rather I want to be able to rapidly and manually switch a web address to a different machine so I can perform maintenance on or updates to the offline systems.

To summarise, I want the publicly accessible crashingdaily.com website to be a proxy server to our internal, Apache name-based virtual hosts, w1 and w2. I want to be able to choose which of w1 or w2 services the client’s request.
A diagram of my setup.

Not illustrated here is the possibility for authorized client browsers (restricted by IP address and/or authentication) to directly access w1 and w2. This is important because it allows our internal developers and testing team to verify a working host before we put it into service.

I present here my installation and setup notes. The hostnames and IP addresses used here are artificial.

I am running Red Hat Enterprise Linux 4 and I have the openssl and openssl-devel packages installed, but I was unable to use them for compiling nginx’s SSL module. The compile would fail with “undefined reference to ‘krb5_free_data_contents’”. So, I downloaded and untarred the source for openssl. I’m not compiling or installing it, just providing the source to the nginx build. I don’t really need nginx with SSL support at this time; I’m including it for later testing.
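For reference, the build sequence looks roughly like this; the version numbers and paths below are illustrative, not the exact ones I used:

```shell
# Unpack the OpenSSL source next to the nginx source; no need to build or install it.
tar xzf openssl-0.9.8g.tar.gz
tar xzf nginx-0.6.32.tar.gz
cd nginx-0.6.32

# Point nginx's SSL module at the unpacked OpenSSL source tree instead of
# the system libraries that fail with the krb5 link error.
./configure --with-http_ssl_module --with-openssl=../openssl-0.9.8g
make
make install
```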

Let’s take a look at the round-trip chain of communications through the servers.

Client to Proxy:
crashingdaily.com resolves in DNS to 216.52.184.243, so the general public is routed to that machine. An Nginx virtual server there answers requests for crashingdaily.com and proxies the request to one of the IP addresses defined in the ‘upstream’ block. It does not use host names and so does not need DNS. This was a part that threw me at first. I kept trying to put backend host names, ‘w1.crashingdaily.com’ and ‘w2.crashingdaily.com’ into the upstream block. That can be made to work but it isn’t pretty and, if you configure Apache appropriately (see below), it is unnecessary. As I said earlier, I’m not doing load balancing at this time so all but one of the IP addresses are marked ‘down’. I can easily switch which IP address serves client requests by adjusting this flag.
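To make that concrete, here is a sketch of the proxy-side nginx configuration; the upstream name ‘backend’ is my illustrative choice, not anything nginx requires:

```nginx
# nginx.conf on 216.52.184.243 (the public proxy)
upstream backend {
    server 216.52.184.15;        # w1 -- currently serving the public
    server 216.52.184.30 down;   # w2 -- marked down, free for maintenance
}

server {
    listen      80;
    server_name crashingdaily.com www.crashingdaily.com;

    location / {
        proxy_pass http://backend;
    }
}
```

To switch traffic to the other machine, move the ‘down’ flag to the other server line and reload nginx.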

Proxy to Backend:
When Nginx makes its requests to the backend Apache HTTP servers, 216.52.184.15 or 216.52.184.30, it sets the Host attribute in the HTTP headers to ‘crashingdaily.com’ (defined with the nginx configuration directive ‘proxy_set_header Host $host‘). The backend Apache servers use this Host header attribute of the incoming request to determine which name-based VirtualHost will handle the request. The Apache virtual host has ServerName set to w1.crashingdaily.com (similarly for w2) and also has a ServerAlias set to ‘crashingdaily.com’. So, when nginx connects to 216.52.184.15 with the ‘Host: crashingdaily.com’ HTTP header, Apache knows which virtual host to use. Defining a ‘ServerAlias’ was the missing piece I needed so I could use IP addresses in nginx’s upstream block.
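The relevant proxy directives, as a sketch (again, the ‘backend’ upstream name is illustrative):

```nginx
location / {
    proxy_pass       http://backend;
    # Forward the original Host header ('crashingdaily.com') so the
    # backend's name-based virtual hosting can pick the right site.
    proxy_set_header Host $host;
}
```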

The w1 and w2 ServerName configuration is optional but allows us to connect directly to the individual backend servers for testing. That is, we can privately work on the ‘down’ server via the w1/w2 host name while the public interacts, by proxy, with the active server.
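On the Apache side, each backend gets a virtual host along these lines (shown for w1; the DocumentRoot path is illustrative):

```apache
NameVirtualHost *:80

<VirtualHost *:80>
    # Direct, restricted access for developers and testers.
    ServerName  w1.crashingdaily.com
    # Matches the 'Host: crashingdaily.com' header the nginx proxy sends.
    ServerAlias crashingdaily.com
    DocumentRoot /var/www/crashingdaily
</VirtualHost>
```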

Backend to Proxy:
The HTTP header of the Apache response from 216.52.184.15 should contain ‘Location: http://crashingdaily.com/’ because that is the host that was requested (recall from above that Nginx sends “Host: crashingdaily.com” in its HTTP headers).

Proxy to Client:
The client receives the proxied response from the backend server. It should have no idea that it’s been proxied.

Logging:
Normally Apache on 216.52.184.15 would log the incoming request as coming from 216.52.184.243 (the nginx server), but the X-Forwarded-For header in Nginx’s request passes the client’s IP address along to the mod_rpaf module I have installed in Apache, which allows the client’s IP to be logged.
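A sketch of the two halves of that arrangement; the mod_rpaf directive names below are the ones used by the 2.0 version of the module and may differ in other releases:

```nginx
# nginx side: pass the real client address to the backend.
location / {
    proxy_pass       http://backend;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

```apache
# Apache side: let mod_rpaf substitute the client address for logging.
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable    On
RPAFproxy_ips 216.52.184.243      # trust only the nginx proxy
RPAFheader    X-Forwarded-For
```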

This is great, exactly what I needed. Working Nginx examples are hard to find. However, other than having a failover, what benefit can there be to such a setup? I am more interested in some LB examples if you care to share. Thanks a lot.

vangel, Thanks for the feedback. Failover is the only benefit I can think of for this setup. I primarily use it so I can update/repair backend installations (a form of planned failover).

I don’t have any examples of load balancing; it’s not suitable for my particular application environment. I believe the above setup will do basic round-robin load balancing if there is more than one active server (not marked ‘down’) in the upstream{} configuration block.

Thanks for replying to me.
I read above that this setup is definitely not LB, though I know LB can be used as failover. After I thought it through, another good point is that you can use this setup to serve static content via nginx for your Apache-hosted sites. I have seen considerable speed improvement and the ability to handle more load per site. I use dedicated nginx servers for hosting images at http://www.3ezy.net

Even without LB, your deployment is nevertheless a good idea. When we switched large sites from one server (in one DC) to another, it would have been a lot of help to have this sort of setup.

An upstream{} block with more than one active server is sufficient for simple round-robin load balancing. It also does failover if I shut down one of the upstream Apache servers. There may be more advanced load-balancing techniques; that’s beyond my personal experience.
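For example, with the ‘down’ flags removed, the upstream block from my setup becomes a two-server round-robin pool:

```nginx
upstream backend {
    server 216.52.184.15;   # w1 -- active
    server 216.52.184.30;   # w2 -- active; nginx alternates requests
}
```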

Yes, you are right. All we need to do is remove the ‘down’ status, plus we can assign a weight to each server: http://wiki.nginx.org/LoadBalanceExample
As I mentioned, yours is the only complete example I found. The nginx site only has bits and pieces of info, so a lot of the config was trial and error for me as well. So there you go, you have a load balancer + failover :)