Installing and Configuring LVS-TUN

EDITOR’S NOTE: Due to a global shortage of IPv4 addresses, we are no longer allowed to give out additional IPs unless they are for SSL certs. This is a Rackspace policy and we cannot add an IP for any other reason.

The throughput of an individual server can become the bottleneck for almost any Cloud-based solution. With the restrictions placed on a shared environment, you may find that you can no longer grow your solution horizontally behind a proxy load balancer. There are options to work around this, the first and easiest being DNS round robin load balancing. DNS round robin isn’t smart, though: it keeps rotating through nodes even when one of them goes down, which is a huge downside.

You’ve got other solutions though; I want to introduce you to LVS-TUN, a tunneling load balancer. The Linux Virtual Server project has been around for a while now and has gone through several iterations, but LVS-TUN seems to work best on our infrastructure.

LVS-TUN is a tunneling load balancer solution: all incoming requests pass through the load balancer, which forwards each packet to a web node. The web node then responds directly to the client without having to proxy back through the load balancer. This type of solution can allow for geo-load balancing, but more importantly it lets a customer use the bandwidth pool available from all web nodes, instead of relying on the limited throughput of the load balancer.

What you will need:

An additional IP on Load Balancer 1, shared with Load Balancer 2 and all web nodes.

All servers must be on the same Huddle.

All servers must share an IP (explained later in this article).

INSTALLATION

Networking:

A shared IP address must be common to all load balancers and web nodes in this cluster. Remember, too, that all of your servers will need to be in the same Huddle in Cloud Servers; the IP pools available in Cloud Servers are restricted by Huddle.

Load Balancers (aka Directors):

What to install on your Load Balancers:

piranha

# This list is pretty short, but installing this package through YUM will install all necessary dependencies to get you working.

# Make sure it starts on boot with "chkconfig pulse on".
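For reference, on a CentOS/RHEL director the install boils down to something like the following (assuming the piranha package is available in your configured yum repositories):

# Install piranha; yum pulls in ipvsadm and the other dependencies, and pulse ships as part of the package
yum install piranha

# Have the pulse heartbeat daemon start on boot
chkconfig pulse on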

The LVS-TUN Load Balancer’s configuration will be identical on both servers. First we need to make a couple of changes to the sysctl.conf file; these kernel parameters will allow for things like the shared IP to be bounced between Load Balancing Nodes.
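The article’s exact parameters aren’t reproduced here, but as a rough sketch the non-negotiable setting on an LVS director is packet forwarding; anything beyond that should be treated as an assumption to verify for your own kernel and setup:

# /etc/sysctl.conf on each load balancer (illustrative values)
# Allow the director to forward packets for the virtual service
net.ipv4.ip_forward = 1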

Once you’ve made changes to this file, run ‘sysctl -p’ to make them live on your system.

Moving on to the LVS-TUN service configuration file, found at /etc/sysconfig/ha/lvs.cf; this configures both the load balancing and the failover service. The service that reads this file is pulse, which is installed as part of piranha. Pulse is a heartbeat daemon that monitors the health of the nodes in your cluster.

Remember to increment the serial number each time that you make a change to this file, or you won’t see the change go into effect.
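As an illustration only, a tunnel-mode lvs.cf tends to look something like the sketch below; the addresses, names, timings and the “network = tunnel” directive are placeholders and assumptions on my part, so check them against the piranha documentation for your release:

# /etc/sysconfig/ha/lvs.cf (illustrative sketch, not a drop-in config)
serial no = 1
primary = 10.x.x.x           # private IP of Load Balancer 1 (placeholder)
backup = 10.x.x.y            # private IP of Load Balancer 2 (placeholder)
backup_active = 1
service = lvs
heartbeat = 1
keepalive = 6
deadtime = 18
network = tunnel             # selects LVS-TUN forwarding rather than NAT or direct routing
virtual web_cluster {
    active = 1
    address = 172.x.x.x eth0:1   # the shared (virtual) IP
    port = 80
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"
    scheduler = wlc
    protocol = tcp
    timeout = 6
    reentry = 15
    server web01 {
        address = 10.x.x.z       # private IP of the web node (placeholder)
        active = 1
        weight = 1
    }
}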

If you want to run HTTPS, don’t run a health check on port 443; as far as I could find, the system doesn’t fully support health checks over HTTPS.

You’ll want to ifdown the shared IP, usually eth0:1, and then delete its configuration file in /etc/sysconfig/network-scripts/. The pulse service will bring the IP up on the currently active load balancer, instead of the networking service that normally manages it on RPM-based systems.
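On both directors that clean-up is just the following, assuming the alias really is eth0:1 and its file is named ifcfg-eth0:1 on your servers:

# Take the shared IP down and remove its static configuration
ifdown eth0:1
rm /etc/sysconfig/network-scripts/ifcfg-eth0:1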

Web Nodes:

Install your web head as you normally would: Apache, lighttpd or nginx. It shouldn’t make a difference which HTTP service you are running. The main changes we care about are the sysctl parameters and a tunneled IP address. Just like on the load balancers, start by running ifdown on the VIP, eth0:1, and then delete its configuration file. We’ll reconfigure the IP as a tunnel on the web nodes a little later. First let’s make a few changes to sysctl.conf; the CentOS defaults would normally block this type of traffic, so a few parameters need to change (see the sketch below).
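As a sketch of the kind of parameters usually changed on LVS-TUN real servers, the usual suspects are the ARP settings for the shared IP and reverse-path filtering on the tunnel interface. These exact keys are an assumption on my part (and the tunl0 entries only exist once the tunnel is up), so verify them against your kernel’s documentation:

# /etc/sysctl.conf on each web node (illustrative values)
# Don’t advertise or answer ARP for the shared IP held on the tunnel interface
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.tunl0.arp_ignore = 1
net.ipv4.conf.tunl0.arp_announce = 2
# Relax reverse-path filtering so tunneled packets addressed to the shared IP are accepted
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.tunl0.rp_filter = 0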

Ok, here is a very important step: this tunneled IP is what allows the web node to respond directly to the client using the virtual IP address shared amongst the cluster. That shared IP makes the client think it is still talking to the load balancer, so the connection is never broken.

/etc/sysconfig/network-scripts/ifcfg-tunl0:

DEVICE=tunl0
TYPE=ipip
# this is the shared IP from the Load Balancer
IPADDR=172.x.x.x
NETMASK=255.255.255.255
ONBOOT=yes

Once you’ve saved this file, you can bring up the tunnel with ‘ifup tunl0’. It should show up in ifconfig like any other IP and it should be listed as an ‘ipip’ type.

Once your load balancer is up and running, you’ll want to run ‘ipvsadm’; this will tell you which load balancer is currently active and which web nodes are active behind it. The active balancer will print output similar to the following:
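The exact output varies by version, but on the active director you should see the virtual service with each web node listed with a Forward type of Tunnel; the addresses below are placeholders:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.x.x.x:80 wlc
  -> 10.x.x.x:80                  Tunnel  1      0          0

The standby director will show an empty table until it takes over.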

Once everything is up and running, your load balancer should take each request and forward the packet to a web node; the web node then responds directly to the client. With this traffic pattern your load balancer is essentially handling only incoming traffic, which significantly increases the amount of traffic it can cope with. The outgoing traffic back to the client is shared amongst your web nodes. You’ve just allowed your cluster to handle a much larger volume of traffic, since the load is now spread across the whole cluster.

Helpful Notes

ipvsadm – this command will tell you which LVS virtual services are currently running. Even though the director is accepting traffic on port 80, nothing will show up in netstat; the kernel is simply forwarding that traffic to another server, and the web server itself responds directly to the client.

The heartbeat solution used in this configuration is pulse, not heartbeat as we would normally use, and its behavior differs slightly. Although a primary and a backup are named in the configuration file, the primary load balancer will not reclaim the IP just because it is back online. Whichever node holds the shared IP at the time is the true master, and that will not change unless there is a failure.

Piranha also ships with a GUI editor; you can leave it turned off, as you won’t need it for this setup.

You’ll want to move session storage to your database or a shared-disk solution, if you haven’t already.

Comments

Great post Brandon,
But how do I get the Rackspace cloud to switch the IPs for me?

-Ben

http://www.olark.com Ben

I.e. if I am not using something fancy like heartbeat, is there a manual way to take a shared IP down on one computer and move it to another computer in the same shared IP group?

-ben

Brandon Woodward

Hi Ben,

The article goes over setting up pulse; this is very similar to heartbeat, and will automatically move the IP to the next priority node in case of failure. If you are looking for a manual way to move the IP around, you can use the ifup and ifdown commands.

Brandon
@whitenhiemer

http://teeboxer.com Yuri

I set up something similar with Teeboxer (in the RS cloud of course), but on Ubuntu without Piranha, with two VIPs for the balancers plus DNS round-robin to spread the traffic across both balancers, and with heartbeat triggering one balancer to take on both IPs when the other is down and return them when it comes back up. (Not that LVS-TUN needs load balancing, but having the traffic spread makes a few things easier and shrinks the pool of users who’ll see a temporary failure/lag.)

@Ben: heartbeat, and presumably the pulse that Brandon refers to, is just doing what you’d do with ifconfig, but automatically. You can do the same yourself. When you’re setting up heartbeat/ipfail and screw it up, doing it by hand is required a few times. But heartbeat with one IP is very simple to get going: create a pair of disposable instances in the cloud and reboot them back and forth to get a handle on the configs and see the IP moving. Slicehost (now RS owned) also has a good article on heartbeat only: http://articles.slicehost.com/2008/10/28/ip-failover-slice-setup-and-installing-heartbeat

http://rackspace.com cliff turner

This sounds very much like Direct Server Return: an option for asymmetrical load distribution, where request and reply take different network paths, which has been around for a long time. It can cause issues for web analytics, so make sure that you have a plan to capture stats correctly. There are other cons as well; here is a link from one of our partners: http://devcentral.f5.com/weblogs/macvittie/archive/2008/07/03/3423.aspx

http://www.productionscale.com Kent Langley

Not all DNS load balancing solutions are blind to node health. Some can monitor your nodes (and add/remove nodes via API) so that individual nodes can more easily come and go. Something to take a look at as well, depending on what types of problems you are trying to solve.

Hello,
Thank you so much for this tutorial. Just to confirm one thing: should the virtual host on the web head use the shared IP or the private IP of the server?

http://www.vanillaforums.com Tim

Great article, thank you! Be warned though, readers: at the time of my writing, pulse will die if you try to have a “serial no =” line in your lvs.cf file. It is apparently no longer needed.


Bindk

Hi

I have LVS-TUN with round robin working, or almost.

Two problems…

1) Responses are taking much longer than with, say, Nginx load balancing: 600 ms vs. 4-5 seconds.
2) The real servers are not responding in turn; they only rotate if you press F5, wait approximately 4-5 minutes, and then refresh again.