
CentOS: Configure Piranha as Load Balancer (Direct Routing Method)

I am currently working on a web cluster project using CentOS. In this project, I have two web servers running Apache, both mounting the same document root to serve the HTTP content. I also have two servers in front of them acting as the load balancer and failover pair, to increase the availability of the two-node web server cluster. The virtual IP will be held by load balancer #1, with automatic failover to load balancer #2.

4. Now we need to configure the virtual IP and virtual HTTP server and map them to the real HTTP servers. Go to Virtual Servers > Real Server and add them to the list as below:

Make sure you activate each real server once you have finished adding it, by clicking the (DE)ACTIVATE button.
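For reference, the virtual server and real server definitions that Piranha writes into /etc/sysconfig/ha/lvs.cf look roughly like the sketch below. The names, addresses, and interface alias here are placeholders for illustration, not values taken from this setup:

```
virtual server_web {
     active = 1
     address = 192.168.0.230 eth0:1
     vip_nmask = 255.255.255.255
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     server web1 {
         address = 192.168.0.221
         active = 1
         weight = 1
     }
     server web2 {
         address = 192.168.0.222
         active = 1
         weight = 1
     }
}
```

Editing this file by hand and editing it through the Piranha web interface are equivalent; the web interface simply rewrites lvs.cf.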

5. Now copy the configuration file to load balancer #2 as below:

$ scp /etc/sysconfig/ha/lvs.cf 192.168.0.232:/etc/sysconfig/ha/

6. Restart Pulse service to apply the new configuration:

$ service pulse restart

You can monitor what is happening with Pulse by tailing the /var/log/messages output as below:

$ tail -f /var/log/messages

Load Balancer #2

There is no need to configure anything on this server. We just need to restart the Pulse service so it picks up the new configuration that was copied over from LB1.

$ service pulse restart

If you look at /var/log/messages, pulse on this server will report that it is running in BACKUP mode.
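If you want to double-check which node is currently active, a quick sketch (assuming the iproute and ipvsadm tools are installed) is to look for the floating IP and the LVS virtual service table; only the active director should show both:

```
$ ip addr show eth0    # the active LB lists the virtual IP as a secondary address
$ ipvsadm -L -n        # the active LB shows the virtual service and its real servers
```

On the backup node, the virtual IP should be absent until a failover occurs.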

Web Servers

1. Since we are using the direct-routing method, besides your Apache installation we also need to install another package called arptables_jf. Here is a quote from the Red Hat documentation page:

Using the arptables_jf method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables_jf method allows multiple instances of Apache HTTP Server to be running bound explicitly to different VIPs on the system. There are also significant performance advantages to using arptables_jf over the IPTables option.

However, using the arptables_jf method, VIPs can not be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools.
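As a sketch of what the arptables_jf setup typically looks like on each real server (the addresses are placeholders: 192.168.0.230 stands for the VIP and 192.168.0.221 for that web server's own IP), the idea is to stop the real server from answering ARP requests for the VIP, rewrite the ARP source address on the way out, and then bind the VIP locally so Apache can serve it:

```
$ arptables -A IN -d 192.168.0.230 -j DROP
$ arptables -A OUT -s 192.168.0.230 -j mangle --mangle-ip-s 192.168.0.221
$ service arptables_jf save
$ chkconfig arptables_jf on
$ ip addr add 192.168.0.230 dev eth0
```

As the quote above warns, the `ip addr add` line has to be reapplied after every reboot (for example from /etc/rc.local), since the standard network configuration tools cannot manage the VIP in this mode.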

First of all, thanks for the post, very helpful =p
Have you ever used persistence? I set up some servers the same as in your post, but I'm facing a problem.
I set the persistence time to 120s and this is working fine. If a real server goes down, the director removes it from the ipvs pool and doesn't send new connections to it, BUT if there are some connections still active within the persistence time, they are maintained in the pool and the user keeps trying to connect to a dead server. Every time he tries, the persistence time is refreshed, so if he keeps refreshing the page he will never get out of this loop… Is this normal? Did I forget something?
Thanks very much and congrats on the post

I have no experience using persistence, but from my understanding LVS remembers the last connection for a specified period of time (120s). If that same client IP address connects again within that period, it will be sent to the same server it connected to previously — bypassing the load-balancing mechanisms.

Since it says BYPASSING the load-balancing mechanism, what you are facing is expected behaviour.
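For context, persistence in LVS is just a per-virtual-service timeout. In Piranha it is the Persistence field on the virtual server page (stored as `persistent = 120` in lvs.cf); the equivalent done by hand with ipvsadm would look something like this sketch (the VIP and scheduler here are placeholders):

```
$ ipvsadm -A -t 192.168.0.230:80 -s wlc -p 120
$ ipvsadm -L -c -n    # lists connection entries, including persistence templates
```

The persistence templates shown by `ipvsadm -L -c -n` are what keep a client pinned to one real server, which explains the loop described above when that server dies.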

You need to add the VIP on the web servers as well. Every packet must have a source and a destination address. If the web server does not have that IP, the packet can never be built, because the system cannot bind the VIP (the source address) to that packet. This creates an invalid packet, and it will never be delivered to the recipient.

I'm also facing another problem during failover: the virtual IP is added to the passive LB, but it seems like traffic is still pointing to the other LB for some time and I can't access the web servers. Is it possible that "send_arp" needs some time to broadcast the new MAC? (This iptables rule solves the problem, but I'm not sure if it's safe: iptables -A FORWARD -d FLOATING_IP -p tcp -m multiport --dports 80,443 -j ACCEPT)

May I know which packet is being rejected? Do you have any logs on that? It depends on your firewall rules: if you only have 'state NEW' in your ACCEPT rules, they will reject any packet which is INVALID.

Every router/switch has an ARP cache. Try checking and disabling this feature, or you can try this method to clear the ARP cache. Depending on the caching, these devices will need to wait out their timeout before they refresh their ARP tables.
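If you suspect stale ARP entries after a failover, you can nudge things along manually. The commands below are a sketch (the VIP and interface are placeholders): the first, run on the new active director, sends gratuitous ARP to advertise its MAC address for the VIP; the second clears the neighbour cache on a Linux box that is still pointing at the old director:

```
$ arping -U -I eth0 -c 3 192.168.0.230    # on the new active director: gratuitous ARP for the VIP
$ ip neigh flush dev eth0                 # on a Linux client/router holding the stale entry
```

Pulse normally does this itself via send_arp on failover; these commands are only for troubleshooting when upstream devices refuse to refresh.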

Is your server really down when the virtual IP is added to the passive LB? Or did you just turn off the pulse service? If you add that rule (-A FORWARD) and it works, it means that your 1st LB is still up (network and iptables) and is forwarding the multiport traffic to the floating IP, which is located on the 2nd LB. This method should work, but it is not recommended.

Are you forwarding to the correct RTSP port on the real server? How do you do back-end verification in the monitoring script section? Kindly note that this tutorial focuses on using the LB for the HTTP protocol (TCP port 80).

I'm trying to implement the same architecture using the direct routing method. In case I have an additional layer under the 2 Apache real servers, and this layer is composed of some Tomcat and JBoss instances: will the Tomcat instance send the response directly to the HTTP user, or must it go back through the real server before arriving at the HTTP user?

I have a question about sharing sessions between the real servers.
I don't know if it's managed by Piranha…? For example, if a real server crashes, does the second one keep handling its sessions? If not, how can we do that?