Oracle Blog

A closer look at using Oracle Solaris

Tuesday Jul 19, 2011

I'm not sure how well known it is that Solaris 11 contains a load balancer. The official documentation, starting with the Integrated Load Balancer Overview, does a great job of explaining this feature. In this blog entry my goal is to provide an implementation example.

For starters, I will be using the HALF-NAT operation mode. Basically, HALF-NAT means that the client's IP address is not rewritten, so the servers see the real client address. This is usually preferred for server logging (see ILB Operation Modes for more).

I will load balance traffic across 2 zones, each running the Apache Tomcat server. The load balancer itself will be configured as a multi-homed zone. The configuration will look as follows:

Since our ilb-zone has 2 network interfaces, we also want to make sure a packet arriving on one network interface and addressed to a host on a different network is forwarded to the appropriate interface.

root@ilb-ext:~# svcadm enable ipv4-forwarding

Step 4: Install the Server 1 Zone

We'll create the first server zone as a clone of the ilb-zone. We'll then configure the server 1 zone and clone it to server 2.
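The cloning itself might look something like this. This is a minimal sketch run from the global zone; the zone name server1-zone, the config file path, and the zonepath are my assumptions, not from the original post:

```shell
# The source zone must be halted before it can be cloned.
zoneadm -z ilb-zone halt

# Export the ilb-zone configuration and reuse it for server1-zone.
zonecfg -z ilb-zone export -f /tmp/server1.cfg
# (edit /tmp/server1.cfg: change the zonepath and network settings
#  so the two zones don't collide)
zonecfg -z server1-zone -f /tmp/server1.cfg

# Clone the installed ilb-zone into the new configuration and boot it.
zoneadm -z server1-zone clone ilb-zone
zoneadm -z server1-zone boot
```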

Then define a load balancing rule. This is the most complicated part of the process. For starters, I'll try to keep the rule as simple as possible. The rule is enabled (-e), will persist (-p), incoming packets (-i) are matched against destination virtual IP address (vip) and port 10.0.2.20:80. The packet is handled (-m) using round robin (rr). The destination for the packets (-o) is server group tomcatgroup. The rule is called tomcatrule_rr.
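Putting those options together, the commands might look like the following sketch. The server addresses and Tomcat port are illustrative assumptions; the server group has to exist before the rule can reference it:

```shell
# Assumed: the two Tomcat zones answer on 10.0.2.21 and 10.0.2.22,
# port 8080 (illustrative addresses, not from the original post).
ilbadm create-servergroup -s servers=10.0.2.21:8080,10.0.2.22:8080 tomcatgroup

# The rule described above: enabled (-e), persistent (-p), matching
# the VIP and port (-i), round-robin half-NAT (-m), targeting the
# tomcatgroup server group (-o), named tomcatrule_rr.
ilbadm create-rule -e -p -i vip=10.0.2.20,port=80 \
    -m lbalg=rr,type=HALF-NAT -o servergroup=tomcatgroup tomcatrule_rr
```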

Step 10: Load Balance!

You can now point your browser to the virtual IP address and get a result back from one of the Tomcat servers:

Very cool! But from which server was I served? I modified the example snoop.jsp to return the server's hostname and IP address. Save the snoop.jsp to the /var/tomcat6/webapps/examples/jsp/snp directory in each of your zones.

The load balancer provides the following variables to use with your script, of which I'm only using $2:

$1 - VIP (literal IPv4 or IPv6 address)
$2 - Server IP (literal IPv4 or IPv6 address)
$3 - Protocol (UDP or TCP, as a string)
$4 - Numeric port range (the user-specified value for hc-port)
$5 - Maximum time (in seconds) that the test should wait before returning a failure. If the test runs beyond the specified time, it might be stopped, and the test would be considered failed. This value is user-defined and specified in hc-timeout.
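A health-check script along these lines could work. It only uses $2 (the server address) and $5 (the timeout); the port 8080 and the snoop.jsp path are my assumptions based on the Tomcat setup above:

```shell
#!/bin/sh
# ILB passes: $1=VIP, $2=server IP, $3=protocol, $4=port, $5=timeout.
# Fetch the probe page from the server under test. Port 8080 and the
# path are assumptions; adjust to your Tomcat layout.
if /usr/bin/curl -s -f -m "$5" \
    "http://$2:8080/examples/jsp/snp/snoop.jsp" > /dev/null
then
    # On success, print 0 (standing in for the round-trip time)
    # and exit cleanly so ilbd counts the server as alive.
    echo 0
    exit 0
else
    # On failure, a non-zero exit marks this attempt as failed.
    exit 1
fi
```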

Ensure the script has execute permissions (the ilbd daemon, which runs the health check, is not run as root):

The hc-timeout is the number of seconds the health check will wait for a response before giving up. The hc-count is the number of failed attempts before the server is declared dead. The hc-interval is how often the health check is performed.
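Registering the health check could look like this sketch; the script path, the timing values, and the name hc-tomcat are my assumptions:

```shell
# Run the check script every 10 seconds, wait up to 3 seconds per
# attempt, and declare the server dead after 3 consecutive failures.
# Path, timings, and the name hc-tomcat are illustrative.
ilbadm create-healthcheck \
    -h hc-test=/var/hc/check-tomcat.sh,hc-timeout=3,hc-count=3,hc-interval=10 \
    hc-tomcat
```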

Now that we have a health check, we need to add it to our load balancing rule. Unfortunately, ilbadm doesn't have a command to modify an existing load balancing rule, so we have to delete it and create it again:

root@ilb-ext:~# ilbadm delete-rule tomcatrule_rr

We'll create the same rule as before, this time including the health check:
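The recreated rule might look like this, assuming the same VIP, server group, and a health check named hc-tomcat as sketched above:

```shell
# Same rule as before, now with the health check attached via hc-name.
ilbadm create-rule -e -p -i vip=10.0.2.20,port=80 \
    -m lbalg=rr,type=HALF-NAT -h hc-name=hc-tomcat \
    -o servergroup=tomcatgroup tomcatrule_rr
```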

So now, if snoop.jsp is showing that you're hitting Server 1 and we then disable Tomcat on Server 1:

root@server1-zone:~# svcadm disable tomcat6

When you refresh your browser you will be directed to Server 2. Of course, any state you may have been maintaining on Server 1 will be lost. You can also see the server's status as dead using ilbadm show-hc-result: