Linux on Carrier Grade Web Servers

The goal of the benchmarking tests was to evaluate the
scalability of the LVS-NAT implementation. For this purpose, we
carried out two kinds of tests: the first was a direct approach
that consisted of sending traffic directly to the CPUs (the real
servers); the second approach was to direct the traffic to the NAT
director, the front-end server for the CPUs.

For the tests conducted without LVS, we sent HTTP requests
directly to the real servers. WebBench supports this configuration
by generating web traffic and sending it to multiple servers
(Figure 5). For the tests with LVS (Figure 6), we configured
WebBench to send the HTTP requests to the LVS server (the NAT
director), which in turn directed the traffic to the real
servers.

Figure 5. The Benchmarking Setup without LVS

Figure 6. The Benchmarking Setup with LVS-NAT
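The director in the LVS-NAT setup is administered with the ipvsadm
utility. The following is a minimal sketch of such a configuration;
the virtual IP (10.0.0.1), the private real-server addresses
(192.168.1.x) and the choice of scheduler are illustrative only, not
the actual values used in our lab:

    # enable IP forwarding on the director
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # define the virtual HTTP service (weighted least-connection scheduling)
    ipvsadm -A -t 10.0.0.1:80 -s wlc

    # add the real servers in masquerading (NAT) mode
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.2:80 -m
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.3:80 -m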

Benchmarking Results

Figure 7 shows the number of requests per second our LVS
setup was able to achieve versus a direct setup without LVS. It
clearly shows that the LVS-NAT implementation suffers from a
bottleneck at the director level once the load reaches 2,000
requests per second.

Figure 7. LVS vs. Non-LVS Results

We decided to conduct a third test, using only one machine to
generate traffic with WebBench. We measured a latency of 0.3
milliseconds for answering HTTP requests. LVS handled the load
successfully, answering more than 178 requests per second.

After analyzing the results, we concluded that the bottleneck
is due to the rate of TCP connections per second that the LVS
director can handle. The results show that the director sustained
no more than 1,790 valid TCP connections per second. Without
LVS, by sending requests directly to the servers, we were able to
achieve more than 7,116 valid TCP connections per second. We plan
to investigate this issue in more detail in the coming
weeks.
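One way to corroborate such a limit on the director itself is to
watch the IPVS counters while the test runs. As a rough sketch (the
available options and output columns vary with the ipvsadm and
kernel versions):

    # cumulative connections, packets and bytes per service and real server
    watch -n 1 'ipvsadm -L -n --stats'

    # currently tracked connection entries
    ipvsadm -L -n -c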

Evaluation of LVS via NAT

The NAT implementation of LVS has several advantages. First,
the real servers can run any operating system that supports the
TCP/IP protocol, and they can use private Internet addresses. As a
result, the whole setup requires only one public IP address,
assigned to the load balancer.
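One point to keep in mind with this addressing scheme is that, in
NAT mode, reply packets must travel back through the director, so
each real server's default route has to point at the director's
private-side address. A minimal sketch, assuming the director's
internal interface is 192.168.1.1:

    # on each real server: send all outbound traffic through the director
    route add default gw 192.168.1.1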

However, the drawback of using the NAT implementation is the
limited scalability of the virtual server. As we have seen in the
benchmarking tests, the load balancer presents a bottleneck for the
whole system.

LVS via NAT can meet the performance requirements of many small to
mid-size servers. When the load balancer becomes a bottleneck,
you need to consider the other two methods offered by LVS: IP
tunneling or direct routing.
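In ipvsadm, the forwarding method is a per-real-server option, so
switching from NAT to direct routing or IP tunneling is largely a
matter of changing one flag (plus the corresponding network setup on
the real servers, such as configuring the virtual IP on a
non-ARPing interface). A sketch using the same illustrative
addresses as above:

    # NAT (masquerading)
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.2:80 -m

    # direct routing (gatewaying)
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.2:80 -g

    # IP-IP tunneling
    ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.2:80 -i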

Conclusion

We tested LVS in an industrial environment with one LVS
server and eight traffic CPUs. We found some restrictions when
using LVS under heavy load. However, we also found LVS to be easy
to install and manage, and very useful.

We believe that LVS is a potential solution for small to
mid-size web farms that need a software-based solution for traffic
distribution. However, for the kind of servers we are building, the
requirements necessitate a higher number of transactions per second
than the NAT implementation of LVS can handle.

LVS has a promising future, with plans to add more
load-balancing algorithms and geographic-based scheduling for the
virtual server via IP tunneling. Another promising future feature
is the integration of the heartbeat code and the Coda distributed
fault-tolerant filesystem into the virtual server. LVS's developers
are also planning to explore higher degrees of fault tolerance and
how to implement the virtual server in IPv6.

Compared to other packages, LVS provides many unique features,
such as support for multiple scheduling algorithms and for
various methods of request forwarding (NAT, direct routing and
tunneling). Our next step regarding LVS is to try out the other two
implementations (direct routing and IP tunneling) and compare their
performance with the NAT implementation on the same setup.

Acknowledgments

The Systems Research Department at Ericsson
Research Canada for approving the publication of this
article.

Evangeline Paquin, Ericsson Research Canada, for
her contributions to the overall LVS-related activities.

Marc Chatel, Ericsson Research Canada, for his
considerable help in the ECUR Lab.

Wensong Zhang, the LVS Project, for the permission
to use Figures 1 and 2 from the LVS web site.

Ibrahim Haddad
(ibrahim.haddad@lmc.ericsson.se) works for Ericsson Research Canada
in the Systems Research Department researching carrier class server
nodes in real-time all-IP networks. He is currently a DrSc
candidate in the Computer Science Department at Concordia
University in Montréal, Canada.

Makan Pourzandi
(makan.pourzandi@lmc.ericsson.se) works for
Ericsson Research Canada in the Systems Research Department. His
research domains are security, cluster computing and
component-based methods for distributed programming. He received
his Doctorate in Parallel Computing in 1995 from the University of
Lyon, France.
