As Linux matures, so does its ability to handle larger clusters built from commodity hardware, as well as mission-critical data across HA clusters. Clustering under Linux resembles clustering under other operating systems, although Linux excels in its ability to run on many different commodity hardware configurations.


Ask anyone on the street, and they'll tend to agree with you. Bigger is
better. Get more bang for the buck. He who dies with the most toys wins. It
stands to reason that, in most cases, more is better than only one. If one is
good, two must be great.

It comes as no surprise, then, that computing has followed this trend from
its infancy. Why, even the ENIAC, widely regarded as the world's first computer,
didn't have just a few parts. It had 19,000 vacuum tubes, 1,500 relays,
and hundreds of thousands of resistors, capacitors, and inductors (http://ftp.arl.army.mil/~mike/comphist/eniac-story.html).
Why did the founders use so many parts? If you had a device that could add two
numbers together, that would be one thing. But given the budget, why not add
three or even four together? More must be better.

As time went on and computing started to mature, the "more is better"
approach seemed to function quite well. Where one processor worked well, two
processors could at least double the processing power. When computer manufacturers
started making larger and more efficient servers, companies could use the increased
horsepower to process more data. It became evident early on that more processors
equaled more computing power. With the advent of Intel's 386 processor,
magazines even reported that a single computer could handle the workload of 15 employees!

The descendants of ENIAC were monster computers in their own right, although
as we know, with the advent of the transistor, the parts inside got smaller
and smaller. More parts were added to machines the size of refrigerators to
make them faster, yet these supercomputers were out of the financial reach of
most corporations. It didn't take long to realize (okay, it happened in
the 1990s) that supercomputer-like performance also could be achieved through
a number of low-cost personal computers. More computers were better indeed, or
simply cost much less.

Clustering for the Enterprise

Today's computing environments require many computers to solve tasks that
a single machine could not handle alone. Today's large-scale computing
environments involve the use of large server farms, with the nodes connected
to one another in a clustered configuration. The ASCI Project
(Accelerated Strategic Computing Initiative), for instance, consists of several
different clustered environments in a bid to provide "tera-scale"
computing. ASCI White, capable of 12 trillion calculations per second, runs on
IBM's RS/6000 hardware and is becoming increasingly typical of solutions
to large-scale computing problems. The ASCI plant at Sandia National
Laboratories comprises Linux machines running on Intel hardware, part of the
growing trend toward emulating supercomputer performance.

Clustered computing, at its most basic level, involves two or more computers
serving a single resource. Applications have become clustered as a way of
handling increased data load. The practice of spreading attributes from a single
application onto many computers not only improves performance, but also creates
redundancy in case of failure. A prime example of a basic cluster is the Domain
Name Service (DNS), with its built-in primary, secondary, and cache servers.
Other protocols, such as NIS and SMTP, also have built-in clustering and
redundancy characteristics.
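The redundancy built into DNS can be sketched generically: a client tries the primary server first and falls back to the secondary only on failure. The sketch below is purely illustrative; the `query_with_failover` helper and the stub resolvers are invented for this example, not part of any DNS library.

```python
def query_with_failover(servers, name):
    """Try each server in order; return the first successful answer.

    servers: a list of callables mapping a hostname to an address,
    each of which raises OSError if that server is unreachable.
    """
    last_error = None
    for server in servers:
        try:
            return server(name)
        except OSError as err:
            last_error = err  # remember the failure, try the next server
    # Every server failed (or none were configured).
    raise last_error if last_error else OSError("no servers configured")

# Stub resolvers standing in for primary and secondary DNS servers.
def primary(name):
    raise OSError("primary unreachable")  # simulate an outage

def secondary(name):
    return "192.0.2.1"  # address from the RFC 5737 documentation range

# The secondary answers even though the primary is down.
print(query_with_failover([primary, secondary], "www.example.com"))
```

The same pattern underlies the NIS and SMTP redundancy mentioned above: clients carry an ordered list of servers and quietly move down the list when one fails.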

How Clustering Can Help

Although clustering might not be a panacea for today's ills, it might
help the organization that is trying to maximize some of its existing resources.
Although not every program can benefit from clustering, organizations that serve
applications, such as web servers, databases, and FTP servers, could benefit
from the technology as loads on their systems increase. Clusters can easily be
designed with scalability in mind; more systems can be added as the requirements
increase, which spreads the load across multiple subsystems or machines.

Entities that require a great deal of data crunching can benefit from high-
performance computing, which greatly reduces the amount of time needed to crunch
numbers. Organizations such as the National Oceanic and Atmospheric
Administration are able to use clusters to forecast trends in potentially deadly
weather conditions. The staff at Lawrence Livermore Lab use clustered computers
to simulate an entire nuclear explosion without harm to anyone (except the
backup operators who have to maintain all that data).

Companies serving a great deal of bandwidth can benefit from load-balanced
clusters. This type of cluster takes information from a centralized server and
spreads it across multiple computers. Although this might seem trivial at first,
load balancing can take place in a local server room or across wide-area
networks (WANs) spanning the globe. Larger web portals use load balancing to
serve data from multiple access points worldwide to serve local customers. Not
only does this cut down on bandwidth costs, but visitors are served that much
more quickly.
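As a rough illustration of the load-balancing idea, the sketch below rotates incoming requests across a pool of backends in round-robin fashion. The class name and backend labels are made up for the example; real load balancers also weigh server health and capacity.

```python
import itertools

class RoundRobinBalancer:
    """Hand out backends in rotation so no single server absorbs all requests."""

    def __init__(self, backends):
        # itertools.cycle repeats the backend list endlessly.
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the backend that should serve the next request."""
        return next(self._cycle)

pool = RoundRobinBalancer(["web1", "web2", "web3"])
print([pool.pick() for _ in range(4)])  # wraps around after the third pick
```

A geographically distributed version of the same idea, with backends in different cities, is what lets the large web portals mentioned above serve customers from the nearest access point.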

These load-balanced servers can also benefit from the High Availability (HA)
model. This model can include redundancy at all levels: servers in an HA cluster
benefit from having two power supplies, two network cards, two RAID controllers,
and so on. It's unlikely that all the duplicate devices of an HA cluster
will fail at once, barring some major catastrophe. By adding one spare
component to the primary subsystem, or one extra server to the cluster, a
standby is in place to take over in case of failover. This is known as
N+1 redundancy and is found in clusters, RAID configurations, power
arrays, or wherever another component can take over in case of failure.
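The arithmetic behind N+1 redundancy is simple enough to state in code: a subsystem that needs N units to operate survives any single failure only if at least N + 1 units are installed. A minimal sketch follows; the function name is invented for illustration.

```python
def survives_single_failure(installed, required):
    """N+1 redundancy check: with one unit failed, do the
    remaining units still meet the requirement?"""
    return installed - 1 >= required

# Two power supplies where one is enough: the spare covers a failure.
print(survives_single_failure(installed=2, required=1))  # True
# Two supplies where both are needed: any failure takes the system down.
print(survives_single_failure(installed=2, required=2))  # False
```

The same check applies whether the "units" are power supplies, RAID disks, or whole servers in a cluster.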

Using Linux for Clustering

With all the possible platforms from which you could choose, you might
wonder why you would choose Linux as the operating system (OS) on which to house
your critical applications. After all, with clustering being such a hot topic,
each vendor has its own implementation of clustering software, often more mature
than the homegrown efforts of dozens of programmers. All major OS vendors
support clustering. Microsoft builds its clustering application directly into
its Windows 2000 Advanced Server OS. Sun Microsystems offers its High
Performance Cluster technology for parallel computing, as well as Sun Cluster
for high availability. Even Compaq, Hewlett Packard, IBM, and SGI support
clustered solutions.

So why are these companies starting to embrace Linux when they have their own
product lines? With the exception of Microsoft, these vendors are starting to
recognize the value of open source software. They realize that, by incorporating
Linux into their business strategies, they'll gain the benefit of
hundreds, if not thousands, of programmers scrutinizing their code and making
helpful suggestions. Although it remains to be seen whether open source is a
viable business model, large companies already reap the communal benefits of
such a philosophy.

Linux runs on just about any hardware platform imaginable. Just as it's
proven to be more than capable of powering large mainframes and server farms as
well as desktop machines, the versatile OS has been ported to handheld devices,
television recorders, game consoles, Amiga, Atari, and even Apple 040 computers.
Linux is well known for running well on commodity, off-the-shelf parts.
Although the availability for Linux drivers might not be as prevalent as other
operating systems, there is still plenty of hardware that works without a hitch.
Linux also supports a great deal of legacy hardware, enabling older computers to
be brought back into service. The creators of Linux even envision it as the
premier OS for embedded devices because the kernel can be modified in any shape
or form. (Although Linus Torvalds invented Linux and holds the copyright, he
didn't write the entire thing himself.)

No other OS allows for this level of versatility. It's this approach to
modular computing that makes Linux perfect for clusters.

Disadvantages of Using Linux

Although Linux has many advantages for clustering, it also has faults that
might make it an unattractive solution for certain types of clusters. The bottom
line is that Linux is a relatively new OS (albeit based on tried-and-true
technologies). For the most part, you've got an OS written by volunteers in
their spare time. Though the code is readily available for scrutiny by anyone,
there is a concern that top-notch programmers might be whisked away by
companies that can afford to pay top salaries. (Of course, that does happen, and
for some reason, programmers even manage to keep working on Linux in something
called spare time.)

The level of support is not as robust as you can get with other operating
systems. That isn't to say that you can't get good vendor support; on
the contrary, the quality of support for Linux is top notch. There just
isn't as much support out there for the product as there is for other
operating systems.

A few bugs are still inherent in the OS and kernel. The native file system,
ext2, doesn't support journaling. USB support has typically been spotty.
There tend to be fewer drivers for Linux than for other operating systems,
even though the most common needs are addressed.

However, most, if not all, of these issues are being addressed. Robust file
systems are available for Linux other than ext2, and support for USB is
improving with each release (as of 2.2.18, anyway). Typically, most of these
issues don't come into play when you're deploying large cluster farms.
Most of these limitations apply only when you use Linux on the
desktop. Meanwhile, development of the Linux kernel is rapidly outpacing that
of other operating systems as developers strive to fix issues
such as USB support.

The system administrator has to keep a sense of perspective when rolling out
any OS. Linux is primarily a server-class OS: it's designed to handle large
tasks supporting many users and processors, and it does that well. With the
support of projects such as GNOME and KDE (not to mention every other window
manager out there), Linux can be used as a workstation in addition to
a server. Development of Linux as a workstation-class OS is more advanced
than on most other UNIX systems. However, both Macintosh and Microsoft
desktops still command more market share, and more usability, than the
rapidly advancing Linux desktop.