One of the beauties of beowulfery is that anybody can afford
supercomputing when the supercomputer is made out of off-the-shelf
components. Over the last several decades, supercomputers have
generally been export-controlled by the fully developed countries as if
they were weapons. This has engendered a number of ``interesting''
discussions on
the beowulf list over the years.

The rationale for restricting supercomputers as weapons has always been
that supercomputers can be used, in principle, to design a nuclear
device and to model explosions for different designs without the need to
build an actual device. The interesting thing is that this restriction
was maintained from the early 1970's through the end of the millennium. During
that time, a ``supercomputer'' went from a speed of a few million
floating point operations per second through billions to trillions. We
have now reached the point where personal digital assistants have the
computing capacity and speed of the original restricted supercomputers.
Computers such as the half-obsolete laptop on which I am writing this
execute a billion instructions per second and would have been highly
restricted technology a decade ago.

Add to this mix cluster computing methodologies that combine hundreds to
thousands of GFLOPS into an aggregate power of teraflops, at a cost of
perhaps $0.50 per MFLOPS and (rapidly) falling. Any country on
the planet can now build a supercomputer out of commodity parts, for
good or for ill, capable of simulating a nuclear blast or doing anything
else that one might wish to restrict.
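
To put a rough number on that claim (a back-of-the-envelope estimate of
my own, not a benchmark): if a commodity node costs on the order of
$500 and sustains roughly a GFLOPS, then
\[
  \frac{\$500}{1000~\mbox{MFLOPS}} \approx \$0.50~\mbox{per MFLOPS},
\]
so an aggregate TFLOPS (a million MFLOPS) comes to something like half
a million dollars in nodes alone, before networking and infrastructure.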

It is difficult to judge whether or not these concerns have ever had any
real validity. Of course, being a physicist, I never let a little thing
like difficulty stop me. In my own personal opinion, the export
restrictions haven't had the slightest effect on the weapons design
process in any country, nuclear or not. Nuclear bombs have never been
particularly difficult to design (remember that they were originally
built with early 1940's technology!). The issue has generally not been
how well one can design a bomb or model thermonuclear blasts, but
whether one can build one at all, even from a time-tested design. In a
nutshell, whether or not one could lay hands on plutonium or the
appropriate isotopes of uranium.

In the meantime, restricting exports of supercomputers to only those
countries already in possession of nuclear bombs or capable of managing
the diplomacy required to certify their use in specific companies and
applications had a huge, negative impact on the technological
development of countries that didn't have nuclear bombs already or
the right diplomatic or corporate pull. Engineering disciplines of all
sorts (not just nuclear engineering) rely on advanced computing
resources: aerospace, chemical, computer, and mechanical engineering
all depend heavily on visualization, finite element analysis,
simulation and other tasks in the general arena of high performance
computing.

Now, was this repeated extension of the definition of a ``restricted
supercomputer'' really a matter of national security and bomb
design, or was it (and is it today) a way of perpetuating the control
certain large industrial concerns had over the computing resources upon
which their competitive advantage is based? A proper cynic might
conclude that both are true to some degree, but the bomb-design
rationale alone is hardly credible in a world where we have increasing
evidence that any country with the will and the plutonium (such as
North Korea, India and Pakistan at the time of this writing) built
nuclear devices easily as soon as they obtained the material, and only
a lack of plutonium and a war stopped Iraq.

In any event, this amusing little bit of political cynicism aside, the
playing field is now considerably leveled, and is unlikely to ever again
become as uneven as it has been for the last fifty years.

I have personally built a cluster supercomputer in my home that has
between eight and ten Intel and AMD CPUs, currently ranging from 400
MHz to around 2 GHz (that is, half the cluster is really semi-obsolete
and none of it is bleeding edge current). Together they easily add up
to more than a GFLOPS, making the cluster a ``restricted armament'' as
of a few short years ago.

The total cost of all the systems involved is on the order of four or
five thousand dollars spent over five or six years. Spending five
thousand dollars all at once, I could easily afford eight to ten 2.4 GHz
CPUs and quite a few aggregate GFLOPS (by whatever measure you choose to
use), and this is chickenfeed spent on a home beowulf.
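
As a rough peak estimate (assuming, purely for illustration, on the
order of one floating point operation per clock cycle per CPU, which
real codes may beat or fall well short of), ten such CPUs would give
\[
  10 \times 2.4~\mbox{GHz} \times 1~\mbox{FLOP/cycle} \approx
  24~\mbox{GFLOPS peak},
\]
give or take a factor of a few depending on the code, the compilers,
the memory subsystem and the network.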

Using even this resource in clever off-the-shelf ways, I'm
confident that I could do anything from designing basic nuclear devices
to modeling and simulating those devices in action, with the biggest
problem being creating the required software from general sources and
initializing it with the right data, not any lack of power in the
computers. All the components of this compute cluster are readily
available in any country in the world and are impossible to restrict.
Export restrictions may or may not be dead, but they are certainly moot.

At this point any country in the world can afford beowulf-style
supercomputing - a bunch of cheap CPUs strung together on a network
switch as good as one needs for the task or can afford. And nearly
every country in the world does build beowulfs. I've helped
people on and off the beowulf list seeking to build clusters in
India, Korea, Argentina, Brazil, Mexico and elsewhere. Some of these
clusters were associated with universities. Others were being built by
hobbyists, or people associated with small businesses or research
groups.
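
To make ``a bunch of cheap CPUs strung together on a network'' concrete,
here is a minimal sketch of the programming model in C with MPI. It is
my own illustration, not code from any of the clusters mentioned above
(and the early Duke work described below used PVM rather than MPI).
Each process integrates a slice of 4/(1+x^2) on [0,1] and the partial
sums are combined across the network to estimate pi:

\begin{verbatim}
/* pi_mpi.c -- illustrative only: estimate pi by spreading a simple
 * numerical integration of 4/(1+x^2) over the processes of a cluster
 * and summing the partial results over the network.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
   int rank, size;
   long i, n = 100000000;            /* total number of rectangles */
   double h, x, local_sum = 0.0, pi = 0.0;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);

   h = 1.0/(double) n;
   /* each process takes every size-th rectangle: rank, rank+size, ... */
   for (i = rank; i < n; i += size) {
      x = h*((double) i + 0.5);
      local_sum += 4.0/(1.0 + x*x);
   }
   local_sum *= h;

   /* combine the partial sums on process 0 */
   MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

   if (rank == 0)
      printf("pi is approximately %.12f using %d processes\n", pi, size);

   MPI_Finalize();
   return 0;
}
\end{verbatim}

Compiled with mpicc and launched with mpirun across however many nodes
one has, the work divides evenly and the only communication is the
final sum, which is exactly the class of problem for which a pile of
commodity boxes is as good as any ``real'' supercomputer.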

``Armed'' with a cluster costing a few
thousand dollars (and up), even a small school in a relatively poor
country can afford to teach high performance computing - design,
management, operation, programming - to prepare a future generation of
systems managers and engineers to support their country's technological
infrastructure and growth! Those trained programmers and managers, in
turn, can run beowulf-style clusters in small businesses and for the
first time enable them to tackle designs and concepts limited only by
their imagination and the quality of their programmers and scientists,
not by a lack of the raw FLOPS required for their imagination to become
reality in a timely and competitive way. Compute clusters can also
nucleate other infrastructure developments both public and private.

In the best Darwinian tradition, those companies that succeed in using
these new resources in clever and profitable ways will fund further
growth and development. Suddenly even quite small ``start up''
companies in any country in the world have at least a snowball's chance
in hell of making the big time, at least if their primary obstacle in
the past has been access to high performance compute resources.

In this country, I've watched beowulf-style compute clusters literally
explode in ``popularity'' (measured by the number of individuals and
research groups who use such a resource as a key component of their
work). At Duke alone, ten years ago I was just starting to use PVM and
workstation networks as a ``supercomputer'' upon which to do
simulations in condensed matter physics. Today there are so many
compute clusters in operation or being built that the administration is
having trouble keeping track of them all, and we're developing new models
for cluster support at the institutional level. My own cluster
resources have gone from a handful of systems, some of which belong to
other people, to close to 100 CPUs, most of which I own, sharing a
cluster facility in our building with four other groups all doing
cluster computations of different sorts.

The same thing is happening on a global basis. The beowulf in
particular and cluster computing in general are no longer in any sense
rare - they are becoming the standard and are gradually driving more
traditional supercomputer designs into increasingly small markets with
increasingly specialized clients, characterized primarily by the deep
pockets necessary to own a ``real'' supercomputer. In terms of
price-performance, though, the beowulf model is unchallenged and likely
to remain so.

Beowulfs in developing countries do encounter difficulties that we only
rarely see here. Unstable electrical grids, import restrictions and
duties, a lack of local infrastructure that we take for granted, theft,
and graft, all make their local efforts more difficult and more
expensive than they should be, but even so they remain far more
affordable than the supercomputing alternatives and well within the
means of most universities or businesses that might need them. As is
the case here, the human cost of a supercomputing operation is very
likely as large as or larger than the hardware or infrastructure cost, at
least if the supercomputer is a beowulf or other compute cluster.

Even with these difficulties, I expect the global explosion in COTS
compute clusters to continue. Based on my personal experiences, this
book in particular (given that it is available online and free for
personal use) is likely to help people all over the world get started in
supercomputing, nucleating new science, new engineering, new
development.

To those people, especially those in developing countries trying to
overcome their own special problems with funding, with environment, with
people, all I can say is welcome to beowulfery, my brothers and sisters!
The journey will prove, in the end, worthwhile. Please let me know if I
can help in any way.