(Update 11/17/06-9:30AM) The return from sunny Tampa took me right into some nasty weather as I flew into Philadelphia. It was very cool seeing the storm
front from above and afar (super cells and all). Once we flew into the storm my mood changed, however. As we touched down, people began clapping. A bit premature, I thought. Going 200 mph on wet concrete is not my idea of "we made it". We did stop,
at which point the clapping had subsided so that everyone on the plane could
turn on their cell phones and announce to someone (anyone) that they had landed. How we lived without cell phones is beyond me. In any case, I'll be updating our coverage later today. Jeff and I have plenty of things to say. Jeff's Day 2 Blog is up, and go sign up at ClusterCast to get notified when the SC06
cast is ready. We have interviews with Don Becker, Greg Lindahl, and others. We will also be posting a cast of the
Open Source Panel.

(Update 11/13/06-9AM) It is sunny and warm here in Tampa. When I told my friends back home about this trip, they said, "You are lucky: warm, sunny Tampa. Enjoy the weather, the beach, the water." Of course, I tried to explain that I'll be spending almost all of my time in a large exposition hall with a bunch of other HPC vacationers. "But why?" they asked. "Just blow it off and enjoy yourself," they continued. The beach notwithstanding, I did not try to explain to them that there will be quad-cores, Cells, interconnects, papers, and about a thousand other things that are guaranteed to keep a cluster geek indoors in Tampa.

Pre-SC06 Press Releases/Announcements

There are usually a ton of press releases at any SC show. But
in the last few years, companies have started to put out press releases
before the show. It helps keep their press releases from being drowned
in the sea of releases at the show. It also makes my life a bit easier because
I can review some of them before the show kicks into high gear. So let's
take a look at some of what I consider to be the more interesting ones.

Let's go back about a week when the releases started coming.
Panasas was one of the first companies
to make some significant announcements (IMHO).
In this release they
announced Version 3.0 of their operating environment, ActiveScale. The
new version has new predictive self-management capabilities that scan the
media and file system and proactively correct media defects. Furthermore,
they improved the performance of ActiveScale by a factor of 3, to over
500 MB/s per client and up to 10 GB/s in aggregate.

Moreover, in this
press release Panasas announced two new products: the ActiveScale 3000 and the
ActiveScale 5000. The ActiveScale 5000 is targeted at mixed cluster (batch) and
workstation (interactive) environments that desire a single storage fabric. It
can scale to hundreds of TBs. The ActiveScale 3000 is targeted at cluster (batch)
environments, with the ability to scale to 100 TB in a single rack; combining
multiple racks allows you to scale to petabytes.

To me, what is significant is that Panasas is rapidly gaining popularity
as high performance storage for Linux clusters. Part of the reason for this
popularity is that Panasas delivers very good performance while still being
very easy to deploy, manage, and maintain. Plus, it is very scalable.
Be sure to keep an eye on Panasas.

People have been saying that one of the barriers to making clusters more
pervasive is the lack of good programming tools. Intel
announced
some new products to help the HPC community. First and
foremost, they will launch the Intel Xeon 5300 series of processors, code-named
"Clovertown." This is the first commodity quad-core processor. They also
announced the Cluster Toolkit 3.0 with the Intel MPI Library, Intel Math Kernel
Library Cluster Edition, and Intel Trace Analyzer and Collector. They also
announced Cluster OpenMP for Intel compilers. It is a new product that "... extends
OpenMP to be applicable to distributed memory clusters, helping OpenMP become a
programming method that works well for dual-core and quad-core processors
as well as clusters."

Also last week, SiCortex formally
announced
their new cluster. Their focus has been on reducing the power required for
clusters while keeping the performance as high as possible and keeping a balance
between compute and communication. They have used a 64-bit MIPS processor as
the basis for a new chip that has six 64-bit processor cores, multiple memory
controllers, a new high performance cluster interconnect and a PCI-express
connection to storage and internetworking. According to the press release,
"A complete SiCortex cluster node with DDR-2 memory consumes 15 watts of
power, an order of magnitude less than the 250 watts used in a conventional
cluster node." The SiCortex systems will use Linux as their OS. The company
claims that, "Current Linux application software will operate on SiCortex systems
without modification."

SiCortex is introducing two models. The first, the SC5832, is an enterprise-class
system with 5,832 cores, 8 TB of memory, and 2.1 terabits per second of
I/O capability. The company claims that it will deliver up to 5.8 TFLOPS of
performance in a low power cabinet (that means each core delivers about 1
GFLOPS). The second model, the SC648, is targeted at
departmental users (smaller numbers of users) and offers up to 648 GFLOPS of
performance using up to 864 GB of memory and 250 gigabits/s of I/O capability.
The really interesting part is that all of this fits into less than half a
rack (less than 21U) and uses so little power that it can be plugged into a
single 110 volt wall socket. Perhaps this will be the holy grail of personal
clusters that people, myself included, have been seeking for years.

The last announcement that I thought was significant was from
NVIDIA. They
announced
the first C compiler environment for GPUs (Graphics Processing Units). Called
CUDA, the environment allows developers to build applications for GPUs using
the traditional C language rather than specialized languages such as Brook, Cg,
or Sh. CUDA works on GeForce 8800 or future graphics cards and offers some unique
features. For example, you can use CUDA-enabled GPUs to create a Parallel Data
Cache, which allows up to 128 processor cores running at 1.35 GHz to cooperate with
each other while computing (however, I don't know if these cores have to be in a
single node or multiple nodes).

This announcement is significant because with CUDA you can now, perhaps,
utilize GPUs for computation in addition to using them for playing those interesting,
ah, "educational" applications. GPUs have the potential for great levels of
performance but have required re-thinking algorithms to take advantage of the
processing power. CUDA will now allow programmers to use the GPUs with the
standard C language. However, I'm curious if you have to re-think your algorithms
to take advantage of the GPUs or if CUDA helps in that regard. Hmm... Looks like
I have a good excuse to get a GeForce 8800 card for Christmas.