Posted
by
timothy
on Tuesday November 09, 2004 @12:03AM
from the win-some-lose-some-send-me-some dept.

daveschroeder writes "The November Top 500 supercomputer list has been published at SC2004. Topping the charts is IBM and the US Department of Energy's 'BlueGene/L DD2' beta system, at 70.72 TFlops, followed by NASA's 'Columbia' at 51.87 TFlops. For the first time in several publications of this list, Japan's Earth Simulator is no longer in the number one slot, falling to third. Virginia Tech's 'System X' Xserve G5 cluster, while 20% faster than the original cluster that debuted at number 3 last November, has fallen to number 7 due to the new entries, but remains the fastest supercomputer at an academic institution. Here's an excellent cost comparison (Google cache) of the top machines ('System X' is significantly cheaper than anything else in the top 20, not to mention cheaper than many things far below it in performance)."

Err, I'm not sure the costs can be accurately compared this way. Remember that a cluster of separate computers acting as a supercomputer isn't the same thing as a custom-designed, hardwired system! Otherwise you could start counting things like SETI@home, which I'm sure is the world's cheapest supercomputer, since it technically cost SETI nothing.

Before anyone says "Of course System X is cheaper! Virginia Tech had free student labor to put it together! They paid them in pizza!", consider the numbers.

The only thing anywhere close to System X is NCSA's Tungsten, a 2,500-processor Xeon (Pentium 4-class) Dell Linux cluster. It cost $12 million for the hardware alone (compare System X's $5.8 million overall price, including the upgrade to Xserve G5s). That's twice the cost, for over 2 TFlops less performance. 2 TFlops is itself a top-100 supercomputer, so Tungsten is a whole top-100 supercomputer poorer in performance, for an extra $6.2 million.
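The cost arguments above boil down to dollars per TFlops. A quick sketch of the arithmetic, using the prices quoted in this thread and approximate Rmax figures from the November 2004 list (both figures should be treated as rough, not official):

```python
# Rough cost-per-TFlops comparison; prices are as quoted in the thread,
# Rmax values are approximate November 2004 Top 500 figures.
systems = {
    "System X (Virginia Tech)": {"cost_musd": 5.8, "rmax_tflops": 12.25},
    "Tungsten (NCSA)":          {"cost_musd": 12.0, "rmax_tflops": 9.8},
}

for name, s in systems.items():
    per_tflop = s["cost_musd"] / s["rmax_tflops"]
    print(f"{name}: ${per_tflop:.2f}M per TFlops")
```

Under these assumptions System X comes in well under half Tungsten's cost per TFlops, which is the "any way you slice it" point being made.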

Another example is PNNL's 1,936-processor Itanium 2 cluster: 3.5 TFlops less performance than System X, for $25 million.

Any way you slice it (no pun intended), System X is still a LOT cheaper, even if you allot, say, $2M for professional installation and systems integration, an EXTREMELY liberal estimate, probably high by an order of magnitude.

System X also has the highest Rmax per CPU of any system on the list, except for specialty non-commodity systems like Earth Simulator.

And on top of it all: last November they hit #3 in the world, #2 in the US, and #1 academic, and became the first academic site ever to exceed 10 TFlops, all for less than $7 million in total, including all improvements to buildings, physical plant, and other infrastructure.

That first system might not have had ECC, but what it did do is break into the top 5, following all the rules of the Top 500 organization, for relative pocket change: a price that was absolutely unheard of, sharing the spotlight with systems that cost $100 million or more. It also catapulted Virginia Tech to national prominence as a supercomputing center overnight, able to attract additional attention, funding, grants, and publicity. Not to mention testing and proving the suitability of a completely new OS, platform, processor, and interconnect for high-performance computing, increasing choice for all (and leading to new clusters based on the same technology, such as the US Army/COLSA cluster). And even as new systems costing tens and hundreds of millions of dollars enter the top ten, System X retains the title of #1 at any academic institution, and shares the top 10 with the best of the best.

Seems to me that Virginia Tech pulled a real coup here, and a full year later, is still considerably cheaper than anything else. And now, it's being used for real scientific work. To bring a whole new platform onto the scene in under a year and break into the ranks of the supercomputing elite virtually overnight, and to do it significantly, and sometimes ridiculously, cheaper than everyone else, is a feat that can't be ignored.

It's nice that off-the-shelf boxes from Apple and Intel can make a supercomputer cluster. When do the stories stop? We know that if you put enough PCs together, you get a very powerful machine. What we should be looking at is cutting-edge technology in specialized CPUs. Forget 10,000 vanilla boxes and some good custom software; give me a cutting-edge CPU designed for supercomputing. That's science. We already know that it's possible to fill a fucking building with Pentiums, or better yet, 68000s.

While you are correct that clusters are not the ultimate solution for high-performance computing, single-system-image computers are not a great solution either: they require optimizations specific to the particular system and do not lend themselves to easy upgrades.

Depending on the application, System X's "bang for the buck" could actually be an unmitigated loss compared to IBM's offering. Any problem that isn't embarrassingly parallel, for instance, would destroy System X in a value comparison with even crazy-expensive Cray machines. On such a problem even SETI@home, were it forced to utilize all of its nodes, would be orders of magnitude slower than a dual-Opteron workstation (unless some single node, or tight cluster of nodes, happened to be faster than that workstation, in which case all the work could be delegated to it while the rest sat idle).

System X only has respectable bang for the buck if you limit the application domain to problems System X was specifically designed to solve effectively, which, incidentally, aren't the same set of problems that BlueGene/L solves effectively. So no, it is not a valid comparison. It's like saying that the Space Shuttle is less cost-effective than a fleet of pickup trucks with an equivalent combined cargo capacity.

Well, I really am getting sick of this Apple fan-boyism. What is up with that? People here scratch their heads all day long trying to find some calculation that shows Apple ranking higher than the others, even though it's just an apples-to-oranges comparison.
One writes that FLOPS per dollar are better for Apple? How can they make a real comparison? Why don't they look at FLOPS per CPU? What about the other hardware in those systems? Why don't they compare all the machines? Do they have the same HDD, RAM, and other parts? If not (and they do not), this is a totally useless comparison.

One other fan boy writes that AMD Opterons beat the crap out of Intel systems. Looking at the link he provided, it is pretty clear that Intel Itaniums beat the crap out of AMD systems, and then the fan boy defends himself by saying that AMDs are cheaper!

Oh come on now, people, be a little more objective! The article says the following: a total of 320 systems are now using Intel processors; six months ago there were 287 Intel-based systems on the list, and one year ago only 189. The second most common processor family is the IBM Power processor (54 systems), ahead of PA-RISC processors (48) and AMD processors (31). At present, IBM and Hewlett-Packard sell the bulk of systems at all performance levels of the TOP500.

These are much more important numbers than some uber-geek fanboy's calculations. It is apparent that Intel has increased its share *A LOT*. AMD has also started putting many systems into the Top 500. In my opinion, Apple's success on this list is MUCH, MUCH smaller than Intel's, AMD's, IBM's, or HP's.

Be a little more logical and open-minded, and less fanatical, people. Apple is just a freaking computer like any other computer around. It is not some sort of super/splendid/magnificent/God-like computer.

I'm keeping track of the very high frequency with which these supercomputer rankings get posted on Slashdot.

Can I vote for a supercomputer topic so that I can elect not to have it displayed in my preferences? I wouldn't want to miss out on all the other tasty hardware goodness. I don't mind news about new supercomputer technology, but whoever holds the most teraflops at a given point in time is not of interest.

So we should say: "It's really no achievement to have a supercomputer in a price range available to institutions other than the military. The fact that they use Apple's G5 and OS X, an almost out-of-the-box solution, is totally irrelevant. If you like, you can build your own, courtesy of Virginia Tech, but who would want a supercomputer that's cheaper than the other top-twenty contenders on the supercomputer list? Remember, they're Apple, so they're crap. And expensive, whatever the calculations say. They must be. They're Apple. I repeat: they're Apple. Crap. Be realistic, don't be a fan-boy."

$5 million of their tuition probably landed several hundred million dollars' worth of research grants. Paying tuition does not entitle you to everything a university does with the money. And that's even if the assumption is correct that it was tuition money in the first place.

It doesn't make much sense to compare operating temperature differences between machines with different cooling systems. There's a much easier way to figure out how much heat a processor generates: just look at how much power it consumes. An Opteron at 2.2 GHz draws 89 W; a PPC 970FX at 2.5 GHz uses around 50 W.
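Since essentially all the electrical power a CPU draws ends up as heat, the per-CPU wattage scales straight up to the cluster's heat load. A minimal sketch, using the wattage figures quoted above and a hypothetical node count (the 2,200-CPU figure is an assumption, roughly System X's size, not a measured spec):

```python
# Heat output is essentially equal to electrical power consumed,
# so comparing power draw figures compares heat generated.
OPTERON_22GHZ_WATTS = 89    # figure quoted above
PPC970FX_25GHZ_WATTS = 50   # approximate figure quoted above

def cluster_cpu_power_kw(num_cpus, watts_per_cpu):
    """CPU-only power draw for a cluster, in kilowatts.

    Ignores RAM, disks, fans, interconnect, and PSU inefficiency,
    so this is a lower bound on the real heat load.
    """
    return num_cpus * watts_per_cpu / 1000

# Hypothetical 2,200-CPU cluster, CPU heat load only:
print(cluster_cpu_power_kw(2200, OPTERON_22GHZ_WATTS))   # ~196 kW
print(cluster_cpu_power_kw(2200, PPC970FX_25GHZ_WATTS))  # ~110 kW
```

At cluster scale the roughly 40 W-per-chip difference adds up to tens of kilowatts of cooling, which is why the power figure, not the case temperature, is the number worth comparing.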

Apple worship because we were all told they were going to die, that they're overpriced and whatnot; then we see their products being used to make one of the most powerful clusters in the world, very cheaply.

Apple worship because it's a smack in the face to those who still continue to bash Apple for reasons that no longer exist.

"OS 9 sucks!""We're on OS X now, and it's unix-like""oh...um...Well one button~""And yet I'm still more productive on it than my Windows box""Well um, I want linu-""You can install that too.""Well, they're slow and-""I suggest you actually try using one before saying that.""They're overpric-""Really? I didn't think $900 was that expensive for a mid-range machine."

We do the Apple worship thing just to frustrate the anti-Mac crowd even more.

Of course, if you want the numbers to be even more meaningful, you should reference the cost of the systems not as of today, but as of the time it took to build them. Right? Of course, there are advantages to quick deployment that in many cases are just as important as cost, so perhaps you should also add to the cost the price of renting comparable computing resources for the time it takes to build them. All of this is moot, though, since the Big Mac cluster was so much cheaper AND faster to build than anything else in the top 10 that it wins any unbiased comparison hands down. (Given that your goal is to cheaply and quickly develop a cluster for a purpose for which the LINPACK test is a good benchmark.)

Only if we're counting fictional computers thought up by conspiracy theorists.

So, no.

"Big Brother uses in the very near future if not already."

You mean filming some no-marks around the clock in the name of entertainment? Or the fairly silly idea that Europe is spearheading an effort to slap everyone into a database? Have you ever seen the EU decide anything? And do you know that the EC meets in Brussels, not the UN?