
coolmacdude writes "Well it seems that the early estimates were a bit overzealous. According to preliminary test results (in PostScript format) on the full range of CPUs at Virginia Tech, the Rmax score on Linpack comes in at around 7.4 TFlops. This puts it at number four on the Top 500 List. It also represents an efficiency of about 44 percent, down from the previous result of 80 percent achieved on a subset of the computers. Perhaps in light of this, VT is apparently now planning to devote an additional two months to improving the stability and efficiency of the system before any research can begin. While these numbers will no doubt come as a disappointment for Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer. In addition, the project was successful at meeting VT's goal of developing an inexpensive top 5 machine. The results have also been posted at Ars Technica's openforum."

I've always been sort of intrigued by Top500 [top500.org]. Has there ever been a good comparison written about the similarities/differences between a 'supercomputer' and the lowly PC sitting on my desk running Linux/XP? At what point does the computer in question earn the title "Super"?

The big difference is that a "supercomputer" is usually heavily optimized towards vector operations: performing the same operation on many data elements at once. Think of it as SIMD (MMX, SSE, etc), only more so. A "supercomputer" would be pretty useless at ordinary tasks such as web browsing or word processing, as those can't be vectorized or parallelized very well. A "supercomputer" might be good as a graphics or physics engine for gaming, but that's sort of like using a cannon to swat a fly: a lot of power for a very small job.

supercomputer (soo-per-kuhm-pyoo-ter) n. A mainframe computer that, as the result of birth on an alien planet, is impervious to bullets, is capable of flight, has x-ray vision, can run faster than a speeding train, etc.

"Is it a bird? Is it a plane? No, it's a Cray X-MP!" - Seymour Fights The Demon World, Action Comics, 1932

Source: The American Heritage(R) Dictionary of the English Language, Fourth Edition. Copyright (C) 2000 by Houghton Mifflin Company. Published by Houghton Mifflin Company. All rights reserved.

Jack Dongarra says that a "supercomputer" is simply a computer that, by today's standards, is REALLY fast. I saw a presentation of his in which he said he had run the Linpack benchmark on his notebook (2.4 GHz Pentium 4), and it would have made the bottom of the 1992 Top500 list. So, this supercomputer definition is very fluid.

While these numbers will no doubt come as a disappointment for Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

Officials at the school said that they were still finalizing their results and that the final speed number might be significantly higher.

This will likely be the case.

Second, they're only 0.224 Tflops away from the only Intel-based cluster above it. So saying "all the Intel machines" in the story is kind of inaccurate, as if there were all kinds of Intel-based clusters that would still be faster; there is only one Intel-based cluster above it, and that's with only preliminary numbers for the Virginia Tech cluster.

Third, this figure is with around 2112 processors, not the full 2200 processors. With all 1100 nodes, even with no efficiency gain, it will be number 3, as-is.

Finally, this is a cluster of several firsts:

First major cluster with PowerPC 970

First major cluster with Apple hardware

First major cluster with Infiniband

First major cluster with Mac OS X (Yes, it is running Mac OS X 10.2.7, NOT Linux or Panther [yet])

Linux on Intel has been at this for years. This cluster was assembled in 3 months. There is no reason for the Virginia Tech cluster to remain at ~40% efficiency. It is more than reasonable to expect higher than 50%.
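For what it's worth, the ~44% efficiency figure falls straight out of the numbers in the thread. A rough sketch; the 8 GFLOPS-per-processor peak is my assumption (two double-precision FPUs, each retiring a fused multiply-add, i.e. 2 FLOPs, per cycle at 2.0 GHz), not something stated in the article:

```python
# Back-of-the-envelope check on the reported ~44% Linpack efficiency.
GHZ = 2.0
FLOPS_PER_CYCLE = 2 * 2                 # 2 FPUs x (multiply + add), assumed
peak_per_cpu = GHZ * FLOPS_PER_CYCLE    # GFLOPS per processor

cpus = 2112                             # processors actually benchmarked
rpeak = peak_per_cpu * cpus / 1000.0    # theoretical peak, TFLOPS
rmax = 7.4                              # reported Linpack score, TFLOPS

efficiency = rmax / rpeak
print(f"Rpeak ~ {rpeak:.1f} TFLOPS, efficiency ~ {efficiency:.0%}")
```

Under those assumptions Rpeak comes out near 16.9 TFLOPS and the ratio lands right at the reported 44%.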

It's still destined for number 3, and its performance will likely even climb for the next Top 500 list as the cluster is optimized. The final results will not be officially announced until a session on November 18 at Supercomputing 2003.

On the other side of the issue is that it places 4th in the current Top 500 list, which was released in June. We won't really know where it places on this "moving target" until the next list is released in November.

The deadline for submission to the Nov 2003 Top 500 list was Oct. 1st (see call for proposals) [top500.org], so it has already passed. Any further improvements that they make to the scalability of the cluster should not be included. This is true for all the machines.

If you read the fine print, the Nmax for the G5 was 100,000 higher than for the Linux cluster. Now, that's kind of interesting, because the G5 cluster was then only slightly slower while solving a much bigger problem (Nmax of 450,000 vs. 350,000 on the Xeons). I wonder why they don't somehow scale the FLOPs to reflect this fact.

Anyone know how much merit there is to using Nmax (or N1/2) to compare different systems?

No they could not bond NICs, because they're using Infiniband and not ethernet. Besides, I think that they are being limited more by latency than bandwidth, so therefore adding bandwidth isn't going to help much. What's worse, their bandwidth limit is being reached inside the computer, with their chip to chip interconnect having less bandwidth than their computer to computer interconnect.

This is not altogether surprising, given that they are using a desktop computer and trying to shoehorn it into a supercomputer.

Second, they're only 0.224 Tflops away from the only Intel-based cluster above it. So saying "all the Intel machines" in the story is kind of inaccurate

I was trying to refer to the fact that sometimes the Mac zealots, in the midst of their zealotry, lose sight of reality and simply lump all non-Mac related things into one huge category, even if it really isn't one.

The Linpack benchmark, as compiled for the G5, is not utilizing the processor to its fullest. The school is still in the process of adding Altivec compiler optimizations, which should drastically improve the results.

The AltiVec instructions support only single precision (32-bit) floating point operations, and the core routine in the Parallel Linpack Benchmark is DGEMM() which is double precision (64-bit). The G5 already has two double precision FPUs, each of which can do a multiply/add op every clock cycle.
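To make that concrete: the hot loop of Linpack is DGEMM, i.e. C <- alpha*A*B + beta*C in 64-bit floats, which single-precision AltiVec can't accelerate. Here's a deliberately naive sketch of the operation; a real BLAS implementation is blocked and tuned, this just shows the math:

```python
def dgemm(alpha, A, B, beta, C):
    """Naive C <- alpha*A@B + beta*C, all in 64-bit floats."""
    n, kk, m = len(A), len(B), len(B[0])
    for i in range(n):
        for j in range(m):
            acc = 0.0                     # double-precision accumulator
            for p in range(kk):
                acc += A[i][p] * B[p][j]  # the multiply/add pair each FPU can fuse
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

C = dgemm(1.0, [[1.0, 2.0]], [[3.0], [4.0]], 0.0, [[0.0]])
print(C)  # [[11.0]]
```

The inner multiply/add pair is exactly the operation the G5's two double-precision FPUs retire once per clock each.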

My feeling is that the ~40% efficiency seen on the larger scale run is an indication that either VA Tech spent very little time tuning the problem size or they didn't design their InfiniBand fabric to really handle 1100 nodes hammering away at Parallel Linpack. (Given that they've been extremely vague about how their IB network is structured, I fear it may be the latter.)

Right now, the processor is behaving essentially as a G4 with a bigger fan and more memory addresses. Rumor has it that tweaking the compiler to abuse the Altivec unit may push the system above the theoretical limit in some calculations.

I doubt that's true, especially if they're using the IBM PPC compilers. The G4 has both significantly less memory bandwidth and a single double-precision-capable FPU, whereas the G5 is basically a single-core Power4 with an AltiVec unit in place of some cache. IBM's compilers (despite being a little wonky as far as naming and argument syntax) generally produce pretty fast code.

I have been sitting here by my 1100 node G5 cluster trying to copy a 17.6 MB file for the last 20 minutes. It is so freaking slow now that I only get 44% efficiency. On my 1.5 GHz P3 I would be able to do this in under 20 seconds...

1 Cal (uppercase C) is the amount of heat required to raise the temperature of 1g of water by 1 degree Celsius

which brings up a totally off topic question.... a can of coke is 350 ml. it contains 300 calories.

now, let's say i drink this coke. it is really cold - say 4 degrees. my body temperature is a nice, mammalish 37 degrees. by drinking this coke i am warming up 350 g of what is essentially water from the temperature of the can to that of my body - a difference of 33 degrees.

On the other hand, if you only have a few pounds of fat to remove, if you're already in lean condition, or if you just want to give Superhydration an informal trial for whatever reason, here are the most efficient guidelines to utilize:

1) Purchase a 32-ounce, insulated, plastic bottle from which to sip your water.

2) Start by sipping one gallon, or 128 ounces, of water a day. Do not go higher than 128 ounces per day for this informal trial period.

1 Cal (uppercase C) is the amount of heat required to raise the temperature of 1g of water by 1 degree Celsius

A Calorie (the one used on food labels) is actually a kilocalorie. A Calorie is therefore 1000 calories. 1 calorie is basically the amount of heat needed to raise 1g of water 1 degree celsius. (A calorie is actually 1/100 of amount of heat needed to get 1 gram of water from 0 degrees C to 100 degrees C, but that works out almost the same.)

So warming a 4 degrees C, 350mL Coke to 37 degrees C would take (37 - 4) * 350 = 11550 calories. This is 11.55 kilocalories or 11.55 Calories. The Coke has around 300 Calories in nutritive value therefore you would gain 300 - 11.55 = 288.45 Calories of energy from a 4 degrees C, 350mL can of Coke.
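The arithmetic above checks out; as a trivial script:

```python
# Warming 350 g of (mostly) water from 4 C to 37 C, in small calories.
grams = 350
delta_c = 37 - 4
small_cal = grams * delta_c           # 1 cal warms 1 g of water by 1 C
food_calories = small_cal / 1000.0    # food "Calories" are kilocalories
net = 300 - food_calories             # 300 Calories from the can's label
print(food_calories, net)  # 11.55 288.45
```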

It's not a measure of heat transfer, it's a measure of energy. You could measure the output of an automobile engine in calories if you like. Convert calories to watts to HP to torque (more or less) to thrust; it's all a different scale of the same thing.

Seeing as a large apple is about 100 kilocalories, you'd need a cluster of maybe 580 apples to best your Big Mac Cluster. If you go to an apple orchard I'm sure you could find a better price-performance ratio with apples than you could with Big Macs at McDonalds. Plus, most orchards will probably let you gather virtually unlimited quantities of fallen apples for free.

Not terribly surprising. Much like estimated death tolls for disasters, never believe the first set of benchmarks for a computer. Wait until thorough testing can be done before you start believing the numbers.

While some people have given the parent a flamebait mod and hostile replies, the poster makes a good (and humorous) point. Apple is not typically thought of in terms of best price/performance any more than, say, Cadillac is in the car industry. Macs are bought by those willing to pay a premium for that distinct Apple styling, OS X's slick interface with the power of Unix behind the scenes, the "it just works" factor, and so on. Those who don't care about the amenities and just want bang for the buck go for a commodity PC.

I guess the original submission didn't see the slashdot article [slashdot.org] from August 23 about our KASY0 [aggregate.org] supercomputer breaking the $100 per GFLOPS barrier.

KASY0 achieved 187.3 GFLOPS on the 64-bit floating point version of HPL, the same benchmark used on "Big Mac". While "Big Mac" is about 40 times faster on that benchmark, it is about 130 times the cost of KASY0 (~$5200K vs. ~$40K). Considering the size difference, "Big Mac" is VERY impressive, but it can't claim to be the best price/performance supercomputer on the list.

The PowerPC architecture was always defined as a true 64-bit environment with 32-bit operation defined as a sub-set of that environment and a 32/64-bit 'bridge', as used by the 970, to "facilitate the migration of operating systems from 32-bit processor designs to 64-bit processor designs."

Uhh, you really don't know what you're talking about here do you? We're talking floating point code here, not integer code! You don't need Smeagol or Panther or any other cat to get 64-bit floating point code, DOS can handle that just fine!

Essentially ALL processors with a floating point unit do 64-bit precision calculations. The old G4 and G3 did, the Pentium 4 does, the old 486 did, etc. etc. The whole 32-bit vs. 64-bit argument with these PowerPC 970 chips (and, in a similar light, AMD64 chips) has nothing to do with floating-point precision.

Correct. Also note that one of the strengths of the G5 (and G4) is its vector units, which (afaik) can't be used for Linpack, because of the 64-bit precision requirements. For jobs that can use Altivec, the performance should be substantially better.

Grumble... Go take a look at Apple's description of the G5 architecture [apple.com] before spouting. Here are the relevant lines:

Each PowerPC G5 processor has its own dedicated 1GHz bidirectional interface to the system controller for a mind-boggling 16GB per second of total bandwidth -- more than twice the 6.4-GBps maximum bandwidth of Pentium 4-based systems using the latest PC architecture

800MHz HyperTransport interconnects for a maximum throughput of 3.2GB per second.

Err, Apple's G5 and the AMD Opteron don't have even remotely related memory setups. The G5 looks a lot more like the AthlonXP and AthlonMP setups. The Opteron has an integrated 128-bit wide DDR memory controller, connects multiple CPUs directly through cache-coherent HyperTransport links, and uses additional 16-bit, 1600MT/s HT links (3.2GB/s in each direction) to connect the CPU directly to the I/O chips.

The Powermac G5 uses an up to 1GT/s, 64-bit wide version of IBM's Elastic I/O bus to connect each processor to the U3 system controller.
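If the figures quoted in this thread are right, the per-direction bandwidths reconcile as a simple width-times-rate product. My arithmetic, assuming a 64-bit processor bus at 1 GT/s on the G5 and 16-bit HyperTransport I/O links at 1600 MT/s on the Opteron:

```python
def gb_per_s(bits_wide, mega_transfers_per_s):
    """Per-direction bandwidth: bytes per transfer x transfers per second."""
    return bits_wide / 8 * mega_transfers_per_s / 1000.0

g5_fsb = gb_per_s(64, 1000)       # Elastic I/O at 1 GT/s: 8 GB/s each way
opteron_io = gb_per_s(16, 1600)   # Opteron HT I/O link: 3.2 GB/s each way

# Counting both directions of each G5 processor bus gives Apple's
# "16GB per second of total bandwidth" marketing number.
print(g5_fsb * 2, opteron_io)  # 16.0 3.2
```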

While these numbers will no doubt come as a disappointment for Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

It still bests all other Intel hardware, with only the Alpha hardware on top. And given the CPU count, even the Alpha hardware does not match it. Look at the numbers... The Linux-based 2.4GHz cluster has almost 200 more CPUs on board with a 217 Gflop/sec difference. The Alpha clusters are running anywhere from 1,984 to 6,048 more CPUs.

I did say "It still bests all other Intel hardware"... Commodity clusters are entirely different beasts than dedicated supercomputers, and this is exactly why I chose the terminology "clusters" rather than "supercomputers". Also, check out the architecture of real "supercomputers". Most of the real costs are in CPU interconnectivity.

AMD, remember them? Manufacturer of the highest-performance x86 processors available? An array of dual-Opteron systems could be built with a dramatically lower price/performance ratio than any other platform, especially G5s or Intel Xeons.

The G5's memory controller is built into the U3 IC, which is essentially the "north bridge"- it is NOT built into the CPU.

It connects to the CPU via the "Apple Processor Interface," NOT via HyperTransport. The CPU connects to its memory controller at 1/2 the CPU speed, unlike the Opteron and Athlon 64, which connect to the memory controller at FULL CPU SPEED.

Besides, performance per CPU doesn't matter much in these benchmarks, what matters is total bang for total buck, at the prices at which regular folks can get these machines (no special "we need a showcase" kind of deals). I suspect the 2.4GHz-based clusters are still a better deal than either the G5 or a 3.2GHz cluster, more CPUs or not.

Actually, if you read back a little bit, you will find that the contract was awarded to Apple because they gave the best bang for the buck and it turns out that Dell opti

If someone used off-the-shelf machines that my company made, and got even into the top-10, you can bet your bottom dollar that the next thing in my job-pile would be a "make an announcement that we're in the top-10 fastest computers in the world."

This is fantastic, no matter what way you cut it! Using commodity components, these folk have turned the G5 into a real champion. No longer do budgets have to be in the hundreds, or even tens of millions to get a top-notch supercomputer. And this is not even th

Umm, they've got the POWER4, which is internally the same thing as the G5 (which they also make). WHY would they use the consumer-grade G5 (that Apple is demanding in mass quantities) when they can use the POWER4 that does the same thing and is server-grade (and IBM already uses)?

Because the Power4 is hotter and uses more current than the G5. To use 2200 Power 4 CPUs they would have to about triple the cooling capacity of the room. For all the heat and power, the Power4 lacks the AltiVec units that allow the G4/G5 to process vector operations so quickly.

> I'd like to see hot machines and clusters built out of something I could afford to buy on a couple months' wages.

Well, I'm sure we all do. I also want a house for what I can pay in two months' wages.

But these things do have costs. Even if each computer was $1 total, for 2000 of them that's $2000 right there. So even as much as $10 a computer would be 'affordable', though definitely more than two months' pay. But I have hope of actually saving up $20,000 after a while.

Yet another Apple product that failed to save the world. Lately they do nothing but disappoint us. Boo.

First you have the iTunes store, which doesn't do anything but give the average user basically anything he or she might have wanted in an online music store. Despite its being free, we're all cheesed off that it doesn't support OGG, or that it's meant partly to push iPods (duh), or whatever.

Now this -- a supercomputer that has, to quote that again, the "best price/performance ratio ever achieved on a supercomputer." But dang it all, it doesn't completely blow away every established precedent -- it's just in the top five on the usual list of comparisons. One more crushing disappointment.

From Microsoft, we just want products that don't completely ream us. From Apple, we want the entire world to seem a little friendlier and cooler with every product release, every dot-increment OS update. They both disappoint us, but the expectations seem a little different...

I know this is really nitpicking, and is somewhat offtopic (but there isn't a front page iTunes thread at the moment), but it probably needs to be said.

iTunes for Windows, just like Mac iTunes, does its decoding using Quicktime. As crappy as you think the Quicktime Player software is, the backend Quicktime library is very nice, especially in regards to its modularity.

Any app that uses Quicktime Lib can now play AAC files (even the iTMS 'protected' ones), not just iTunes. Of course, not many Windows apps use it.

of all of these so-called "benchmark" discussions. Everyone really knows, in their heart of hearts, that the only valid benchmark is to be found in real-world applications such as Quake III. I want to know how many fps this alleged "supercomputer" gets.

A very excellent point. I was also wondering how much time has passed between the time the Intel cluster and this Apple cluster were constructed. Would put things into a little more perspective regarding cost.

Efficiency is strongly dependent on the interconnect. Does anyone know if the 128 node benchmark (that supposedly showed ~80% efficiency) was run with only one Infiniband switch -- i.e. all nodes connected through only one switch?

BTW, the performance was never stated to be 17 TF, so it did not drop to 7.4 (or whatever it ends up being).

While I am amazed at the initial price vs. performance that this cluster of Macs has obtained, I am worried about the eventual cost of all the electricity and cooling for the cluster. I remember reading in some random article that the electricity used to cool and power the computer was estimated at around 3,000 midrange homes' worth. Just from a quick calculation of homes x $100 x 12 months, we get the horrible figure of $3.6 million. So over a 10-year lifespan, the cluster will cost $36 million more than the current price.

I remember reading in some random article that the electricity used to cool and power the computer was estimated at around 3,000 midrange homes' worth.

That can't possibly be right. There's no way that the cluster's power requirements are over 1 home's worth per CPU. Maybe they just added a zero and it's supposed to be 300, but even that sounds very high.

I think that magazine article must be wrong. If 1100 Macs use as much power as 3000 homes, then each mac is using about 3 houses worth of power. That seems excessive unless the home is in a 3rd world country or those 9 fans are really really running full blast. More likely, each G5 (with networking and cooling equipment) uses a few hundred watts. Even at 500 W/Mac, 1100 Macs, $0.15/kWH, 24 Hr/day, 365 day/year the cluster costs about $722,700/year. More likely, each Mac probably only consumes an average of 300 W max and is not running full tilt 24x7, so the cost is maybe around $300-$400k/year.
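The $722,700/year figure checks out. The same estimate as a script, using the assumed numbers above (500 W/node, $0.15/kWh, running flat out all year):

```python
# Annual electricity cost estimate for the cluster (assumed figures).
watts_per_node = 500        # assumed draw, incl. share of networking/cooling
nodes = 1100
dollars_per_kwh = 0.15
hours_per_year = 24 * 365

kwh_per_year = watts_per_node * nodes / 1000.0 * hours_per_year
annual_cost = kwh_per_year * dollars_per_kwh
print(f"${annual_cost:,.0f}/year")  # $722,700/year
```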

But your point is a good one. I often wonder about the environmental economics of people running SETI, Folding@Home, etc. on older machines. Most of those older "spare" CPU-cycles are quite costly in terms of electricity relative to newer faster machines that do an order of magnitude more computing with the same amount of electricity.

You're forgetting the AC costs... If you've ever worked in a DC you know that the room itself can get mighty toasty, and toasty air leads to cooked systems.

Each processor, drive, and switch generates heat which is dissipated into the air. Untouched that heat accumulates and will kill the entire thing.
With 1100 dual-processor nodes running constantly (and you can bet they'll each be running at pretty close to full tilt), that's a hell of a lot of heat that needs to be removed from the air.

... it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

Noted. And go VT, go Apple! Now, with the cheerleading out of the way, I wonder something: with Moore's law and all still applying pretty well, just buying the latest-and-greatest of any home computer architecture will all but guarantee you pretty good price/performance.

As another poster pointed out, someone's recent laptop could do as well on Linpack as a 1992 supercomputer.

First, scalability is highly non-linear. See Amdahl's Law. Thus, the loss of performance is nothing remarkable, in and of itself.

The degree of loss is interesting, and suggests that their algorithm for distributing work needs tightening up at the high end. Nonetheless, none of these are bad figures. When this story first broke, you'll recall the quote from the Top500 list maintainer who pointed out that very few machines maintain high efficiency ratings once they get into large numbers of nodes.
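Amdahl's Law makes the sensitivity easy to see: with a parallelizable fraction p of the work on n processors, speedup is 1 / ((1-p) + p/n), and efficiency is speedup divided by n. The values of p below are purely illustrative, not measurements of this cluster:

```python
def amdahl_speedup(p, n):
    """Speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

n = 1100  # node count, as in the VT cluster
for p in (0.99, 0.999, 0.9999):
    s = amdahl_speedup(p, n)
    print(f"p={p}: speedup {s:6.0f}x, efficiency {s / n:.0%}")
```

Even at p = 0.999, efficiency at 1100 nodes comes out around 48%, roughly the regime the reported numbers sit in, which is why tiny serial fractions matter so much at this scale.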

I'd say these are extremely credible results, well worth the project team congratulating themselves. If the team could open-source the distribution algorithms, it would be interesting to take a look. I'm sure plenty of Mosix and BProc fans would love to know how to ramp the scaling up.

(The problem of scaling is why jokes about making a Beowulf cluster of these would be just dumb. At the rate at which performance is lost, two Big Macs linked in a cluster would run slower than a single Big Mac. A large cluster would run slower than any of the nodes within it. Such is the Curse that Amdahl inflicted upon the superscalar world.)

The problem of producing superscalar architectures is non-trivial. It's also NP-complete, which means there isn't a single solution which will fit all situations, or even a way to trivially derive a solution for any given situation. You've got to make an educated guess, see what happens, and then make a better informed educated guess. Repeat until bored, funding is cut, the world ends, or you reach a result you like.

This is why it's so valuable to know how this team managed such a good performance in their first test. Knowing how to build high-performing clusters is extremely valuable. I think it not unreasonable to say that 99% of the money in supercomputing goes into researching how to squeeze a bit more speed out of reconfiguring. It's cheaper to do a bit of rewiring than to build a complete machine, so it's a lot more attractive.

On the flip-side, if superscaling ever becomes something mere mortals can actively make use of, understand, and refine, we can expect to see vastly superior - and cheaper - SMP technology, vastly more powerful PCs, and a continuation of the erosion of the differences between micros, minis, mainframes and supercomputers.

It will also make packing the car easier. (* This is actually a related NP-complete problem. If you can "solve" one, you can solve the other.)

Most responses in here are about how the G5 should be performing better, or should have better numbers than the Xeon or Sparc, or whatever. What seems to be missing from most of the conversation is that it's not the Macs that are losing efficiency per se, it's the network (the interconnects) that is slowing the machine as a whole down. I know little about the Linpack test, but I would assume that it's written to test/stress the entire machine: CPU, disk, memory and interconnects. If the Macs can finish par

So, in all these "maximum speed tests", what is being used, 32-bit reals or 64-bit reals? The difference is that in solving large non-linear systems, the higher-precision numbers result in a faster solution, but operations involving doubles will result in a lower gflops measurement with benchmarks (although a solution may in fact take 10x fewer iterations).

The 21st version of this list does not show the SETI@Home project. The top entry is NEC at 35 teraflops. Today's SETI@Home average for the last 24 hours is 61 teraflops. It may be a virtual supercomputer, but it is producing real results.

Yes, the G5 should be capable of more than a little better performance than "a Xeon", but what I find interesting is that it is a Xeon which was initially released well over a year ago by Intel. What I am curious about is whether someone could build an equally "cost-efficient" supercomputer based on more recent Intel hardware. The differences in speed, cache, front side bus, etc. that Intel has made in the past year would no doubt lead to higher numbers. If I were comparing a Xeon cluster to a G4 cluster, pe