A government computer in New Mexico is the first supercomputer to perform at one petaflop (one thousand trillion calculations per second). Located at the Los Alamos National Laboratory of the Department of Energy's National Nuclear Security Administration, Roadrunner is twice as fast as the IBM Blue Gene system at Lawrence Livermore National Laboratory, which until now was the fastest computer in the world. The new supercomputer was designed and built by IBM using both traditional computer chips and IBM's Cell Broadband Engine. Roadrunner occupies 6,000 square feet and weighs 500,000 lbs. It also aims to rank among the top energy-efficient systems on the official "Green 500" list of supercomputers. Roadrunner will be used primarily to ensure the safety and reliability of the U.S. nuclear weapons stockpile, and will also support research in astronomy, energy, human genome science, and climate change.

Reminds me of the '60s and '70s, when a single computer (what we'd call a desktop today) took up as much room as this supercomputer, if not more, and weighed as much or more. Anyone remember WarGames with Matthew Broderick and Ally Sheedy? W.O.P.R.

Roadrunner, named after the New Mexico state bird, cost about $100 million.

Roadrunner is the world’s first hybrid supercomputer. In a first-of-a-kind design, the Cell Broadband Engine® -- originally designed for video game platforms such as the Sony PlayStation 3® -- works in conjunction with x86 processors from AMD®.

Custom Configuration. Two IBM QS22 blade servers and one IBM LS21 blade server are combined into a specialized “tri-blade” unit for Roadrunner. The machine is composed of a total of 3,456 tri-blades, built at IBM’s Rochester, Minn. plant. Standard processing (e.g., file system I/O) is handled by the LS21's AMD Opteron processors, while mathematically and CPU-intensive elements are directed to the QS22s' Cell processors. Each tri-blade unit can run at 400 billion operations per second (400 gigaflops).
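As a quick sanity check on the petaflop claim, the two figures above (3,456 tri-blades at 400 gigaflops each) already give the peak. A back-of-the-envelope calculation in C, using only the numbers quoted in the article, not an official spec sheet:

/* Theoretical peak from the article's figures. */
#include <stdio.h>

int main(void) {
    const double tri_blades = 3456.0;
    const double gflops_each = 400.0;                          /* 400e9 flops/s per tri-blade */
    double peak_pflops = tri_blades * gflops_each / 1.0e6;     /* gigaflops -> petaflops */
    printf("theoretical peak: %.4f petaflops\n", peak_pflops); /* prints 1.3824 */
    return 0;
}

So the hardware peaks at roughly 1.38 petaflops, comfortably above the one-petaflop sustained mark.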


Hmm dude, those '60s and '70s machines were not what we call desktop PCs today... They were the supercomputers of their time, which just happen to perform like the desktops we have now.

Remember that these new-style "supercomputers" are really a shed full of small PCs. They cannot get anywhere near one petaflop (one thousand trillion calculations per second) on a single-threaded calculation algorithm. What also matters is whether that statistic is theoretical peak CPU flops or actual software-delivered flops.

What happens is that the network has to be managed with a scheduler and marshaller that sends "tasks" over the interconnect to each blade. So if you can partition a problem so that separate blades can do different INDEPENDENT calculations, then great.
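Here's a minimal sketch of that partition-and-collect pattern, in C with MPI. It's a toy (summing 1..N across ranks), not Roadrunner's actual software stack; the point is that each blade crunches its own independent slice and only the final reduction crosses the network:

/* Toy partition: each MPI rank sums an independent chunk of 1..N
 * locally; one reduction at the end gathers the results. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000L

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Independent slice for this rank; last rank takes any remainder. */
    long chunk = N / size;
    long lo = rank * chunk + 1;
    long hi = (rank == size - 1) ? N : lo + chunk - 1;

    double local = 0.0;
    for (long i = lo; i <= hi; i++)
        local += (double)i;

    /* The only network traffic in the whole job. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum 1..%ld = %.0f\n", N, total);

    MPI_Finalize();
    return 0;
}

If the slices were NOT independent, every rank would spend its time waiting on messages from the others instead of computing, and the scaling would fall apart.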


The F@H network's actual performance is higher than Roadrunner's theoretical CPU flops. That's pretty cool for F@H.


F@H does ONLY 0.2 petaflops? I thought it was a lot more than that. They said they could achieve 100 times more power with F@H than with any supercomputer. Looking at some supercomputer statistics, it seems that ACTUAL performance runs around 60-80% of peak performance, so this computer is fairly powerful. I know they can't dedicate the whole supercomputer to one project 24/7, but I thought F@H was just more.
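For what it's worth, that 60-80% figure matches the numbers reported for Roadrunner itself: roughly 1.026 petaflops sustained on Linpack against a ~1.38 petaflop peak (figures as reported around the June 2008 Top500 list, quoted from memory, so treat them as approximate):

/* Sustained-vs-peak efficiency from the reported Roadrunner numbers. */
#include <stdio.h>

int main(void) {
    double rmax  = 1.026;  /* sustained Linpack, petaflops (reported) */
    double rpeak = 1.382;  /* theoretical peak, petaflops (3,456 x 400 gigaflops) */
    printf("efficiency: %.0f%%\n", 100.0 * rmax / rpeak);  /* ~74% */
    return 0;
}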

EDIT: Oh and BTW, the only difference between these "new-style supercomputers" and the good old supercomputers is that the new ones use "standard" open network protocols such as Ethernet instead of the proprietary, made-only-for-that-computer ones used in previous supercomputers. The working mechanism is the same.

Yeah, Los Alamos National Laboratory and Sandia National Laboratories always have a lot of cool stuff. I live in Albuquerque, New Mexico, so I usually hear a lot about this kind of thing on the local news.

Good stuff. New Mexico is a great place for high-tech research and industry; the more the better.


Not exactly; it's not just the network protocol. The "old-style" machines were vector-based architectures, like the Crays of yesteryear. They didn't suffer from the von Neumann control bottleneck when scaling the way modern clusters do, especially the Beowulf-style clusters that are now very common. The Crays were ultra-powerful at vector-scalable problems, but they were easily outperformed by much cheaper scalar CPUs on more general (and easier to program) computing tasks.
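For a concrete picture of "vector-scalable," the classic SAXPY kernel below (plain C, purely illustrative) is the kind of loop those machines were built for: no iteration depends on another, so the hardware can stream through the arrays in wide chunks instead of one element at a time.

#include <stdio.h>

/* SAXPY: y = a*x + y. Each iteration is independent, which is exactly
 * what a vector machine (or a modern SIMD unit) wants to see. */
static void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[4] = {1, 2, 3, 4};
    float y[4] = {1, 1, 1, 1};
    saxpy(4, 2.0f, x, y);
    printf("%.1f %.1f %.1f %.1f\n", y[0], y[1], y[2], y[3]); /* 3.0 5.0 7.0 9.0 */
    return 0;
}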

Pricing for COTS has simply made cluster supercomputing so cheap relative to SIMD and vector machines that dedicated SIMD/vector hardware is essentially dead. However, if you have a SIMD- or vector-type problem and try to cluster it, you hit the von Neumann bottleneck very fast, and the marginal gain from each extra node diminishes sharply.
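The standard way to put numbers on that diminishing-returns effect is Amdahl's law: speedup(n) = 1 / (s + (1 - s)/n), where s is the fraction of the job that stays serial (or stuck in communication). It's a generic model, not anything Roadrunner-specific, but even an assumed 5% serial share caps a cluster at 20x no matter how many nodes you add:

/* Amdahl's law with an assumed 5% serial/communication fraction. */
#include <stdio.h>

int main(void) {
    const double s = 0.05;                        /* assumed serial fraction */
    const int nodes[] = {1, 2, 8, 64, 512, 4096};
    for (int i = 0; i < 6; i++) {
        double speedup = 1.0 / (s + (1.0 - s) / nodes[i]);
        printf("%5d nodes -> %5.1fx speedup\n", nodes[i], speedup);
    }
    return 0;  /* output climbs toward, but never reaches, 20x */
}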