That has been the case for the last several years, but the evidence is not the performance, but rather the number of systems IBM has in the top 10. It will be interesting to see if IBM can keep up. I understand they abandoned a contract they had with the Univ. of IL, to Cray's benefit. Also, I wonder how easy it is to program IBM's BlueGene/Q. Anyone know?

According to the TOP500 list, BlueGene/Q has a theoretical max of 20 petaflops but achieves 16 Pflops in Linpack. Titan has a max of 27 Pflops, yet 'only' achieves 17 Pflops in Linpack. It seems much harder to utilize the full power of CPU/GPU systems; maybe Intel is right about Xeon Phi.
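Back-of-the-envelope, using the rounded figures above (a quick Python sketch; the exact TOP500 Rmax/Rpeak numbers carry more decimal places):

```python
# Linpack efficiency = Rmax (achieved) / Rpeak (theoretical peak).
# Figures below are the rounded petaflops numbers from the comment above.
systems = {
    "BlueGene/Q": (16.0, 20.0),  # (Rmax, Rpeak) in Pflops
    "Titan":      (17.0, 27.0),
}

def linpack_efficiency(rmax: float, rpeak: float) -> float:
    """Fraction of theoretical peak actually achieved in Linpack."""
    return rmax / rpeak

for name, (rmax, rpeak) in systems.items():
    print(f"{name}: {linpack_efficiency(rmax, rpeak):.0%}")
# BlueGene/Q comes out around 80%, Titan around 63%
```

So on these rounded numbers the homogeneous BlueGene/Q sits well above the CPU+GPU Titan in efficiency, which is the gap the comment is pointing at.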

Not necessarily - the systems with Xeon Phi are only averaging a similar efficiency, around 65% of theoretical peak. The 'advantage' of Xeon Phi has always been comparative ease of programming, so we'll have to see whether that actually plays out. The fact that there are already 6 entries in the TOP500 list using it, with Stampede at the #7 spot despite being only ~1/3 populated, is certainly a good start for a new product line.

My wild guess is that it is popular because Intel is providing grants to institutions willing to host these machines. Note, however, that they are not nearly as power efficient as the IBM or Cray systems. I would guess Intel will argue that power efficiency will improve with each generation of its accelerators.

Where do you get that it's easy to code for (other than Intel's engineers saying so)? Larrabee failed on the desktop and never came out (if memory serves) partly because it was thought to be complicated to keep fully loaded, and coders were expected to stick with the old, familiar techniques they already knew rather than optimize for it. It would have been highly programmable, but I never took that to mean easy.

""Larrabee silicon and software development are behind where we hoped to be at this point in the project," stated Intel in a email to DailyTech."

"Intel recognized the importance of software and drivers to Larrabee's success, leading to the creating of the Intel Visual Computing Institute at Saarland University in Saarbrücken, Germany. The lab conducts basic and applied research in visual and parallel computing."

If it were easy to code for, I'd think they wouldn't have been behind with their own software efforts. Intel has huge resources compared to NV/AMD, yet it was beaten by both to 2 TFLOPS, then 5 TFLOPS, and so on.

I agree with Ktracho... I think they're seeding these, not people investing in them (yet? could change). If Intel offers some free chips, you use them unless you're stupid :) On the other hand, I think Cray pays for Nvidia's ;)

Actually, what I really wonder is why we are paying for this project. What is ORNL going to do with it other than set records and boost the chipmakers' bottom lines? What simulations are we relying on that just have to run ten times faster than they did before? The only obvious one is weather prediction, where real-time results could matter a lot. So I would expect the weather service to own the computer - not ORNL.

At least it appears to be a success as a project, which is all too rare in government agencies.

Lots of scientific research can benefit from as much computing power as you can throw at it. You already mentioned one of the big ones: weather. Beyond that, aerodynamics research can use a huge amount of CPU power to run CFD, which will help future aircraft fly more efficiently and possibly faster. Coupled multiphysics simulations (aerodynamic, thermal, and structural simulations rolled into one) also require enormous computational power, and they allow for improved design of high-speed aircraft, spacecraft, and reentry vehicles for the space program. Nuclear simulations can both model nuclear detonations, including how aging has affected the nuclear arsenal (which is probably going to happen more on LLNL's Sequoia supercomputer than on this one, though that is a comparably powerful machine), and model nuclear reactors, which could be a substantial part of our move away from fossil fuels in the future.

More in the realm of basic rather than applied research, astronomical simulations also require a huge amount of computing power. Everything from galactic and large-scale structure formation in the early universe to stellar evolution - including simulations of the last moments of a massive star going supernova - requires far more computational power than even the most powerful computers today can provide.

They can even be used for simulations of things we take for granted - one good example is combustion simulation in internal combustion engines. Yes, we have perfectly functional engines already, but properly simulating the combustion takes an enormous amount of computing power, and once it can be simulated properly, it can be improved and optimized, helping reduce fuel consumption and emissions from new cars.

Anand covered the capabilities of ORNL's supercomputer in great detail. You should really read it and watch the videos he posted, because they are pretty interesting. Even as powerful as the supercomputer is, it isn't powerful enough for many things scientists still want to do.