IBM is currently developing a supercomputer it hopes will be able to deliver 20 petaflops

IBM announced ambitious plans to create a new supercomputer that will be 20 times faster than its current Roadrunner supercomputer. The new machine, dubbed "Sequoia," will operate at a whopping 20 petaflops, making it significantly faster than any of IBM's previous supercomputers.

The system will be housed and operated in a 3,422 sq. ft. building in Livermore. It is designed to be energy efficient, with IBM expecting it to consume about 6 megawatts of power, roughly the consumption of 500 American homes.

Sequoia may provide a 40- to 50-fold improvement in the country's ability to deliver critical predictions, including severe-storm forecasting, earthquake prediction and evacuation-route planning during national emergencies, IBM said in a statement.

The system will use 45nm processors with up to 16 cores per chip, and will have 1.6 petabytes of memory shared by 1.6 million cores. It will be 15 times faster than BlueGene/P while keeping the same footprint, with only a "modest" increase in power consumption.
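A quick back-of-envelope check of the figures quoted above (a reader's sketch, not official IBM numbers) shows what those totals work out to per core:

```python
# Per-core arithmetic from the quoted Sequoia specs.
PFLOPS = 20e15          # 20 petaflops, in floating-point ops per second
CORES = 1.6e6           # 1.6 million cores
MEMORY_BYTES = 1.6e15   # 1.6 petabytes of shared memory

flops_per_core = PFLOPS / CORES       # about 12.5 GFLOPS per core
mem_per_core = MEMORY_BYTES / CORES   # about 1 GB per core

print(f"{flops_per_core / 1e9:.1f} GFLOPS per core")
print(f"{mem_per_core / 1e9:.1f} GB per core")
```

In other words, the headline number comes from a very large count of relatively modest cores, each paired with about a gigabyte of memory.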

IBM's latest announcement comes just seven months after IBM delivered the fastest supercomputer, Roadrunner, to the U.S. Department of Energy's Los Alamos National Laboratory. The supercomputer was the first system to break the 1 petaflop barrier, clocking in at 1.026 petaflops.

IBM is also working on other supercomputers for the Defense Advanced Research Projects Agency (DARPA), which should be available before 2011.

Comments


More computing power is meaningful ... the faster the computer the less optimizing needs to be done by the program designer ... just look at desktop computers.

Even the bare operating system today cannot run at an acceptable speed (in many cases not at all) on early Intel and Motorola CPUs.

There is so much bloat eating processing power that it is ridiculous. A 64-bit version of one of the multitasking MS-DOS clones with GEM or GEOS for a GUI would be a speed demon on a modern entry-level computer barely able to run Windows. More useful would be an early Linux recoded to use 64-bit addressing, but maintaining the tight, fast coding of the original designed-for-the-i386 Linux. Even in the Linux world, bloat is stealing CPU cycles.

It's much easier to use the compiler libraries instead of custom high-speed code. When you do this for small, simple functions, you are trading ease of coding for speed. For complex functions, where the compiler is hiding a lot of machine-specific code variations, you need to look at the library source to make sure the resulting binary isn't dragging in code paths your program will never use. This is the weakness of generic .dll files.
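The tradeoff described above can be sketched in a few lines (a hypothetical illustration, not code from the article): a hand-rolled routine is easy to audit and carries no hidden library machinery, but the generic library version is usually faster because all the optimization effort went into it once, for everyone.

```python
def my_sum(values):
    """Hand-written accumulation: trivial to verify line by line,
    but slower than the interpreter's C-implemented builtin."""
    total = 0
    for v in values:
        total += v
    return total

data = list(range(1000))
# Both give the same answer; the builtin sum() is the "library" path.
assert my_sum(data) == sum(data)
```

For a tiny function like this, writing it yourself buys transparency at the cost of speed, which is exactly the ease-of-coding-versus-performance trade the comment describes.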

The science mainframes have always used computing power to overcome slow executables. FORTRAN was developed to make it easy to write a science program, but it was not designed to generate the fastest executables. It is still used today due to the number of legacy applications that are still useful, as well as allowing researchers who have learned FORTRAN to continue writing code without taking time off to learn a new language.

A really good programmer can take a few extra weeks or months to go through the flowcharts and final code and find places where a rewrite will accelerate the program. Researchers will instead let the program run a bit slower while they get other work done. Rather than spending time now to save run time later, they buy time on a faster machine and speed up their code the lazy way :P
