The IBM Sequoia BlueGene/Q supercomputer, installed at the Department of Energy's Lawrence Livermore National Laboratory, runs at 16.32 petaflops, using 1.6 million compute cores in 96 racks, each roughly the size of a large refrigerator, Parris said.

To grasp how fast that is, he said, "If you have to understand a drug interaction on the heart, on a one-petaflop computer it would take two years for one simulation; for a 10-petaflop it would drop to two days. A 16-petaflop can do it now in a few hours."

By comparison, the second-ranked computer on the list, Fujitsu's "K Computer" at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, runs at 10.51 petaflops, and No. 3, the Mira supercomputer--another IBM BlueGene/Q system, located at Argonne National Laboratory in Illinois--runs at 8.15 petaflops.

Aussies pull off supercomputer breakthrough...

Sydney-based team makes breakthrough in supercomputers
Fri, Sep 21, 2012 - An Australian-led research team yesterday said it had made a technological breakthrough in the race for a quantum supercomputer that could revolutionize data encryption and medicine.

Engineers from Sydney's University of New South Wales said they had created the first working quantum bit, or qubit - the fundamental unit of a quantum supercomputer - with the findings published in the latest edition of Nature. Lead researcher Andrew Dzurak said the team used a microwave field to gain unprecedented control over an electron bound to a single phosphorus atom that was implanted in a silicon transistor device.

They were able to both write and read information using the electron's spin, or magnetic orientation, which Dzurak said was a key advance towards realizing a silicon quantum computer based on single atoms. "This is a remarkable scientific achievement - governing nature at its most fundamental level - and has profound implications for quantum computing," he said. Quantum computing harnesses the power of atoms and molecules to perform calculations and store data, with the potential to be millions of times more powerful than the most advanced modern computers.

Dzuraks research partner Andrea Morello said quantum computers, which could run 1 million parallel computations at once compared with a desktop PCs single-computation capacity, could do things that were currently impossible. These include data-intensive problems, such as cracking modern encryption codes, searching databases, and modeling biological molecules and drugs, he said. Morello said the study was significant because it was the first time silicon had been used  a well understood and easily accessed material used in countless everyday electronics devices.

Uncle Ferd says is `cause Chinese computers got squirrel's workin' lil' abacuses in `em...

US Titan supercomputer clocked as world's fastest
12 November 2012 - The fastest supercomputer, Titan, was sixth on the list when it was compiled in June.

The top two spots on the list of the world's most powerful supercomputers have both been captured by the US. The last time the country was in a similar position was three years ago. The fastest machine - Titan, at Oak Ridge National Laboratory in Tennessee - is an upgrade of Jaguar, the system which held the top spot in 2009. The supercomputer will be used to help develop more energy-efficient engines for vehicles, model climate change and research biofuels.

It can also be rented to third parties, and is operated as part of the US Department of Energy's network of research labs. The Top 500 list of supercomputers was published by Hans Meuer, professor of computer science at the University of Mannheim, who has been keeping track of developments since 1986. It was released at the SC12 supercomputing conference in Salt Lake City, Utah.

Mixed processors

Titan leapfrogged the previous champion, IBM's Sequoia - which is used to carry out simulations to help extend the life of nuclear weapons - thanks to its mix of central processing unit (CPU) and graphics processing unit (GPU) technologies. According to the Linpack benchmark it operates at 17.59 petaflop/sec - the equivalent of 17,590 trillion calculations per second. The benchmark measures real-world performance - but in theory the machine can boost that to a "peak performance" of more than 20 petaflop/sec.
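For scale, the Linpack figures can be unpacked with a little arithmetic. This is illustrative only; the 17.59 petaflop/sec rate comes from the article, and the 20 petaflop/sec "peak" is the article's rounded figure:

```python
# Relating the article's petaflop/sec figures; illustrative arithmetic only.

PFLOPS = 1e15  # one petaflop/sec = 10^15 floating-point operations per second

linpack = 17.59 * PFLOPS           # Titan's measured Linpack rate
trillions_per_sec = linpack / 1e12
print(f"{trillions_per_sec:,.0f} trillion calculations per second")  # prints "17,590 ..."

peak = 20.0 * PFLOPS               # theoretical peak ("more than 20", per the article)
efficiency = linpack / peak
print(f"Linpack rate as a share of peak: ~{efficiency:.0%}")  # prints "~88%"
```

The gap between the sustained Linpack rate and the theoretical peak is normal: the benchmark counts only the operations a real workload manages to keep the cores fed with.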

To achieve this the device has been fitted with 18,688 Tesla K20x GPU modules made by Nvidia to work alongside its pre-existing CPUs. Traditionally supercomputers relied only on CPUs. CPU cores are designed to carry out a single set of instructions at a time, making them well suited for tasks in which the answer to one calculation is used to work out the next. GPU cores are typically slower at carrying out individual calculations, but make up for this by being able to carry out many at the same time. This makes them best suited for "parallelisable jobs" - processes that can be broken down into several parts that are then run simultaneously.
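The distinction above can be sketched in a few lines. This is a toy illustration in Python with hypothetical functions, not real GPU code:

```python
# Toy illustration of serial vs. parallelisable work.

def serial_chain(x, steps):
    # CPU-style work: each step needs the previous answer,
    # so the steps cannot run at the same time.
    for _ in range(steps):
        x = (x * x + 1) % 97
    return x

def independent_cell(i):
    # GPU-style work: each cell depends only on its own index,
    # so all the cells could, in principle, be computed simultaneously.
    return (i * i + 1) % 97

chain_result = serial_chain(2, 1000)             # one long dependent chain
cells = [independent_cell(i) for i in range(8)]  # a "parallelisable job"
```

A GPU can run thousands of `independent_cell`-style evaluations at once, while `serial_chain` gains nothing from extra cores - which is why a hybrid machine hands each kind of work to the better-suited processor.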

Mixing CPUs and GPUs together allows the most appropriate core to carry out each process. Nvidia said that in most instances its GPUs now carried out about 90% of Titan's workload. "Basing Titan on Tesla GPUs allows Oak Ridge to run phenomenally complex applications at scale, and validates the use of 'accelerated computing' to address our most pressing scientific problems," said Steve Scott, chief technology officer of the GPU accelerated computing business at Nvidia.

possum can do addin' an' subtractin' in his head real fast, although it don't always come out right...

Chinese supercomputer named as world's fastest
Jun 17, `13 -- China has built the world's fastest supercomputer, almost twice as fast as the previous U.S. holder and underlining the country's rise as a science and technology powerhouse.

The semiannual TOP500 official listing of the world's fastest supercomputers released Monday says the Tianhe-2, developed by the National University of Defense Technology in central China's Changsha city, is capable of a sustained 33.86 petaflop/sec. That's the equivalent of 33,860 trillion calculations per second.

The Tianhe-2, which means Milky Way-2, knocks the U.S. Department of Energy's Titan machine, which achieved 17.59 petaflop/sec, off the No. 1 spot. Supercomputers are used for complex work such as modeling weather systems, simulating nuclear explosions and designing jetliners. It's the second time China has been named as having built the world's fastest supercomputer. In November 2010, the Tianhe-2's predecessor, Tianhe-1A, held that honor before Japan's K computer overtook it a few months later.

The Tianhe-2's achievement shows how China is leveraging rapid economic growth and sharp increases in research spending to join the United States, Europe and Japan in the global technology elite. "Most of the features of the system were developed in China, and they are only using Intel for the main compute part," said TOP500 editor Jack Dongarra in a news release accompanying the announcement. "That is, the interconnect, operating system, front-end processors and software are mainly Chinese," said Dongarra, who toured the Tianhe-2 development facility in May.

A new supercomputer simulation of blood moving around the entire human body compares extremely well with real-world flow measurements, researchers say. The software uses a 3D representation of every artery that is 1mm across or wider, scanned from a single person. Its accuracy passed a first key test when physicists compared blood flow in the virtual aorta with that of real fluid in a 3D-printed replica. Flow patterns seen in the physical copy were a good match for the simulation.

This was the case even when the fluid passing through the plastic aorta - and the virtual blood passing through the simulated aorta - was moving in pulses, to simulate the way blood is pumped by the heart. "We're getting extremely close results both in the steady flow and the pulsatile, which is very exciting," lead researcher Amanda Randles, from Duke University in Durham, North Carolina, told BBC News. She presented the findings - including the comparison with a 3D-printed aorta - this week at the American Physical Society's March Meeting in Baltimore. The whole-body simulation itself was first unveiled at a computer science conference in November.


It is called "Harvey" - a tribute to the 17th-century physician William Harvey, who first discovered that blood is pumped in a loop around the body. At the core of Harvey's computer code is a 3D framework, built up from full-body CT and MRI scans of a single patient. "It's not a common practice," said Dr Randles of the full-body scan. "But if we have it, then we can extract the arterial network. We get a surface mesh representing the vessel geometry, then we decide what's a fluid node and what's a wall node, and then model fluid flow through there."
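The fluid-node/wall-node step Dr Randles describes can be sketched for the simplest possible geometry. This is a minimal sketch with an idealised straight tube and assumed dimensions - not Harvey's code, which classifies nodes against the patient-derived surface mesh instead:

```python
# Minimal sketch: classify grid nodes as fluid (inside the vessel) or wall.
# The "vessel" here is an idealised straight tube of radius R; all values assumed.

R = 4.0                    # tube radius in grid units (assumption)
ny, nz = 11, 11            # cross-section of the grid
cy, cz = ny // 2, nz // 2  # tube axis passes through the grid centre

# A node carries fluid if it lies strictly inside the tube's circular
# cross-section; every other node acts as a solid wall for the solver.
fluid_nodes = [(y, z) for y in range(ny) for z in range(nz)
               if (y - cy) ** 2 + (z - cz) ** 2 < R * R]
wall_count = ny * nz - len(fluid_nodes)
print(len(fluid_nodes), "fluid nodes,", wall_count, "wall nodes")
```

A flow solver then updates only the fluid nodes each time step, using the wall nodes to impose the no-slip condition at the vessel boundary.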

That modelling takes place on a supercomputer at the Lawrence Livermore National Laboratory in California. "It has 1.6 million processors, so it's one of the top 10 supercomputers," said Dr Randles, who worked in supercomputing at IBM before doing a physics PhD at Harvard, where she started work on Harvey. "The first stage was simply a proof of concept: can we actually model at this scale?" Most other simulations, she explained, have focussed on smaller sections of the circulatory system. "The largest, I think, before this, was maybe the aortal-femoral region - so, the aorta down to about the knees."
