Posted by Soulskill on Friday May 03, 2013 @04:21PM
from the let's-do-the-timewarp-again dept.

Nerval's Lobster writes "The 'Sequoia' Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL) has topped a new HPC record, helped along by a new 'Time Warp' protocol and benchmark that detects parallelism and automatically improves performance as the system scales out to more cores. Scientists at the Rensselaer Polytechnic Institute and LLNL said Sequoia topped 504 billion events per second, breaking the previous record of 12.2 billion events per second set in 2009. The scientists believe that such performance enables them to reach so-called 'planetary'-scale calculations, enough to factor in all 7 billion people in the world, or the billions of hosts found on the Internet. 'We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain, and validate models of complex systems than by our ability to execute them in a timely manner,' Chris Carothers, director of the Computational Center for Nanotechnology Innovations at RPI, wrote in a statement."
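The record is measured in simulated events per second. Time Warp is an optimistic *parallel* discrete-event protocol; as a point of reference only, here is a minimal sequential event loop (all names and numbers are illustrative, not taken from the benchmark) that shows what "events per second" counts: popping timestamped events from a priority queue, processing them, and scheduling successors.

```python
import heapq
import time

def run_des(num_events=200_000, seed_events=1000):
    """Minimal sequential discrete-event loop: pop the earliest event,
    process it, and schedule one follow-up event at a later virtual time."""
    now = 0.0
    # priority queue of (timestamp, event_id), seeded at virtual time 0
    queue = [(0.0, i) for i in range(seed_events)]
    heapq.heapify(queue)
    processed = 0
    while queue and processed < num_events:
        now, ev = heapq.heappop(queue)
        processed += 1
        # each event schedules a successor one virtual time unit later
        heapq.heappush(queue, (now + 1.0, ev))
    return processed, now

start = time.perf_counter()
n, final_time = run_des()
elapsed = time.perf_counter() - start
print(f"{n} events in {elapsed:.3f}s -> {n/elapsed:,.0f} events/sec")
```

A Time Warp engine runs many such loops concurrently, executing events optimistically and rolling back when a message arrives in a loop's virtual past; the single-threaded sketch above only defines the unit being measured.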

That raises the question of how a supercomputer could simulate what our species would do if it had access to a supercomputer that could simulate what our species would do if it had access to a supercomputer that could simulate...Out of memory [core dump]% rm -rf *


Which of course begs the question...

Questions I've posed to astrophysicist friends that have never gotten good answers:

1) Why should we not think of galaxies as simply accretion disks? I.e., we're all circling the giant 10M-solar-mass black hole drain that's the center of the galaxy.

2) Why is it irrational for me to think of the big bang as basically the opposite of a black hole, and how do we know it's not continuing to spew matter?

Other factors: based on the parallelism model used, and the current state of electronics, what is the maximum number of cores that can be included before adding nodes starts to degrade performance?

(E.g., it takes x time to transmit data over a bus (any bus). How many cores before the time penalty for transmitting the data over the bus to the allocated processor becomes greater than the penalty for just waiting for a processor to become free?)

There *must* be an upper bound on parallel computing potential before we need pure unobtanium semiconductors.

It's most likely dependent on the problem size / nature. Apparently, they were able to achieve super-linear scaling on their particular problem because more and more of the data structures they used fit into CPU cache. For problems that use less memory or more inter-processor communication, I imagine the sweet spot would be very different.
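The cache effect described above can be sketched with another toy model (all sizes and latencies below are invented for illustration, not measured from Sequoia): once the per-core slice of the data fits in cache, per-access cost drops sharply, so speedup can exceed the core count.

```python
def speedup(n_cores, data_mb=512.0, cache_mb=16.0,
            hit_time=1.0, miss_time=10.0):
    """Toy model of cache-driven super-linear speedup (numbers hypothetical).
    Each core holds data_mb / n_cores; once that slice fits in its cache,
    per-access cost drops from miss_time to hit_time."""
    per_core = data_mb / n_cores
    cost = hit_time if per_core <= cache_mb else miss_time
    t1 = data_mb * miss_time      # single core: data never fits in cache
    tn = per_core * cost          # per-core time; cores run in parallel
    return t1 / tn

print(speedup(16))   # per-core slice still too big: ordinary linear speedup
print(speedup(64))   # slice fits in cache: speedup exceeds the core count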

That would be close to running on pure unobtanium. (Energy costs would become untenable to maintain the quantum states of many thousands of entangled particles and keep them cold.) Also, quantum computing can only efficiently serve a subset of parallelizable tasks, and it is a poor fit for general parallelism as it stands. (Improvements may fix this in time, however.)

Google should invest in such things. Data is worthless if you can't do something useful with it. Then Google, in the best case, could become something like a Mind [wikipedia.org], or at least an embryo of one. In the worst case... Facebook... no! We're doomed!