Posted by CmdrTaco on Tuesday September 14, 2010 @12:55PM
from the delete-the-pool-ladder-quick dept.

An anonymous reader writes "Over in the UK, Durham University is tasking its supercomputing cluster with nothing less than recreating how galaxies are born and evolve over the course of billions of years. Even with 800 AMD processor cores at its disposal, the university is still hitting the limits of what is possible."

That's correct. In a simulation like this, the most important physical effect to model is gravity. Gravity doesn't have a range. For each timestep in the simulation, all the mass in the simulation has to interact with all the other mass in the simulation. There are a variety of numerical tricks that people who write these codes use to make the problem feasible, so that the computation time required doesn't scale as N^2, with N = number of particles in the simulation. But even with these tricks, to calculate…
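
To make the scaling concrete, here is a minimal Python sketch (names and the softening value are illustrative, and G is set to 1) of the naive direct-sum force calculation that those tricks exist to avoid:

```python
import numpy as np

def direct_sum_accel(pos, mass, eps=0.01):
    """Naive O(N^2) gravitational acceleration (G = 1 units): every
    particle interacts with every other one. eps is a softening length
    that avoids infinite forces at zero separation."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                    # vectors to every particle
        r2 = (d * d).sum(axis=1) + eps**2   # softened squared distances
        r2[i] = np.inf                      # exclude self-interaction
        acc[i] = (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

# This costs N^2 interactions per timestep. Tree codes such as Barnes-Hut
# treat groups of distant particles as single pseudo-particles, cutting the
# cost to roughly N log N at the price of a small, controlled error.
```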

That's a rather 'tricky' statement, don't you think? First, I'll agree with you in that gravity doesn't technically reach zero. But it does appear to have to propagate. In a system many thousands of lightyears across, the propagation delay would be significant.

Not only that, but wouldn't the galaxy have expanded several million, if not billions, of miles in the 27,000 years it would take for light to travel from one end to the other? (I'm not trusting my back-of-the-envelope calculations…)

That's a rather 'tricky' statement, don't you think? First, I'll agree with you in that gravity doesn't technically reach zero. But it does appear to have to propagate. In a system many thousands of lightyears across, the propagation delay would be significant.

I'm not sure how this makes my statement about gravity not having a range "tricky", but it's definitely something worth thinking about. Several thousand lightyears is actually a pretty small-scale simulation; the simulation volume wouldn't be large enough to contain a typical bright galaxy. Cosmological simulations incorporating galaxy formation typically use volumes tens or even hundreds of megaparsecs (Mpc) on a side. Imagine you're working with a 100 Mpc per-side cube (for the cognoscenti: taking h=1)…
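
For a feel of the scales involved, here's a quick back-of-the-envelope check (assuming the standard conversion of about 3.26 million lightyears per Mpc and an age of the universe of roughly 13.7 billion years):

```python
# Back-of-the-envelope: light-crossing time of a 100 Mpc simulation box.
MPC_IN_LY = 3.2616e6                    # lightyears per megaparsec
box_mpc = 100.0                         # box side discussed above
crossing_yr = box_mpc * MPC_IN_LY       # distance in ly = travel time in yr
print(f"Light-crossing time: {crossing_yr:.2e} years")            # ~3.3e8
print(f"As a fraction of ~13.7 Gyr: {crossing_yr / 13.7e9:.1%}")  # ~2.4%
```

So light takes a few percent of the age of the universe just to cross the box, which is why finite propagation speed is at least worth thinking about at these scales.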

In a simulation like this, the most important physical effect to model is gravity.

No, it's not. Or it is. Actually, you can't say until you run it and compare the results with other setups and, better yet, with what you see in the Cosmos.

Imagine that every galaxy is, more or less, surrounded by all the others; thus, unless you have superclusters in proximity, gravity cancels out. Seen as a system with components, though, the gravitational interactions between individual stars (from which the shape of the galaxy emerges) are important. Yet forces such as e/m might be mild, but operate…

Dr Lydia Heck, the ICC's computer cluster manager, said the ICC had maxed out its supercomputing cluster's processors and memory by running a simulation of the effect of dark matter on how galaxies are formed.

And the maxed-out cluster is not even using large-scale models.

Physicists have to simplify the cosmological models they use in order to get ones that produce data sets small enough to be accurately processed by the 64-bit chips in the supercomputing cluster, and which can fit into the cluster's available memory.
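
A rough illustration of why memory becomes the binding constraint (this assumes a minimal layout of double-precision position, velocity, and mass per particle; real codes store considerably more):

```python
# Floor on memory per particle: position (3) + velocity (3) + mass (1),
# all as 64-bit doubles. Real codes also store IDs, tree links, hydro
# quantities and more, so actual usage is several times higher.
BYTES_PER_PARTICLE = 7 * 8

for n in (1e8, 1e9, 1e10):
    print(f"N = {n:.0e}: at least {n * BYTES_PER_PARTICLE / 2**30:,.0f} GiB")
# N = 1e+08: ~5 GiB; N = 1e+09: ~52 GiB; N = 1e+10: ~522 GiB
```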

The Cosmology Machine (COSMA) was first switched on in July 2001. From the original system, only 2 TByte of dataspace is still online. 64 SunBlade 1000s with a total of 64 GByte of RAM were donated by the ICC in February 2006 to the Kigali Institute for Science and Technology in Kigali, the capital of Rwanda. Sun Microsystems paid for their transport by air.

In February 2004, QUINTOR was installed. QUINTOR consists of 256 SunFire V210s with…

No, we just need the Doctor to loan us the TARDIS. Then scientists could learn how the universe started, that we aren't alone, and maybe even about the threats to our planet, including the Daleks, the Cybermen, and the real policemen of the universe: the Judoon, the real enforcers of the universe.

Let's simulate a single cell, then an organism, then aging. Then we can start extending our lifespan. THEN we can start living, not just this handful of years between being a powerless child and a weak, aging adult. Then you can worry about galaxies.

First of all, people are trying to do this already, probably on much more powerful hardware. Second, simulating an organism to the level that we need is probably a lot more demanding than simulating a galaxy to the level that these people need.

Let's simulate a single cell, then an organism, then aging. Then we can start extending our lifespan. THEN we can start living, not just this handful of years between being a powerless child and a weak, aging adult. Then you can worry about galaxies.

What does an astrophysicist know about cellular biology? Probably about as much as a biologist knows about astrophysics.

Compounding that, we wouldn't have made a fraction of the scientific progress to date if we had focused on a single discipline until it was mastered…

I wonder if (theoretically) in a simulated galaxy, in which all particles and physical rules have been considered, some kind of simulated life-form can evolve from strange interactions of those particles and rules. Then, would this simulated life differ in any way from real life (reference to "The 13th Floor")? I mean, for the living forms inside the simulation it will be like real life. I'm wondering whether, in theory, there is anything to contradict this. Is it possible?

THEN we can start living, not just this handful of years between being a powerless child and a weak, aging adult. Then you can worry about galaxies.

Nature already has a process for eternal renewal - death and birth. Our species as a whole has no pre-determined expiration date, and the ability to pass information through the generations. What difference does it really make whether it's us individually that live on, or our descendants?

I am not my descendants. I'd love to see the new discoveries that will be made about our place in the cosmos, the new technologies and art and culture that will be created in the distant future for myself, as would a great many others.

There's a big difference in the problem. Namely, it's possible to work at a coarser level of granularity when dealing with galaxies. You might not be able to simulate individual stars, but you can simulate star clusters and the clumps of dark matter to get approximations. With the brain simulation, it's not possible to abstract away as much detail, hence the higher hardware requirements.
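
To put a number on that coarse graining, here's a toy estimate (round illustrative figures, not taken from the Durham runs):

```python
# Toy coarse-graining estimate: each simulation "particle" stands in
# for a huge lump of real mass. Round figures, purely illustrative.
halo_mass_msun = 1e12    # a Milky-Way-sized halo, dark matter included
n_particles = 1e7        # a generous (for 2010) per-galaxy particle count
print(f"~{halo_mass_msun / n_particles:.0e} solar masses per particle")
# ~1e+05 solar masses: a star cluster's worth, not an individual star
```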

When you simulate a continuous medium by dividing it into small space and time steps, there's a speed "c" that's equal to the space step divided by the time step which cannot be exceeded by anything in the simulation.
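
In code, that built-in speed limit is just the ratio of the two steps (a trivial sketch; the numbers are arbitrary):

```python
# The lattice speed limit in numbers: nothing can move more than one
# cell per step, so the grid imposes c_sim = dx / dt.
dx = 0.1     # space step (arbitrary units)
dt = 0.01    # time step
print(f"c_sim = {dx / dt:g} length units per time unit")
# Fluid codes turn this around as the CFL condition: choose dt small
# enough that every physical signal speed stays below dx / dt.
```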

The simulation argument [simulation-argument.com] paper proposes a philosophical argument about this sort of thing. The consequences that they come up with are pretty interesting. Of course, there are arguments [pagesperso-orange.fr] against [imminst.org] such a configuration of the universe as well...

Even with 800 AMD processor cores at its disposal, the university is still hitting the limits of what is possible.

Meaningless, uninformed journalist BS filler puff. What is possible is simulating every subatomic particle in the universe at Planck-time intervals for the total age of the universe, repeatedly, for an infinite combination of different cosmological constants to see what you get. That will never be done, of course.

But how would you set up the computer to compute its own particles? If you really want a truly accurate representation of the entire universe you would need to include the computer doing this... I sense a paradox coming on.

What happens when the simulation gets to the point where humanity is 'advanced' enough technologically to try to model the universe with supercomputers?
It's an obvious infinite loop that will cause the universe to crash... and they are professors? Sheesh.

Eight hundred cores of any type is TINY as far as supercomputers go. Most large US universities generally have at least one (if not several) supercomputers that are multiple times (if not an order of magnitude) larger than this. Never mind that most research projects on supercomputers NEVER use the whole system at once - it's more of a timeshare thing where you book however many threads for a certain length of time.
Yes, the prospect of the research is interesting. That being said, other than that, there…

Let them simulate the Milky Way. I'm curious as to whether or not they will be able to simulate the genesis of life on Earth. That will be interesting.
Hey, maybe if they let the simulation run long enough, the simulated earthlings will make their own simulation.

There must be a principle out there somewhere that says the universe cannot be accurately simulated by anything smaller than the universe. And if there isn't, can I invent it and call it The Principle of Computational Hopelessness?

Technically, it would depend on the properties of the universe. If it's Turing complete, we can already simulate it given enough time and/or space. Since we're not doing a simulation of the whole timeframe of the universe but only its beginning, time (and hence space) is not an issue.

Be more careful with article summaries. They're worse than newspaper headlines these days. The "Over in the UK, Durham University is tasking its supercomputing cluster with nothing less than recreating how galaxies are born and evolve over the course of billions of years" could describe any of the countless galaxy evolution simulations that have been done for a couple of decades already at various places, and gives no indication as to what's new about this instance. In other words, the headline is at best absolutely uninformative and at worst misleading.

8 cores = 2-3 cores + 4 GB for the game's CPU and memory requirements, and 5-6 cores + 12 GB to model a GPU with dedicated memory access. Not sure if GPU modelling has ever been done before, but I bet it's possible with that much CPU access and memory. Games used to run with software graphics acceleration back when I was in grade school. I remember I bought my first dedicated graphics card back in the Voodoo 3 days and could start selecting "Hardware Acceleration" in PC games.

You seem to forget that emulating a modern GPU would require something like a hundred cores from a generic CPU.

I don't believe that at all; it sounds like marketing-speak. Intel is using, what, eight CPUs to do real-time RAY TRACING, and that's MORE demanding than the rasterization paradigm that modern GPUs are based on. Certainly a GPU is more specialized and efficient than a similar-scale general-purpose CPU, but I think the performance ratio is closer to 4:1 than 100:1.

IIRC, those CPUs have some GPU components built into the die, which is why that is possible. In a straight competition on most normal GFX-rendering-type equations, the actual ratio is closer to 100:1 than 4:1.

Please correct me if I'm wrong; however, it very much depends on the equations. There are things that GPUs can do but are bad at, and things they can't do at all.

Either way, having a good setup to make use of the strengths of both types of processor is going to be the optimal solution.

I don't believe that at all; it sounds like marketing-speak. Intel is using, what, eight CPUs to do real-time RAY TRACING, and that's MORE demanding than the rasterization paradigm that modern GPUs are based on. Certainly a GPU is more specialized and efficient than a similar-scale general-purpose CPU, but I think the performance ratio is closer to 4:1 than 100:1.

It's all in the design. (YAY! Car analogy time)

You have to transport 1,000 people from NY to Miami faster than another person. Complete the challenge…

You can think about whatever ratio makes you happy; that doesn't change the actual ratios shown when implementing algorithms on a GPU vs. a general-purpose CPU. A GPU does hundreds of very simple calculations in parallel; the CPU doesn't. It's quite simple.

Actually, comparing FLOPS, about 10 Core i7 980 XE (107 GFLOPS) processors could handle the work of a GTX 285 (1062 GFLOPS). Add another one to orchestrate the whole mess, and you should be good to go. Of course, your latency will most likely be significantly increased...
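
For what it's worth, the parent's arithmetic checks out on its own quoted peak figures:

```python
# The parent's ratio, from its own quoted peak figures:
gtx285_gflops = 1062
i7_980xe_gflops = 107
print(f"{gtx285_gflops / i7_980xe_gflops:.1f} CPUs per GPU on raw FLOPS")
# ~9.9, hence "about 10"; memory bandwidth and latency are ignored entirely
```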

On a physical problem that can be described by a handful of equations (as in a per-particle simulation of gravity and electromagnetism, or a per-mesh-element simulation of a fluid), the calculations are very simple; the caveat is that they have to be iterated a couple of trillion times before getting a result. GPUs are designed to do exactly this (probably as a collateral of how they were designed to handle vertices). Why…
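
A tiny Python/NumPy sketch of that pattern (shapes and numbers here are illustrative): one trivial operation applied independently to every element, which is exactly what maps onto a GPU's many threads.

```python
import numpy as np

# The pattern GPUs are built for: the same trivial arithmetic applied
# independently to millions of elements. A toy velocity update over a
# million particles:
n = 1_000_000
vel = np.zeros((n, 3))
acc = np.random.randn(n, 3)    # stand-in for the per-particle force result
dt = 0.01

vel += acc * dt   # one data-parallel statement updates every particle;
                  # on a GPU each element would map to its own thread
```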