
DoctorBit writes "NASA scientists have achieved a breakthrough in simulating the merging of two same-size, non-spinning black holes, based on a new translation of Einstein's general relativity equations. The scientists accomplished the feat by using some brand-new tensor calculus translations on the Linux-running, 10,240-processor Itanium-based SGI Altix 'Columbia' supercomputer. These are reportedly the largest astrophysical calculations ever performed on a NASA supercomputer. According to NASA's Chief Scientist, 'Now when we observe a black hole merger with LIGO or LISA, we can test Einstein's theory and see whether or not he was right.'"

A year ago Itanium 2 actually led the floating-point benchmarks; they do, after all, have a vector-supercomputer-style architecture. But of course, with the delays and the scaled-back operating frequencies, the dual cores still aren't out yet, and the clock will be about the same 1.6 GHz as the older models. So unless Intel does something really surprising and stellar, I don't see Itanium leading in the supercomputing field anymore, and SGI is stubbornly refusing to consider AMD, so the chip and SGI may soon be toast.

HL2 is single-threaded, so the performance would be the same as on one Itanium. Also, x86 code has to be emulated on Itanium = slow. Oh, and no GPU, which means pixel/vertex shaders would have to run in software. Educated guess: 0.1 fps.

There are experimenters. The guys who ran the simulation were experimenters. There are theoreticians. Einstein was a theoretician. He asked relatively simple questions and followed the logical consequences. I suspect that having to use a computer would have been a giant distraction and might have delayed or prevented the theory of relativity.

There are experimenters. The guys who ran the simulation were experimenters.

Nope. The guys who run these simulations are all theoretical physicists and/or mathematicians. The various forms of Einstein's equations suitable for simulation on a computer are all based around extremely tricky systems of elliptic or hyperbolic partial differential equations. The guys who actually ran the simulations were tech monkeys at Goddard. The guys who designed and developed the code and the mathematical analysis were

People with addictive personalities will find something to be addicted to.

It is important to have self-awareness that this is an issue and put hard-line limits on things, including drinking or playing a game. "I will only play 3 hours a day" or "I will stop playing at midnight". Hard stops are usually easier to deal with than "I won't play too much" as that leaves too much open for interpretation, which is bad if you have an addictive personality.

It is important to have self-awareness that this is an issue and put hard-line limits on things...


Although I completely agree with you, you left out one point. While adults are often able to make the kind of analysis that you are suggesting, children generally are not.

This is not a perfect world and if someone can take advantage of the imperfections by selling Tobacco, Alcohol, Games, Christianity and drugs to children who are unprepared to recognize their addictive and dangerous effects, they will. Parents are often not in the position to recognize these problems and help their children learn to han

While there may be something to the concept of addictive personality [netdoctor.co.uk] or genetic predispositions [unc.edu], other important issues are easy access to the drug and the engineering of the drug to the individual.

Most work environments don't allow drug dealers to visit your workstation, but screening out gaming is hard. More alarmingly, it is only a matter of time before games modify the individual user experience to maximize time spent playing them.

"Rotating black holes are thought to be formed in the gravitational collapse of a massive rotating star or from the collapse of a collection of stars with an average non-zero angular momentum. Most stars rotate and therefore it is expected that most black holes in nature are rotating black holes." Rotating black hole - Wikipedia [wikipedia.org]

According to theory, the event horizon of a black hole that is not spinning is spherical, and its singularity is (informally speaking) a single point. If the black hole carries angular momentum (inherited from a star that is spinning at the time of its collapse), it begins to drag space-time surrounding the event horizon in an effect known as frame-dragging. This spinning area surrounding the event horizon is called the ergosphere and has an ellipsoidal shape. Since the ergosphere is located outside the event horizon, objects can exist within the ergosphere without falling into the hole. However, because space-time itself is moving in the ergosphere, it is impossible for objects to remain in a fixed position. Objects grazing the ergosphere could in some circumstances be catapulted outwards at great speed, extracting energy (and angular momentum) from the hole, hence the name ergosphere ("sphere of work"), because it is capable of doing work. Once all the angular momentum is extracted from a spinning black hole, what do you think happens? It stops spinning.
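For the non-spinning limit discussed above, the ergosphere degenerates onto the horizon. A quick numerical sketch (my own illustration, not from TFA), using the standard Kerr formulas in geometric units G = c = 1, with M the mass and a the spin parameter:

```python
import math

def horizon_radius(M, a):
    """Outer event horizon of a Kerr black hole (geometric units G = c = 1)."""
    return M + math.sqrt(M**2 - a**2)

def ergosphere_radius(M, a, theta):
    """Outer boundary of the ergosphere at polar angle theta."""
    return M + math.sqrt(M**2 - (a * math.cos(theta))**2)

M = 1.0
for a in (0.0, 0.9):
    r_plus = horizon_radius(M, a)
    r_eq = ergosphere_radius(M, a, math.pi / 2)   # equatorial bulge
    r_pole = ergosphere_radius(M, a, 0.0)         # touches horizon at the pole
    print(f"a={a}: horizon={r_plus:.3f}, ergosphere eq={r_eq:.3f}, pole={r_pole:.3f}")
```

For a = 0 both radii collapse to 2M everywhere: no ergoregion, which matches the "static black hole" picture the comment describes.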

Hmm, this raises the question: do the simulations for a non-spinning black hole approximate a very slowly spinning BH, or is it a step function, spinning vs. non-spinning? Since just about everything in the universe has some angular momentum, you'd think all BHs would be spinning, with the older ones just doing it very slowly.

Another question would be: can the ergosphere apply energy to the BH, making it spin faster? I.e., if a body crashes into the ergosphere almost grazing, but is captured, does it tr

So if the BH got enough hits in the direction opposite to its spin, it would slow down, and at the instant it got enough it would stop spinning and the ergosphere would disappear, making it forever a static black hole. Since these things have been around for billions of years, a few of them probably have stopped spinning.


Once they have some experience with this simulator I'm sure they will move on to spinning black holes.

True. In fact, some steps have already been taken in this direction by other groups. For instance, my group at U.T. Brownsville -- whose non-spinning simulations were published simultaneously with the NASA results (but we don't have the same PR machine) -- have put up a preprint on the orbits of black-hole binaries where the individual holes have spins parallel to (or antiparallel to) the orbital angular momentum.

"NASA scientists have achieved a breakthrough in simulating the merging of two same-size non-spinning black holes based on a new translation of Einstein's general relativity equations. The scientists accomplished the feat by using some brand-new tensor calculus translations on the Linux-running, 10,240 Itanium processor SGI Altix Columbia supercomputer. These are reportedly the largest astrophysical calculations ever performed on a NASA supercomputer. According to NASA's Chief Scientist, "Now when we observe a black hole merger with LIGO or LISA, we can test Einstein's theory and see whether or not he was right.""

The simplest tensor calculus equations require thousands of lines of computer coding. The expansions, called formulations, can be written in many ways. Through mathematical intuition, the Goddard team has found the appropriate formulations to lead to suitable simulations.

More like: did they guess right with their "mathematical intuition" in creating the computer code? Or did they just muck with it until they got a pretty video that wouldn't crash the system? This could be just another NASA problem with

First, with regard to Intel, there is essentially no risk from this, as the math libraries used by everyone involved in such work have test exercises that verify the accuracy of the hardware. It's not uncommon to run every calculation on two physical processors to assure that no single processor malfunction can introduce a significant error.

Second, with regard to the correct approximation of Einstein's equations: either the approximation is exact, in which case there is no risk, or the error size for the approximation is closely known, in which case when we observe the black hole merger we will have one of three conditions: confident to some error size that he was right (actual results match simulation, but we can't rule out his theory being slightly wrong at a finer level), confident that he was wrong (actual results lie outside the error range for the simulation), or no result (actual results indicate the possibility he was wrong, but lie within the error range).
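The three outcomes above can be sketched as a tiny decision function. The `match_frac` cutoff is my own illustrative assumption, since the comment doesn't say where a "confident match" ends and "no result" begins:

```python
def compare_to_simulation(observed, simulated, sim_error, match_frac=0.5):
    """Classify an observed value against a simulated prediction.

    sim_error  -- the known error bound of the simulation
    match_frac -- illustrative cutoff (an assumption, not from the post):
                  deviations below this fraction of the error bound
                  count as a clean match
    """
    deviation = abs(observed - simulated)
    if deviation > sim_error:
        return "inconsistent: theory disfavoured"
    if deviation <= match_frac * sim_error:
        return "consistent to within the simulation's error"
    return "inconclusive: deviation lies inside the error range"

print(compare_to_simulation(10.1, 10.0, 1.0))   # clean match
print(compare_to_simulation(12.0, 10.0, 1.0))   # outside the error range
print(compare_to_simulation(10.8, 10.0, 1.0))   # the "no result" case
```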

Spin is one of the fundamental identifying characteristics of a black hole (in addition to mass and electric charge). Also, if I understand correctly, not all of the mass is necessarily contained at the singularity.

As my professor back in P1 said, "There are three types of orbits: elliptical, parabolic, and hyperbolic. There is no suck orbit." Thus, no matter how many times you use the word thus, you still have no clue what you're talking about.

I cite this paper because Larry Smarr is one of the Nasa panelists for this project, and I heard his talk on this paper at the University of Texas at Austin in the late 1970s. Come to think of it, I remember seeing one of the other panelists, Joan Centrella, at the same talk.

OK, I'm no general relativist, but I am a computational physicist -- what could the article possibly mean when it says earlier attempts were "plagued by computer crashes -- the equations were far too complex"?

I can imagine a situation where a poorly-arranged computation of an equation might give you an underflow in an intermediate result, or where a badly-arranged summation might give you noise. But crashing the computer? Sounds more like an array-bounds error, which can happen no matter how simple the equations are.

A major technical problem of integrating field equations is in the propagation of /constraints/ on the components. I.e., GR describes the time evolution of a tensor for which all the components are not independent -- for instance, they obey Bianchi identities. http://mathworld.wolfram.com/BianchiIdentities.html [wolfram.com]

Simple numerical integrators destroy these identities at order dt^n for some small but finite n. Run the code forwards and one can find finite-time blow-ups due to the stepping algorithm -- however, even after a single time step the numerical solution has unphysical aspects
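A much simpler system shows the same pathology. Here is a toy sketch (not GR -- just a harmonic oscillator, my own illustration) where forward Euler violates the conserved energy, the analogue of the constraints: each step multiplies the energy by exactly (1 + dt^2), so the violation compounds rather than averaging out:

```python
# Forward-Euler integration of a harmonic oscillator (x'' = -x).
# The energy E = (x^2 + v^2)/2 is an invariant of the exact dynamics;
# a naive stepper violates it at order dt^2 per step, and the
# violations compound over the run.
def euler_energy_drift(dt, steps):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return 0.5 * (x * x + v * v)   # the exact answer stays 0.5

for dt in (0.1, 0.01):
    E = euler_energy_drift(dt, int(round(10.0 / dt)))   # integrate to t = 10
    print(f"dt={dt}: energy = {E:.4f}  (exact: 0.5)")
```

Shrinking dt slows the drift but never removes it, which is why constraint-preserving formulations matter so much for long black-hole evolutions.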

OK, I'm no general relativist, but I am a computational physicist -- what could the article possibly mean when it says earlier attempts were "plagued by computer crashes -- the equations were far too complex"?

The article is a bit wooly. Basically, Einstein's equations are a system of constrained partial differential equations. The horrible thing about them is that they are all coupled and nonlinear too. There exist several mathematical ways to analyse these sorts of things in the continuum, mainly cent

If the simulation results seem to say that Einstein was wrong, that still doesn't prove that he was wrong, because they have not actually merged two black holes together in the real world. Simulation != real world. If there are any flaws in the assumptions, parameters, or algorithms they use to perform the simulation, then it invalidates the whole exercise.

1) This is a first -- no other group has achieved this before. yay! (after decades of work!)

2) This is hard for the following reasons:
a) since you are doing calculations near (or on/in) a black hole, you tend to get a lot of infinities, which 1) crash your code and 2) exacerbate your errors
b) for most simulations, your grid remains fixed. For black holes though, they *deform* the spacetime around them -- which means your grid points have to move (in a non-predictable manner)!
c) what happens when two black holes merge is not well understood (ie, what should happen?), so this is new science
d) initial data is hard to get and unreliable. If two black holes are far apart, you can write an exact solution (at least within some error), but to get them close to where they are interacting, you pretty much need this kind of simulation anyways. This is such a large problem that there are only a handful (a dozen or two?) initial data sets currently.

3) Everything is written in Fortran! :) (Some competing groups use Cactus, which is C++-based, although it also allows C and Fortran.)

5) There are several approaches to some of the issues above, from puncture splitting (using a different spacetime metric like 1/r vs r to remove the singularity), excision (not evolving inside the event horizon, since that's not "interesting" anyways), and other methods. Our new method actually doesn't need any of those "tricks", which is pretty interesting.
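As a toy 1D illustration of the puncture idea (my own sketch, assuming the Brill-Lindquist conformal factor psi = 1 + M/(2r) and the common substitution chi = psi**-4): the raw factor diverges at the puncture r = 0, while the substituted variable goes smoothly to zero, so a grid point can sit arbitrarily close to the singularity without blowing up:

```python
# The conformal factor psi = 1 + M/(2r) diverges at the puncture r = 0,
# but the substituted variable chi = psi**-4 stays finite (it -> 0),
# which is the regularization trick in a nutshell.
M = 1.0

def psi(r):
    return 1.0 + M / (2.0 * r)

def chi(r):
    return psi(r) ** -4

for r in (1.0, 1e-3, 1e-9):
    print(f"r={r:g}: psi={psi(r):.3e}  chi={chi(r):.3e}")
```

Far from the hole chi approaches 1 (flat space), and near the puncture it vanishes instead of overflowing.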

6) This data helps drive the LISA and LIGO projects from a theoretical standpoint -- basically knowing what kind of gravitational waves they should be seeing, and to correlate what they see and what their data may represent (ie, if you see a waveform like this, this means that it's two merging black holes, vs just co-rotating black holes).
6a) We study black holes b/c they are pretty much the only thing that'll generate detectable gravitational waves.
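A cartoon of the detector-side use: slide a known (simulated) waveform template along a noisy strain series and pick the offset with the highest correlation. This is only a sketch -- real searches use frequency-domain, noise-weighted matched filters -- and every waveform and number here is made up for illustration:

```python
import math, random

# Toy matched filter: find where a known template best lines up with
# noisy data. The "template" stands in for a simulated merger waveform.
random.seed(42)

template = [math.sin(0.3 * i) * math.exp(-0.02 * i) for i in range(64)]  # toy damped burst
signal = [0.2 * random.gauss(0, 1) for _ in range(256)]                  # toy detector noise
inject_at = 100
for i, t in enumerate(template):
    signal[inject_at + i] += t                                           # bury the burst in noise

def best_match(data, tmpl):
    """Return the offset where the sliding dot product is largest."""
    scores = [
        sum(d * t for d, t in zip(data[off:], tmpl))
        for off in range(len(data) - len(tmpl) + 1)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

print("template recovered near offset:", best_match(signal, template))
```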

What is useless now will someday be useful. Exempli gratia (and it's way out there):

Using this new data, someone observes a black hole merger. It doesn't fit the data. Relativity is redone, so to speak. Someone sees a great way to unify Relativity and quantum mechanics because of the new formulation. Bam. Like that, unified theory of everything. Those spinning superconductors generating magnetogravitic fields are understood. Artificial gravity and anti-gravity are discovered. Moon-flights are near ch

What is the actual outcome from this research? More knowledge about the universe and how it might work.

Will this help create more energy efficiency in the world? Maybe; who can say what future developments and understanding of this area of physics will bring.

Will it help us find technology that humanity can actually use to make a better society? Maybe, see above. It depends on the definition of "better". When general relativity was first thought of in 1915 there was no application for the average person. Today, GPS relies on general relativity.
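The GPS point is easy to sanity-check numerically. A back-of-envelope sketch (my own illustration, using rounded textbook constants) combining the gravitational blueshift of an orbiting clock with its special-relativistic slowdown:

```python
import math

# Rounded published values; results are ballpark, not survey-grade.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c  = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6        # mean Earth radius, m
r_gps   = 2.6560e7       # approximate GPS orbital radius, m

grav = GM / c**2 * (1.0 / r_earth - 1.0 / r_gps)   # orbiting clock runs fast
v2 = GM / r_gps                                     # circular orbital speed squared
sr = -v2 / (2.0 * c**2)                             # time dilation: clock runs slow

seconds_per_day = 86400.0
print(f"net drift: {(grav + sr) * seconds_per_day * 1e6:.1f} microseconds/day")
```

This lands near the commonly quoted figure of about 38 microseconds per day, which GPS must correct for or positions would drift by kilometres daily.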

Will it increase our safety, or decrease the power of madmen and dictators? The obvious answer is probably not. And while these are important questions, this one is not topical in this discussion.

You can't do meaningful experiments without some idea of what the theory says will happen. Numerics of this sort provide that for complex physical cases which are essentially impossible to work out with pen and paper. So yes, this is a step towards getting knowledge of the universe and how it might work. Also, understanding does NOT require the tie to experiment, since you can have mathematical understanding of a particular theory independent of whether that theory properly models reality. For instance, I can

How about making science progress by testing a part of one of the most important theories in physics? It's not my funding; however, I'd love my country to invest more in science, even if only for the sake of science. We're in an era where everything has to be justified by money, and it feels like the Dark Age of information. I'm waiting for the next era, where new thoughts, science and knowledge progress get some value back.

Call me a utopian if you want, but finding something that "increase our safety, or decrease power of madmen and dictators" gets the #1 naive award (always thinking big shields and weapons, what a world).

When you say "the next era where new thoughts, science and knowledge progress get some value back," do you mean some era several removed from our current one, or are you making the irrational assumption that the next era will be just such a time? Me? I'm expecting holy wars and inquisition-style persecution in the near future.

Well, in a quantum mechanics experiment, you cannot physically test because you'd change the environment. Does that mean we don't have to simulate, even if we cannot physically test?

Testing the merger of two black holes is quite the contrary, and we'd be the ones destroyed if we got too close. The only solution is through astronomical observation, so we're waiting for the phenomenon to appear. However, how can we compare with our current laws of physics (in this case, Einstein's theories) if we don't simu

I realize that this doesn't fit nicely into your libertarian view, but we often do science just for the sake of doing it. Knowledge in and of itself is a good thing, and funding some cycles on a computer that would otherwise be simulating nukes or finding prime numbers doesn't seem wasteful to me at all.

If this experiment can ultimately lead us to see if Einstein was right about gravitational waves or not, then this is not a waste of funding. Because these waves are thought to be unchanged by any material they happen to pass through, it is thought that they may carry unaltered signals across various reaches of space. This could theoretically provide us with a way to estimate cosmological distances and help us understand how the universe was formed, what the whole of it looks like, and the ultimate fate of

Stories like this make me feel sad that many people feel we need public funding for research that seems to have no real gain for those paying for it.

I would question your definition of "seems" and "gain". It's people like you that are a large part of the reason why we don't have colonies on the moon and Mars, why we've not been to the stars and found other habitable planets. It's not like this one is going to last forever, and it's not like we are going to stop screwing with it. Understanding how gravity

Comments like yours make me feel sad. Why do this? What about the pure hell of it? Of finding out things that no-one else knows, of pushing back the boundaries of human understanding?

In short, what's wrong with pure research for research's sake?

Besides which, who knows what the applications might be in the future? There were lasers lying in research labs for a decade or more before anyone thought of a practical use for them. Now I personally have at least 8 in my house; peo

To make the internet work on a physical level requires really good understanding of theoretical physics, because when you pipe huge amounts of information around the planet, or bounce it off satellites, you need to account for relativistic effects.

And I think we can agree that the internet is extremely helpful in making the world a better place: distributing free information, reducing the energy spent communicating, and even promoting recycling through eBay, Freecycle [freecycle.org] et cetera.

If they have already bought the machine, it's actually a hell of a lot more wasteful NOT to use it. TFA indicates that it was ranked for the Top 500 in November 2005... and since it had to be purchased a fair bit of time before its delivery date, it could be a year or more since the original funding allocation and subsequent purchasing decision were made.

I'm sure the exact date of the purchase order could be dug up by someone more determined than I am.

I agree that on the cosmic scale of practicality this is probably on the bottom, but just solving the math in this problem is pretty impressive, and we have no idea where the solutions to this deep problem will lead when it percolates up to more practical matters involving computer simulations. Who would have thought that a NASA spacecraft propulsion system would be most often used in our Ionic Breeze air cleaners?

You use his theories to construct and run a model, and then you compare the results of that model to what you can observe in the sky. The differences between what is observable and what the model indicates are where the new knowledge is, even if things don't match up.

Won't building a model based on an equation automatically prove a theory that is based on that equation?

You don't use the model to predict what the model will do, you use it to predict what (in this case) actual black holes will do. And so if your model predicts something else than what happens in nature (within the limitations of the model), you know your theory is bust.

The key here is not really what the model looks like. It's how the model compares to real life. If when LIGO comes online, they detect waves that match what the model predicts they should detect, that gives experimental support that the equations the model is based on are correct. Also, in your example the equation is part of the theory, which is that 7=13, so then the model is 7*2 and the result is 26. If you do an experiment with counting blocks and combine two groups of 7 blocks yet find yourself with 14

Won't building a model based on an equation automatically prove a theory that is based on that equation?

No. In Physics a theory makes claims that can be falsified by an experiment. The theory (general relativity) is already there and the experiments will be carried out by LIGO and LISA (the latter having been delayed indefinitely thanks to Bush's plans).

However, we strongly assume that General Relativity must break down at some point and give way to some theory of quantum gravity. There are several such the