Working Through the N-Body Problem in Universe Sandbox ²

What is the n-body problem?

The n-body problem can be defined as “the problem of predicting the individual motions of a group of celestial objects interacting with each other gravitationally.” Or, in a gravitational system of n bodies (where n can be any number), where will they all be after one year?

It’s helpful to frame this in contrast to the two-body problem, which looks at the motion of just two objects interacting with each other. For example, you can look at the Earth and Moon as a two-body problem. The Earth pulls on the Moon quite a bit, keeping it in orbit, and the Moon pulls on the Earth just a little bit.

The issue here is that the Moon is not affected gravitationally by just the Earth; it is also being pulled by the Sun, and Jupiter, and every other object in space. The same is true when looking at the Sun and Earth: the Sun is not the only object pulling on Earth. So to account for all of these gravitational forces, you need to use an n-body solution.

The problem of the n-body problem in Universe Sandbox ²

In Universe Sandbox ², every object is simulated as part of an n-body problem. Unfortunately, when solving for many objects, or n objects, you can’t just jump forward in time without getting massive errors. There’s simply no way around this. Solving an n-body problem requires calculating how each object affects each other object every step of the way. Errors will still happen, but taking smaller steps reduces them.
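To make that cost concrete, here is a minimal sketch of a single brute-force n-body step. This is illustrative Python, not Universe Sandbox's actual engine: every body's pull on every other body must be summed before anything can move, so each step costs O(n²), and the error of each step grows with the step size dt.

```python
# Minimal brute-force n-body step (2D, semi-implicit Euler).
# Illustrative only; not Universe Sandbox's integrator.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Sum the gravitational pull on each body from every other body."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / (r * r)
            acc[i][0] += a * dx / r
            acc[i][1] += a * dy / r
    return acc

def step(positions, velocities, masses, dt):
    """Advance one step; a smaller dt means a smaller per-step error."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities
```

With the Sun and Earth as the two bodies, `accelerations` reproduces the familiar ~5.9 mm/s² of solar gravity at 1 AU, and the double loop makes clear why doubling the body count quadruples the work per step.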

By default, the simulations in Universe Sandbox ² try to set an accuracy which prevents orbits from falling apart due to error. This means setting a maximum error tolerance for each step and also making sure the total error doesn’t reach an upper limit.

If you crank up the time step, the simulation then has to take fewer, larger steps. This means the potential for greater error. And the greater the error, the more likely it is that an orbit, which otherwise would be stable, falls apart. Moons crash into planets, Mercury gets thrown out of the solar system — things like that.

This isn’t what most people want in their simulations. But at the same time, most people also don’t want a limit on how fast they can run their simulation. This is a problem.

An imperfect solution

So how can we get around this problem? How can we accurately simulate thousands of objects while still allowing for large steps forward in time? For example, what if you wanted to simulate our solar system on a time scale of millions of years per second so that you could see the evolution of our Sun?

One solution proposed by Thomas, our physics programmer, is to allow for a special mode within simulations running at high time steps. This mode (which of course could be toggled) would collapse the existing n-body simulation into a series of 2-body problems: Moon & Earth, Earth & Sun, Europa & Jupiter, Jupiter & Sun, etc.

Solving a 2-body problem is much easier than solving an n-body problem. Not only is it faster computationally, but it also makes essentially no difference whether you are figuring out where the two objects will be in one year or in a million years: either way, it requires just one calculation. So if you collapse an n-body simulation into a series of two-body problems, the simulation can take one big step forward, instead of taking the many small steps needed when calculating it as an n-body problem.
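As a rough illustration of why a two-body jump is one calculation no matter how far you jump, here is a hedged Python sketch: advance the orbit's mean anomaly by the elapsed time, then solve Kepler's equation once with Newton's method. The function and its arguments are illustrative, not Universe Sandbox code.

```python
# Why a 2-body jump costs the same for any time span: advance the mean
# anomaly (one modulo), then solve Kepler's equation M = E - e*sin(E) once.
# Illustrative sketch only.
import math

def propagate(mean_anomaly0, eccentricity, period, dt):
    """Return the eccentric anomaly after time dt, for any dt at all."""
    # The modulo makes a million-year jump exactly as cheap as a one-year jump.
    M = (mean_anomaly0 + 2.0 * math.pi * dt / period) % (2.0 * math.pi)
    # Newton's method on f(E) = E - e*sin(E) - M.
    E = M if eccentricity < 0.8 else math.pi
    for _ in range(50):
        dE = (E - eccentricity * math.sin(E) - M) / (1.0 - eccentricity * math.cos(E))
        E -= dE
        if abs(dE) < 1e-12:
            break
    return E
```

From the eccentric anomaly you can read off the body's position on its ellipse directly, which is the "one calculation" the post refers to.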

The results won’t be entirely accurate, as this method would effectively ignore all gravitational influences outside of the main attractor. As mentioned before, calculating Earth’s orbit by looking at how it interacts with just the Sun is not accurate, as Earth is also affected by every other body. The Sun, however, is the most significant factor by far, because it is much more massive than any other object in our solar system. The other, much smaller forces tend to have little effect overall in non-chaotic systems. So while it’s not correct, it’s close enough when simulating something relatively stable like our solar system.

This isn’t a perfect solution. But we think it could be an improvement over the current system and its limitations, which leave you with the choice of either destabilizing the orbits with massive errors, or waiting days for the simulation to advance the millions of years needed for the Sun to evolve. Neither is particularly interesting.

When is this coming to Universe Sandbox ²?

Not anytime soon.

This solution is just a proposed idea right now, and is not a high priority for us, as we already have a big list of exciting features planned. But we think it is useful to understand the complexity of accurately simulating the motions of hundreds to thousands of objects interacting gravitationally.

This is especially a challenge when attempting to do this in real-time on a home computer, which is why researchers run numerical simulations on supercomputers which take days to complete. With Universe Sandbox ², we’re exploring new territory and working through problems which haven’t been solved before. And this is a big part of why we love making it.

For the sake of good science, I am going to throw out a few ideas just to see if something sparks a worthwhile solution. I may be completely off with my assumptions, but it is worth a shot.

Human vision is not perfect, and that is good for your problem. At high speeds, anything that is moving will likely not even be visible because it is moving so fast. Perhaps there is a way you can get away with outright ignoring certain “safe” objects until time slows down, at which point you plot the safe objects based on the elapsed time. “Safe” meaning any object that will clearly not be affected by anything else.

Perhaps in high warps you could treat orbiting companions, like the Earth and Moon, as one object. Anything that has enough mass to significantly tug the Moon is also going to have a big pull on Earth. You could pre-determine the amount of gravity required to “rip” the two apart, and unless that amount is met, treat them as one object (and even alter the mass/density to represent both bodies as one). That massive heap around Jupiter is, for all intents and purposes, stable. There is no need to calculate all of it at a speed that will render all those moons invisible.
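For what it's worth, the "amount of gravity required to rip the two apart" roughly corresponds to the Hill sphere: a moon orbiting well inside its planet's Hill radius stays bound. A quick sketch with rounded values (not taken from the game):

```python
# Hill radius: the region around a planet where its gravity dominates
# over the star's. Rounded illustrative values.
def hill_radius(a, planet_mass, star_mass):
    """Approximate Hill radius of a planet orbiting a star at distance a."""
    return a * (planet_mass / (3.0 * star_mass)) ** (1.0 / 3.0)

AU = 1.496e11            # metres
earth_hill = hill_radius(1.0 * AU, 5.972e24, 1.989e30)  # ~1.5 million km
moon_distance = 3.844e8  # metres; the Moon sits well inside Earth's Hill sphere
```

Since the Moon orbits at roughly a quarter of Earth's Hill radius, treating Earth plus Moon as one combined object at high warp is a fair approximation, exactly as suggested.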

Lastly, perhaps buying the PC some time might help. You could do some super sci-fi countdown while the computer cranks out possible errors and fixes them ahead of time… like “3… 2… 1…” and then the camera looks like it hits warp speed and continues.

What you call “orbiting companions” is exactly what is being suggested in the blog post. Taking the Earth-Moon example: Earth and Moon would be one 2-body problem, and (Earth+Moon) and the Sun would be another 2-body problem, where Earth+Moon acts as one single object.

As to safe objects, it is tricky, since that would require the object to neither affect other objects nor be affected by them, and that is a very rare situation.

The warp idea sounds fun, but let’s assume we can do 10 days per step and each step takes 5 ms: even counting down for ten seconds would only buy about 54 years (10 s / 5 ms = 2,000 steps of 10 days each), and you might as well watch things move while they are being computed. Perhaps I didn’t understand your suggestion correctly?

I don’t know that the limit per step would need to be that small. With a number of two-body systems, each one will be periodic (though the periods will differ), so all you need to do is iterate through your two-body systems, determine each period, take the desired jump modulo each one, and move everything. Modulo is a very fast operation and is only performed once per two-body system per jump, so you can jump forward a month or a million years just the same; it doesn’t matter how long. The factor that will influence the time taken is how many two-body systems you have.
One suggestion I might make is to have this thing look for orbits that pass fairly close to each other and determine when the two-bodies will be near enough to maybe influence one another. Jump them to that time, drop them out of warp, simulate them as they pass, recalculate two-bodies to see if any moons have changed ownership or been destroyed or whatever, and put them back in warp (looking again for orbits that pass nearby each other). That’ll be slower, but it has the advantage that it can simulate stuff like earth-crossing asteroids over a very long period of time very quickly, because it only has to simulate each asteroid when its orbit might change.
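A toy version of the modulo trick described above might look like this (the systems and periods are illustrative, not engine data):

```python
# A two-body orbit is periodic, so only (jump mod period) matters for the
# final position. One modulo per system, regardless of how far you jump.
def phase_after_jump(period, jump):
    """Fraction of an orbit completed after jumping `jump` time units."""
    return (jump % period) / period

# Illustrative periods, in days. A million-year jump costs the same
# single modulo per system as a one-month jump:
systems = {"Moon-Earth": 27.32, "Earth-Sun": 365.25, "Io-Jupiter": 1.77}
jump = 1_000_000 * 365.25  # a million years, in days
phases = {name: phase_after_jump(T, jump) for name, T in systems.items()}
```

The per-jump cost then scales with the number of two-body systems, not with the length of the jump, which is the point the comment makes.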

Would a numerical solution still take a long time to calculate, or would it go faster without the objects being rendered?

I ask because if we’re specifically talking about very high time warps, then maybe it would make sense to simply turn off rendering for local objects (or simplify them to basic dots) when the time warp increases past a certain point. Everything would be moving too fast to see anyway. And that way, you could maybe lose less accuracy? Again, that assumes rendering is in any way the main limiting factor. If the calculations take time regardless of the visuals, then it’s sort of irrelevant.

Another idea would be to precompute a gravitational potential field for all the quasi-stationary bodies (stars, etc.) and just calculate n-body interactions between the smaller bodies within some mass fraction threshold (adding in the acceleration from that precomputed field as well).

While it would be nice to fast forward occasionally, being limited to non-chaotic systems really seems to limit the usefulness quite a bit. After all, watching how the system evolves, orbits change, and planets get ejected over many millions of years would be one of the main reasons to fast forward in the first place.

Maybe the problem could be simplified without losing generality by splitting it into several smaller n-body problems instead:
1) Take stars, planets and planetoids in the system and treat them as a separate n-body problem
2) Create a local n-body problem for every planet(oid) that only contains the planet(oid) itself and its moons
3) Handle everything else (asteroids, meteoroids and gas) with smoothed-particle hydrodynamics

While this won’t allow you to jump directly ahead, it should still help to improve speed by exploiting locality and cutting down the size of each n-body problem.

That is true, but we see it as an extra option rather than a replacement. This still needs to be tested out, but the thinking is that we will fast-forward as 2-body systems, take an ordinary n-body step, and evaluate whether the systems should be updated, or alternatively indicate to the user that the current configuration is far from 2-body.

For “small” systems, the overhead of building and traversing the tree makes it slower than brute force. For larger systems, the tree-based approach becomes better and better in comparison.
Due to the complexity of the current implementation, with its mix of managed and native code, and the relatively low number of attracting bodies, brute force performs better in general.
Currently I am rewriting the engine to run purely in native C++, and part of that rewrite is to revisit the tree, if for nothing else than for galaxies. So consider this a roundabout way of saying “yes, I have considered it” 😉
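For readers curious what such a tree does: the idea (presumably a Barnes-Hut-style tree, though the comment doesn't name it) is to approximate distant clumps of bodies by their combined centre of mass, cutting the force sum from O(n²) toward O(n log n). A minimal 2D sketch, with the opening angle theta and the node layout as assumptions:

```python
# Minimal 2D Barnes-Hut-style tree: far-away cells act as a single point
# mass when (cell width / distance) < theta. Illustrative sketch only;
# assumes no two bodies share the exact same position.
import math

G = 6.674e-11

class Node:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # square cell
        self.mass = 0.0
        self.mx = self.my = 0.0                     # mass-weighted position sums
        self.body = None                            # (x, y, m) if leaf
        self.children = None

    def insert(self, x, y, m):
        if self.children is None and self.body is None:
            self.body = (x, y, m)                   # empty cell becomes a leaf
        else:
            if self.children is None:
                self._split()                       # leaf: push old body down
            self._child(x, y).insert(x, y, m)
        self.mass += m
        self.mx += m * x
        self.my += m * y

    def _split(self):
        h = self.half / 2.0
        self.children = [Node(self.cx + dx * h, self.cy + dy * h, h)
                         for dx in (-1, 1) for dy in (-1, 1)]
        bx, by, bm = self.body
        self.body = None
        self._child(bx, by).insert(bx, by, bm)

    def _child(self, x, y):
        i = (2 if x >= self.cx else 0) + (1 if y >= self.cy else 0)
        return self.children[i]

    def accel(self, x, y, theta=0.5):
        if self.mass == 0.0:
            return (0.0, 0.0)
        px, py = self.mx / self.mass, self.my / self.mass
        dx, dy = px - x, py - y
        r = math.hypot(dx, dy)
        if r < 1e-9:
            return (0.0, 0.0)
        # Leaves, and cells that look small from here, act as point masses.
        if self.children is None or (2.0 * self.half) / r < theta:
            a = G * self.mass / (r * r)
            return (a * dx / r, a * dy / r)
        ax = ay = 0.0
        for c in self.children:
            cax, cay = c.accel(x, y, theta)
            ax += cax
            ay += cay
        return (ax, ay)
```

With theta = 0 the tree degenerates to an exact brute-force sum; raising theta trades accuracy for speed, which matches the comment's point that the tree only pays off once there are enough bodies to amortise building and traversing it.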