Topic: Stellar age issue [KNOWN]

First off I'd like to say, this simulation is amazing! I don't know how I managed to go this long without hearing about it.

My issue: When I attempt to age a star, for example the Sun in the climate simulation, the age increases yet no other parameters (e.g. surface temp or radius) change.

This prevents me from inducing a nova or expansion to a giant unless I adjust temp directly. Not sure if this has been mentioned before...

This issue holds for all created and stock simulations. Also, a few questions:

1.) Are there any plans to limit supernovae to stars that are actually massive enough, and to differentiate between a nova and a supernova?

2.) Is the stellar remnant or nova remnant supposed to be a white dwarf? Under composition both the stars and stellar remnants are 100 percent hydrogen.

3.) Does the game take magnetics into account (say, around a pulsar) in regards to nearby planetary heating? Will the magnetics of a pulsar contribute to evaporating the atmospheres of planets in the sim?

Sorry for the issues, cavok84. We are aware of a number of issues with stellar evolution right now. The system is a little broken. Instead of attempting to put band-aids on all of the specific issues, though, we're working on a complete rewrite of the stellar evolution code. The new model should hopefully address most, if not all, of the issues you mentioned. Unfortunately this means we'll just have to wait a bit to see these addressed. We're hoping to have the new model ready for Alpha 20, though, our next big update.

Regarding magnetics: no, only internal magnetic fields for planets have any effect right now, which is to reduce atmospheric mass loss (erosion) from the solar wind. Hopefully we'll improve on magnetic simulation in the future.

As for those crashes, I see that you sent in a log, thanks. There doesn't appear to be anything unusual there. Can you send us the log after you experience a crash? Once you run Universe Sandbox ² again, it overwrites the log, so you'll need to send it to us after a crash and before you run it again.

When v-sync is OFF (150-300 FPS): simulation error appears to be at its lowest compared with v-sync ON. The error converges with what I observe with v-sync ON as I reduce the time step to very slow (sub 1 sec/sec). Once I'm below one sec/sec, v-sync ON is as accurate, if not more accurate, than v-sync OFF.

However, at normal to very high time steps V-sync OFF appears to provide an accuracy far better than ON.

When v-sync is ON (set at 60 FPS): as stated above, error increases unless the time step is very low. The PHYS rate (FPS viewer) decreases to about half of what it is when v-sync is OFF. At very high time steps, orbits appear non-circular (i.e. hexagonal or some other geometric shape; I assume this is due to some error and FPS issue).

Vsync OFF gives an average CPU usage (at reasonable time steps) of 30-50 percent. Vsync ON gives an average CPU usage of 10-20 percent.

Sorry for the delay. I asked the team about Vsync and accuracy, and you are correct. When the FPS is higher with Vsync off (if it is then higher than 60 FPS), the physics calculations can take smaller steps, and smaller steps mean a lower simulation error. Essentially, enabling Vsync limits the CPU load, while disabling it can increase the CPU load and the number of physics calculations. We try to maintain a low error even when CPU load is lower, but if your aim is utmost accuracy, then disabling Vsync is your best bet right now. We've thought about ways to improve this in the future. You can also manually adjust accuracy and tolerance by clicking Sim > Show More in the bottom bar.
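To make the step-size point concrete, here is a toy Python sketch (my own illustration, not Universe Sandbox code): a simple explicit-Euler integration of a circular orbit, where quadrupling the number of steps per orbit noticeably shrinks the error, just as a higher FPS lets the physics take smaller steps.

```python
import math

def integrate_circular_orbit(steps):
    """Explicit-Euler integration of one period of x'' = -x.

    Returns how far the phase-space radius (which should stay 1.0)
    has drifted after one full orbit -- a simple error measure.
    """
    x, v = 1.0, 0.0
    dt = 2 * math.pi / steps          # smaller dt when there are more steps
    for _ in range(steps):
        x, v = x + v * dt, v - x * dt
    return abs(math.hypot(x, v) - 1.0)

err_60 = integrate_circular_orbit(60)    # coarse: fewer, larger steps
err_240 = integrate_circular_orbit(240)  # fine: more, smaller steps
```

The finer run drifts far less, which mirrors why a higher frame rate (Vsync off) can mean lower simulation error.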

"At very high time steps orbits appear non-circular" -- this is because at higher time steps there are fewer positions calculated. The trails you see show the path from position to position, so for example, a tight orbit with only six calculated positions per orbit will appear as a hexagon. If you enable Orbits instead of Trails, you will see a circular orbit.

It was just a bit odd that something like Vsync would have such a noticeable effect on error, as typically I would assume the GPU would handle limiting FPS whilst the CPU could still provide high PHYS updates or additional steps to increase sim accuracy.

Like you said, increasing accuracy manually does have a noticeable effect on the values, but only if I manually enter a very low tolerance; the sliding 'accuracy' bar doesn't change anything, and I suspect this is because my sim accuracy is high enough at a given time step to not warrant a change.

I'm currently using 'Native cpu' rather than 'managed'. I'm guessing this is still optimal?

Is there anything we can do to better utilize our GPU in computations? Especially if we have an AMD with high floating point processing?

While the n-body code runs asynchronously and is not strongly tied to the render loop, the two do synchronize. The timing is such that the user can specify a desired speed scalar, such as 1000 times faster than real time, and for every frame the n-body code is asked to advance the simulation by (1/fps) × 1000 seconds.

The system is really designed for situations where the n-body step is slower than the render step, not the other way around, so currently a new n-body step is only launched once every n frames. In situations where the n-body step takes multiple frames, as is commonly the case, this works well, but when the n-body step takes much less time, it is not optimal. This is why it can run faster with Vsync off: frames are rendered more frequently, and therefore n-body stepping is also launched more frequently.
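A rough sketch of that frame coupling (an illustrative Python toy, not the actual implementation): the simulated time each n-body step must cover is the speed scalar divided by the frame rate, so a higher FPS (Vsync off) means smaller, more frequent steps.

```python
def per_frame_dt(speed_scalar, fps):
    """Simulated seconds the n-body step is asked to advance per rendered frame."""
    return speed_scalar / fps

# At 1000x real time: with Vsync on at 60 FPS each launched step must cover
# more simulated time than with Vsync off at, say, 240 FPS.
dt_vsync_on = per_frame_dt(1000.0, 60)    # larger advance per step
dt_vsync_off = per_frame_dt(1000.0, 240)  # smaller advance per step
```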

Generally, a very low tolerance will force the n-body code to take multiple smaller substeps internally, which makes it more accurate, and also slower, thus utilizing the computation time better. The question is then whether that is what you want, since there is such a thing as "accurate enough" and you may not always want "even more accurate" at the cost of 100% CPU utilization.
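One common way such substepping can work (a hedged sketch using a step-doubling error estimate, not necessarily the scheme Universe Sandbox uses): keep halving the substep until one full step and two half steps agree to within the tolerance.

```python
def euler_step(state, dt):
    """One explicit-Euler step of a unit harmonic oscillator x'' = -x."""
    x, v = state
    return (x + v * dt, v - x * dt)

def adaptive_step(state, dt, tolerance):
    """Halve the substep until a step-doubling error estimate meets tolerance.

    Returns the accepted (more accurate) state and the substep size used.
    """
    while True:
        full = euler_step(state, dt)
        half = euler_step(euler_step(state, dt / 2), dt / 2)
        error = abs(full[0] - half[0]) + abs(full[1] - half[1])
        if error <= tolerance:
            return half, dt
        dt /= 2  # lower tolerance -> more, smaller substeps -> slower but more accurate

state = (1.0, 0.0)
_, dt_loose = adaptive_step(state, 0.5, 1e-2)  # modest tolerance, few halvings
_, dt_tight = adaptive_step(state, 0.5, 1e-6)  # tight tolerance, many halvings
```

The tight tolerance forces a far smaller substep, which is exactly the accuracy-for-CPU-time trade-off described above.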

As to native vs. managed: managed means the C# implementation of the n-body code, while native is the C++ implementation (not user-friendly names, I know), which still has managed collision-resolution parts, though. Native is generally some 3-5 times faster and should be the default mode. The n-body code is currently being re-re-rewritten to be even more pure C++, with still better performance, while dropping managed mode entirely.

Currently we are not using the GPU for computations. A long time ago we did use OpenCL for the core of the n-body code, but with support for multiple platforms, and now even mobile, the pure CPU implementation won out. This is likely to change eventually, but we will not provide GPU computation any time soon.

I hope some of the above made sense. If not, don't hesitate to ask again :-)

Thanks for the detailed explanation! Your reply really cleared up some confusion I had as to the relationship between fps and internal physics updates/steps.

My curiosity came as a result of considering the accuracy of long term simulations. For example, let's say that I'm running a basic solar system simulation where I've introduced a rogue planet that enters an elliptical orbit around the sun. My goal is to view long term perturbations to the system.

If I'm running with Vsync ON, with an average error rate of 4.0 m/s and 10 percent CPU usage: after letting my sim run for a week (real time) at a fairly high time step... will that 4.0 m/s eventually become a cumulative error that renders my simulation erroneous? This is in contrast to Vsync OFF with an error rate of 0.25 m/s (40 percent CPU usage). To clarify: are there internal physics steps that correct previous ambiguity or error so that, given a reasonable time step, accurate enough* is just as good as most accurate?

In other words, for the simulation to set a reasonable tolerance in the 'auto' settings, it would seem that the simulation would need to know how long the user intends to simulate. Am I off about this?

As to your explanation, I'm very appreciative of your taking the time to fully articulate the specifics. I won't pretend to know anything about coding or what you and your team are actually doing, but the info you provided clarified the majority of my confusion.

The lack of GPU computation also explains such low GPU utilization/temps. I came across some old info describing GPU computing and the types of GPUs best suited to the sim; it seems this has completely changed now and might offer another reason to upgrade to the new AMD Ryzen processors!

The concept of error in the simulation is perhaps a bit misleading. Given the chaotic nature of an n-body simulation, a calculation error of 1 m in one step can cause an error (from the true solution) of 1 km the next step, while the error from the already-diverged path is perhaps again only 1 m.

It is like a magnetic pendulum, where any ever-so-tiny offset in the initial position can cause entirely different motion.

The error we report is the error for _one_ step, which can be estimated reasonably well. The next step, though, starts from that slightly wrong position, which has already diverged from the "true" motion, and the error reported the next time is measured from the already-diverged path. In other words, you cannot say that after ten seconds with an error of 5 m/s you certainly have no more than 50 m of error. The only thing you can say is that if the simulation is mostly non-chaotic, then that estimate holds; otherwise the error is truly per step, and it is only helpful for setting an upper limit that you find reasonable in light of other scales in the simulation, mostly orbital scales.
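A minimal Python illustration of that chaotic amplification (using the logistic map as a stand-in for an n-body system, purely for demonstration): a one-time error of a billionth grows until it is of the same order as the motion itself, even though every subsequent step is computed exactly.

```python
def logistic(x):
    """One step of the chaotic logistic map x -> 4x(1 - x)."""
    return 4.0 * x * (1.0 - x)

true_x = 0.3
perturbed_x = 0.3 + 1e-9   # a single tiny "calculation error" at the start

divergence = 0.0
for _ in range(60):
    true_x = logistic(true_x)
    perturbed_x = logistic(perturbed_x)
    # track how far the two trajectories have drifted apart
    divergence = max(divergence, abs(true_x - perturbed_x))
```

After a few dozen steps the two trajectories bear no resemblance to each other, which is why a per-step error estimate cannot simply be multiplied by elapsed time.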

I hope that made some kind of sense. If not, I will try to give a better example. I should probably consider writing something up about this to make it clearer what guarantees we can and cannot give related to error.