I'm typically lenient if it's clear what the units are and it's clearly a typo, except on mock exams (where the mark does not count). Don't get me wrong... it was wrong, and unlike grammar ****s it's something that should be pointed out, since it can/does change the meaning. But I also want to write ms^{-1}

I have no special talents. I am only passionately curious.--Albert Einstein

I know how big space is. >_> Like I said, I have (had T_T) a simulation running with a few planets placed at their correct distances from the sun, plus their moons.

Numerical accuracy isn't going to be good enough even with 64-bit floating-point variables, so I plan on using fixed-point ints/longs evenly distributed over the simulation area, plus floats/doubles for positions relative to those fixed points, to get the same numerical accuracy everywhere. I would also use them for velocity. Would that eliminate the need for changing the frame of reference per object?

I'm afraid that the numerical accuracy of even doubles will be too low to handle collisions in some cases. For example, it's possible to put something that isn't a planet in orbit around the sun. That could mean problems if, say, two objects collide while in orbits around the sun in the outer parts of the solar system (at the same distance as Pluto? xd), right? Since the sun is the body with the biggest gravitational pull on the objects, they would use the sun as their frame of reference, which is insanely far away.

Another problem that fixed precision solves is rendering things with OpenGL. With only 32-bit floats and matrix multiplications, things at the same distance from the sun as the Earth cannot be rendered accurately. I also think fixed precision for velocity improves collision detection/handling for objects travelling at very high speeds in almost the same direction.

I suppose this would mean changing the point of reference to the nearest point in an even grid, so maybe we're even talking about the same thing? =P
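The OpenGL claim above can be checked numerically. `Math.ulp(x)` gives the gap between `x` and the next representable value, i.e. the smallest position step a raw world coordinate can make at that magnitude. A small check (the 1 AU constant is rounded; class and field names are made up):

```java
// Demonstrates why raw 32-bit float world coordinates break down at solar
// system scales: Math.ulp(x) is the spacing between x and the next
// representable value, i.e. the smallest possible position change there.
public class FloatAtOneAU {
    static final float  AU_F = 1.496e11f; // Earth-Sun distance in metres (rounded)
    static final double AU_D = 1.496e11;

    public static void main(String[] args) {
        System.out.println("float step at 1 AU:  " + Math.ulp(AU_F) + " m");
        System.out.println("double step at 1 AU: " + Math.ulp(AU_D) + " m");
    }
}
```

At 1 AU a float can only move in steps of 16384 m, which is why positions visibly snap, while a double still resolves tens of microns.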

We are talking about mostly the same thing. Numerical accuracy is really about cumulative errors. Even a plain old double with 53 bits of precision is accurate to microns (about 17 microns) with a baseline from the sun to the earth. The problem with floating point is that the errors depend on the distance from the origin. This is compounded by the use of Cartesian coordinates. Typically spherical coordinates are used; coordinate transforms then use tensors, since these are curvilinear coordinates. That is, the basis vectors change depending on where you are.
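The micron figure checks out: with a 53-bit significand, the worst-case error from rounding a real position to the nearest double is half an ulp, roughly x · 2^-53. A quick check (Sun-Earth distance rounded to 1.496e11 m; names are made up):

```java
// Half an ulp is the worst-case error from rounding a real position to the
// nearest double; at the Sun-Earth baseline it is on the order of 10 microns,
// consistent with the figure quoted in the post.
public class DoublePrecisionCheck {
    static final double AU_METERS = 1.496e11; // Sun-Earth distance, rounded

    static double roundingErrorAt(double x) {
        return 0.5 * Math.ulp(x); // half the spacing between adjacent doubles
    }

    public static void main(String[] args) {
        System.out.printf("worst-case rounding error at 1 AU: %.1f microns%n",
                          roundingErrorAt(AU_METERS) * 1e6);
    }
}
```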

The thing is that these small errors mean your simulated system could eject planets and that sort of thing all the time.

But all of this is a bit moot for a lot of purposes. How accurate do you need it to be? What is it for? Faking it will probably get the job done without instabilities, i.e. just use parametric equations of motion for the planets. This is what Celestia and other orbit-tracking software use. There are tables that are accurate for the next 1000 years.

Also, Celestia is open source and does all its visualization in OpenGL, and they have dealt with the scale problem. You could check out their source.

You lost me after you mentioned spherical coordinates...

I agree that using equations for immovable bodies is a good idea, but not for smaller bodies (ships, missiles, asteroids, etc.). Let's say they have to be able to remain stable for 100 years in game time (= 100 days of running the simulation at 30 "FPS").

Numeric instability could add some nice flavor to the game. If too much time passes, weird things start to happen: the moon escapes from the Earth, Jupiter starts slinging deadly moons toward everything, and other things the player can consider apocalyptic events.

If the large bodies use parametric equations, this will make pretty much everything quite stable. Spaceships etc. are only affected by the large bodies, and Bob's your uncle, you have a pretty stable system. Using big ints will be very slow; I would just use longs or scaled doubles. For a game it's going to be good enough.
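A minimal sketch of what "parametric equations of motion" means for the large bodies: position is a closed-form function of time, so there is no integration and hence no cumulative error. A circular orbit is used here for brevity; real ephemerides solve Kepler's equation with tabulated orbital elements, and all names are made up:

```java
// "Parametric equations of motion": a large body's position is a closed-form
// function of time, so frame N is exact regardless of what happened in frames
// 0..N-1 -- no cumulative integration error. Circular orbit for brevity;
// real ephemerides use Kepler's equation plus tabulated orbital elements.
public class ParametricOrbit {
    final double radiusMeters;
    final double periodSeconds;

    ParametricOrbit(double radiusMeters, double periodSeconds) {
        this.radiusMeters = radiusMeters;
        this.periodSeconds = periodSeconds;
    }

    // Position at absolute time t; no state, no error accumulation.
    double[] positionAt(double tSeconds) {
        double angle = 2.0 * Math.PI * (tSeconds / periodSeconds);
        return new double[] { radiusMeters * Math.cos(angle),
                              radiusMeters * Math.sin(angle) };
    }
}
```

Ships then integrate only against positions queried from objects like this, which never drift.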


I believe that doubles will be insufficient in extreme cases. I also think that using floating-point numbers for something that needs evenly distributed precision (like position and, to some extent, velocity) is simply bad manners, so allow me to hijack back the thread and instead derail it into evenly distributed precision variables. I did some math on 64-bit longs and found that a 2D position can represent 2 light years with 1 mm precision. Sadly the closest star system to the sun is 4 light years away. >_> Oh, 128-bit ints, where art thou?
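The back-of-the-envelope math is easy to verify: a signed 64-bit long at 1 mm per count spans about ±9.22e15 m per axis, and a light year is about 9.46e15 m, so each axis covers roughly 2 light years in total. A sketch (constants rounded, names made up):

```java
// A signed 64-bit long per axis with 1 mm per count: Long.MAX_VALUE mm is
// about 9.22e15 m, and one light year is about 9.46e15 m, so each axis spans
// roughly +/- 1 light year -- about 2 light years total, as stated in the post.
public class LongRangeCheck {
    static final double LIGHT_YEAR_METERS = 9.4607e15;

    static double spanInLightYears() {
        double maxMeters = Long.MAX_VALUE * 1e-3;   // counts are millimetres
        return 2.0 * maxMeters / LIGHT_YEAR_METERS; // both signs of the axis
    }

    public static void main(String[] args) {
        System.out.printf("span at 1 mm per count: %.2f light years%n",
                          spanInLightYears());
    }
}
```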

What's the best way of doing this? I think using integer math is a good idea, but it seems really complicated and slow to emulate 128-bit ints. I'm sure that velocity is fine with 64-bit longs and a fixed decimal point, since it'll only be used to accumulate acceleration, and I'm fine with limiting the max velocity to a few times the speed of light. xD

One idea is to use a 64-bit long to represent position in meters, plus either a 32-bit int or a float as a fraction. This would give a range of 2000 light years with micrometer or nanometer (10^-9 m) precision. However, it seems hard to do basic math with such variables, especially handling overflow and underflow in the fraction variable.
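One way the overflow/underflow handling can work is to treat the pair as a single 96-bit fixed-point number: when the 32-bit fractions are added, any carry out of bit 32 is folded into the metre part. A sketch for addition only (subtraction needs a matching borrow); all names are made up:

```java
// Sketch of the "long metres + 32-bit fraction" idea: position is a 64-bit
// whole-metre part plus an unsigned 32-bit binary fraction (units of 2^-32 m,
// about 0.23 nm). Addition carries from the fraction into the metre part,
// which is the overflow case the post worries about.
public class FixedPoint96 {
    long meters;   // signed whole metres
    long fraction; // 0 .. 2^32-1, in units of 2^-32 m (held in a long for ease)

    FixedPoint96(long meters, long fraction) {
        this.meters = meters;
        this.fraction = fraction;
    }

    void add(FixedPoint96 o) {
        fraction += o.fraction;
        meters += o.meters + (fraction >>> 32); // carry out of the fraction
        fraction &= 0xFFFFFFFFL;                // keep only the low 32 bits
    }
}
```

For example, 1.75 m + 2.5 m carries one whole metre out of the fraction field, giving 4.25 m.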

My idea is therefore to store position in >64-bit precision and velocity as a 64-bit fixed-point long. Acceleration is calculated with doubles, since forces benefit from floating-point precision.

Why not the equivalent of an animation hierarchy at the high level? The end nodes are where there is some spatial data structure. The resolution can be very low, as exactly where the spatial data structure is attached can be specified by a registration point, which can move as needed to give fine-grained resolution. All actual objects are then simply stored as coordinates inside the local spatial data structure. If these are still too big, they could be stored cell-local, as the higher levels' coordinates are all implicit. The effect of the spatial data structures on one another could then be a simplified model (or models, depending on distance)... for example a point mass for all of its contents if very far away, or a projected line of varying mass, etc.

I hate to admit that I have no idea what you're saying... (^_^;) Give me some time to process that, but first I need to get some food so my head starts working again...

Roquen has the right idea. When you are looking at different stars you use a different scale, say 1 km per count. Then when you "zoom in" you use the 1 mm scale for local simulations with coordinates centered on the star; alternatively, everything is two numbers, a baseline + local delta. Most of the math only needs the local delta and can ignore the baseline, while other parts of the simulation can ignore the delta and only work with the baseline.

The old Java3D did something like this with its concept of baselines.

So you're basically approving of what I wrote earlier? Baseline = a long where 1 unit is 1 km or so, and local delta = an int or something. But how do I handle overflows/underflows in the local delta? Overflow = increase the baseline, underflow = decrease the baseline...
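That overflow/underflow rule can be sketched in one dimension: keep the delta inside a fixed window, and whenever a move pushes it out, shift whole baseline units across. Units and names here are made up (baseline in km, delta in metres):

```java
// The overflow/underflow rule in one dimension: the baseline is a long
// counting kilometres, the delta is metres relative to it. Whenever a move
// pushes the delta out of [0, 1000), whole kilometres migrate between the
// two, so the delta always stays small and precise.
public class BaselineDelta {
    static final double KM = 1000.0;

    long baselineKm;  // coarse position, 1 unit = 1 km
    double deltaM;    // fine position relative to the baseline, in metres

    void move(double metres) {
        deltaM += metres;
        while (deltaM >= KM) { baselineKm++; deltaM -= KM; } // overflow: carry up
        while (deltaM < 0.0) { baselineKm--; deltaM += KM; } // underflow: borrow
    }
}
```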

Don't get too excited about this... just me pondering the technical plausibility of a 2D space MMORTS... I liked Ogame except for the fact that it sucked. Since no one wants to make a decent space browser game that...

- does not depend on how fast you are at pressing F5 to update the browser
- does not have laughable physics and planets 5 meters away from each other
- is not ****ing text-based -_-

... I figured I might as well try to make one myself. There you go, add determinism to the list! ^_^ But like I said, don't get too excited. I have 2 other games to complete first and 2 shadow mapping techniques to test before I can work on this. xD And since I'm completely stalled for around one more week, my head is soon going to explode with programming ideas... >_<

It's only Newtonian physics, so we're not limited by the speed of light at least, and 1 light year takes only 1 day at the speed of light. xD In Ogame it takes weeks or even months to build certain buildings, since the cost and time taken to build something increase exponentially.

For some kind of MMO game I would stick with 100% parametric for everything, more or less. Run totally different "modes" for between stars versus within systems. Since it's parametric, it is easy to move any one part of the system to where it should be at any given time. Even approximate orbits for ships etc. can work this way, and it's probably going to be just as accurate as any simulation that would be practical.

Since the transport mode between stars will need to be some kind of FTL to make the game interesting, you already have a natural way to separate the scales.

You can't just throw precision at chaotic systems. Well, not really. Errors eventually grow exponentially regardless. The only way to really do it properly is with interval arithmetic. Next best is to always be smart with the precision you have. Doubles are seriously pretty good if used properly: rounding errors are about 1 mm at 8 light hours. Longs are also good for 1 mm error at 2 light years. Add a baseline and a galaxy is no problem at all.
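The "1 mm at 8 light hours" claim checks out: 8 light hours is about 8.63e12 m, which falls between 2^42 and 2^43, so a double's ulp there is 2^-10 m, just under a millimetre. A quick check (names are made up):

```java
// 8 light hours is ~8.63e12 m, which lies between 2^42 and 2^43, so the gap
// between adjacent doubles there is 2^(42-52) = 2^-10 m, i.e. just under 1 mm.
public class LightHourCheck {
    static final double LIGHT_SPEED = 299_792_458.0; // m/s, exact by definition

    static double ulpAtLightHours(double hours) {
        return Math.ulp(hours * 3600.0 * LIGHT_SPEED);
    }

    public static void main(String[] args) {
        System.out.println("double step at 8 light hours: "
                           + ulpAtLightHours(8.0) + " m");
    }
}
```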

And no matter how "good" C++ is or whatever, using the longer versions is much slower than the base versions. It's not like it's a hardware-native format.

It is MUCH faster (than BigDecimal) if you let the CPU do all those things... BTW, I've got an idea:

theagentd: maybe you could make something like a chunk system (lol, sorry, it's Minecraft again), where chunks are sized so that within one chunk your double has good precision (and that can be HUGE). Having a chunk system underneath would make everything really... yeah, reeeaaallllly huge: MAX_VALUE_LONG * MAX_VALUE_LONG chunks? I think that's okay. And since the chunks can be really big, you don't have to switch chunks too often, and it's not an expensive calculation anyway.

Also, let's consider the time resolution. If you want 1 mm accuracy, that means the integration steps need to be pretty small. Earth's orbital velocity is about 30 km/s, so travelling 1 mm takes 33 ns. Of course you don't want to use that sort of time step. But you see that you simply don't need all this precision.
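The arithmetic, plus the flip side of the argument: at a game-style 30 FPS timestep the Earth moves a full kilometre per frame, so millimetre-level position storage is far finer than anything the integrator itself can resolve. A quick check (names and the rounded speed are illustrative):

```java
// Two sides of the time-resolution argument: travelling 1 mm at Earth's
// orbital speed takes ~33 ns, but at a 30 FPS game timestep the Earth covers
// a full kilometre per frame, dwarfing millimetre-level position precision.
public class TimeStepCheck {
    static final double EARTH_ORBITAL_SPEED = 30_000.0; // m/s, rounded

    static double secondsToTravel(double metres) {
        return metres / EARTH_ORBITAL_SPEED;
    }

    static double metresPerFrame(double fps) {
        return EARTH_ORBITAL_SPEED / fps;
    }

    public static void main(String[] args) {
        System.out.println(secondsToTravel(1e-3) * 1e9 + " ns per mm");
        System.out.println(metresPerFrame(30.0) + " m per frame at 30 FPS");
    }
}
```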

Unless you are studying the long-term stability of the solar system (really, really hard), you just don't need more than properly used longs/doubles. And it will be fast. But not as fast as parametric equations of motion.

java-gaming.org is not responsible for the content posted by its members, including references to external websites and other references that may or may not have a relation with our primarily gaming and game production oriented community. Inquiries and complaints can be sent via email to the info account of the company managing the website of java-gaming.org.