One thing that occurred to me about time based movement is that the timing is always one frame behind, which can cause noticeable flaws in movement. That's why I stick with tick based movement at a fixed frame rate in my current game, with a frame skipping function to make up for any occasional hiccups.

I don't understand exactly what you guys are saying when you mention time based and frame based animation. What is the difference exactly? In my code, for example, I just fix a target fps value and ensure that each rendering update takes exactly the time allocated for each frame, and if it takes too much I skip render updates until I'm back on track again.

However, an easier method than my previous code would be simply to record the animation start time in a variable and then compute the animation frame to render using the formula ((current time - start time) / frame duration) % number of frames, if it is a looping animation. No frame-behind issues this way.
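A minimal sketch of that clock-driven approach (all names here are illustrative, not from the post). The frame index is derived directly from elapsed time, so it can never lag behind the clock:

```java
// Clock-driven looping animation: the frame to draw is a pure function of
// the current time, so no per-frame bookkeeping can fall one frame behind.
public class AnimationClock {
    private final long startTimeMs;
    private final long frameDurationMs; // e.g. 100ms per animation frame
    private final int numFrames;

    public AnimationClock(long startTimeMs, long frameDurationMs, int numFrames) {
        this.startTimeMs = startTimeMs;
        this.frameDurationMs = frameDurationMs;
        this.numFrames = numFrames;
    }

    // ((current time - start time) / frame duration) % number of frames
    public int frameAt(long nowMs) {
        return (int) (((nowMs - startTimeMs) / frameDurationMs) % numFrames);
    }
}
```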

I don't understand exactly what you guys are saying when you mention time based and frame based animation. What is the difference exactly?

With time based movement, the speed at which everything animates depends on how long the last frame took. This ensures that everything moves at the same speed, independently of the frame rate. Works great for 3D games, but sucks for 2D (especially scrolling games).

With tick based movement, the speed at which everything animates is fixed, so the speed of the movement depends on the framerate. If the frame rate drops, the game gets slower. So you need to ensure that your game logic updates at a constant rate, which may require the game to occasionally drop rendered frames if the target frame rate is not achieved.

If you throttle a time based game loop to a fixed, constant frame rate, it will basically behave the same as tick based movement (if the target frame rate is always achieved).

So what you're doing is basically tick (or frame) based movement.
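A tick-based loop with frame skipping, as described above, might be structured like this (a minimal sketch; `update`, `render` and the constants are placeholders, not anyone's actual code):

```java
// Tick-based loop: logic always advances in fixed steps; if rendering falls
// behind, we run extra updates (skipping renders) to catch up, with a cap.
public class FixedStepLoop {
    static final long STEP_NANOS = 1_000_000_000L / 60; // 60 logic ticks per second
    static final int MAX_CATCH_UP = 5;                  // bound on catch-up updates

    // How many fixed updates are owed before the next render.
    static int updatesDue(long nowNanos, long nextTickNanos) {
        if (nowNanos < nextTickNanos) return 0;
        long due = (nowNanos - nextTickNanos) / STEP_NANOS + 1;
        return (int) Math.min(due, MAX_CATCH_UP);
    }

    public static void run(Runnable update, Runnable render) {
        long nextTick = System.nanoTime();
        while (true) {
            int due = updatesDue(System.nanoTime(), nextTick);
            for (int i = 0; i < due; i++) {
                update.run();          // logic always advances one fixed tick
                nextTick += STEP_NANOS;
            }
            render.run();              // render once, even if we fell behind
        }
    }
}
```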

Quote

However, an easier method than my previous code would be simply to record the animation start time in a variable and then compute the animation frame to render using the formula ((current time - start time) / frame duration) % number of frames, if it is a looping animation. No frame-behind issues this way.

This looks more like time based animation, but it doesn't take movement of your game objects into account.

Maybe I'm just doing it wrong or maybe there's an error in my thinking, but when I implement time based movement, I have a method in all game objects like this:

public void update(float deltaTime) { ... }

Where deltaTime is the time the last frame took to complete, which is used to scale the movement. So the next frame's animation might be off if the current frame takes a drastically different amount of time to complete, because the current frame's update is based on the duration of the previous frame, which has already been displayed.

"If you throttle a time based game loop to a fixed, constant frame rate, it will basically behave the same as tick based movement (if the target frame rate is always achieved)."

The problem here is the Thread.sleep method. No matter what method is used, sleeping always wastes more time than it should because it's very inaccurate in Java 1.5. I don't know if it has changed in Java 1.6, but the error is at least around 1 millisecond and sometimes it can miss by much more. I used a brute force sleep method so I can simply ignore this.
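One common shape for such a "brute force" sleep (a sketch, not the poster's actual code): sleep in 1ms slices while there is plenty of time left, then busy-wait the last stretch, since Thread.sleep may overshoot but a spin won't:

```java
// Hybrid throttle: coarse sleeping for most of the wait, then a spin on the
// clock for the final couple of milliseconds to hit the target precisely.
public class Throttle {
    public static void waitUntil(long targetNanos) {
        try {
            // Coarse phase: cheap sleeps, stopped well before the target
            // because Thread.sleep(1) can overshoot by a millisecond or more.
            while (targetNanos - System.nanoTime() > 2_000_000L) {
                Thread.sleep(1);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // stop sleeping if interrupted
        }
        // Fine phase: spin until the target time is actually reached.
        while (System.nanoTime() < targetNanos) {
            Thread.yield();
        }
    }
}
```

The trade-off is obvious: the spin phase burns CPU, which is exactly why it only covers the last ~2ms of the wait.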

"Maybe I'm just doing it wrong or maybe there's an error in my thinking, but when I implement time based movement, I have a method in all game objects like this: public void update(float deltaTime) { ... }"

That's a common usage of the update method. However, if you have a singleton object that manages time statistics for your game - let's call it Scheduler, for example - you can obtain deltaTime from this object instead when you call the update method. For example:
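The original post's example isn't shown; a sketch of such a Scheduler might look like this (all names are illustrative assumptions):

```java
// Hypothetical Scheduler singleton owning the timing statistics, so game
// objects can read deltaTime themselves instead of receiving it as a parameter.
public class Scheduler {
    private static final Scheduler INSTANCE = new Scheduler();
    private long lastFrameNanos = System.nanoTime();
    private float deltaSeconds;

    public static Scheduler get() { return INSTANCE; }

    // Called once per frame by the main loop, before updating game objects.
    public void tick() {
        long now = System.nanoTime();
        deltaSeconds = (now - lastFrameNanos) / 1_000_000_000f;
        lastFrameNanos = now;
    }

    public float deltaTime() { return deltaSeconds; }
}
```

A game object's update could then become parameterless, e.g. `public void update() { x += speed * Scheduler.get().deltaTime(); }`.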

Throttling is always a pain. Since Java 1.5, the throttling code in JEmu2 isn't reliable anymore in some cases. Specifically, one case where I want a frame rate of 50fps and a display refresh rate of 100Hz. I both sync to the display and throttle to get 50fps, but since 1.5 the throttling somehow gets confused and it ends up running at 100fps. I'm quite sure the logic of the throttling is okay, but it seems as if currentTimeMillis can get confused by lots of 1ms sleeps in a tight loop, because if I print the numbers of how long it tries to sleep and how much it actually throttled, the numbers are okay (it reports 50fps, 20ms per frame), but the visible end result is wrong (100fps). Very weird. This is also the case without sync to vblank (throttle only).

And then there are the issues with some configurations when you use a high precision timer, which is another reason I'm currently avoiding time based movement. With 1ms precision, it's not accurate enough to ensure that the physics of the game are always consistent.

System.currentTimeMillis() is too inaccurate to try to measure frame rate on Windows (55ms accuracy). System.nanoTime() is the way to go, then throttling should be OK.

The only problem I can see with time-based movement is what ErikD was getting at earlier: if the draw time is not constant, then a longer-than-average frame draw will be displayed with a small update time applied to it, so that frame would be lagged. (If the GC were to kick in during the frame draw, that frame would be lagged too.)

All frames after that would be OK though, since the long lag would be compensated for in the next frame, which would have a bigger update time applied to it. As long as the frame rate is quick, so a lagged frame doesn't hang around for long, the eye shouldn't notice this problem too much.

I see little advantage at all to tick-based movement; it's really just time-based movement with a hard-coded time step, so what's the point?

To elaborate on that: I still wonder why so many people use frame-based updates. The more the framerate varies, the larger the error (think of a vehicle making a circle). It's much harder to sync everybody in multiplayer games, and the update process becomes slightly more complicated because you have to multiply everything by the elapsed time - or worse, when dealing with acceleration and rotation.

The alternative, tick-based updates is extremely stable, will give the same results whatever the framerate, and is easy to sync.
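The vehicle-in-a-circle example can be made concrete. With simple explicit Euler integration, an object moving in a circle spirals outward, and the drift grows with the step size, so a varying frame time produces a varying, unrepeatable error (a minimal illustration, not from the posts):

```java
// Explicit Euler integration of motion around the unit circle at unit
// angular speed. The exact radius stays 1.0 forever; Euler integration makes
// it grow, and it grows faster with larger time steps.
public class CircleError {
    static double radiusAfterOneTurn(double dt) {
        double x = 1.0, y = 0.0;
        for (double t = 0.0; t < 2 * Math.PI; t += dt) {
            double vx = -y, vy = x; // velocity tangent to the circle
            x += vx * dt;
            y += vy * dt;
        }
        return Math.sqrt(x * x + y * y);
    }
}
```

With dt = 0.1 the radius grows by roughly a third per revolution; with dt = 0.001 it barely moves. A fixed tick makes this error constant and repeatable; a varying frame time does not.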

That is pretty much the point


Exactly. Nobody can guarantee that high resolution timers will work, or will really be high resolution at all.

Especially if you do 2D scrollers, tick based movement with a fixed vsync'ed framerate is the way to go if you want the scrolling to be as fluid as it can be. Now you can't guarantee VSync either, but you don't really need a high resolution timer to get a reasonably stable 60fps if VSync doesn't work.

Even for my simple 3D game I stick to tick based, because it turned out that time based made the game unpredictable (sometimes you couldn't make a jump which you could in other cases), and in that game it's absolutely vital that it's predictable. The underlying code is actually time based, but I feed it a fixed time every frame. That way I can scale the game speed to the display's refresh frequency if 60Hz cannot be chosen for any reason.
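The "time based underneath, fixed delta on top" idea can be sketched like this (illustrative names; `refreshHz` stands for whatever rate the display reports):

```java
// The engine API is time based (it takes a delta), but we always feed it the
// same value, derived from the display refresh rate. Behaviour stays
// predictable, and game speed still matches the monitor.
public class FixedDelta {
    static float deltaFor(int refreshHz) {
        return 1.0f / refreshHz; // e.g. 60Hz -> ~16.7ms per tick
    }

    // One second's worth of ticks covers the same distance at any refresh
    // rate, which is what keeps game speed consistent across displays.
    static float distancePerSecond(float speed, int refreshHz) {
        float dt = deltaFor(refreshHz);
        float x = 0f;
        for (int i = 0; i < refreshHz; i++) {
            x += speed * dt;
        }
        return x;
    }
}
```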

The alternative, tick-based updates is extremely stable, will give the same results whatever the framerate, and is easy to sync.

That is pretty much the point

The multiplayer aspect is the only thing I see as a disadvantage. I'm running into that problem now. I plan to overcome it by taking advantage of the fact that updates are very quick compared with frame draws (2500/second for my game). This is the solution I'll soon implement:

Update by the time from the last update to the next millisecond value divisible by 5, then continue in 5ms steps until the accumulated update time is within 5 milliseconds of the current time. Then update by the time that remains until the current time, and draw & display the frame. Repeat.
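A sketch of that chunking (illustrative only; the 5ms constant comes from the description above, the names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Split the elapsed time into update steps that end on multiples of 5ms,
// plus a final partial step reaching the current time just before drawing.
public class ChunkedUpdates {
    static List<Long> stepsBetween(long lastMs, long nowMs) {
        List<Long> steps = new ArrayList<>();
        long t = lastMs;
        long next = (t / 5 + 1) * 5; // next millisecond time divisible by 5
        while (next <= nowMs) {
            steps.add(next - t);     // first step may be shorter than 5ms
            t = next;
            next += 5;
        }
        if (t < nowMs) {
            steps.add(nowMs - t);    // remainder, applied right before drawing
        }
        return steps;
    }
}
```

The whole-5ms steps are the ones server and client can reproduce identically; only the partial steps around frame draws differ between machines, which is the small desync the post describes.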

For rotation the server and client will get slightly out of sync because of the sub-5-millisecond updates required before and after frame draws. However, with my particular game design the server resends its whole game world to clients who replace theirs, so the small out-of-sync errors will be corrected.

You may say that this design depends on being able to replace all clients' game worlds with that of the server, but even with your tick-based games, this will be needed because of floating-point rounding differences between server and clients.

This is not a unique fault of time-based updating. Tick based updating depends on having a high resolution timer too - how else can you keep a constant time between frames?

With time based updating, the accuracy of timing is much more critical. The difference is that with time-based updating, any timing errors lead to inconsistent behaviour, whereas with tick based rendering the worst that can happen is that the frame rate is not 100% stable - but the game's behaviour always is. Those frame rate variations due to timing granularity are hard to spot, even if you use a 1ms timer. And most of the time, VSync works, which makes timing almost a non-issue.

Quote

The multiplayer aspect is the only thing I see as a disadvantage.

Also, collision detection can become more difficult if you cannot fully predict game behaviour due to timing.

Even with FPS games, it's not unheard of to update the game at a fixed 60 frames per second (which makes the game play tick based), but render as fast as you can (interpolating the display frames between the game update frames). I think the reason behind it is also consistency and predictability. Correct me if I'm wrong, but IIRC Doom3 uses this technique because of the inconsistencies due to timing errors in Quake3.
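The interpolated-rendering idea can be sketched as follows (a minimal illustration; in practice the engine keeps the previous and current tick state per object):

```java
// Render-side interpolation between fixed logic ticks: blend the previous
// and current tick states by alpha, the fraction of a tick elapsed since
// the last update. Logic stays deterministic; display stays smooth.
public class Interpolation {
    // alpha in [0, 1): 0 shows the previous tick, values near 1 approach
    // the current tick.
    static float renderPosition(float previousX, float currentX, float alpha) {
        return previousX + (currentX - previousX) * alpha;
    }
}
```

Per rendered frame, alpha would be computed as (now - lastTickTime) / tickDuration, clamped to [0, 1).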

>> Floating-point inaccuracies are nowhere near relevant in this context. The differences are so extremely small that it will at least *look* the same everywhere.

You're right. But mightn't these errors compound over time? AffineTransforms were needed in Java2D to mitigate this problem were they not?

Nevertheless, tick-based updating 1. doesn't take advantage of the full processor power and 2. can skip frames while time-based rendering won't.

Also, erikd, System.nanoTime() is for all intents and purposes adequate as a high-res timer & both approaches depend on it anyway. I can't see where 'predictability' comes into it except from a networking perspective.

Nevertheless, tick-based updating 1. doesn't take advantage of the full processor power and 2. can skip frames while time-based rendering won't.

Not necessarily. If you do it like Doom3, as I mentioned in my last post, game updating will be done at a fixed speed but rendering will be performed as fast as possible. I'm guessing this approach has its own share of problems, so in the end it probably depends on the type of game you're creating which approach will be preferable. My point was not to dismiss time based updating, but to show some good reasons why tick based updating can be preferred.

Quote

Also, erikd, System.nanoTime() is for all intents and purposes adequate as a high-res timer & both approaches depend on it anyway. I can't see where 'predictability' comes into it except from a networking perspective.

Like I mentioned, in time based updating, accurate timing is *essential* for correct and predictable behaviour of the game. In tick based updating, this is not an issue and timing is only used for throttling and optionally a frame skipping option, so 1ms precision will do. I believe the timer in LWJGL has at least 1ms precision everywhere (although it doesn't guarantee that as well).

I actually avoid System.nanoTime() at the moment, because it seems that it doesn't work reliably in some configurations (which I believe is why it's turned off by default in LWJGL). I don't know if it even has sub millisecond accuracy on Mac, for example.

I actually avoid System.nanoTime() at the moment, because it seems that it doesn't work reliably in some configurations (which I believe is why it's turned off by default in LWJGL). I don't know if it even has sub millisecond accuracy on Mac, for example.

And well, it (QPC) is simply broken on some older chipsets. Worked fine with Win98SE and an nVidia card. Now with 2k and an ATI card (not sure which one is to blame) it doesn't work anymore. The problem I'm seeing is so-called QPC leaping... that is, it randomly jumps a few seconds into the future if there is a high bus load (say, a non-command-line game, heh). As you can imagine this is really annoying, because you get warped to death in lots of time-based games which rely on a working/accurate QPC.

I just got the latest drivers for my nVidia card so I can finally play all of your OGL games.... They're great! I've been playing Cas's Titan Attacks & Ultratron, OrangyTangy's Quix, AnalogKid's last drops & Mojang's Wurm... they're very polished, I take my hat off to all.

The thing I noticed about OGL, which I suspect most of you have written these games in (since none worked before I got the driver, except for Last Drops), is how OGL locks the draws to the screen refresh.

It does it to my own game too now when I'm in OGL pipeline mode. It's kind of annoying for me because the time-based approach to rendering isn't working as well, since the frame has to wait so long until the screen refresh is ready. Now I see another reason why you like to use tick-based rendering - because it locks you into the pattern of screen refreshes.

Does that mean that you have to always try to have your tick-based FPS set at the refresh rate (or a factor of it)?
