What is better for games when developing game loops, fixed time steps or variable time steps? What type of games are better with one or the other?

Variable time steps:

With variable time step, I mean physics updates take some sort of "time elapsed since last update" argument and are hence dependent on framerate. This may mean doing calculations such as position = position + distancePerSecond * timeElapsed.
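As a minimal sketch of that calculation (the `Player` class, the speed constant, and the frame times below are invented for illustration, not from any particular engine):

```python
DISTANCE_PER_SECOND = 120.0  # movement speed in units per second

class Player:
    def __init__(self):
        self.position = 0.0

    def update(self, time_elapsed):
        # Movement is scaled by the frame's duration, so the distance
        # covered per real second stays constant regardless of framerate.
        self.position += DISTANCE_PER_SECOND * time_elapsed

def game_loop(player, frame_times):
    # frame_times stands in for "time elapsed since last update",
    # which a real loop would measure with a clock each frame.
    for dt in frame_times:
        player.update(dt)

player = Player()
game_loop(player, [1 / 60, 1 / 30, 1 / 60])  # uneven frame durations are fine
```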

pro: smooth

pro: easier to code

con: hard to record/replay actions as time steps vary

con: weird physics errors that are hard to predict with very small or large time steps

With fixed time steps, the update method may not even accept a "time elapsed" argument, as it assumes each update covers a fixed time period. Calculations may be done as position = position + distancePerUpdate. The example includes an interpolation step during rendering.

pro: physics are very predictable

pro: easier to record actions per time step as they are fixed

pro: possibly easier to sync up with other players over network?

pro: don't have to confuse all calculations with timeElapsed variable everywhere

con: it will never sync up with the vertical refresh, so you either get jittery graphics or you have to interpolate

con: maximum frame rate is limited unless you interpolate

con: hard to work within frameworks that assume variable time steps (like pyglet or flixel)

Use variable timesteps for your game and fixed steps for physics
–
Daniel LittleMay 6 '11 at 5:11


I wouldn't say that variable time step is easier to code exactly because with fixed time step you "don't have to confuse all calculations with timeElapsed variable everywhere". Not that it's that hard, but I wouldn't add "easier to code" as a pro.
–
pekSep 26 '11 at 3:05

True, I think I was referring to how you wouldn't need to interpolate variable time steps.
–
Nick SonneveldSep 26 '11 at 4:22

@pek I agree with you. Variable time step has a simpler game loop, but you have more to code in your entities to deal with that variability in order to "pace" it. Fixed time step has a more complicated game loop (because you have to accurately compensate for time approximation variances, and recalculate what extra delay to add or how many updates to skip to keep it fixed), but simpler coding for the entities, which always deal with the same time interval. On the whole, neither approach is clearly simpler than the other.
–
Shivan DragonSep 28 '12 at 10:44

10 Answers

In Glenn Fiedler's Fix Your Timestep he says to "free the physics". That means your physics update rate should not be tied to your frame rate.

For example, if the display framerate is 50fps and the simulation is designed to run at 100fps then we need to take two physics steps every display update to keep the physics in sync.

In Erin Catto's recommendations for Box2D he advocates this as well.

So don't tie the time step to your frame rate (unless you really, really have to).

Should Physics step rate be tied to your frame rate? No.

Erin's thoughts on fixed step vs variable stepping:

Box2D uses a computational algorithm called an integrator. Integrators simulate the physics equations at discrete points of time. ... We also don't like the time step to change much. A variable time step produces variable results, which makes it difficult to debug.

Glenn's thoughts on fixed vs variable stepping:

Fix your timestep or explode

... If you have a series of really stiff spring constraints for shock absorbers in a car simulation then tiny changes in dt can actually make the simulation explode. ...

Should physics be stepped with constant deltas? Yes.

The way to step the physics with constant deltas, without tying your physics update rate to the frame rate, is to use a time accumulator. In my game I take it a step further: I apply a smoothing function to incoming time. That way large FPS spikes don't cause the physics to jump too far; instead they're simulated more quickly for a frame or two.

You mention that with a fixed rate the physics wouldn't sync up with the display. This is true if the target physics rate is near the target frame rate. It's worse if the frame rate is larger than the physics rate. In general it is better to target a physics update rate of twice your target FPS, if you can afford it.

If you can't afford a large physics update rate, consider interpolating the graphics' positions between frames to make the drawn graphics appear to move more smoothly than the physics actually moves.
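The accumulator plus render-interpolation idea can be sketched in Python like this (the one-dimensional `integrate` function and the constants are made up for illustration; the smoothing function mentioned above is omitted):

```python
PHYSICS_DT = 1.0 / 100.0  # fixed physics step, independent of frame rate

def integrate(position, velocity, dt):
    # Trivial stand-in for a real physics step.
    return position + velocity * dt

def run_frames(frame_durations, velocity=10.0):
    position = 0.0
    previous = 0.0
    accumulator = 0.0
    rendered = []
    for frame_time in frame_durations:
        accumulator += frame_time
        # Consume whole physics steps; the remainder stays in the accumulator.
        while accumulator >= PHYSICS_DT:
            previous = position
            position = integrate(position, velocity, PHYSICS_DT)
            accumulator -= PHYSICS_DT
        # Blend the last two physics states by the leftover fraction, so the
        # drawn position moves smoothly even though physics is discrete.
        alpha = accumulator / PHYSICS_DT
        rendered.append(previous * (1.0 - alpha) + position * alpha)
    return rendered
```

Note that blending toward the previous state costs up to one physics step of visual latency, which is the usual trade-off of this approach.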

I've played The Floor is Jelly before and after upgrading my machine, and it was silly: it wasn't the same game, because the physics were indeed invoked from the game loop (and so tied to the frame rate) and not from a timer. My old machine was very bad, so it constantly switched between slow motion and too-fast motion, which had a great impact on the gameplay. Now it's just in very fast motion. Anyway, that game is a fine example of how problematic this issue can get (still a cute game though).
–
MasterMasticAug 11 '14 at 18:32

I think there are really 3 options, but you're listing them as only 2:

Option 1

Do nothing. Attempt to update and render at a certain interval, e.g. 60 times per second. If it falls behind, let it and don't worry. The game will slow down into jerky slow motion if the CPU can't keep up. This option won't work at all for real-time multi-user games, but is fine for single-player games and has been used successfully in many games.

Option 2

Use the delta time between each update to vary the movement of objects. Great in theory, especially if nothing in your game accelerates or decelerates but just moves at a constant speed. In practice, many developers implement this badly, and it can lead to inconsistent collision detection and physics. It seems some developers think this method is easier than it is. If you want to use this option you need to step your game up considerably and bring out some big-gun maths and algorithms, for example using a Verlet physics integrator (rather than the standard Euler that most people use) and using rays for collision detection rather than simple Pythagoras distance checks. I asked a question about this on Stack Overflow a while back and got some great answers.
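For illustration, a position (Störmer) Verlet integrator with a fixed step might be sketched like this; the free-fall setup is an arbitrary example, not from the linked answers:

```python
def verlet_steps(x0, v0, accel, dt, steps):
    """Stoermer-Verlet integration with a fixed time step.

    Stores the previous position instead of an explicit velocity; errors
    largely cancel between steps *as long as dt stays constant*, which is
    why mixing Verlet with a variable time step loses its stability.
    """
    x_prev = x0 - v0 * dt  # bootstrap: approximate position one step back
    x = x0
    for _ in range(steps):
        x_next = 2.0 * x - x_prev + accel(x) * dt * dt
        x_prev, x = x, x_next
    return x

# Free fall from rest at height 100 under constant acceleration -10 units/s^2,
# simulated for one second in 1/100 s steps (analytic answer: 95).
final = verlet_steps(x0=100.0, v0=0.0, accel=lambda x: -10.0, dt=0.01, steps=100)
```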

Option 3

Use Gaffer's "fix your time step" approach. Update the game in fixed steps as in option 1, but do so multiple times per frame rendered - based on how much time has elapsed - so that the game logic keeps up with real time, while remaining in discrete steps. This way, easy-to-implement game logic like Euler integrators and simple collision detection still works. You also have the option of interpolating graphical animations based on delta time, but this is only for visual effects, and affects nothing in your core game logic. You can potentially get in trouble if your updates are very intensive - if the updates fall behind, you will need more and more of them to keep up, potentially making your game even less responsive.

Personally, I like Option 1 when I can get away with it and Option 3 when I need to sync to real time. I respect that Option 2 can be a good option when you know what you're doing, but I know my limitations well enough to stay well away from it.

Regarding option 2: I am not sure a raycast can ever be faster than Pythagoras distance checks, except if you are very brute-force in your application of Pythagoras; but a raycast will also be very expensive if you don't add a broadphase.
–
KajAug 14 '10 at 4:13


If you use Verlet with unequal time steps you are throwing out the baby with the bathwater. The reason Verlet is as stable as it is, is because errors cancel out in subsequent time steps. If the time steps are not equal, this does not happen and you are back in exploding physics land.
–
drxzclAug 17 '10 at 11:15

The biggest pro of this, in my opinion, is one that you mentioned: it makes all of your game code calculations so much simpler, because you don't have to include that time variable all over the place.

This is the most common way of doing a game loop. However, it's not great for battery life when working with mobile devices.
–
knight666Jul 26 '10 at 15:07

@knight666: are you suggesting that with a longer timestep, the reduced number of iterations will save battery life?
–
falstroJul 26 '10 at 15:09

That's still a variable update -- the update delta changes based on how long the frame took to render, rather than being some fixed value (e.g. 1/30th of a second).
–
Dennis MunsieJul 26 '10 at 15:11


@Dennis: as I understand it, the Update function is called with a fixed delta...
–
RCIXJul 27 '10 at 2:45


@knight666 Uh - how do you figure that? If you have vsync on and are not stuttering - these methods should be identical! And if you have vsync off you're updating more often than you need to and probably wasting CPU (and therefore battery) by not letting it idle!
–
Andrew RussellJul 27 '10 at 5:21

There's another option - decouple the game update and the physics update. Trying to tailor the physics engine to the game timestep leads to problems if you fix your timestep (it can spin out of control, because integration needs more timesteps, which take more time, which requires yet more timesteps), or, if you make it variable, to wonky physics.

The solution that I see a lot is to have the physics run on a fixed timestep, in a different thread (on a different core). The game interpolates or extrapolates given the two most recent valid frames it can grab. Interpolation adds some lag, extrapolation adds some uncertainty, but your physics will be stable and not spin your timestep out of control.

This is not trivial to implement, but might prove future-proof.
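A rough sketch of the idea (plain Python threading with invented names, standing in for a real engine's job system; the physics itself is a trivial stand-in):

```python
import threading
import time

PHYSICS_DT = 1.0 / 100.0  # fixed physics step, regardless of frame rate

class PhysicsThread:
    """Runs fixed-step physics on its own thread and publishes snapshots."""

    def __init__(self):
        self.lock = threading.Lock()
        self.snapshots = [(0.0, 0.0), (0.0, 0.0)]  # (sim_time, position) pairs
        self.running = True

    def run(self):
        sim_time, position = 0.0, 0.0
        while self.running:
            sim_time += PHYSICS_DT
            position += 10.0 * PHYSICS_DT   # trivial stand-in physics
            with self.lock:
                # Keep only the two most recent valid frames.
                self.snapshots = [self.snapshots[-1], (sim_time, position)]
            time.sleep(PHYSICS_DT)          # crude pacing; a real loop would self-correct

    def sample(self, render_time):
        # Interpolate between the two most recent physics frames; this adds
        # up to one physics step of lag but never destabilises the physics.
        with self.lock:
            (t0, p0), (t1, p1) = self.snapshots
        if t1 == t0:
            return p1
        alpha = min(max((render_time - t0) / (t1 - t0), 0.0), 1.0)
        return p0 * (1.0 - alpha) + p1 * alpha
```

The render thread would call `sample()` each frame with its own clock; clamping `alpha` means a stalled physics thread simply freezes the picture instead of extrapolating into nonsense.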

Personally, I use a variation of variable time-step (which is sort of a hybrid of fixed and variable I think). I stress tested this timing system in several ways, and I find myself using it for many projects. Do I recommend it for everything? Probably not.

My game loops calculate the amount of frames to update by (let's call this F), and then perform F discrete logic updates. Every logic update assumes a constant unit of time (which is often 1/100th of a second in my games). Each update is performed in sequence until all F discrete logic updates are performed.

Why discrete updates in logic steps? Well, if you try to use continuous steps, you suddenly get physics glitches, because the calculated speeds and distances to travel are multiplied by a huge value of F.

A poor implementation of this would just do F = (current time - last frame time) / logic step length. But if calculations get too far behind (sometimes due to circumstances beyond your control, like another process stealing CPU time), you will quickly see awful skipping. Quickly, that stable FPS you tried to maintain becomes SPF.

In my game, I allow a "smooth" (sort of) slowdown by restricting the amount of logic catchup possible between two draws. I do this by clamping: F = min(F, MAX_FRAME_DELTA), where MAX_FRAME_DELTA is usually 2 or 3 steps (2/100 s or 3/100 s). So instead of skipping frames when too far behind in game logic, the game discards any massive frame loss (which slows things down), recovers a few frames, draws, and tries again.

By doing this, I also make sure that the player controls keep in closer sync with what is actually shown on the screen.

Final product pseudocode is something like this (delta is F mentioned earlier):
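A Python-flavoured sketch of the loop described above (the 1/100 s logic step is from the description; MAX_FRAME_DELTA is assumed to be 3 steps, and the `state` dict is an invented stand-in for real game entities):

```python
LOGIC_DT = 1.0 / 100.0  # each discrete update assumes this fixed time slice
MAX_FRAME_DELTA = 3     # never catch up more than 3 logic steps per draw

def frames_to_update(elapsed):
    # F: how many whole logic steps fit in the time since the last draw,
    # clamped so a massive stall slows the game down instead of skipping.
    f = int(elapsed / LOGIC_DT)
    return min(f, MAX_FRAME_DELTA)

def run_frame(state, elapsed):
    for _ in range(frames_to_update(elapsed)):
        # One discrete logic update with a constant unit of time.
        state["position"] += state["velocity"] * LOGIC_DT
    # draw(state) would go here
    return state
```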

This sort of updating is not suitable for everything, but for arcade style games, I would much rather see the game slow down because there is a lot of stuff going on than miss frames and lose player control. I also prefer this to other variable-time step approaches which end up having irreproducible glitches caused by frame loss.

Strongly agree with that last point; in pretty much all games, input should 'slow down' when the framerate drops. Even though this isn't possible in some games (e.g. multiplayer), it would still be better if it were possible. :P It simply feels better than having a long frame and then having the game world 'jump' to the 'correct' state.
–
IpsquiggleSep 8 '10 at 17:44

Without fixed hardware like an arcade machine, having arcade games slow the simulation when the hardware can't keep up makes playing on a slower machine cheating.
–
user744Sep 9 '10 at 9:11

Joe that only matters if we care about "cheating". Most modern games aren't really about competition between players, just making a fun experience.
–
IainSep 9 '10 at 9:36


Iain, we are talking specifically about arcade-style games here, which are traditionally high-score-list / leaderboard driven. I play a ton of shmups, and I know if I found someone was posting scores with artificial slowdown to leaderboards I'd want their scores wiped.
–
user744Sep 10 '10 at 6:17

Fixed time step is useful when taking into account floating point accuracy and to make updates consistent.

It's a simple piece of code so it would be useful to try it out and see if it works for your game.

now = currentTime
frameTime = now - lastTimeStamp   // time since last render()
while (frameTime >= updateTime)
    update(updateTime)            // update enough times to catch up,
    frameTime -= updateTime       // possibly leaving a small remainder
                                  // of time for the next frame
lastTimeStamp = now - frameTime   // set the last timestamp to now, but
                                  // subtract the remaining frame time
                                  // so the game will still catch up
                                  // on those remaining few milliseconds
render()

The main issue with using a fixed time step is that players with a fast computer won't be able to make use of the speed. Rendering at 100fps when the game is updated only at 30fps is the same as just rendering at 30fps.

That being said, it may be possible to use more than one fixed time step: 60fps to update trivial objects (such as UI or animated sprites), 30fps to update non-trivial systems (such as physics), and even slower timers to do behind-the-scenes management such as deleting unused objects and resources.
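One way several fixed rates could coexist is to give each system its own accumulator (a sketch with invented names, not from any particular engine):

```python
class FixedRateSystem:
    """Accumulates real elapsed time and fires its callback at a fixed rate."""

    def __init__(self, rate_hz, callback):
        self.dt = 1.0 / rate_hz
        self.callback = callback
        self.accumulator = 0.0

    def advance(self, elapsed):
        # Each system keeps its own accumulator, so the UI can tick at
        # 60 Hz while physics ticks at 30 Hz, independently.
        self.accumulator += elapsed
        while self.accumulator >= self.dt:
            self.callback(self.dt)
            self.accumulator -= self.dt

counts = {"ui": 0, "physics": 0}
systems = [
    FixedRateSystem(60.0, lambda dt: counts.update(ui=counts["ui"] + 1)),
    FixedRateSystem(30.0, lambda dt: counts.update(physics=counts["physics"] + 1)),
]
for _ in range(60):  # drive both systems with one second of 60 fps frames
    for system in systems:
        system.advance(1.0 / 60.0)
```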

This solution doesn't apply to everything, but there is another level of variable timestep -- variable timestep for each object in the world.

This seems complicated, and it can be, but think of it as modeling your game as a discrete event simulation. Each player movement can be represented as an event which starts when the motion starts, and ends when the motion ends. If there is any interaction that requires the event be split (a collision for instance) the event is canceled and another event pushed onto the event queue (which is probably a priority queue sorted by event end time).

Rendering is totally detached from the event queue. The display engine interpolates points between event start/end times as necessary, and can be as accurate or as sloppy in this estimate as need be.

To see a complex implementation of this model, see the space simulator EXOFLIGHT. It uses a different execution model from most flight simulators -- an event-based model, rather than the traditional fixed time-slice model. The basic main loop of this type of simulation looks like this, in pseudo-code:
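A toy version of such an event-driven main loop, using a priority queue keyed by event end time (this is a generic sketch, not EXOFLIGHT's actual code):

```python
import heapq

def run_events(initial_events, until):
    """Pop events in end-time order; handlers may schedule follow-up events.

    Each event is a (end_time, name, handler) tuple; the handler returns a
    list of new events, mirroring how a cancelled motion pushes a
    replacement event onto the queue.
    """
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue and queue[0][0] <= until:
        end_time, name, handler = heapq.heappop(queue)
        log.append((end_time, name))
        for follow_up in handler(end_time):
            heapq.heappush(queue, follow_up)
    return log
```

Rendering would run independently, interpolating object positions between each event's start and end times, which is what makes arbitrary time acceleration cheap in this model.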

The main reason for using one in a space simulator is the necessity of providing arbitrary time-acceleration without loss of accuracy. Some missions in EXOFLIGHT may take game-years to complete, and even a 32x acceleration option would be insufficient. You'd need over 1,000,000x acceleration for a usable sim, which is difficult to do in a time-slice model. With the event-based model, we get arbitrary time rates, from 1 s = 7 ms to 1 s = 1 yr.

Changing the time rate does not change the behavior of the sim, which is a critical feature. If there is not enough CPU power available to run the simulator at the desired rate, events will stack up and we might limit UI refreshing until the event queue is cleared. Similarly, we can fast-forward the sim as much as we want and be sure we're neither wasting CPU nor sacrificing accuracy.

So summing up: We can model one vehicle in a long, leisurely orbit (using Runge-Kutta integration) and another vehicle simultaneously bouncing along the ground -- both vehicles will be simulated at the appropriate accuracy since we do not have a global timestep.

Cons: Complexity, and lack of any off-the-shelf physics engines which support this model :)

On top of what you've already stated, it may come down to the feel you want your game to have. Unless you can guarantee a constant frame rate, you're likely to have slowdown somewhere, and fixed and variable time steps will look very different when it happens. Fixed steps have the effect of your game going into slow motion for a while, which can sometimes be the intended effect (look at an old-school-style shooter like Ikaruga, where massive explosions cause slowdown after beating a boss). Variable timesteps keep things moving at the correct speed in terms of time, but you may see sudden changes in position, which can make it hard for the player to perform actions accurately.

I can't really see that a fixed time step will make things easier over a network; the machines would all be slightly out of sync to begin with, and slowdown on one machine but not another would push things further out of sync.

I've always leant towards the variable approach personally, but those articles have some interesting things to think about. I've still found fixed steps quite common, though, especially on consoles, where people think of the framerate as a constant 60fps compared to the very high rates achievable on PC.

You should definitely read the Gaffer on games link in the original post. I don't think this is a bad answer per se, so I won't down vote it, but I don't agree with any of your arguments.
–
falstroJul 26 '10 at 14:15

I don't think slowdown in a game as a result of a fixed timestep can ever be intentional, because it comes from a lack of control. Lack of control is by definition surrendering to chance, and thus can't be intentional. It can happen to be what you had in mind, but that's as far as I'd like to go on that. As for fixed timesteps in networking, there's a definite plus there, as keeping physics engines on two different machines in sync without the same timestep is pretty much impossible. Since the only option to sync then would be to send all entity transforms, that would be way too bandwidth-heavy.
–
KajAug 14 '10 at 5:36


Your game must be multithreaded in order to achieve a consistent timestep/framerate.
Physics, UI and rendering must be separated into dedicated threads. It is a hideous PITA to sync them, but the result is that mirror-smooth rendering you want (esp. for VR).

Mobile games are especially challenging because embedded CPUs and GPUs have limited performance. Use GLSL (shader language) to offload as much work from the CPU as possible, but be aware that passing parameters to the GPU consumes bus resources.

Always keep your framerate displayed during development. The real game is to keep it fixed at 60fps. This is the native sync rate for most screens and also for most eyeballs.

The framework you are using should be able to notify you of a sync request, or you can use a timer. Do not insert a sleep/wait delay to achieve this - even slight variations are noticeable.