I've got a problem: I'm currently using a single self-made animator class (like the FPSAnimator) to update the world and render the scene. This works almost fine. I call drawable.display() followed by drawable.swapBuffers() to request a new frame. The time needed for displaying is measured and turned into a time-scale factor for the world update. I want to draw 60 frames per second, which means I have 16.6 milliseconds to draw a single frame. If rendering needs more time, the time-scale factor grows and the world update advances the game time a bit more; if less time was needed, the thread may sleep for a while.

The problem comes from the display and swapBuffers methods, which are not executed when I call them, but whenever the driver decides to. The little image shows an in-game GUI. The first line shows the current FPS and a graph of the last 10 seconds. The second line shows the measured mspf (milliseconds per frame): the green graph shows the mean values over 10 seconds and the dotted white graph the last 30 values (half a second's worth). The last line shows the tsf (time-scale factor) and the difference between the last two mean values. The dotted lines show my problem: most display/swapBuffers calls take nearly 0 ms, while some take 16.6 ms. One might assume this comes from the vsync option (setSwapInterval), but it looks like this in both cases.

It seems the interface/device buffers the render requests and waits some time before rendering all buffered requests at once. But how can you get a steady world update with this? The FPSAnimator class produces a steady framerate and render-call rate, but when using drawable.repaint() with auto-swap-buffer mode you have no way to measure the time passed or the framerate actually reached. If you want to reach 60 fps and the machine can only produce 10 fps, the world's time passes in slow motion...

Are you storing a delta time from your last update and using it to control the "amount" of update that occurs? I.e. if you have a character that moves at 100 pixels/second and your delta time is 10 milliseconds, then you want to move him (10 / 1000) * 100, or simplified (10 * 100) / 1000, i.e. 1 pixel. Then if the delta time fluctuates, you move it a correspondingly smaller or larger amount, giving the appearance of fluid motion.

You can apply this technique to all draw calls in order to get a similar smoothness.

Also, to get delta time, you pretty much want to have a target time stored (in the above case, that's the 1000), then in your update you say:
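The post's actual code appears to be missing here; a minimal sketch of the idea might look like this (the class, field, and method names are illustrative, not from the original post):

```java
// Sketch of delta-time based movement: measure the elapsed time per
// frame, then scale the movement by it.
public class DeltaMover {
    static final double PIXELS_PER_SECOND = 100.0; // the example speed above

    double x = 0.0;
    private long lastTime = System.currentTimeMillis();

    // Elapsed milliseconds since the previous call.
    long tickDelta() {
        long now = System.currentTimeMillis();
        long delta = now - lastTime;
        lastTime = now;
        return delta;
    }

    // Advance by speed scaled with the elapsed time:
    // 10 ms at 100 px/s -> (10 * 100) / 1000 = 1 pixel.
    void update(long deltaMs) {
        x += (deltaMs * PIXELS_PER_SECOND) / 1000.0;
    }
}
```

Each frame you would call `update(tickDelta())`, so a frame that took twice as long moves the character twice as far.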

The delta time is not usable. That's my point. Take an empty scene. Take the default FPSAnimator with a scheduled animation rate of 60 fps. Take your lines of code and just add a System.out.println(delta);.

I got this (for example): 16 16 16 15 16 31 16 16 15 32 16 0 15 15 16 16

And that's the problem. Taking a mean value of the deltas is incorrect too: measured over a second the render rate is still okay, but you cannot average over a whole second since that results in slow update-time corrections. Currently I do the following, which works quite fine: take the last 5 deltas, sort them in a list, take the middle one.
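That median-of-five smoothing can be sketched as follows (class and method names are mine; note that the first few calls include zero-initialized history entries):

```java
import java.util.Arrays;

// Median filter over the last five measured frame deltas:
// record each delta, sort a copy of the history, take the middle.
public class DeltaFilter {
    private final long[] history = new long[5]; // ring buffer of recent deltas
    private int index = 0;

    // Record a new delta and return the median of the last five.
    public long smooth(long deltaMs) {
        history[index] = deltaMs;
        index = (index + 1) % history.length;
        long[] sorted = history.clone();
        Arrays.sort(sorted);
        return sorted[sorted.length / 2]; // the middle element
    }
}
```

The median discards the outliers (the 0 ms and 32 ms spikes from the list above) while a mean would let them drag the result around.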

(By the way: I have to correct myself. I took another look at the source of the animator classes and saw that drawable.display() is used with autoSwapBufferMode(true).)

And another "by the way": this seems to occur only on Windows XP; Windows Vista produces constant values. Linux and Mac OS X systems also seem to have this problem. -> All tested only with NVIDIA cards. Oh, and one more "by the way": ATI cards sometimes render only 30 frames while the display method is called 60 times...

You should try to decouple your game update from your graphics update. Make it so that your game updates constantly at 60 updates/sec. After updating the game state, check to see if there's enough time "left" to finish rendering a frame. This way the game will always be at 60 fps and if a computer can't run it as well, they will only have laggy graphics instead of a laggy game state.
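The decoupling described above is the usual fixed-timestep accumulator loop; a minimal sketch (the `updateWorld`/`renderFrame` callbacks and the `tick` signature are illustrative, not from this thread):

```java
// Decoupled loop: the world always ticks in fixed 1/60 s steps,
// rendering happens once per loop pass with whatever time is left.
public class GameLoop {
    static final long STEP_MS = 1000 / 60; // fixed timestep, ~16 ms

    long accumulator = 0;
    long lastTime = System.currentTimeMillis();

    // Call once per frame; 'now' is the current time in milliseconds.
    void tick(long now, Runnable updateWorld, Runnable renderFrame) {
        accumulator += now - lastTime;
        lastTime = now;
        // Catch up on game state in fixed steps, even if rendering lags.
        while (accumulator >= STEP_MS) {
            updateWorld.run();
            accumulator -= STEP_MS;
        }
        renderFrame.run(); // graphics may drop frames; game state never does
    }
}
```

On a slow machine the while loop runs several updates per rendered frame, so the game state stays at 60 updates/sec and only the graphics become laggy.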

I do something similar. I do not accumulate time calculated by the "animating engine part", but I calculate a factor for scaling the "dt". That way I only need one step to animate the world, which is fine since I have no physics engine running. Making one animation update followed by a render call is just fine if you keep track of the time passed and offset the animations accordingly. The problem comes from the devices or interfaces themselves. If the graphics hardware buffers images, there is no chance of making non-jerky animations. Take a look at the timings two posts back: if a complete engine tick happened in less than a millisecond, the world should not update; but if the graphics hardware delayed the buffer swapping or something else, the world should have updated. The problem is measuring the time passed. The graph with the white dots shows it: some frames take time, some don't (while the scene does not change). So the world does not update for about 5 frames, and afterwards advances the time of 10 frames within the next 5 frames...

I am not sure about this, but I have tried a lot of things. If I somehow start measuring time and consider the time passed in the world update and the render/buffer-swap call, all animations start to show this subtle visual non-smoothness. If I just say "hey, vsync is on, 60 fps is okay, do the world update", then everything is smooth.

A fixed timestep is a major advantage when you're dealing with drag and bounces. It ensures your game will be predictable, while with a dynamic dt the gameplay will depend on your framerate. Just like in some (old) arcade games where you can only make a certain jump at a certain framerate.

Hi, appreciate more people! Σ ♥ = ¾ Learn how to award medals... and work your way up the social rankings!

There should be no difference when using a fixed dt. If the framerate is too low, you can have 100 animation updates without a render; you won't see the jump. Player presses "Up"... tick 0: start jump... tick 20: animated model is in the air... tick 40: animated model is back on the ground... tick... tick... render. The problem with old arcade games was that absolutely no dt was used: update -> render -> update -> render. Old hardware, slow unplayable game; new hardware, extremely fast unplayable game. From the point where you start considering the time passed, you don't have this problem. A fixed dt is as good as a dynamic dt. Integrating a physics engine takes this to a new level... but I have no physics engine, which is why I want to make as few world updates as possible. And that is one update per rendering.

Quote

There should be no difference when using a fixed dt. If the framerate is too low, you can have 100 animation updates without a render; you won't see the jump.

You might not see it, but you'd be at the other end of the gap. Guaranteed! With a variable dt you might or might not have jumped the gap fully; you might land a few pixels farther or nearer.

Handling acceleration and/or collision with a variable dt simply becomes chaotic eventually. With a fixed dt it is deterministic. That is especially important in multiplayer games, but even in singleplayer the framerate won't affect the gameplay, which matters in certain genres.


I agree that a fixed dt is more robust than a dynamic dt. Maybe a physics engine will be used in the project someday. That's why I will change my code to use the algorithm provided above. But some additional changes have to be made for this to work, and making them will take some time (since I have other things to finish first). I will post again when I have finished them and tell you about the result.

But I don't think this will fix my problem. If it doesn't, I will provide a little applet (about 100 lines of code) which demonstrates the problem. Because the problem does not come from using a dynamic dt, but from the devices delaying images. Currently I guess that using swapBuffers() is the problematic part; using autoSwapBuffers does not seem to produce this error.

You see nearly nothing, just a 10x10 px rectangle in the upper-left corner of the page. The interesting part (the deltaTime between two calls) is printed to the Java console.

While nothing is drawn and nothing difficult happens, the deltaTime varies from 15 milliseconds up to 32 milliseconds, even though the FPSAnimator class is set up properly and calls the display function 60 times per second. Just taking the delta timings and updating the world produces jerky animations, because there are no frames that actually needed the additional time; the buffer swapping just seems to delay itself. As a result, there are frames that jump the in-game time forward.

No, I didn't check it. I thought that is what the function does. But you actually seem to be right. The API says:

Quote

Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.

I will try nanoTime tomorrow and tell you about the results. But a few comments earlier someone said that there is a problem with nanoTime on AMD CPUs...
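Swapping System.currentTimeMillis() for System.nanoTime() in the delta measurement might look like this (a sketch; the class and method names are mine):

```java
// Frame-delta measurement using System.nanoTime(), which has much
// finer granularity than currentTimeMillis() on most operating systems.
public class NanoClock {
    private long last = System.nanoTime();

    // Elapsed time since the previous call, converted to milliseconds.
    public double deltaMs() {
        long now = System.nanoTime();
        double delta = (now - last) / 1e6; // nanoseconds -> milliseconds
        last = now;
        return delta;
    }
}
```

Unlike currentTimeMillis(), nanoTime() has an arbitrary origin, so only differences between two calls are meaningful.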

You can make a dedicated thread that writes its time into a globally accessible variable. That way you won't suffer from different cores with different times, and you significantly reduce your calls to nanoTime (to a fixed frequency).
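The dedicated timer thread described above could be sketched like this (names are illustrative; the 1 ms sampling interval is an assumption):

```java
// One daemon thread samples nanoTime at a fixed rate and publishes
// it through a volatile field; game code reads the field instead of
// calling nanoTime itself, so all readers see one consistent clock.
public class SharedClock {
    public static volatile long nowNanos = System.nanoTime();

    public static void start() {
        Thread t = new Thread(() -> {
            while (true) {
                nowNanos = System.nanoTime();
                try {
                    Thread.sleep(1); // ~1 kHz update frequency
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        t.setDaemon(true); // don't keep the JVM alive for the clock
        t.start();
    }
}
```

The volatile keyword guarantees that reads of nowNanos from other threads see the timer thread's latest write.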

