The basic idea is to call clientTick() while there is less "unprocessed time" in passedTime than MILLISECONDS_PER_GAMETICK, then call gameTick() until passedTime is lower than MILLISECONDS_PER_GAMETICK again.
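The loop described here might be sketched like this (a minimal sketch assuming 25 ticks per second; the class name, the tick counters, and the stub method bodies are made up for illustration and are not from the original code):

```java
// Sketch of the fixed-timestep loop: gameTick() runs at a fixed rate,
// clientTick() runs as often as possible with an interpolation factor.
public class GameLoop {
    static final int GAMETICKS_PER_SECOND = 25;
    static final int MILLISECONDS_PER_GAMETICK = 1000 / GAMETICKS_PER_SECOND;

    long lastTime = System.currentTimeMillis();
    long passedTime = 0; // the "unprocessed time"
    int gameTicks = 0;   // counters only for demonstration
    int clientTicks = 0;

    void gameTick() { gameTicks++; }                        // physics, networking
    void clientTick(float interpolation) { clientTicks++; } // rendering, sound

    // One iteration of the outer loop, driven by elapsed wall-clock time.
    void runOnce(long now) {
        passedTime += now - lastTime;
        lastTime = now;
        // Catch up: run gameTick() once per MILLISECONDS_PER_GAMETICK owed.
        while (passedTime >= MILLISECONDS_PER_GAMETICK) {
            gameTick();
            passedTime -= MILLISECONDS_PER_GAMETICK;
        }
        // Render, passing how far along we are between two game ticks.
        clientTick((float) passedTime / MILLISECONDS_PER_GAMETICK);
    }
}
```

In a real game the outer `runOnce` call would sit inside `while (running) { runOnce(System.currentTimeMillis()); }`.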

This ensures that gameTick() will be called exactly GAMETICKS_PER_SECOND times per second on average, and that clientTick() will be called as often as possible.

gameTick() should manage all physics and other things that should happen at a fixed speed. clientTick() should handle all the other things that should happen as often as possible, like rendering and sound updating. I tend to put all networking in gameTick() as well.

If we just let the system be as it is now, we'd get lots of fps, but the camera position would only update 25 times per second, so it wouldn't really work. To fix this, we interpolate the values in clientTick(). That means that clientTick() will need to know how far along we are between two gameTicks(), and the previous values of everything that should be interpolated.

We also need to change the definition of clientTick() to clientTick(float interpolation). To calculate the interpolation in the main loop, we just have to divide the "unprocessed time" by MILLISECONDS_PER_GAMETICK.
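As a tiny illustration (the helper class and method names are made up), the interpolation parameter is typically used for plain linear interpolation between the value the previous gameTick() produced and the current one:

```java
// Linear interpolation as clientTick(float interpolation) would use it:
// 0 gives the older value, 1 gives the newer one, 0.5 the midpoint.
public class Lerp {
    static float interpolate(float previous, float current, float interpolation) {
        return previous + (current - previous) * interpolation;
    }
}
```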

Another question is why gameTick (happens every so many milliseconds) is called in an inner while loop, and clientTick(interpolation) is called on the outside. And what is the interpolation used for in clientTick?

Thanks.

-Nick

Edit: I think I may understand it. It only goes into the nested while loop if enough milliseconds have passed, right? And if this is correct, couldn't it be replaced by an if statement?

It has to be a while loop in case clientTick() ever takes longer than MILLISECONDS_PER_GAMETICK. If your game suddenly pauses for 10 seconds, and you've got 10 ticks per second, the while loop would "catch up" by running the game-logic tick 100 times, but the if statement would lag behind for a long time.

IFloat is a class that interpolates between two floats. Most of the time, clientTick() will get called far more often than gameTick() gets called, so we have to interpolate between the values the last call to gameTick() produced, and the ones before that.
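The actual IFloat code isn't shown in this thread, but a helper like that might look something like this hypothetical sketch: it remembers the last two gameTick() values and interpolates between them for rendering.

```java
// Hypothetical sketch of an IFloat-style helper (not the real IFloat source).
public class IFloat {
    private float previous;
    private float current;

    public IFloat(float value) {
        previous = current = value;
    }

    // Called once per gameTick(): shift the current value into history.
    public void set(float value) {
        previous = current;
        current = value;
    }

    // Called from clientTick(): 0 returns the older value, 1 the newer one.
    public float getValue(float interpolation) {
        return previous + (current - previous) * interpolation;
    }
}
```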

Am I right in saying that you are doing all of this in one thread? That is to say, there is a clock running in its own thread (GAGE?) and giving you ticks, and you're just reacting at the appropriate times to do rendering and ALSO to do your game management?

What happens on the lower end if your painting is taking too long and not leaving enough time for your management tasks (gameTick() if I'm not mistaken is where you have all that code)?

Yes, it's all in one thread. If clientTick() (the rendering method) takes too long, gameTick() would execute several times in a row to catch up.

I really don't believe in multithreading applications that don't use a lot of blocking methods, as you just end up with a bunch of synchronisation and duplication of data, with no performance gain whatsoever.

Quote

I really don't believe in multithreading applications that don't use a lot of blocking methods, as you just end up with a bunch of synchronisation and duplication of data, with no performance gain whatsoever.

Unless you are running on a multi-processor system, in which case that's exactly when you want to have multiple threads, since it implies that each thread has useful work to do.

Yes, AI is something I think can benefit from multi-threading and multiple processors. Chess being one example, but taken further it might become practical at some point to train a neural net with a background thread so your game AI adapts to the users playing style. I don't think we are at that point yet though.

Markus, I have tried your loop and it works well, but I do get image stutter every now and then. Any way to fine-tune it? I am guessing that I am drawing a lot of frames but most of them go to the garbage bin. Is there a way to limit the fps in your loop?

You can limit the framerate by just adding a Thread.sleep() in the main loop, or if you want to be more scientific you can time the clientTick() call and sleep t-n milliseconds, where t is 1000/<desired fps>, and n is the number of milliseconds the clientTick() call took.

You might want to use your timer's sleep methods if it has one instead of the Thread sleep method, though.
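A sketch of that frame limiter (the class, method, and parameter names are made up for illustration): the amount to sleep is the frame budget (1000 divided by the desired fps) minus the time the frame actually took.

```java
// Sleep away whatever is left of the frame budget after rendering.
public class FrameLimiter {
    // Returns the computed sleep amount in ms (may be <= 0 if the frame ran long).
    static long limit(long frameStart, int desiredFps) {
        long budget = 1000 / desiredFps;                      // t: ms per frame
        long taken = System.currentTimeMillis() - frameStart; // n: ms the frame took
        long sleep = budget - taken;
        if (sleep > 0) {
            try {
                Thread.sleep(sleep);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
            }
        }
        return sleep;
    }
}
```

In the main loop you would record the time before calling clientTick() and pass it to limit() afterwards.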

I think the gameTick pattern is fine and good. I was just pointing out that the 'tween code will need a higher order of "awareness" of the data it needs to interpolate, so as to use the correct process. Or, from another point of view, each data class can implement the 'tween functionality, and the main loop and game control can simply call interp on everything.

However, in both cases, I think a lot of context-specific optimizations will be ugly/difficult. For example, in character animation, the animation data is compressed and decompressed on the fly. The decompression code typically will accept a time value in an animation as the target for matrix generation. Just like other types of media compression, this may be a directional decompress, i.e. forward only, and may take previously generated values as additional input.

Anyway, the point is, global, tick-based control is bound to have its pros and cons, just as the all-in-one synchronized update does. Perhaps your particular app would be the deciding factor.

Yes, definitely. One thing this system handles poorly is something as simple as a ball bouncing against a surface. If the actual "bounce" is between two game ticks, the linear interpolation will cause the ball to magically hover above the ground for a full gametick instead of bouncing on the surface. This could be solved by adding interpolation keyframes, but then we're deep in gc hell.

Also- I understand what vertical synch is all about, but I don't yet see how to "do it" with pure Java. (I do see how to do it via LWJGL- very nice btw). I currently get big time flashing- which is what I'd expect if my buffer.show() isn't synched with monitor's vsynch and their respective rates aren't perfectly in synch. But I'm a big time newb- maybe I'm missing something.

Welcome. Your question seems a bit off topic for this thread, and you might want to post some code to make yourself a bit more clear.

Anyway, it seems like the flickering has something to do with buffering (you are double or triple buffering, aren't you?) rather than vertical sync (not syncing to the monitor will not result in flicker, but in 'tearing'). I think using BufferStrategy will sync to the monitor's refresh rate, unless this has been disabled in your video driver.

In Markus_Persson's original code, there was a timer.getTimeInMillis(). I accidentally used the AdvancedTimer's getClockTicks(), which is NOT in milliseconds. Things behaved as expected, with no more sleeping either, when I slipped in System.currentTimeMillis() temporarily. I need to do some math in order to correctly use the AdvancedTimer. Thanks.

Now if gameTick() gets called once every second, the player would move exactly one "unit" per second, but clientTick() would still make the motion smooth.

What do you mean by "unit"? Where is the unit defined? Is it the "xPos+=1" or the "xPlayerPos.getValue(1)"?

I'd like to speed up the movement of my object. If I try e.g. "xPos+=10", the object moves faster, but it hops between several pixels (according to the interpolation value) when there is no further key event.

Well, "unit" isn't an actual variable or anything. It's just the distance you want to move in a given amount of time. That could be 200 pixels per second or 1 inch per minute, etc. So, in the example, clientTick is the actual framerate you get, which may or may not fluctuate wildly from one second to the next, while gameTick is sort of the theoretical, constant framerate that you want to achieve, a.k.a. the "unit" rate. The interpolation mechanism, then, works as an intermediary between the actual and theoretical framerate to keep the sprite moving at constant "unit" increments. Thus, if you want ALL of your movement to appear to move faster, you would increase your "unit" value rather than the tick rate. Anyway, that's my current understanding of it...
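A sketch of that "unit" idea (the class and field names here are made up for the example): the speed is defined per second, and each gameTick() moves one tick's worth of it, so you change the speed value, not the tick rate, to make things move faster.

```java
// Movement expressed in units per second, applied per fixed game tick.
public class Mover {
    static final int GAMETICKS_PER_SECOND = 25;
    float xPos = 0;
    float speedPerSecond = 200; // e.g. 200 pixels per second

    void gameTick() {
        xPos += speedPerSecond / GAMETICKS_PER_SECOND; // 8 pixels per tick
    }
}
```

After one second's worth of ticks, xPos has advanced exactly speedPerSecond units, regardless of how often clientTick() rendered in between.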

At the beginning of every T cycle your monitor will show a new picture from VRAM. If your FPS exceeds the monitor refresh rate, guess what happens: if you update VRAM in the middle of the T cycle, you will not see any new picture on the monitor. Simple, isn't it!

Quote

Also- I understand what vertical synch is all about, but I don't yet see how to "do it" with pure Java. (I do see how to do it via LWJGL- very nice btw). I currently get big time flashing- which is what I'd expect if my buffer.show() isn't synched with monitor's vsynch and their respective rates aren't perfectly in synch. But I'm a big time newb- maybe I'm missing something.

The VSync marks the beginning of a new T cycle; the buffer show() method waits until a new T cycle is started.

Go to your local bookstore and buy some books about game physics, or go to a company where you can learn the profession of professional game development. It's not easy, but after 5 or 6 years you can program your own game.
