Anything less than about 35 and I start to dislike the frame rate. If you're trying to dynamically adjust quality or cap CPU usage, why not let the user set their preference? Worse than a slow frame rate is slow input processing. I think many people overstate the importance of a really fast frame rate because often input processing is tied to each frame. The worst is a slow game where the player can build up a queue of inputs, which creates increasing lag as the user tries to get control over his character again.

Don't cap it or anything freaky like that, just do time-based movement instead of tick-based movement. I really can't see why anyone would prefer the latter in a modern game (consoles being an obvious exception to this rule).
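Roughly, the difference looks like this in code (a minimal sketch; the player field and speed value are made up for illustration, not from any particular engine):

[code]
// Time-based movement: speed is expressed per second, and each update
// scales movement by however much real time has actually elapsed.
public class TimeBasedLoop {
    static float playerX = 0f;
    static final float SPEED = 120f;   // units per second, not per tick

    public static void main(String[] args) throws InterruptedException {
        long last = System.nanoTime();
        while (true) {
            long now = System.nanoTime();
            float dt = (now - last) / 1e9f;   // elapsed time in seconds
            last = now;

            playerX += SPEED * dt;            // same apparent speed at any frame rate

            // rendering would happen here
            Thread.sleep(5);                  // stand-in for the rest of the frame's work
        }
    }
}
[/code]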

The only problems this can cause are slightly more complicated collision logic, and the annoyance that GC glitches can cause long delays between updates. But both can be fixed by splitting up large update times into several smaller time slices, although really you should address the root cause - get better collision detection, and stop discarding objects on the fly...
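For what it's worth, the time-slicing fix looks something like this (a sketch; the 1/30-second maximum step is an arbitrary choice):

[code]
// Clamp big deltas (e.g. after a GC pause) by feeding the game several
// smaller slices instead of one huge step that tunnels through walls.
static final float MAX_STEP = 1f / 30f;   // largest slice the physics ever sees

static void update(float dt) {
    while (dt > 0f) {
        float step = Math.min(dt, MAX_STEP);
        integrate(step);   // movement + collision only ever sees a small step
        dt -= step;
    }
}

static void integrate(float step) {
    // move objects, run collision detection, etc.
}
[/code]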

Aw come on Cas, you can do better than that. Time-based allows smooth fps for machines that can handle it, yet gives slower machines a chance to run at full speed. If you want the same with fixed-fps logic but unlimited graphical fps, then you've got to do annoying interpolation between logic updates (case in point: Quake 1, with logic @ 10fps, and godawful animation).
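(For anyone who hasn't seen it, the interpolation being grumbled about amounts to roughly this; the names are invented for illustration:)

[code]
// Render-time interpolation between two fixed logic updates: prevX/currX are
// captured at consecutive logic ticks, and alpha (0..1) is how far through
// the current tick the frame is being drawn.
static float renderX(float prevX, float currX, float alpha) {
    return prevX + (currX - prevX) * alpha;
}
[/code]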

Fixed update also complicates networking, since you've got to try and keep the ticks in sync more than with time-based approaches.

2D vs. 3D should make no difference; it's just that with 2D these hacks will be less noticeable.

Well, apart from being a non-portable solution (i.e. it's a hack), it does in fact cap the frame rate, which is all well and good. But what you do with the elapsed time from then on is where the mistake is made. I strongly urge counting elapsed frames as per normal: set a capped frame rate to match your minimum-specification JVM/HW/OS combo, tune your animation to work at this capped rate, and tune your application to ensure it doesn't drop below this rate.

Without native trickery, the sleep-based timer hack seen in various forms here will cap your frame rate OK. If you set the cap at a realistic level then you can simply adjust your animation to look best, tick by tick, at that rate. I reckon that'll look fine.
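The sleep-based cap being referred to is essentially this (a sketch only; the 50fps target is an assumption, and the coarse Thread.sleep() granularity is exactly why native timers keep coming up):

[code]
// Sleep-based frame cap: do the frame's work, then sleep away whatever is
// left of this frame's time budget.
static final int TARGET_FPS = 50;
static final long FRAME_NANOS = 1000000000L / TARGET_FPS;

static void runFrame() throws InterruptedException {
    long start = System.nanoTime();
    updateAndRender();
    long remaining = FRAME_NANOS - (System.nanoTime() - start);
    if (remaining > 0) {
        Thread.sleep(remaining / 1000000L);   // coarse; OS timer resolution applies
    }
}

static void updateAndRender() {
    // game logic and drawing for one frame
}
[/code]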

That's the beauty of the technique - you don't pick a speed for an fps, you pick a speed based on your internal measurements. Being a physics guy, I like to keep everything in SI units as much as possible, so I'll convert the time delta into seconds and have my speeds in meters per second.

Btw, I found out recently that the above method is also used in high-end military/commercial simulators, so it's definitely a tried and tested technique.

Yeah, but I can't stress the difference between a physical military simulation and a game more strongly!

I suggest you try a very simple experiment: write a game that involves jumping from one platform to another using physics. Cap the frame rate at successively lower rates, and see how easy it is to make the jump, or how good it looks.

Then try the tick-based tuning method and tell me which one feels like a slick game and which one feels like it's got network lag.

I'm behind Orangy Tang on this one; a variable timeScale is so much more elegant (and the fps will only be limited by either the speed of the computer, or the minimum timer resolution of the underlying OS).

One other, vaguely related issue. In Quake 3, at exactly 75 and 125 fps, it was possible to jump slightly higher/further (on q3dm13 you could jump up to the mh, on q3dm7 you could jump across to the rg).

Anybody got any thoughts on the exact cause of this? (Was it a floating-point rounding issue, or was it some hack in the code whereby, when the fps and the tick rate were very close, the game would merge the two together?)

How about this situation: I have an el-cheapo computer that came with a soft-modem (aka a WinModem) and I'm playing a network game. If the game I'm playing eats all the CPU it can in an effort to provide 600 fps, network performance will suffer because the software modem will be starved of CPU cycles, and the network play aspect of the game will be perceived as crappy because of that.

FPS should always always always be capped at the monitor refresh rate at most, as any more is simply wasting power because no-one can actually see it. And 50 seems to be what's needed for that ultra-slick smooth feel of the C64 and Amiga.

Hey, anyone remember Gods? That ran at half frame rate :/ Beautiful graphics but didn't half feel slow after all the other games...

First, lemme say that I've seen a literal 160fps, and when objects move quickly it really isn't a waste. Maybe I just thought it was cool.

OK. The Unreal engine has always used time-based updates since its early days. I'm in favor of time-based updates because, although they might make physics shakier if managed badly, everything gets asymptotically more accurate as the frame rate increases. And that's just the easy way to do it. If acceleration is taken into account and velocity is viewed as inconstant over a period of time (an elapsed frame), then the only difference between a high frame rate and a low frame rate is how often the game is affected by the user and the AI.
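In code, treating velocity as inconstant over the frame is just basic kinematics, something like this (a sketch; GRAVITY, posY and velY are placeholder names):

[code]
// Integrate with acceleration over the elapsed frame: position uses the average
// velocity across the step (v*dt + 0.5*a*dt^2), then velocity is updated.
static final float GRAVITY = -9.81f;   // m/s^2, downwards
static float posY, velY;

static void integrate(float dt) {
    posY += velY * dt + 0.5f * GRAVITY * dt * dt;
    velY += GRAVITY * dt;
}
[/code]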

The other end of this is that tick-based updates will always behave the same because fps is not a factor. If frames can't be drawn fast enough, then you can skip some, and the game still behaves the same.
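A rough sketch of that kind of loop (the 60Hz tick and the skip limit of 5 are arbitrary choices, not from any particular engine):

[code]
// Fixed-tick loop: logic always advances in TICK-sized steps, so behaviour is
// identical at any frame rate; rendering just happens whenever there's time.
static final long TICK_NANOS = 1000000000L / 60;   // 60 logic updates per second

static void run() {
    long nextTick = System.nanoTime();
    while (true) {
        int updates = 0;
        while (System.nanoTime() >= nextTick && updates < 5) {
            updateGame();            // same result no matter how fast we draw
            nextTick += TICK_NANOS;
            updates++;
        }
        render();                    // effectively skips frames when behind
    }
}

static void updateGame() { /* advance the world one tick */ }
static void render()     { /* draw the current state */ }
[/code]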

I wonder, though, if maybe a threaded approach might work best? If the world was updated at a high rate like 100Hz, and the renderer was a separate thread that just fetched its updates when it was ready, you could have tick-based gameplay and an arbitrary frame rate.
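Something along these lines, perhaps (just a sketch of the idea; GameState, step(), sleepUntilNextTick() and the rest are hypothetical names, not real API):

[code]
// Needs java.util.concurrent.atomic.AtomicReference.
// Logic thread advances the world at ~100Hz and publishes immutable snapshots;
// the render thread just draws whichever snapshot is newest when it's ready.
final AtomicReference<GameState> latest = new AtomicReference<GameState>(initialState);

Thread logicThread = new Thread(new Runnable() {
    public void run() {
        while (running) {
            latest.set(step(latest.get()));   // one 10ms tick of game logic
            sleepUntilNextTick();             // aim for 100 updates per second
        }
    }
});
logicThread.start();

// Meanwhile, on the rendering thread:
while (running) {
    render(latest.get());                     // arbitrary frame rate
}
[/code]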

No! No! No! No! No! Must I explode before anyone understands? [size=72]You do NOT WRITE MULTITHREADED RENDERING CODE![/size] I dunno whether I can be arsed to explain it again, so I think I should write a FAQ about it. Hehe.

That way I can control how many triangles are rendered, and by using threads I'm assured of maximum throughput (every triangle renders as fast as it can, no more waiting for a traditional single-threaded rendering loop to render it).

However, I might be forced to redo this in C++ as the JVM is really bad at handling threads :(. My approach only gives me about 1-2 fps for a Quake model-viewer test. My guess would be that using C++ would give me about 60-100 fps. Do you think I should file a bug report with Sun, or are they too lazy to respond to serious game programmers like me?
