Current solution: LWJGL + extra thread fix (downloads have not been updated with this fix)

Hello JGO! This is my first time posting here; I apologize in advance for any mistakes I make.

I've been using Game Maker for quite a while, but recently I learned some Java in college, so I've been trying to make a game engine in Java (relying on tutorials and various other sources along the way). I want my engine to use fixed timesteps or a similar method (with graphical interpolation), and I want it to be capable of running in either fullscreen or windowed mode, as opposed to just one or the other, if possible. But on some computers the graphics never render as smoothly as, say, a GM game: moving images sometimes stutter. I realize that much has already been said on this topic, for example in the following links:

But I feel like no conclusive solution has yet been discovered, so I'd like to try and consolidate all of the known information on this topic and start another, more language-specific discussion. I've been discussing this with some people over at TIGForums in this topic:

(Thank you to the many who helped, including Bryant Drew Jones, Chromanoid, J-Snake and Paul Eres!)

We've discussed many things and tried various methods to solve the issue. However, at least on the computers I've tested my programs on, I haven't been able to get smoothness on the level of GM. The only computers that have gotten really close to GM's smoothness are the ones at my university which, if I understand correctly, are really fast.

Note: I've mostly tested on Windows computers.

Potential causes:

1) Hardware. It could just be that the machines I test on are slow, but I doubt this is the case because as far as I know there shouldn't be much reason for them to be slow, and in any case that doesn't explain why GM games still run smoothly on them. It is true that GM games also display some stuttering on the same computers, but it's usually not as noticeable as the stuttering with Java.

2) My code. Of course I may have just written faulty code, but I think, if that were the problem, there probably wouldn't be so many existing discussions about the issue, would there? (Because people would just need one or two resources of this kind to solve their problem, and wouldn't have had these discussions for so long.)

EDIT: I am kind of a novice, though. I've been building my program from portions of various examples and tutorials; if there is an error in my code, it may be pretty likely that it comes from faulty implementation of the code from these various sources. Maybe I made an error in fitting all the different pieces into my target framework?

EDIT: I prepared several versions of the LWJGL example Cero posted that incorporate different examples of fixed timesteps from various sources. I've bundled together those examples in a single download, along with some other example programs (like a similar Game Maker example) and the source for all the different examples. You can download it here: box.com download

Note: example_simple.jar and example_variabletimestep.jar are examples without fixed timesteps; they're there for comparison.

There are various settings you can toggle within the JARs, like vsync and the usage of Thread.yield and Thread.sleep. More details are contained in the included readme file.

3) Sleeping and timers. This is the kind of problem discussed in this thread. The idea is that sleep() is unreliable, and/or that the precision of available timers is insufficient. Wasn't this latter problem solved by System.nanoTime(), though?
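The coarseness of sleep() is easy to measure directly. A small stand-alone sketch (class and method names are illustrative, not from any of the examples) that times Thread.sleep(1):

```java
public class SleepGranularity {
    // Measures how long Thread.sleep(1) actually takes on average. On Windows
    // with the default ~15.6 ms timer interrupt this often comes out far
    // larger than 1 ms, unless something has raised the timer resolution.
    static double averageSleepMillis(int samples) {
        long start = System.nanoTime();
        for (int i = 0; i < samples; i++) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return (System.nanoTime() - start) / 1_000_000.0 / samples;
    }

    public static void main(String[] args) {
        System.out.printf("average Thread.sleep(1) took %.3f ms%n", averageSleepMillis(100));
    }
}
```

System.nanoTime() gives you a fine-grained way to *read* time, but as this shows it does nothing for how coarsely you can *wait*.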

a) Potential solution: Running a thread in the background the whole time. I've seen this work on at least one computer, but I think I've also seen it not do much on another computer, and in any case it seems a bit hacky to me...
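For reference, the "extra thread" fix mentioned in the title is usually written as a daemon thread that sleeps essentially forever; on the Windows JVM, having such a sleeper around raises the OS timer resolution, which makes sleeps in other threads much more accurate. A sketch (the method and thread names are mine):

```java
public class TimerHackThread {
    // On the Windows JVM, having any thread sleeping on a very long duration
    // causes the runtime to raise the OS timer resolution, which makes
    // Thread.sleep() in *other* threads far more accurate. Hacky, but widely used.
    public static void install() {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // sleep "forever"
            } catch (InterruptedException ignored) {
                // let the thread die if interrupted
            }
        }, "timer-hack");
        t.setDaemon(true); // don't keep the JVM alive because of this thread
        t.start();
    }

    public static void main(String[] args) {
        install();
        // ... run the game loop; sleep granularity should now be ~1 ms on Windows.
    }
}
```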

4) Something to do with waiting for vertical retrace. Here are some links related to this topic:

But I haven't tried the first solution (I don't really know how to make it work...), and I suspect that the second one might be outdated. Does anyone know how to properly implement these solutions, or know if any current, equivalent solutions have been made, or could be made?

c) Potential solution provided over in the XNA forums. Someone has apparently experienced similar problems with XNA, and has provided a supposed solution:

This may be a solution that can get around not having access to vsync, but I've had trouble understanding some of the XNA-specific code provided in the link. Does anyone know enough XNA to be able to try an equivalent solution in Java?

A Potential General Non-Solution: Maybe we're just being too sensitive to stuttering? Someone in the aforementioned XNA-related link commented:

Quote

Ignore the GPU clock, and let the CPU clock control the update frequency. Sure, this means you will occasionally have dropped or doubled frames, but as long as the clocks are roughly close, this will be rare, and the results are fine for many games. Especially, people tend to notice the time drift in very simple test apps where they are doing things like just moving a box across the screen one pixel per tick, so they freak out about this, but the artifacts tend to become less obvious as the game becomes more complex, so they are often no problem at all in the finished product. The nice thing about this mode is that it keeps your update logic nice and simple, which is why this is the XNA Framework default (we call it fixed timestep mode).

This sounds kind of plausible, but I think there's one thing this doesn't account for: GM running more smoothly than Java.

I have seen screen tearing the worst on crappy computers at my school. What removes the screen tearing is to run the game at or above 60FPS. This is done by using a busy while loop that calls Thread.yield().

Response to causes:

1. Single core computers (and dual cores to some extent) are more susceptible to stuttering due to other programs hogging resources than quad+ cores.

2 (also touching 4). I've never had any real problems with stuttering unless it's my own fault, usually when I only do some special updating every nth frame or something like that. In those cases, VSync did not fix the problem: if you go over a 16 ms frame time, the frame has to wait for the next sync, ruining performance while doing nothing to cure the stuttering. I believe the universal solution is to keep the amount of computation you do as constant as possible between frames, in which case you are not the source of the stuttering. Not to mention the mouse lag caused by VSync.

3. The sleep fix solves the problem in my experience.

4. See number 2. TIP: VSync can be used to "prove" stuttering and check that you're not just imagining things. Adjust the workload (create objects, increase the number of particles, whatever) so that you get an FPS slightly higher than your screen's refresh rate (often 60 Hz) and then enable VSync. If you get a huge FPS drop (to 30 or even lower), you're having so much frame-time jitter that with VSync some frames are too slow to make the sync (see number 2 again).

If I get my threaded particle test to run at 65 FPS (about 900k particles) and enable VSync, performance is mostly constant at 60 FPS but occasionally drops a few frames, with very noticeable stuttering in those cases. The reason it's so noticeable for me is that my laptop's screen is extremely bad and slow, and fast-moving things never manage to "light" the pixels they cover up fully. If a frame is dropped, all particles remain at their old position for another frame, which is visible as almost twice-as-bright particles on my screen. Easily visible as blinking. I wondered what the hell was happening before I figured it out.

And what's wrong with calling LWJGL's Display.setVSyncEnabled(true); after creating your display? It works for windowed games too, as long as you aren't on 5-year-old Intel cards, in which case it doesn't work at all (even for fullscreen).

I might try to investigate this more as people seem to seriously have a problem with this.

Quote

I have seen screen tearing the worst on crappy computers at my school. What removes the screen tearing is to run the game at or above 60FPS. This is done by using a busy while loop that calls Thread.yield().

Completely wrong; screen tearing is most visible in games running at higher than 60 FPS. It's visible at any FPS, but less noticeable below ~40 FPS (the stuttering hides it a little). Even syncing the game using sleep (or Display.sync() in LWJGL) to achieve 60 FPS or whatever your screen has will NOT remove tearing. The only cure for screen tearing is VSync, usually at the horrible cost of mouse lag and reduced FPS (unless you also enable triple buffering, in which case the mouse lag is even worse).

I thought the big issue with Windows was their choice to use a clock interrupt only once every ~15 ms. The commands Thread.sleep() and System.currentTimeMillis() rely on this signal for their accuracy. However, as theagentd points out, there is his sleep-based fix. I thought this was deemed to be sufficient? (At least for bare boxes moving across the screen with little GUI involvement.)
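As I understand it, that fix combines coarse sleeping with busy yielding: sleep in 1 ms chunks while there is plenty of time left, then spin on Thread.yield() for the last stretch. A Display.sync-style sketch of the idea (the thresholds and names here are illustrative, not theagentd's actual code):

```java
public class FrameSync {
    private long nextFrame = System.nanoTime();

    /** Waits until the next 1/fps boundary: sleep coarsely, then yield. */
    public void sync(int fps) {
        long frameNanos = 1_000_000_000L / fps;
        nextFrame += frameNanos;
        long now;
        while ((now = System.nanoTime()) < nextFrame) {
            long remaining = nextFrame - now;
            if (remaining > 2_000_000L) {          // > 2 ms left: coarse sleep
                try { Thread.sleep(1); } catch (InterruptedException ignored) {}
            } else {                                // last stretch: spin politely
                Thread.yield();
            }
        }
        // If we've fallen far behind (e.g. the window was dragged), resync
        // instead of trying to catch up with a burst of fast frames.
        if (now - nextFrame > frameNanos * 2) {
            nextFrame = now;
        }
    }
}
```

Without the background-thread fix, the Thread.sleep(1) calls here can still overshoot badly on Windows, which is why the two tricks are usually combined.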

I've been so focused on audio and its interaction with the GUI that I've kind of lost touch with all this. Am looking forward to reading more on this thread.

One thing I'll say, though, is that there sure are an effing lot of ways to write less than optimal code, and the type of performance we are looking for doesn't leave a lot of slack.

Does the following concept apply at all? In audio, the way the JVM switches back and forth between tasks, the audio signal that gets made is actually assembled a bit ahead of the game, in bunches (and the bunches DON'T necessarily relate to the chosen buffer size). Because of this, real time events such as GUI events, don't "line up" with the audio signal very well. I first brought this up here: http://www.java-gaming.org/topics/an-audio-control-helper-tool-using-a-fifo-buffer/24605/view.html There is a diagram that helps explain.

So, I am wondering if there are analogous issues on the graphics end of things.

Quote

Does the following concept apply at all? In audio, the way the JVM switches back and forth between tasks, the audio signal that gets made is actually assembled a bit ahead of the game, in bunches (and the bunches DON'T necessarily relate to the chosen buffer size). Because of this, real-time events such as GUI events don't "line up" with the audio signal very well.

So, I am wondering if there are analogous issues on the graphics end of things.

I only quickly looked through the thread you mentioned. OpenGL buffers commands and then actually issues them later, if that's something similar. It obviously shouldn't cause lag if the driver handles everything well, but in a well threaded program on a quad-core or something, it might be optimal to leave a single core for the driver to work with.

Cas wrote a gameloop, but when I tried it, it lagged every 3 seconds or so.

Also it depends on your game: I'm making a 2D sidescroller in which case this issue is most noticeable...

Like I said, I've used so many different loops, but right now I just enable VSync (so if VSync works on the machine, it's fine) and then sync using LWJGL's Display.sync at 60. This simple way is still one of the more stable ones. I also always use the dead background sleeping thread / Windows fix. Don't know yet about Linux; I still get screen tearing there, but I'm focused on the game itself for now.

I try my game on a lot of machines, because I want to support a lot of machines - and yeah, not easy to get right, especially if there is no vsync

But also: even though you want to make it perfect, I don't think it's necessary. Most of the time the stutters are only like 1 frame every 2-3 seconds, and I doubt a "normal" player will really notice it much - or so I hope =D

Quote

Does the following concept apply at all? In audio, the way the JVM switches back and forth between tasks, the audio signal that gets made is actually assembled a bit ahead of the game, in bunches. Because of this, real-time events such as GUI events don't "line up" with the audio signal very well.

So, I am wondering if there are analogous issues on the graphics end of things.

I don't think that's the issue here, because I don't think it's caused by cross-thread communication in that way. If cross-thread communication is an issue, then (as I mentioned in your thread), adding a small but constant time lag between control signal and output can help - a constant delay is actually less noticeable than a shifting one. Incidentally, you shouldn't be getting bunching in that way ...

Is this just with composite window managers (compiz, etc)? There's a range of problems with these that means vsync rarely (if ever) seems to work, whether you try and switch it on or not. Using metacity or similar brings back vsync but loses desktop effects.


Just "marked compiz for complete removal" to be sure - no screen tearing in windowed mode and "only" a line at the top of the screen (like 40 pixels from the top) when scrolling in fullscreen mode. Still not a fix, obviously.

Interesting to note is that the stutter is the same as in Windows: every 2-3 seconds, 1-2 frames of stutter, on average. Well, I also know that one of the programmers working on this game doesn't have this problem on his desktop PC at all, while on his laptop it says 60 FPS but the rendering itself noticeably skips frames (pretty sure it's some kind of Windows 7 / Aero problem there). But this stutter - I experience it on my desktop quad-core machine, while there can be no stutter at all on my 2 GHz single-core laptop. It's not predictable =P

I think I only tried ROTT, and no, I didn't - but there isn't as much scrolling there. Well, when I played it I wasn't really looking for it back then. But in a sidescroller you have almost constant scrolling, of course.

Since I still use Slick, using your gameloop wasn't as easy - so I might have made some mistakes (with the LWJGL timer or something)

But obviously it would be incredible if you could make a simple ball-bouncing example using your gameloop, for us to use as "the solution".

Quote

Like I said, I've used so many different loops, but right now I just enable VSync (so if VSync works on the machine, it's fine) and then sync using LWJGL's Display.sync at 60. This simple way is still one of the more stable ones. I also always use the dead background sleeping thread / Windows fix.

But also: even though you want to make it perfect, I don't think it's necessary. Most of the time the stutters are only like 1 frame every 2-3 seconds, and I doubt a "normal" player will really notice it much - or so I hope =D

VSync does NOT fix stuttering! It even worsens it when stuttering actually appears! The only time it will improve things is if your screen has a better-precision timer, in which case the game will KIND OF sync to that instead of to your computer's timer, so VSync could improve the timing. However, it's so unreliable (Linux, Intel graphics cards) that you should in no way use it without some other kind of syncing method (or a variable time step).

For the last time:

Vertical Synchronization is a setting that forces the graphics card to synchronize its updates to screen refreshes. When your 60 Hz screen decides that it is time to renew its content, it reads the current frame from the graphics card's front frame buffer (you're drawing to the back buffer, since you're double buffering). That means that if the front buffer is updated with a new frame while the screen is reading it, the screen gets part of the old frame (the top) and part of the new frame (the bottom). This is what's called tearing, as you can clearly see the "tear" or discontinuity between the two images. By enabling VSync you force the graphics card to only update its front frame buffer when it is not being read by your monitor, eliminating the possibility of tearing. Obviously your graphics card can't overwrite the front buffer while your monitor is reading from it, so it has to wait until the monitor is done.

It's like a train station, where your update must catch its train, scheduled every 1/60th of a second. If you miss it, you'll have to wait for the next train. If you enable VSync and you're able to render 50 FPS with your hardware, VSync will limit this to 30 FPS. This is because you only have a passenger for every other train, forcing you to wait for the next one. I mean, even if you're only 1 ms late for your morning train, you'll still have to wait for the next one (true both in real life and when rendering with VSync xD). If you take even more time you'll have to wait for every 3rd train, and you'll get 20 FPS. Every 4th train = 15 FPS. Your FPS gets rounded down to the nearest (refresh_rate / n) FPS, where n is an integer above 0 (1, 2, 3, ...). If you're able to render at 1000 FPS without VSync, it will still only go as fast as your screen can refresh (in this case 60 FPS), as you only have 60 trains running.
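The train analogy can be turned into a direct formula: the displayed rate is the refresh rate divided by the number of refresh intervals each frame spans. A sketch (the class and method names are mine):

```java
public class VsyncMath {
    /** Displayed FPS under VSync, given the raw (unsynced) FPS and refresh rate. */
    static double vsyncedFps(double rawFps, double refreshHz) {
        if (rawFps >= refreshHz) return refreshHz;   // can't outrun the refresh rate
        double frameTime = 1.0 / rawFps;             // seconds per rendered frame
        double refreshPeriod = 1.0 / refreshHz;      // seconds between "trains"
        // Small epsilon so exact multiples (e.g. 30 FPS on 60 Hz) don't round up.
        long refreshesPerFrame = (long) Math.ceil(frameTime / refreshPeriod - 1e-9);
        return refreshHz / refreshesPerFrame;        // catch every n-th train
    }

    public static void main(String[] args) {
        System.out.println(vsyncedFps(50, 60));   // 30.0: wait for every 2nd refresh
        System.out.println(vsyncedFps(25, 60));   // 20.0: every 3rd refresh
        System.out.println(vsyncedFps(1000, 60)); // 60.0: capped at the refresh rate
    }
}
```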

"Sure, I know that, but how does that worsen stuttering?" Well, if your average frame time is close to 1/60th of a second (about 16.6667 ms), there is a big risk that a frame will take more than 1/60th of a second to render. This could be due to another program using a shared resource (CPU, GPU, whatever), which can push the rendering time for just a single frame over 1/60th of a second, causing you to drop a frame. Without VSync, at least half the frame might make it to the screen (which isn't much of an improvement anyway), and you won't be wasting the time until the screen refresh is finished.

"So when it works, it gives me perfect timing, right?" NO. The timing might be better than the 15-millisecond precision that Windows manages, but it's far from perfect. Why? OpenGL buffers commands. While it cannot buffer more than one complete rendered frame at a time (for double buffering), it can buffer rendering commands for several frames. By buffering more frames the driver can ensure that the graphics card always has something to do, increasing efficiency, similar to how we keep buffers when playing and mixing audio, etc. Your OpenGL commands only block when the command queue is full. As long as your game is GPU limited (or VSync limited, which has the same effect), your CPU will fill the command buffer and then wait until the GPU consumes some of it, which will obviously sync up with the VSync rate, so your game will run pretty synchronized to the refresh rate even though the command buffer causes some jitter. When your CPU is faster than your GPU, VSync doesn't do anything for the timing, but in that case there is no need to sleep in the first place, so there won't be any timing problems anyway.

Because of the command buffer, it doesn't actually make much sense to measure the rendering time of a frame (the "update delta"): the time you measure is just how long it took to submit the commands of the frame, not to render it, and that in turn depends on how full the buffer is at the time (i.e. on the commands of previous frames). It will not move the objects relative to the amount of time it took to actually render them. Again, this is only the case if you're GPU limited.

"It solves my stuttering!" Most likely no.

"What about micro stuttering?" Micro stuttering is when you get constant stuttering due to varying frame times. I know of two possible causes for it: SLI/Crossfire (two or more graphics cards alternately rendering frames) and workload differing between frames. Micro stuttering is when the game runs at a certain FPS but looks like it's running at a much lower one. I've seen very bad micro stuttering due to the nature of alternate frame rendering with SLI, where two frames are completed at almost the same time, causing one of them to be displayed for a very short time or not at all because the next frame was ready before a screen refresh, effectively looking more like 1/2 or 2/3 of the FPS you're rendering at. I have seen this myself on my GTX 295, and it's especially bad in Bad Company 2, where it looks like 30-40 FPS when it is running at about 60. In this case, VSync DOES indeed fix the problem, as it gives both graphics cards something to synchronize their updates with, so that they actually produce frames at roughly constant intervals. At the moment, VSync is the only cure for micro stuttering caused by SLI or Crossfire.

The other cause is doing heavy computations only occasionally, in some frames. Those frames obviously take more time to render, and the game stutters during them. As an example, I only generated my fog of war every 10th frame, but it took about 70 ms each time (I had 2000 units >_>). The game was running at up towards 180 FPS, but it still looked like it was stuttering due to the uneven distribution of load between frames. In this case, enabling VSync limited the FPS of the frames that just sped by (the ones where I didn't generate the fog of war) while obviously not speeding up the fog-of-war frames. With VSync on I got close to 30-40 FPS for the same scene. VSync was giving me an FPS closer to what I was actually seeing in the first place, and it looked very similar to not having VSync on.
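A sketch of the usual fix for that second cause: queue the occasional heavy job as small slices and spend a fixed time budget on them each frame, so the load stays roughly constant (all names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class AmortizedWork {
    private final Queue<Runnable> pending = new ArrayDeque<>();

    /** Queue one big job as many small slices instead of a single 70 ms lump. */
    void scheduleSliced(Runnable[] slices) {
        for (Runnable slice : slices) {
            pending.add(slice);
        }
    }

    boolean hasPending() {
        return !pending.isEmpty();
    }

    /** Call once per frame: run slices until the per-frame time budget is spent. */
    void runBudgeted(long budgetNanos) {
        long start = System.nanoTime();
        while (!pending.isEmpty() && System.nanoTime() - start < budgetNanos) {
            pending.poll().run();
        }
    }
}
```

For the fog-of-war example, each slice might recompute one row of the map; a 1-2 ms budget per frame finishes the whole job within a few frames without any single frame taking the 70 ms hit.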

"So VSync reduces tearing at least. Why not always use it?" Two words: input lag. VSync is insanely infamous in shooting games and other games that depend on fast input response. The reason is simple. When a game is running at 60 FPS, the game buffers rendering commands several frames ahead. This can be controlled on NVIDIA cards in the NVIDIA Control Panel (the confusing setting "Maximum Pre-rendered Frames"). The default value for this setting is 3. Yes, 3 full frames. Check it yourself.

As long as VSync is on, the command buffer will almost always be full. Frames are consumed at a constant interval, so the input delay can be calculated pretty accurately:

delay = (pre_rendered_frames + 1) * frame_time

The +1 is because you actually have to render that frame too after buffering it in the command buffer. Example: you have a 60 Hz screen and VSync enabled. The frame time is a constant 1/60th of a second = 16.666667 ms. The delay is approximately (3 + 1) * 16.666667 = 66.66667 ms.

Your cheap USB keyboard and mouse are polled at perhaps 100 Hz, giving you 10 ms delay just there. It takes a frame before your game reads the buffered input (actually doing stuff based on the input in the game loop), so that's another frame_time of delay. After rendering you also have to transmit the frame to the monitor. An optimal dual-link DVI cable can transmit data at approximately 8 gigabits ≈ 1 gigabyte per second. Ignoring all encoding overhead etc., we still have to transmit the 32-bit color (it's encoded into 10 bits per channel according to Wikipedia, so probably 32 bits per pixel). A 1920x1080 screen is 7.91 MB of data, which will take about 1 ms (0.966 ms, but hey, I thought it was more xD). Finally the screen has to process it. My laptop represents a worst-case scenario with its 17 ms delay there, but there are better 2 ms screens (like the one I have at home -_-).

This is all really simplified, and there are probably lots of other sources of delay, but these should be the worst ones. Total: 10 + 16.66667 + 66.66667 + 1 + 17 = 111.33334 ms delay. That's pretty insane. A majority of those numbers are based on frame time, so having an FPS higher than 60 is actually a benefit in fast-paced games. For example, disabling VSync on a really good graphics card so the game runs at, say, 120 FPS, and using a 1000 Hz polled gaming keyboard and mouse (1 ms delay) and a 2 ms monitor, you can get a much lower delay. In this case frame_time = 1/120 s = 8.333 ms, so: 1 + frame_time + frame_time * 4 + 1 + 2 = 4 + 5 * 8.333 = 45.667 ms delay. This isn't even exactly accurate, as it isn't guaranteed that the command buffer gets completely filled with VSync off. It's most likely slightly less than the 45.667 ms I get by calculating it, maybe 10 ms less at best, but that's a guesstimate.
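Those two back-of-the-envelope estimates can be written as one small calculation (the class and parameter names are mine):

```java
public class InputLagEstimate {
    /** Rough end-to-end input delay in ms, per the estimate above. */
    static double delayMs(double pollMs, double fps, int preRenderedFrames,
                          double transmitMs, double screenMs) {
        double frameTime = 1000.0 / fps;                          // ms per frame
        double inputRead = frameTime;                             // input picked up next tick
        double renderQueue = (preRenderedFrames + 1) * frameTime; // buffered frames + the render itself
        return pollMs + inputRead + renderQueue + transmitMs + screenMs;
    }

    public static void main(String[] args) {
        // Worst case from the post: 100 Hz devices, 60 FPS VSynced,
        // 3 pre-rendered frames, ~1 ms cable, 17 ms slow panel.
        System.out.printf("%.1f ms%n", delayMs(10, 60, 3, 1, 17)); // ~111.3 ms
        // Better case: 1000 Hz devices, 120 FPS, fast 2 ms panel.
        System.out.printf("%.1f ms%n", delayMs(1, 120, 3, 1, 2));  // ~45.7 ms
    }
}
```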

"I've heard triple buffering solves everything." Triple buffering simply adds another fully rendered buffered frame. The point is that this makes the graphics card always able to render to one of the two back buffers, so the actual rendering FPS is not limited by the screen refresh rate. The cost is even more input delay.

Due to all this, the only time I enable it is in games that don't require fast input response, like strategy games (as they usually have a sync time of several hundred milliseconds due to determinism), or when the artifacts of not having it on are too visible. I enable it in Bad Company 2 for more fluid rendering (easier to see things moving > input delay) and in games with extremely visible tearing. For example, in Bioshock there are often blinking lights that look like shit when you have a clear line between the almost-black last frame and the fully lit current frame.

Wow, so much feedback! I'm having difficulty understanding some of the things being discussed, though; as you can probably tell, I'm kind of a novice. Note that I've been building my program from portions of various examples and tutorials; if there is an error in my code, it is likely to come from a faulty implementation of the code from these various sources. Maybe I made an error in fitting all the different pieces into my target framework?

Quote

What removes the screen tearing is to run the game at or above 60FPS. This is done by using a busy while loop that calls Thread.yield().

I don't know if this is what you're referring to, but we have tried rendering as fast as possible. I got it to work on one computer with what is reported as "120 FPS" (that's probably not the actual number of frames rendered per second, but it still seemed to help), but it still wasn't perfectly smooth, and I think it was hard on the computer; the fan started going really easily whenever I ran the program like that. Furthermore, on at least two other computers it didn't do much. Another method suggested was to run the program at high update rates, but personally I've already settled on having my updates be performed at a certain rate (60 updates per second).

I think Slick2D manages to achieve this, actually, but, again, I haven't yet been able to solve stuttering with it.

private static final long serialVersionUID = 1L;

/* difference between time of update and world step time */
double localTime = 0f;

/** Creates new form Game */
public Game() {
    setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
    setIgnoreRepaint(true);
    this.setSize(800, 600);
}

/**
 * Starts the game loop in a new Thread.
 * @param fixedTimeStep
 * @param maxSubSteps maximum steps that should be processed to catch up with real time.
 */
public final void start(final double fixedTimeStep, final int maxSubSteps) {

Graphics2D g = null;
try {
    g = (Graphics2D) bf.getDrawGraphics();
    render(g, localTime);
} finally {
    g.dispose();
}
// Shows the contents of the backbuffer on the screen.
bf.show();
// Tell the System to do the Drawing now, otherwise it can take a few extra ms until
// Drawing is done which looks very jerky
Toolkit.getDefaultToolkit().sync();
}

Ball[] balls;
BasicStroke ballStroke;
int showMode = 0;

//This value would probably be stored elsewhere.
//final double GAME_HERTZ = 30.0;
//Calculate how many ns each frame should take for our target game hertz.
final double TIME_BETWEEN_UPDATES = 1000000000.0 / gameSpeed; //GAME_HERTZ;
//At the very most we will update the game this many times before a new render.
final int MAX_UPDATES_BEFORE_RENDER = 5;
//We will need the last update time.
double lastUpdateTime = System.nanoTime();
//Store the last time we rendered.
double lastRenderTime = System.nanoTime();

//If we are able to get as high as this FPS, don't render again.
final double TARGET_FPS = 60;
final double TARGET_TIME_BETWEEN_RENDERS = 1000000000 / TARGET_FPS;

//If for some reason an update takes forever, we don't want to do an insane number of catchups.
//If you were doing some sort of game that needed to keep EXACT time, you would get rid of this.
if (now - lastUpdateTime > TIME_BETWEEN_UPDATES) {
    lastUpdateTime = now - TIME_BETWEEN_UPDATES;
}

//Yield until it has been at least the target time between renders. This saves the CPU from hogging.
while (now - lastRenderTime < TARGET_TIME_BETWEEN_RENDERS && now - lastUpdateTime < TIME_BETWEEN_UPDATES) {
    Thread.yield();

//This stops the app from consuming all your CPU. It makes this slightly less accurate, but is worth it.
//You can remove this line and it will still work (better), your CPU just climbs on certain OSes.
//FYI on some OS's this can cause pretty bad stuttering.
try {
    Thread.sleep(1);
} catch (Exception e) {}

//If for some reason an update takes forever, we don't want to do an insane number of catchups.
//If you were doing some sort of game that needed to keep EXACT time, you would get rid of this.
if (now - lastUpdateTime > timeBetweenUpdates) {
    lastUpdateTime = now - timeBetweenUpdates;
}
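Stitched together, the fragments above form a loop like the following: a sketch of the same fixed-timestep-with-interpolation pattern, with rendering replaced by a plain variable so it runs standalone (class and method names are mine, not from the examples download):

```java
public class FixedTimestepLoop {
    // A consolidated, runnable version of the fragments above, with rendering
    // replaced by a plain position read-out so it needs no graphics library.
    static final double UPDATES_PER_SECOND = 60.0;
    static final double NS_PER_UPDATE = 1_000_000_000.0 / UPDATES_PER_SECOND;
    static final int MAX_UPDATES_BEFORE_RENDER = 5;

    double x = 0, prevX = 0; // game state and its value at the previous step

    void update() {          // one fixed step: move one unit per tick
        prevX = x;
        x += 1;
    }

    /** Position to draw, interpolated between the last two fixed steps. */
    double interpolated(double alpha) {
        return prevX + (x - prevX) * alpha;
    }

    void run(long runForMillis) {
        double lastUpdate = System.nanoTime();
        long end = System.currentTimeMillis() + runForMillis;
        while (System.currentTimeMillis() < end) {
            long now = System.nanoTime();
            int updates = 0;
            // Catch up with real time, but never by more than a few steps at once.
            while (now - lastUpdate > NS_PER_UPDATE && updates < MAX_UPDATES_BEFORE_RENDER) {
                update();
                lastUpdate += NS_PER_UPDATE;
                updates++;
            }
            // How far we are between the last update and the next one (0..1).
            double alpha = Math.min(1.0, (now - lastUpdate) / NS_PER_UPDATE);
            double renderX = interpolated(alpha); // a real game would draw with this
            Thread.yield();                       // be polite to the CPU
        }
    }

    public static void main(String[] args) {
        FixedTimestepLoop loop = new FixedTimestepLoop();
        loop.run(500); // run the loop for half a second
        System.out.println("ticks after 0.5 s at 60 Hz: " + loop.x); // roughly 30
    }
}
```

In a real game, update() would advance physics and interpolated() would feed positions to the renderer; the rest of the skeleton is the same.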

Is this the oh so dreaded problem of stuttering? You mean updating the game at 63 FPS and expecting it to look smooth on 60 Hz? Yeah, good example. The stuttering is your fault because you're rendering 3 more frames than you can display each second, so every 1/3rd of a second you get this small jump where one frame was never shown. If you actually limited it to 60 FPS properly, you would not have gotten it. Wow, that was hard.

Although, whatever it is, could it have no effect on a Game Maker program while still having an effect on our Java programs? One of our issues is that Game Maker games run just fine on the computers that our Java programs don't run smoothly on.

Quote

Is this the oh so dreaded problem of stuttering? You mean updating the game at 63 FPS and expecting it to look smooth on 60 Hz? Yeah, good example. The stuttering is your fault because you're rendering 3 more frames than you can display each second, so every 1/3rd of a second you get this small jump where one frame was never shown.

I seem to remember encountering this problem when I tried LWJGL.

Quote

If you actually limited it to 60 FPS properly, you would not have gotten it. Wow, that was hard.

Apart from @theagentd's highly astute observations about framerate and LWJGL examples, DWC with OpenGL is entirely happy to occasionally not bother rendering the odd frame for you, causing what appears to be random jitter.

Bottom line: on Windows, you need DWC turned off if you're on Vista or 7, or to be running in fullscreen; you need vsync on; and you need to actually sync your update loop to the display refresh rate. If any of these criteria are not met you'll get little glitches.

Is this the oh so dreaded problem of stuttering? You mean updating the game at 63 FPS and expecting it to look smooth on 60 FPS? Yeah, good example. The stuttering is your fault because you're rendering 3 more frames than you can display each second, so every 1/3rd second you get this small jump, where one frame was never shown. If you actually limited it to 60 FPS properly, you would not have gotten it. Wow, that was hard.

Quote


Display.sync(60); // cap fps to 60fps

It does. And again, I didn't write this. It's one of the LWJGL examples, which shows how things are supposed to be done.

Apart from @theagentd's highly astute observations about framerate and LWJGL examples, DWC with OpenGL is entirely happy to occasionally not bother rendering the odd frame for you, causing what appears to be random jitter.

Bottom line: on Windows, you need DWC turned off if you're on Vista or 7, or to be running in fullscreen; you need vsync on;

Again, is this something that only affects Java and not Game Maker, then?

Quote

and you need to actually sync your update loop to the display refresh rate. If any of these criteria are not met you'll get little glitches.

I think this is what the XNA forums link was referring to. Could you perhaps provide some example code or pseudo-code to explain how this is done?

No, this affects all OpenGL games under DWC. DirectX games have better control over how they interact with DWC and appear not to suffer from this issue but I stand to be corrected.

Syncing to the display refresh rate is actually best achieved using vsync. It's more accurate than even the nanotimer as though the display might report that it is doing 60hz it might actually be 59.97hz in reality. I use both vsync and a timer as a backup - sometimes vsync is reported as working when it actually isn't. In addition Linux machines tend to not know what their refresh rate is or don't have vsync capability at 60hz (eg. Compiz, that worthless heap of shit currently ruining Linux for everyone, runs at an entirely useless 50hz I'm told).

No, this affects all OpenGL games under DWC. DirectX games have better control over how they interact with DWC and appear not to suffer from this issue but I stand to be corrected.

I see, so it may make sense that GM is unaffected.

Quote

Syncing to the display refresh rate is actually best achieved using vsync. It's more accurate than even the nanotimer as though the display might report that it is doing 60hz it might actually be 59.97hz in reality. I use both vsync and a timer as a backup - sometimes vsync is reported as working when it actually isn't. In addition Linux machines tend to not know what their refresh rate is or don't have vsync capability at 60hz (eg. Compiz, that worthless heap of shit currently ruining Linux for everyone, runs at an entirely useless 50hz I'm told).

Cas

But there's no way to do this in Java2D, right? Slick2D has a way, but I heard it only works in fullscreen mode; is that true? In any case, turning on VSync in windowed mode doesn't seem to improve the stuttering much (if at all).

If you've got DWC turned off and running in a window, and vsync's on, the only thing left you can do is accurately sync to the hi-res timer. Unfortunately LWJGL's Display.sync() method somehow manages to do this wrongly if you need it to be dead accurate. Try this:
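The snippet that followed "Try this:" isn't preserved in this copy of the thread. As a stand-in, here is a minimal sketch of the usual approach to accurate hi-res-timer syncing: sleep in coarse 1 ms steps while far from the frame deadline, then busy-yield for the final stretch. All names are my own, and this is a simplified illustration, not princec's actual code:

```java
// A minimal accurate frame limiter built on System.nanoTime(): absolute
// per-frame deadlines avoid accumulating drift from sleep inaccuracy.
public class FrameSync {
    private long nextFrame = System.nanoTime();

    public void sync(int fps) {
        long frameNanos = 1_000_000_000L / fps;
        nextFrame += frameNanos;
        long now = System.nanoTime();
        // If we've fallen more than a frame behind, resynchronise
        // instead of sprinting to catch up.
        if (nextFrame < now - frameNanos) {
            nextFrame = now;
        }
        try {
            // Coarse phase: Thread.sleep is only ~1 ms accurate on many OSes,
            // so stop sleeping a couple of milliseconds before the deadline.
            while (nextFrame - System.nanoTime() > 2_000_000L) {
                Thread.sleep(1);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // Fine phase: spin-yield the last stretch for sub-millisecond accuracy.
        while (nextFrame - System.nanoTime() > 0) {
            Thread.yield();
        }
    }

    public static void main(String[] args) {
        FrameSync sync = new FrameSync();
        long start = System.nanoTime();
        for (int i = 0; i < 5; i++) {
            sync.sync(100); // 10 ms per frame
        }
        // 5 frames at 100 fps should take roughly 50 ms in total.
        System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```

The spin-yield phase burns a little CPU each frame; that is the price of hitting the deadline more precisely than the OS sleep granularity allows.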

One more thing: almost nobody cares about running smoothly when running in a window. It's not how people play real games. Web toys, etc. - that's what people play in windows, with accordingly lower expectations.
