Once the framerate drops to 36 fps or below, the G-SYNC module begins inserting duplicate frames to maintain the display's minimum physical refresh rate and smooth motion perception. At 36 fps, the refresh rate doubles to 72 Hz; at 18 fps, it triples to 54 Hz; and so on. This behavior continues down to 1 frame per second.
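A rough sketch of that multiplier logic (the 36 fps threshold is taken from this post; the function name and exact hardware behavior are illustrative assumptions, not how the module is actually implemented):

```python
def lfc_refresh(fps, threshold=36):
    """Smallest integer multiplier n such that fps * n clears the
    low-framerate threshold, plus the resulting physical refresh rate.
    Illustrative only -- real G-SYNC hardware behavior may differ."""
    n = 1
    while fps * n <= threshold:
        n += 1
    return n, fps * n

for fps in (36, 18, 10):
    n, hz = lfc_refresh(fps)
    print(f"{fps} fps -> x{n} -> {hz} Hz")  # 36->72 Hz, 18->54 Hz, 10->40 Hz
```

Each frame is simply scanned out n times, which keeps the panel refreshing above its physical minimum without changing what the viewer sees.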

So, the short answer is yes...ish. And no, it doesn't add latency; the latency introduced in these situations is due to the high frametimes at such low sustained framerates.

Not the same thing. When I say "duplicate frame" it means duplicate refreshes of the display to maintain the 30Hz physical minimum of the panel.

Unlike v-sync, G-SYNC adjusts the refresh rate to the framerate. If the display remains at such low sustained framerates/refresh rates (<36 fps) without this functionality, the display will literally begin to fade to white/go blank. G-SYNC must repeat the refresh in multiples of the current framerate in this range to maintain a picture, basically.

V-sync doesn't have this worry, because no matter how low the framerate, the refresh rate remains fixed at its current maximum.

salt wrote:Quite sure games don't take into account "time to presentation" and adjust object position based on it - you would need to be psychic to do that.

There were a few games (confirmed by Nvidia, though not which games are affected) that are not compatible with g-sync due to this or similar techniques. The driver has an internal blacklist and doesn't allow g-sync in those cases. However, these games (whatever they were) have probably been fixed by now, unless it's a really old game.

Other techniques not compatible with g-sync involve using vblank as a timing source. Those games also don't work with g-sync and would be blacklisted. Again, I don't know which games; it was mentioned in passing by Nvidia's Tom Petersen in an interview.

Generally, it's not a real issue, as these techniques are not even close to being widespread.

About the duplicate frames: if a frame takes longer to be delivered than X milliseconds, where X is the longest refresh interval the monitor supports, then g-sync will scan out the last frame at intervals of (frame time)/n milliseconds, where n is the smallest integer that brings the interval under X. This makes sure there's no microstutter at low frame rates: 20FPS becomes 20FPS@40Hz (n=2), or 20FPS@60Hz (n=3), etc. If the refresh rate is an exact multiple of the frame rate, then you're good; no microstutter/judder, and g-sync makes sure that's always the case. This works right down to 1FPS (or even 0FPS).
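The choice of n can be sketched like this (a minimal illustration, assuming a ~33.3 ms maximum refresh interval for a panel with a 30 Hz physical minimum; `scanout_interval` is a hypothetical helper, not a real API):

```python
def scanout_interval(frame_ms, max_interval_ms=33.3):
    """Given a frame time in ms and the longest refresh interval the
    panel supports, find the smallest integer n so the panel can
    rescan the last frame every frame_ms/n milliseconds without
    falling below its minimum refresh rate."""
    n = 1
    while frame_ms / n > max_interval_ms:
        n += 1
    return n, frame_ms / n

# 20 fps = 50 ms frame time -> rescanned every 25 ms, i.e. 40 Hz (n=2)
n, interval = scanout_interval(50.0)
```

Because the rescan interval is always an exact integer division of the frame time, the new frame still lands on a refresh boundary and no judder is introduced.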


The graph in that post is slightly misleading, because with g-sync the time between two frames is determined by how long the current frame took to render, while the animation distance is determined by how long the previous frame took. So the g-sync line isn't going to be perfectly straight, just a lot straighter than the v-sync line. If your game engine wants something to move at 1000 pixels per second, and your render times are 10, 12, 9, 10, 11, 10 milliseconds, the per-frame movement would be 0, 10, 12, 9, 10, 11, 10 pixels as each frame hits the screen. So you have a 12-pixel step that gets displayed 9 ms after the previous frame, and a 9-pixel step that gets displayed 10 ms after that. Say you had a fixed 120 Hz display with v-sync: the movement would still be 0, 10, 12, 9, 10, 11, 10 pixels, but the times those frames hit the screen would be 16.6, 25, 33.3, 41.6, 58.3, 66.6 ms, with intervals of 16.6, 8.3, 8.3, 8.3, 16.6, 8.3 ms.
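As a sanity check on those numbers, here's a small simulation (assuming pipelined rendering and the 120 Hz / frame-time figures from the paragraph above; `vsync_display_times` is just an illustrative helper):

```python
import math

def vsync_display_times(render_ms, refresh_hz=120):
    """Each frame becomes visible at the first vblank after it finishes
    rendering (assumes each frame starts rendering as soon as the
    previous one is handed off)."""
    period = 1000.0 / refresh_hz
    t, times = 0.0, []
    for r in render_ms:
        t += r                                  # when the frame is ready
        times.append(math.ceil(t / period) * period)
    return times

renders = [10, 12, 9, 10, 11, 10]               # frame times from the example

# g-sync: each frame is shown the moment it finishes rendering
gsync, t = [], 0
for r in renders:
    t += r
    gsync.append(t)

vtimes = vsync_display_times(renders)
print(gsync)                     # [10, 22, 31, 41, 52, 62]
print([round(x, 1) for x in vtimes])  # [16.7, 25.0, 33.3, 41.7, 58.3, 66.7]
```

The v-sync intervals come out as 16.7, 8.3, 8.3, 8.3, 16.7, 8.3 ms, matching the figures above, while the g-sync intervals simply equal the render times.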

jorimt wrote:Not the same thing. When I say "duplicate frame" it means duplicate refreshes of the display to maintain the 30Hz physical minimum of the panel.

Unlike v-sync, G-SYNC adjusts the refresh rate to the framerate. If the display remains at such low sustained framerates/refresh rates (<36 fps) without this functionality, the display will literally begin to fade to white/go blank. G-SYNC must repeat the refresh in multiples of the current framerate in this range to maintain a picture, basically.

V-sync doesn't have this worry, because no matter how low the framerate, the refresh rate remains fixed at its current maximum.

I think I get what you are saying ...

Duplicating frames happens all the time. With v-sync, a frame is duplicated when the new frame isn't ready, so the display just shows the previous frame a second time. This happens when you can't hold 60fps on a 60Hz display: the game drops to 30fps and every frame is displayed twice.

This is what you are saying happens to a Gsync display at sub-30fps scenarios. New frame isn't ready, display previous frame again.

Generally this duplication isn't noticeable to the vast majority of people. Did you know that film projectors display the same frame 3 times in a row? That's because film's 24 fps is too low to use as a refresh rate; a 24 Hz refresh would produce a ton of flicker. So they refresh at 72 Hz, displaying every frame of film 3 times.

Sparky wrote:while the animation distance is determined by how long the previous frame took.

This would require the game engine to track previous frame times and adjust animation accordingly. Do they actually do that?

If frame times vary significantly every frame, what are the chances they'd be making things worse by trying to "predict" how long the next frame will take?

Sparky wrote:So, the g-sync line isn't going to be perfectly straight, just a lot straighter than the v-sync line.

I have to say I don't quite get the graph. Particularly the eye position part.

With v-sync, if it can't keep 60fps on a 60Hz display, it will effectively drop to 30fps and just show each frame twice. This is perfectly smooth, as the object movement will be correct for the time passed between frames, just delayed by a frame. It's nothing fancy but it works.

Doing interpolation or adjusting animation... will it even work? There's a heck of a lot of guesswork involved, and guessing wrong would be quite noticeable, I believe.

Sparky wrote:while the animation distance is determined by how long the previous frame took.

This would require the game engine to track previous frame times and adjust animation accordingly. Do they actually do that?

Not quite. The game engine decides what to draw in frame 1, then gives it to the GPU to render; while the GPU is rendering frame 1, the game engine is deciding what to draw for frame 2. The game engine only needs to know how long it's been since the last frame, so it knows how far to move something. If you double your framerate, the distance something moves between each frame is cut in half.
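A minimal sketch of that delta-time update (the names are illustrative, not from any particular engine; speed and frame times match the 1000 px/s example earlier in the thread):

```python
def advance(position, speed_px_per_s, dt_ms):
    """Move an object based on how long the last frame took,
    not on a fixed per-frame step."""
    return position + speed_px_per_s * dt_ms / 1000.0

pos = 0.0
for dt in (10, 12, 9, 10, 11, 10):   # last frame's duration in ms
    pos = advance(pos, 1000.0, dt)
print(pos)  # 62.0 -- total distance matches total elapsed time (62 ms)
```

No prediction is involved: the engine only looks backward at how much time actually elapsed, which is why faster framerates just produce smaller, more frequent steps.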

If frame times vary significantly every frame, what are the chances they'd be making things worse by trying to "predict" how long the next frame will take?

Sparky wrote:So, the g-sync line isn't going to be perfectly straight, just a lot straighter than the v-sync line.

I have to say I don't quite get the graph. Particularly the eye position part.

With v-sync, if it can't keep 60fps on a 60Hz display, it will effectively drop to 30fps and just show each frame twice. This is perfectly smooth, as the object movement will be correct for the time passed between frames, just delayed by a frame. It's nothing fancy but it works.

Doing interpolation or adjusting animation... will it even work? There's a heck of a lot of guesswork involved, and guessing wrong would be quite noticeable, I believe.

Not interpolation, just a direct consequence of framerate. It's extremely rare for game logic to be tied to framerate, unless you're talking about old console or arcade games. If you go from 30fps to 60fps, stuff doesn't move twice as fast; it just gets shown more smoothly.

Interpolation would be taking a 30fps source, and inserting frames by averaging or otherwise combining the frame before and the frame after.
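For contrast, a toy version of that kind of interpolation (treating a "frame" as a flat list of pixel values; purely illustrative, real interpolation is far more involved):

```python
def interpolate(frame_a, frame_b, t=0.5):
    """Naive linear blend of two frames: the 'averaging or otherwise
    combining the frame before and the frame after' described above."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

mid = interpolate([0, 100, 200], [50, 100, 250])
print(mid)  # [25.0, 100.0, 225.0]
```

Real motion-interpolation systems (TV "smoothing" modes, for instance) estimate motion vectors rather than blending raw pixels, which is why plain averaging like this produces ghosting on moving objects.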

OK. That's more or less how I see it too - assuming you are saying what I think you are saying.

The game captures the latest game state and uses that to determine what to render.

The post I was referring to in the OP seems to imply that the game takes the estimated rendering time of a frame into consideration when determining what to render. That's impossible, because the game can't know how long it will take the GPU (which could be any GPU) to render a frame, and it can't predict future game state.