Another useful application of variable refresh rate technology (such as G-SYNC), operating at maximum cable bandwidth, is low-latency fixed refresh rate. Since G-SYNC has the additional advantage of faster frame delivery and faster on-screen scanout, you can get 60fps@60Hz with cable delivery of only 1/144sec, and screen scanout of only 1/144sec per refresh! (Or whatever maximum bandwidth the G-SYNC monitor uses -- even theoretical future 240Hz G-SYNC monitors using DisplayPort 2.0 would be able to do 60fps@60Hz or 77.5fps@77.5Hz or 187fps@187Hz, all with just 1/240th sec frame delivery / scanout latency!)

But with G-SYNC, delivery and scanout are decoupled from the refresh rate. You can choose to do 60fps@60Hz with a lot less input lag than any 60Hz monitor -- even less input lag than a 60Hz CRT, because CRTs take a finite amount of time to scan from top to bottom.

Observe that HDMI 2.0 has the bandwidth to transmit individual 1080p frames in less than 1/240th of a second. Reducing frame delivery latency from 16.7ms all the way down to 4.2ms is a major reduction.
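To make that concrete, here's a quick back-of-envelope sketch. The numbers are my assumptions (18 Gbps TMDS link with 8b/10b encoding, uncompressed 24-bit 1080p, blanking overhead ignored), not official HDMI figures:

```python
# Back-of-envelope check of the "1080p frame in under 1/240 sec" claim.
# Assumptions: HDMI 2.0 at 18 Gbps TMDS, 8b/10b encoding (~14.4 Gbps of
# pixel data), 1920x1080 at 24 bits per pixel, blanking overhead ignored.

effective_bandwidth_bps = 18e9 * 8 / 10          # ~14.4 Gbps usable
frame_bits = 1920 * 1080 * 24                    # one uncompressed 1080p frame

delivery_time_s = frame_bits / effective_bandwidth_bps
print(f"1080p frame delivery: {delivery_time_s * 1000:.2f} ms")  # ~3.46 ms
print(f"1/240 sec           : {1000 / 240:.2f} ms")              # ~4.17 ms
```

So even with conservative assumptions, a single 1080p frame fits comfortably inside a 1/240sec delivery window.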

So it has great potential applications for movies and home theater. Variable refresh rate capability & faster frame delivery time belong in HDMI 3.0, in my humble opinion. Less input lag for receivers, less input lag for sports, future game consoles (XBoxTwo, PS5), less broadcasting latency due to sped-up frame delivery between settop box and TV, less input lag everywhere, future-proof frame rates, faster frame delivery times from one home theater device to another....

Conclusion --
This may be 5 years, 10 years, or maybe not until after patents expire, but I think this is an important innovation step, for TWO very, very major reasons:
1. Faster frame delivery time to the display, for less latency even at low frame rates; and
2. Eliminating humankind's dependence on discrete refresh rates. One small step (of many) towards the Holodeck.

It's been talked about for years, but I'm glad that Nvidia have finally done this. The problem is that it's only going to show up in G-Sync monitors for the foreseeable future. Hopefully they'll push to get it in televisions, or someone will be interested in licensing it.
It will be interesting to see how variable framerates end up though. I'm sure it will be better than our current options (v-sync, triple-buffering, or screen tearing) but I'm having a hard time believing that a variable framerate is going to look smooth on a low persistence display.

but I'm having a hard time believing that a variable framerate is going to look smooth on a low persistence display.

There is more than one approach:

(1) ....You can create low persistence simply by using ultrahigh framerates. This works on displays without light modulation (no PWM, no plasma subfields, no DLP modulation, no phosphor decay, no strobe backlight). Continuous-light 1000fps@1000Hz is a persistence of 1ms. Variable frame rates open a path of progress toward ultrahigh framerates, eliminating the need for low-persistence-by-light-modulation (e.g. strobing). One step towards the Holodeck (real life has no framerate; or equivalently, infinite frame rate, depending on how you look at it).
....With ultrashort frame lengths (1ms per frame at 1000fps@1000Hz), low persistence is successfully achieved without the need for any light modulation. There would be absolutely no flicker under high speed camera; a high speed video of the screen would look the same as a high speed video of real life.
....You get variable motion blur; the higher the framerate, the less motion blur. e.g. Tomorrow's "Holodeck" content creator (director) could even control the amount of motion blur via the framerate method; e.g. 500fps has twice the motion clarity of 250fps, and 1000fps has twice the motion clarity of 500fps. Motion blur generated by persistence is proportional to the distance of movement between individual frames -- the length of persistence itself. The motion blur is the same as the equivalent photographic camera shutter speed (e.g. 1/500sec persistence creates the same motion blur as a 1/500sec camera shutter). Obviously, the faster the motion, the more potential for motion blur, until objects start moving too fast for human eyes to track.
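A minimal sketch of that shutter-speed relationship (persistence-based blur only; the eye-tracking speed and framerates below are just example values):

```python
# Motion blur from persistence, for an eye tracking a moving object.
# Blur trail length (in pixels) = tracking speed x persistence time,
# equivalent to a camera shutter held open for the persistence duration.

def blur_px(speed_px_per_sec: float, persistence_sec: float) -> float:
    return speed_px_per_sec * persistence_sec

speed = 2000  # pixels/sec of eye-tracked motion (example value)
for fps in (60, 250, 500, 1000):
    print(f"{fps:4d} fps sample-and-hold -> {blur_px(speed, 1/fps):5.1f} px of blur")
```

Note how each doubling of framerate halves the blur trail: 500fps gives a 4px trail, 1000fps a 2px trail, at this tracking speed.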

-or-

(2) I've mathematically determined that it is technically possible to have flicker-free persistence-lowering light modulation with variable frame rates. You keep persistence high at low framerates, but gradually shorten persistence for progressively higher framerates above the flicker fusion threshold (e.g. begin modulating light towards one shorter strobe/peak/illumination per refresh cycle). Basically, you blend PWM-free/ultrahigh-frequency PWM at low refresh rates (high persistence / sample-and-hold at low framerates), gradually, into full strobing (light modulation) at higher frequencies -- while maintaining a constant trailing-average brightness at all times. This gives a blended flicker-free variable-rate strobing/pulsing algorithm that is low-persistence at high framerates, but high-persistence flicker-free at low framerates. This is easier to control with pulse technologies where you can ultra-precisely control persistence (e.g. DLP, OLED, and all-at-once strobe-backlight LCD) than with phosphor-based technologies (where you can't precisely control the speed of phosphor decay), or segmented scanning backlights (where light leakage between segments complicates precise persistence control). Fortunately, the world is heading towards OLED, and the persistence of OLED is controllable via adjustable pulse lengths, so this would solve the persistence problem without needing to go to ultrahigh framerates yet.
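Here's a toy sketch of the blending idea -- my own illustration with made-up threshold values (`full_hold_hz`, `full_strobe_hz`, `min_pulse_ms`), not a shipping algorithm. The invariant is that pulse width × amplitude per refresh stays constant, so the trailing-average brightness never changes as the strobe rate varies:

```python
# Toy sketch of blending sample-and-hold (low framerates) into short
# strobes (high framerates) while holding average luminance constant.
# Below full_hold_hz the pulse fills the whole refresh (no flicker);
# above full_strobe_hz it shrinks to min_pulse_ms; in between, pulse
# width is linearly interpolated. All thresholds are illustrative.

def pulse_for_refresh(refresh_hz, full_hold_hz=60, full_strobe_hz=120,
                      min_pulse_ms=1.0, target_luminance=100.0):
    frame_ms = 1000.0 / refresh_hz
    if refresh_hz <= full_hold_hz:
        pulse_ms = frame_ms                      # pure sample-and-hold
    elif refresh_hz >= full_strobe_hz:
        pulse_ms = min_pulse_ms                  # full strobing
    else:
        t = (refresh_hz - full_hold_hz) / (full_strobe_hz - full_hold_hz)
        pulse_ms = frame_ms + t * (min_pulse_ms - frame_ms)
    # Scale amplitude so (amplitude x duty cycle) stays constant.
    # Real hardware caps peak brightness, limiting how short pulses can get.
    amplitude = target_luminance * frame_ms / pulse_ms
    return pulse_ms, amplitude

for hz in (45, 60, 90, 120, 144):
    p, a = pulse_for_refresh(hz)
    print(f"{hz:3d} Hz: pulse {p:5.2f} ms, amplitude {a:6.1f}, "
          f"average luminance {a * p / (1000 / hz):.1f}")
```

The printed average luminance stays at 100 for every refresh rate, which is the flicker-free property being claimed; only the persistence (pulse width) changes.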
____

Note: Somebody PM'd me about
-- alternate framerate-less video technologies. Good stuff to talk about, but probably too far-future for now.
-- technologies that use eye tracking to increase framerates only in the portion of the image you are looking at
-- game framerates/refresh rates eventually getting high enough to tolerate interpolation without lag (e.g. games running 250fps could be interpolated to 1000fps and only add 4ms of extra input lag)

Well G-Sync exists because high framerates are not attainable. If they were, we would simply be locked to 120fps at 120Hz and there would be no issues using V-Sync as it already exists.

I'm not sure what you describe in #2 does anything to help with the judder caused by uneven framerates.
G-Sync should have less judder than games which are running at an unlocked framerate on a regular display, because that requires frames to be repeated, but I don't see how an uneven framerate will ever produce completely smooth motion.

Well G-Sync exists because high framerates are not attainable. If they were, we would simply be locked to 120fps at 120Hz and there would be no issues using V-Sync as it already exists.

Humans would still be able to see stutters on a 500Hz display: a single 1/500sec frame drop manifests itself as a sudden momentary doubling of persistence (doubling of motion blur). Eye tracking 2000 pixels/sec motion, you have 4 pixels per 500Hz refresh, and a single stutter at that speed creates a sudden off-by-4-pixel discontinuity. It's still noticeable in the modern world: as we get to retina/4K/8K/IMAX (easier to see blur/stutters), get displays closer to our eyes for more vision coverage (easier to see blur/stutters with longer eye tracking), faster motion (e.g. virtual reality, head turning), and sharper graphics (e.g. computers instead of video), persistence-related issues become that much easier to detect.
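The off-by-4-pixel arithmetic, as a sketch (the tracking speed is the same example value as above):

```python
# Positional discontinuity from a single dropped frame, for an eye
# tracking motion at a given speed on a given refresh rate. A drop
# doubles the per-refresh step, i.e. adds an extra jump of this size.

def stutter_jump_px(speed_px_per_sec: float, refresh_hz: float) -> float:
    return speed_px_per_sec / refresh_hz    # expected step per refresh

print(stutter_jump_px(2000, 500))    # 4.0 px discontinuity at 500 Hz
print(stutter_jump_px(2000, 1000))   # 2.0 px discontinuity at 1000 Hz
```

So even at 500Hz, a single dropped frame produces a 4-pixel misalignment during 2000 pixels/sec eye tracking -- large enough to notice on sharp content.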

I'm not sure what you describe in #2 does anything to help with the judder caused by uneven framerates.

Adding strobing won't add additional judder, if you do a blended algorithm similar to this one:

Before you reply regarding this diagram, read this section closely to understand the situation better. Also, I've created a test electronic circuit -- my Arduino variable-LED-flickering tests show that stroberate transitions are possible without being human-noticeable. You just have to keep light modulations quick, to make sure that the trailing brightness average (over human flicker fusion threshold timescales) remains as constant as possible. It's far more challenging than a constant-rate flicker. Look at blackbody radiation, which has lots of noise (random high-frequency brightness modulations), and we see continuous steady brightness anyway -- the high-frequency brightness modulations are too fast to be noticed. It is simply engineering and precisely mathematically-controlled light modulation: keep average photon output per frame constant, keep photon volume changes above flicker fusion thresholds, and prevent noticeable flicker during rapid photon volume changes. At least for large percentiles of the human population.

Quote:

G-Sync should have less judder than games which are running at an unlocked framerate on a regular display, because that requires frames to be repeated, but I don't see how an uneven framerate will ever produce completely smooth motion.

Well, it definitely does. Uneven framerates actually produce shockingly smooth motion (as confirmed by many of us who saw G-SYNC), especially at framerates >60fps. See How does G-SYNC fix stutters?

Yes, I know, this is a hard concept to wrap your head around. But here's a diagram that helps explain why continuously variable framerates (as long as the variability is at a sufficiently high frequency) look perfectly smooth, provided the variable-framerate capture/recording/rendering is perfectly in sync with the variable-refresh-rate output (on a relative time basis). Explaining from an eye-tracking perspective:

Traditional Fixed Refresh Displays

Variable Frame Rates sync'd on Variable Refresh Rate Displays

It's amazing, but true. Zero erratic stutters.

This, of course, assumes object positions in the variable frame rate correspond to delivery time to the human eye. Once you do that, erratic stutters are eliminated, and you don't see any erraticness during framerate transitions!!! (Yes, I was impressed that this was possible.)

Yes, there's a side effect. See the earlier diagram of motion blur (smearing/ghosting).
The side effect of variable framerates is simply variable motion blurring on steady-light-output displays (e.g. sample-and-hold LCD).
During ~120fps variable frame rate output (fluctuating 100fps to 150fps), the framerate variances would be so rapid that the variable motion blurring would blend into an average constant motion blurring.

Gamers lucky enough to have owned a 200Hz-capable CRT at one time (e.g. 2048x1536 DiamondTron, capable of 200Hz@640x480) can easily see a 1-frame stutter (e.g. 199fps@200Hz) -- and I know it would not stop there, because there are plenty of opportunities for a stutter to show up. (At 2000 pixels/sec eye tracking, a single stutter at 500Hz creates a 4-pixel misalignment.) In a world of higher pixel densities, displays closer to our retinas (e.g. VR), and faster and sharper motion (computer graphics), the detectability of these artifacts goes up.

People who have seen www.testufo.com (especially on a traditional 120Hz LCD computer monitor) are familiar with the relationship between motion blur and frame rate on non-light-modulated displays (e.g. sample-and-hold). On such displays, a 120fps object has half the motion blur of 60fps, and 60fps has half the motion blur of 30fps; 120fps has one-quarter the motion blur of 30fps. (Though 30fps movement looks so 'shaky' that the motion blur becomes visible vibration/shaky movement instead; the amplitude of that shaking is twice as much as at 60fps.) Now G-SYNC makes all framerates look like framerate=Hz, so you've got smoother motion with no erratic stutters. Yes, you'll get "regular stutter" at low framerates (like 24fps@24Hz or 30fps@30Hz), as you already see today at 30fps@30Hz or 30fps@60Hz, but beyond a certain framerate, the regular stutters become so high-frequency that they look like motion blur instead. Plus, the important thing -- no erratic stutter during framerate changes. Everything always looks framerate=Hz at all times, even through varying framerates.

There are situations where we definitely do not want to add motion blur to the original source material or the display. For things like virtual reality or video games, there are many use cases where we want motion blur to be 100% natural, completely generated by the human brain, with no additional blur enforced upon us by the content/display. Motion blur is beneficial artistically, but shouldn't be a guaranteed/forced bottleneck. We (directors, content creators, users) should be able to choose to go into a zero-motion-blur mode at certain times when we need to. The chain from the director/content to the human eyeballs should not have any motion-blur-adding bottlenecks, or Holodeck displays will be impossible.

*everyone* who saw G-SYNC in operation says there are no erratic stutters (sampling of G-SYNC news in mainstream media).

[Apologies if I've opened a Pandora's Box of multiple different topics at the same time -- but this is fascinating technology, and very fascinating stuff to display researchers like me.]

I do, yes. I'm one of the display experts here on AVSFORUM. I'm also the former moderator of the Home Theater Computers forum here, and I worked in the home theater industry for a number of years. I also invented the world's first open-source 3:2 pulldown deinterlacing algorithm, which was used in dScaler more than 10 years ago, back when video processors and line doublers still cost a lot of money. (more info)

If I misphrased something or one of my terminologies is incorrect, point it out, and I can explain, or fix a terminology error that confused you.
I am the creator of www.testufo.com too, in addition to being Chief Blur Buster at www.blurbusters.com .

For motion fully synchronized to the refresh rate, make sure your web browser supports full VSYNC synchronization to refresh. System requirements: www.testufo.com/browser.html -- a recent system containing a good GPU (AMD, NVIDIA, or recent Intel graphics) and a GPU-accelerated browser such as Chrome. It also works best if you're not running anything else while running the web-based motion test.

Higher persistence creates more motion blur/ghosting effects. Persistence is not the same thing as pixel transitions (GtG). For more information, read Why Do Some OLED's Have Motion Blur?, as well as the scientific references that explain sample-and-hold (persistence). John Carmack and Michael Abrash have been talking a lot about this lately as well. I am excited about better OLED displays too, though at the moment, high-efficiency all-at-once strobe-backlight LCDs have less motion blur, at least until OLED improves. TFTCentral has a good explanation of strobe backlights, which are more efficient than scanning backlights, and allow some LCDs to have less motion blur than some CRTs. For those not aware -- Blur Busters is the blog that helped make LightBoost popular (a low-persistence strobe backlight for LCDs), creating media coverage that refers to Blur Busters, all the rave reviews ("It's like a CRT") by high-end gamers, and the YouTube high-speed video proof that LCD pixel transitions can be bypassed via LightBoost. In fact, John Carmack, plus someone from NVIDIA, confirmed that an optional strobe backlight feature is now an official part of G-SYNC monitors.

This is exactly what I'm talking about. When the object position is no longer moving at a consistent rate, how is that perceived as smooth motion? One of the causes of judder is frame repeats, which G-Sync addresses, but it does not address this.

Quote:

Originally Posted by Mark Rejhon

*everyone* who saw G-SYNC in operation says there are no erratic stutters (sampling of G-SYNC news in mainstream media).

Nvidia's demo was primarily focused on fixed framerates which did not sync up with the display. E.g. 40fps at 60Hz, rather than fluctuating framerates. It was part of the demo, but honestly, I don't trust much of the tech press on this.

This is exactly what I'm talking about. When the object position is no longer moving at a consistent rate

For the game use case: you need to view both sides of the equation: the source (game) and the destination (eyeballs). The timing of the object position inside the frame is now consistent with the timing of the eye-tracking position. Games can do that. It needs to be seen in person to be believed.

Quote:

how is that perceived as smooth motion?

It is much smoother motion because the object trajectory stays in far superior sync with the relative eye-tracking position. You may see variances in edge-strobing/motion-blurring, but the trajectory stays locked to eye tracking.

You must, however, have frame capture/generation times correspond to frame presentation times.
e.g. frame captured/generated for T+1.3ms presented to the human eye at T+1.3ms
frame captured/generated for T+7.4ms presented to the human eye at T+7.4ms
frame captured/generated for T+11.9ms presented to the human eye at T+11.9ms
frame captured/generated for T+21.7ms presented to the human eye at T+21.7ms
frame captured/generated for T+30.5ms presented to the human eye at T+30.5ms
(etc.)
To eliminate erratic stutters, you must keep the object positions inside the frame corresponding to the presentation time of the frame. Yes, this only works for games, and not for prerendered content (movies, etc.).
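A minimal sketch of the source-matches-destination idea, using the example presentation timestamps above (the constant velocity is just an assumption for illustration):

```python
# Sketch: a game sampling object position at the exact presentation time
# of each (irregular) frame. Because position is a function of the
# presentation timestamp, irregular frame intervals produce no positional
# error relative to smooth eye tracking -- the trajectory stays exact.

velocity_px_per_ms = 2.0                              # assumed motion speed
presentation_times_ms = [1.3, 7.4, 11.9, 21.7, 30.5]  # irregular frame times

# Position computed for the exact moment each frame reaches the eye:
positions = [velocity_px_per_ms * t for t in presentation_times_ms]

for t, x in zip(presentation_times_ms, positions):
    print(f"t = {t:5.1f} ms -> object at x = {x:5.1f} px")
```

Every frame lands exactly on the ideal trajectory line, even though the frame intervals vary from 3.8ms to 9.8ms -- which is why the eye perceives no erratic stutter.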

As long as the intervals between the frames are sufficiently small, and the number of frames sufficiently high, it's already perceived as smooth motion. The key is to make sure that rate changes occur at a sufficiently high frequency that the smoothness averages out to correspond with the average framerate; thus 60-100fps would average out to look like smooth 80fps@80Hz motion. For on-the-fly rendered content (games), a random 60-100fps on a variable framerate display looks much better than 79fps@80Hz (one stutter per second).

Quote:

Nvidia's demo was primarily focused on fixed framerates which did not sync up with the display. E.g. 40fps at 60Hz, rather than fluctuating framerates. It was part of the demo, but honestly, I don't trust much of the tech press on this.

There is an artificial stutter-injector feature in some of their demos (not shown to everyone, but to reputable people). Erratic stutters didn't become visible in the animations until they were grossly dramatic (e.g. >1/30sec between frames). This is confirmed. Yes, 30fps@30Hz isn't as smooth-looking as 60fps@60Hz. But fluctuating 57-63fps would now look as perfectly smooth as 60fps@60Hz (assuming object positions inside the frames are adjusted to correspond to the frame fluctuations -- not a problem for realtime-generated computer graphics).

With video games dynamically adjusting object positions in each frame based on how early/late a frame gets presented;
random fluctuating video game framerate 25-35fps now looks as smooth as 30fps@30Hz
random fluctuating video game framerate 50-70fps now looks as smooth as 60fps@60Hz
random fluctuating video game framerate 80-140fps now looks as smooth as 110fps@110Hz
(etc.)
In these situations, the display motion blur varies only by about ~20% (proportional to the average variance of the interval between frames), which is not noticeable when the blurtrail length modulates at very high frequencies (e.g. 110Hz average); it averages out to a fixed 110Hz-like motion blur looking darn near identical to 110fps@110Hz. Motion blur size modulations are FAR LESS noticeable than stutters. When the times between frames randomize this quickly, the motion blur size modulates at rates above the flicker fusion threshold, so the motion blur size stays visually constant. Constant perceived smoothness, constant perceived blur size. Even when stutters varied a lot more (>20% variance), the visibility of motion blur modulations was less noticeable than the visibility of stutters.
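A quick simulation of that blur modulation (the 80-140fps range mirrors the example above; the eye-tracking speed and uniform random distribution are assumptions for illustration):

```python
# Illustration: how much does the blur-trail length modulate when frame
# intervals fluctuate? Blur per frame is proportional to that frame's
# interval, so random 80-140 fps makes blur vary around a mean that
# matches the average framerate.

import random

random.seed(1)                     # reproducible illustration
speed = 2000                       # px/sec of eye-tracked motion (assumption)
intervals = [1.0 / random.uniform(80, 140) for _ in range(1000)]
blurs = [speed * dt for dt in intervals]       # blur trail per frame, in px

mean_blur = sum(blurs) / len(blurs)
avg_dev = sum(abs(b - mean_blur) for b in blurs) / len(blurs) / mean_blur
max_dev = max(abs(b - mean_blur) for b in blurs) / mean_blur
print(f"mean blur {mean_blur:.1f} px; average deviation {avg_dev:.0%}, "
      f"worst-case {max_dev:.0%}")
```

The per-frame deviations hover around the mean blur trail length; since they flip sign hundreds of times per second, the eye integrates them into one constant-looking blur.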

Most PC videogames already do this (they adjust object positions based on when they think the frame will be presented to the screen). The timing is, however, bottlenecked/distorted by the forced granular refresh rate of traditional fixed-refresh-rate displays. Games do this anyway to be refresh-rate-independent, and to allow accurate object positions to keep VSYNC OFF smoother than otherwise (e.g. frame rates beyond the refresh rate). G-SYNC simply eliminates the granular discrete refresh rates (the last frame-timing weak link), finally making possible perfect synchrony between object positions (in the computer) and the frame hitting human eye retinas (at least within the G-SYNC native framerate range, currently 30fps to 144fps in the first upcoming monitors, i.e. intervals between frames varying between 1/30sec and 1/144sec), independently of when the frames are created.
The source frame timing equals the destination frame timing. Zero stutters during variable frame rates (above a threshold, ~60fps). Confirmed in demos.

This is an amazing new area of exploration for vision researchers. Without G-SYNC, even a single frame drop is generally noticed during consistent-speed motion tests (59fps@60Hz). With G-SYNC, it is impressive that random framerates varying within a small percentage (10%-20%) are generally not noticeable at all during 60fps+ situations, if the source (timing of object positions) stays in sync with the destination (timing of presentation to eyeballs). Variable refresh rate monitors allow framerates to dynamically vary more before the framerate variances are noticed. Researchers of the future will study: how much do framerates need to vary before framerate variances become noticeable to more than 50% of the population? Etc. It's all amazing new territory to explore, in the new reality of variable frame rate displays.

If game framerates were never variable, G-SYNC would not be necessary. But playing games like Crysis3, we've got framerate variances all the way from 40fps to 144fps depending on which parts of the game we are playing on our 120Hz/144Hz computer monitor. G-SYNC is a godsend for those scenarios. Goodbye unnecessary 30fps caps, if implemented into HDMI 3.0, and tomorrow's consoles take advantage of it.

Now, for pregenerated content, variable framerate has different advantages/purposes (see my original post above; re-read the first post). The advantages for prerendered content (movies/video) are different from the stutter-elimination advantages for video games (since games can realtime-adjust positions of objects based on knowledge of when that specific frame will be presented to the human eye -- something you cannot do for prerendered content like movies/video).

Great idea, but do you think there is a way to implement this into HDMI 2.0? I would bet that 3.0 is a long way off, IMO anyway. I wonder if there is a way to do this now with 2.0 using a firmware change. Surely the process that starts a refresh is controlled by code? Maybe not, but perhaps if the process is started each time by re-writable code, then it could be timed? If it's stoppable by re-writable code, it could be delayed?

Makes me wonder what, if anything, in all the fancy new display features could possibly have caused TV manufacturers to build in software-controlled refresh.

G-SYNC monitors are better than LightBoost; they include a sequel to LightBoost.

LightBoost
-- Came out in late 2011 for 3D Vision.
-- Unofficially became popular for 2D motion blur elimination since early 2013. (Google "lightboost")
-- Degrades color somewhat.
-- Not officially sanctioned by NVIDIA for 2D usage.

G-SYNC
-- Hits the market early 2014.
-- G-SYNC is not LightBoost, but G-SYNC monitors include an optional mode that's a LightBoost sequel that is "superior". (citation)
-- Should have better color than LightBoost
-- Officially sanctioned by NVIDIA for 2D usage.
-- Has other benefits such as variable refresh rate to eliminate tearing/stutters (you can only choose between variable-refresh mode or strobing mode at any one time, but you get both options in any G-SYNC monitor)

Quote:

also our TVs is causing problem.

Well, the problem isn't as big for televisions/movies as for video games, because video games create sharper & faster motion, which makes motion blur easier to see. A lot of high-paying elite gamers hate artificial external motion blur (either source-based or display-based) being added on top of the fully 100% natural motion blur generated by our brains; we often don't want the display/game to be the motion blur bottleneck.

Blur Busters also exists because there are enough of these types of gamers, in addition to people like me....
Even I, who only use a single-monitor, single-GPU setup, dislike input lag and motion blur too.
Input lag haters, motion blur haters. Different priorities than for video.

However, variable refresh rates and low latency intrinsically have so many applications in the near future, since other, more exotic technologies (e.g. direct brain interfaces, lasers into the retina, eye tracking for higher refresh rates only where the eye is pointing) will take far too long to arrive at consumer prices, so we're stuck with a dependence on traditional pixel-matrix technologies (LCD, OLED, etc.) for the foreseeable future.

LCD, LCoS, DLP, OLED, and discrete-pixel LED can all be made variable-refresh-rate quite easily (rate adaptive to frame rate, at dynamically high speeds) without visible flickering, while with CRT / plasma it is, alas, more complex to eliminate the flickering of variable refresh rates (but you can use internal display electronics to choose the closest-matching refresh rate or refresh-rate multiple, and then convert the incoming variability into that).

Quote:

dont u think lightboost display with G-sync combined would make best and beat crt

Theoretically, yes.

Combining G-SYNC and LightBoost is very appealing. With G-SYNC, you have the ability to do a faster scanout (and have less input lag than the bottom edge of a CRT; a CRT has fully zero signal lag only for the top edge of the image, and still takes a finite amount of time to "scan" from the top edge of the screen to the bottom edge), so LCDs with less input lag than a CRT are possible for fully buffered refreshes (VSYNC ON, G-SYNC, etc.) because of the faster scanout of entire refreshes. However, if you add strobing as well, the display has to wait for a refresh to completely finish before strobing. That still doesn't stop a LightBoost+G-SYNC display from having less average input lag (including the bottom edge of the screen) than a CRT, due to more instantaneous full-screen presentation of images, instead of the old-fashioned CRT scanning way...

Great idea, but do you think there is a way to implement this into HDMI 2.0? I would bet that 3.0 is a long way off, IMO anyway. I wonder if there is a way to do this now with 2.0 using a firmware change. Surely the process that starts a refresh is controlled by code? Maybe not, but perhaps if the process is started each time by re-writable code, then it could be timed? If it's stoppable by re-writable code, it could be delayed?

Yes, HDMI 2.0 could technically be upgraded to variable refresh rates. A specification would be needed. Might as well call it HDMI 2.5 or something, to prevent confusion, without waiting for HDMI 3.0. Variable refresh rates can be as simple as using dynamically-resized blanking intervals; something that can be done using any traditional signal with a synchronization interval (which includes VGA, HDMI, DVI, etc). The problem is having hardware that can output it, and displays that can accept it. There are major complexities in making a display truly variable-refresh-rate without artifacts during refresh rate transitions. G-SYNC monitors can change refresh rates more than 100 times a second (every single frame!) -- without refresh-rate transition artifacts. LCD, LCoS, DLP, OLED, and discrete-pixel LED can all easily be made variable-refresh-rate (rate adaptive to frame rate, at dynamically high speeds) without visible flickering, while CRT / plasma is much harder (flicker caused by variable refresh).
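A sketch of the dynamically-resized-blanking-interval idea: keep the active scanout at full speed (the max-refresh line rate) and stretch the vertical blanking until the next frame is ready. The line counts below are illustrative, loosely based on 1080p reduced-blanking timings -- assumptions, not any real monitor's numbers:

```python
# Variable refresh via dynamically-resized vertical blanking:
# scan the 1080 active lines at the full 144 Hz line rate, then pad
# with however many blanking lines are needed to hit the requested
# frame interval. Line counts are illustrative (1080p reduced blanking).

ACTIVE_LINES = 1080
MIN_TOTAL_LINES = 1111                           # total lines at 144 Hz
LINE_TIME_US = (1e6 / 144) / MIN_TOTAL_LINES     # ~6.25 us per scanline

def blanking_lines_for(frame_interval_ms: float) -> int:
    """Vertical blanking lines that stretch one refresh to the requested
    frame interval, while scanning the active lines at full speed."""
    total_lines = round(frame_interval_ms * 1000 / LINE_TIME_US)
    return max(total_lines - ACTIVE_LINES, MIN_TOTAL_LINES - ACTIVE_LINES)

for fps in (144, 100, 60, 30):
    print(f"{fps:3d} fps -> {blanking_lines_for(1000 / fps)} blanking lines")
```

At 144fps you get the normal ~31-line blanking; at 30fps the blanking balloons to thousands of lines. The pixel clock never changes, which is what lets the same cable carry any frame rate up to the maximum.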

Where's the petition page? Seems like they could really fill a sizable niche, at little cost, by providing "A+ gaming certified" TVs with a G-SYNC-like solution, low input lag, and fast pixel response times for 3D and 120Hz+ motion enhancements. They could add it to just a single line of TVs.

However, that doesn't stop a LightBoost+G-SYNC display from having less average input lag (including the bottom edge of the screen) than a CRT, due to more instantaneous full-screen presentation of images, instead of the old-fashioned CRT scanning way...

There certainly is no vertical blanking interval any longer, but a monitor still needs to perform a fetch from a backing store someplace, and that's done one at a time unless the memory is multiported or partitioned for parallel fetches, no? Either top to bottom, or in bands.

We're not yet at the pie in the sky era of having every pixel latched to a memory location and having the two change asynchronously to everything else.

There certainly is no vertical blanking interval any longer, but a monitor still needs to perform a fetch from a backing store someplace, and that's done one at a time unless the memory is multiported or partitioned for parallel fetches, no? Either top to bottom, or in bands.

On the display, nope...
On the cable, yes....
The blanking interval still exists in DisplayPort, HDMI, DVI.

My two gaming monitors (one 120Hz, one 144Hz) scan the LCD real-time, directly from the cable, without any framebuffering whatsoever, in regular gaming mode. My high-speed video showed this, and I also measured 2.8 milliseconds of input lag for the top edge of the screen, between the computer side and the pixels reaching the 50% midpoint of their transition (on the photodiode). Both measurements show the realtime scanout nature.

From what I know now, G-SYNC behaves the same way; the scanout is done on the fly as the bits come in from the cable. But now the scanouts are done on demand rather than at a regularly scheduled interval. (There might be a backing store of a single scanline for some processing, but definitely not full frame buffering.) Frame buffering is done only for history (past frames), to help with realtime on-the-fly LCD overdrive calculations.

Band-scanning is currently discouraged as it creates tear artifacts during fast horizontal motion. Even a clean sweep scan still creates skew artifacts (e.g. www.testufo.com/blurtrail with "Height" = "Full Screen" creates a tilt on 60Hz CRTs and 60Hz LCDs, including iPads in landscape mode -- try it!), while zone/band scanning creates stationary tear artifacts. A good old 1990's paper about the artifacts of band scanning: http://www.poynton.com/PDFs/Motion_portrayal.pdf (see page 5)
Also, the Sony Crystal LED prototype (not OLED) from a year ago had band-scanning artifacts during fast horizontal motion, which were noticed in fast pans.

A rep from NVIDIA did say that they use the variable-blanking-interval method to achieve variable refresh rates. Although the display itself (e.g. LCD) does not really need blanking intervals, they are still used on the cable medium, and this legacy feature is carried over all the way to DisplayPort. You've seen the timing numbers in ToastyX Custom Resolution Utility and NVIDIA Custom Resolution Utility (both modern equivalents of PowerStrip), and they still allow you to adjust the blanking intervals and porch timings, etc. Even though displays have moved on from needing them, they are still a legacy part of the signal. Yes, that means about 10% of the bandwidth for transmitting refreshes over a cable is wasted in blanking intervals. Reduced blanking intervals are used to achieve 144Hz, using roughly the same bandwidth as 120Hz.
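A rough sanity check of that ~10% figure, using CVT-reduced-blanking-style 1080p line/pixel counts (illustrative round numbers, not exact timings from any specific monitor):

```python
# Fraction of cable bandwidth spent on active pixels vs blanking,
# using CVT-reduced-blanking-style 1080p totals (illustrative numbers:
# 160 horizontal blanking pixels, 31 vertical blanking lines).

h_active, v_active = 1920, 1080
h_total, v_total = 2080, 1111        # active + blanking per line / per frame

active_fraction = (h_active * v_active) / (h_total * v_total)
print(f"active: {active_fraction:.1%}, blanking overhead: {1 - active_fraction:.1%}")
# -> active: 89.7%, blanking overhead: 10.3%
```

Even with reduced blanking, roughly a tenth of every refresh transmitted over the cable is blanking rather than picture -- consistent with the ~10% remark above.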

There certainly is no vertical blanking interval any longer, but a monitor still needs to perform a fetch from a backing store someplace, and that's done one at a time unless the memory is multiported or partitioned for parallel fetches, no? Either top to bottom, or in bands.

On the display, nope...
On the cable, yes....
The blanking interval still exists in DisplayPort, HDMI, DVI.

My two gaming monitors (one 120Hz, one 144Hz) scan their LCDs in real time, directly from the cable, without any framebuffering whatsoever, in regular gaming mode.

Imagine that the computer's drawing needs to slow down for a moment or two, either because of some complexity limit on the server side, or because of a new monitor concept of updating only certain regions while other parts of the screen are stagnant. (Disparate issues, disparate display tech, with the same problem.) You'll then have a case where the monitor cannot reflash the frame on its own (to defeat the strobing) unless the entire frame is present within the display. It needs to reflash the frame at a minimum interval for persistence of vision.
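That constraint can be sketched as a tiny decision function in Python. The ~30Hz minimum hold time here is a hypothetical figure purely for illustration:

```python
MAX_HOLD_S = 1 / 30  # hypothetical floor: refresh the panel at least ~30 times/sec

def next_action(time_since_last_frame_s, full_frame_buffered):
    """Decide whether the monitor can coast, must self-refresh, or is stuck.

    Sketch of the constraint above: a display can only re-scan ("reflash") a
    stalled frame on its own if it holds a complete copy of that frame.
    """
    if time_since_last_frame_s < MAX_HOLD_S:
        return "wait"            # still within the hold window; keep waiting for the GPU
    if full_frame_buffered:
        return "self-refresh"    # re-scan the locally buffered frame
    return "cannot-refresh"      # scanning live from the cable: no copy to repeat
```

A monitor that scans straight off the cable with no framebuffer always lands in the last branch, which is exactly the problem described above.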

Right, this is more of a consideration for impulse displays.
Currently, G-SYNC monitors are sample-and-hold. This would thus not be a problem/consideration for LCD, OLED, or DLP, all of which can technically be made variable-refresh-rate.

For strobe backlights in gaming monitors, I drew diagrams here on how you can blend PWM-free (at low frame rates) and strobing (at high frame rates), to get flicker-free operation at low refresh rates and strobing at high refresh rates: http://www.blurbusters.com/faq/creating-strobe-backlight/#variablerefresh
The diagram there shows a flicker-free variable-rate strobing algorithm for 120Hz video game monitors, since I'd love to see NVIDIA attempt to combine G-SYNC and strobing. I've heard NVIDIA is reportedly already working on this. When 120Hz becomes standard among consumers, hopefully within ten years (e.g. NHK 8K 120Hz), there will be more practical possibilities for strobing, since it's less problematic to do in an interpolation-free way at 120Hz than at 60Hz...
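For what it's worth, the blending idea in that diagram can be caricatured in a few lines of Python. The 60/100 fps thresholds here are made-up placeholders, not the actual values from the article:

```python
def strobe_duty(fps, low=60.0, high=100.0):
    """Blend factor for a variable-rate strobe backlight (hypothetical thresholds).

    Below `low` fps:  fully PWM-free (steady backlight, zero flicker).
    Above `high` fps: full single-strobe-per-refresh operation (lowest persistence).
    In between:       cross-fade, ramping strobe depth up with frame rate,
                      loosely matching the blended approach described above.
    """
    if fps <= low:
        return 0.0                           # flicker-free sample-and-hold
    if fps >= high:
        return 1.0                           # full strobing
    return (fps - low) / (high - low)        # linear cross-fade region
```

The key property is that flicker only deepens as the frame rate rises, so it never appears at rates where flicker would be visible.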

But this, clearly, is a separate topic altogether, as eventually displays may migrate to low-persistence with zero light modulation (e.g. ultrahigh frame rates or other exotic technologies).

I've confirmed that the motion behaves exactly as I have described. You do get the "low framerate feel" of lower frame rates (some call it "regular stutter", others call it "edge strobing", and yet others "stop-motion feel"). However, there are zero random stutters, and zero framerate-transition-caused stutters.

I bet you're joking about me not being aware of it. Ha.
Yes, I am aware of the developments at CES 2014.
I posted about it already on BlurBusters.

I don't post much on here, since I've been focussing on the newly-launched Blur Busters Forum which is taking off rapidly in so short a time period.

So ontopic... Yes, AMD FreeSync is rather interesting!
This could be a huge step towards an open VRR technology that might someday migrate into HDMI.
Meanwhile, Oculus just showed off a low-persistence OLED prototype VR goggles, too!
And a new 2560x1440 G-SYNC monitor got announced, so we finally have VRR and strobing in QHD.
Very exciting stuff, especially for the two AMD-driven next-gen consoles, which might even have hardware support for variable VBLANK. If they do, all that remains is figuring out whether it's possible to run FreeSync on them. If it is, it will result not only in smoother games but also in much better visual quality, since it won't matter if you don't quite hit that 60 FPS target; anything between 30 and 60 should seem smooth.

I wonder if a firmware update and an API update on the consoles could be enough to make it work, but only if you can detect via EDID whether a display supports variable VBLANK, because currently the TV resolution is hidden from game developers (I am one). You make the game in 720p or 1080p, and the console TV settings handle scaling up or down (or not at all) from there. We'd need something similar in the console drivers to detect whether your TV supports this mode, since consoles need to "just work". Or perhaps add an extra checkbox, or check the EDID, or just the model name and number from the HDMI signal of the attached TV (yes, I know that doesn't work if you have a receiver in between).

Let's hope AMD and Sony can enable that; then MS will rush to catch up and support it in DX11 (or vice versa). That's why competition is great. I can even see Steamboxes with either G-SYNC or FreeSync pushing each other forward (ideally with FreeSync winning out, since it's unlikely AMD will ever license a proprietary blanking/signaling tech for something that's already built into the VESA spec).

So, questions that need answering, once Catalyst drivers enable FreeSync for the public:
1) Can HDMI 1.3 / 1.4 or 2.0 all output a suitable signal for variable refresh, with or without an update to the spec or to the firmwares of input or output ports? If it's just a matter of altering the signal slightly, it shouldn't require hardware changes in the actual HDMI ports. If HDMI cannot, it will have extremely limited use, although according to articles I read about G-SYNC, there's no real reason why this should only work with DisplayPort 1.2 and above and not HDMI.
2) Since most TVs could support variable VBLANK, at least with a firmware update (according to AMD's CEO, at least), we will need to compile a list of TVs, monitors, and projectors that can actually listen to, and correctly interpret, variable refresh timings. That's assuming AMD releases their drivers to the public, which they should, in response to G-SYNC. If it's something that some, or many, HDMI displays can support, even without a firmware update, it should be only a matter of time before manufacturers update their current, or at least future, TVs to support FreeSync. At that point, you'll see many game developers rejoicing, because they can increase the quality levels in their games: they won't need to target 60 FPS minimum, they can target between 30 and 60 and it should look very smooth regardless of variance in frame rendering time.

Mark, do you know anything about the HDMI EDID data that can tell us whether a monitor supports VBLANK? That would be the first step, to compile a list of those that do. Perhaps someone with those Toshiba laptops from the CES 2014 Freesync demo referred to over at Anandtech, can rip out their display's EDID data and we can analyze it. Once we know whether it can be used to distinguish if a display would support variable VBLANK, then it's just a matter of combing the net for all of them, and encouraging manufacturers to update their display firmwares. I personally would jump up and down if I could get BenQ to update my w1070 projector to support Freesync over HDMI, that would be incredible. BenQ has been pretty good about adding new 3D formats, and are one of the companies that is putting G-SYNC into LCD panels this year or next, so it would seem short-sighted for them (and other manufacturers) to not support both approaches.
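As a starting point for that kind of analysis: standard EDID has a Display Range Limits descriptor (tag 0xFD) that advertises the panel's minimum and maximum vertical refresh rates. Here's a minimal Python sketch of reading it; a wide min-to-max range is at best a hint, and cannot by itself confirm variable-refresh support:

```python
def vertical_rate_range(edid: bytes):
    """Pull min/max vertical refresh (Hz) from a base EDID block's
    Display Range Limits descriptor, or None if the descriptor is absent.

    Caveat: this only shows the advertised fixed-rate range; EDID alone
    can't prove a display tolerates dynamically varying VBLANK.
    """
    for offset in (54, 72, 90, 108):          # the four 18-byte descriptor slots
        d = edid[offset:offset + 18]
        # A Display Range Limits descriptor starts 00 00 00 FD ...
        if d[0:3] == b"\x00\x00\x00" and d[3] == 0xFD:
            return d[5], d[6]                  # min Hz, max Hz
    return None
```

Feeding this the EDID dumped from one of those Toshiba demo laptops would at least tell us what range the panel claims.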

Even if G-Sync ends up being slightly better (1 frame less lag, perhaps, depending on whether the third buffer is a backbuffer or adds more latency) than FreeSync, this is all terrific news for videogames. Hopefully we can all figure out these issues. As soon as someone on the net gets their hands on a Radeon Catalyst driver with FreeSync enabled, it's off to the races to figure out if it works on commonplace HDMI TVs or monitors, or even on the rare one here and there. Because once that happens, you can compare firmwares and try to haxx0r it in to different models from the same manufacturer. Yeah, it's much better to wait for the manufacturers to do it themselves, but I love H/W haxx0ring like you guys do at Blur Busters, keep up the good work! If I still gamed on puny TVs or monitors, I'd use your stuff, but I can't get over the superiority of my 100 inch 3D DLP projector, it kicks ass.

I'm considering trying to get 1400 x 900 working in 120Hz 3D on my BenQ using some of those tweak programs, that would be a good compromise. Or maybe even a 2.35:1 resolution at 120Hz, that would be killer for 3D. It's too bad AMD sucks at supporting stereo 3D in games; I was about to buy a Maxwell GPU but now I will have to wait to see how this FreeSync news shakes down. Should be an interesting couple of months.

Even if G-Sync ends up being slightly better (1 frame less lag, perhaps, depending on whether the third buffer is a backbuffer or adds more latency)

This is my main concern with Freesync.

G-Sync fixes both stuttering/tearing problems, and latency.
It sounds like Freesync is triple-buffered, which means that you have two additional frames of latency compared to G-Sync.

If it becomes a VESA Standard, it's a lot more likely to be adopted by television manufacturers though, so in that regard it's a step forward, as it would apply to more than just PC gaming monitors. (which are tiny, and use poor quality panels)

I think people are being overly optimistic about it being implemented in consoles just because they're using AMD hardware.
Console games rarely ever use triple-buffering to eliminate tearing, and it assumes that this is something which Sony/Microsoft could implement, or would have an interest in implementing. Sony seems like the likely candidate as they sell both consoles and displays though - but unless it's possible via a firmware update on their 2014 displays, which seems unlikely, it's probably at least a year away.

Yeah, but I'm very skeptical that G-Sync has less latency than that Toshiba laptop has, so the question is, does it necessarily have less latency than a good Freesync implementation with a discrete graphics card and non-integrated display.

Don't forget, triple buffering doesn't mean three buffers back to back; it means one front buffer and two back buffers, with the GPU merely alternating which backbuffer it writes to, to avoid lock stalls (due to v-sync being on). Naive triple buffering would simply add lag without solving tearing, which would be pointless. The entire benefit of triple buffering is that locking only the backbuffer you're writing to leaves the other one immediately free to send to the front buffer and down the wire. Then the question becomes: how can NVIDIA get away with only two front (or back) buffers, one to write to and the other to snapshot down the wire directly, while FreeSync can't? I'm skeptical that the CEO knows what he's talking about here, frankly.
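To make the buffer juggling concrete, here's a tiny Python sketch of that scheme: the GPU alternates buffers and never blocks, and the display always flips to the newest completed frame (a deliberate toy simplification; real swap chains live in the driver):

```python
class TripleBuffer:
    """One front buffer scanned out by the display, plus two buffers the GPU
    cycles through, so rendering never stalls waiting for scanout."""

    def __init__(self):
        self.front = 0         # buffer the display is currently scanning out
        self.writing = 1       # buffer the GPU is rendering into
        self.spare = 2         # idle buffer
        self.completed = None  # newest finished, not-yet-displayed frame

    def finish_frame(self):
        """GPU finished a frame; it immediately moves on without waiting."""
        if self.completed is None:
            self.completed, self.writing, self.spare = self.writing, self.spare, None
        else:
            # Display hasn't flipped yet: drop the stale frame, reuse its buffer.
            self.completed, self.writing = self.writing, self.completed

    def flip(self):
        """Display takes the newest completed frame; the old front goes idle."""
        if self.completed is not None:
            self.front, self.spare, self.completed = self.completed, self.front, None
        return self.front
```

Note how a frame finished before the display flips gets silently dropped rather than queued, which is why proper triple buffering avoids piling up latency.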

In any case, V-SYNC is on, on Next Gen consoles, by fiat from on high, and it's not up to the end user to disable it (much to the chagrin of some of my gamer buddies, some of whom prefer more framerate and less lag over being tear-free, and others hate tearing with a passion). I just don't see what's so magical about G-Sync that VBLANK can't be implemented with exactly the same latency in every respect.

The question about the HDMI spec is: are fixed VBLANK intervals baked into the assumptions of the video signal itself? I don't think so; it's more a question of the HDMI ports, and the video card and display firmware. If AMD's engineers are as good as we think, then with some forethought they'd have shipped the Xbox One and PS4 with the same capability to do FreeSync.

I'm not too worried about PCs getting this tech (though it'd certainly be better if both companies used the same FreeSync, which will end up being the case eventually, I'm sure), and I'm happy to simply let gamer-centric manufacturers support G-SYNC. I mean, it's kind of absurd, if it's part of the VESA spec and they can implement it themselves, not to do so.