
Phopojijo writes "A monitor redraws itself top to bottom because of how the electron guns in CRT monitors used to operate. VSync was created to align the completed frames, computed by a videocard, to the start of each monitor draw; without it, midway through a monitor's draw process, a break (horizontal tear) would be visible on screen between the two time-slices of animation. Pixels on LCD monitors do not need to wait for the lines of pixels above them to be drawn, but they do anyway. G-Sync is a technology from NVIDIA to make monitor refresh rates variable. The monitor will time its draws to whenever the GPU is finished rendering. A scene which requires 40ms to draw will have a smooth 'framerate' of 25FPS instead of trying to fit in some fraction of 60 FPS."
NVIDIA also announced support for three 4K displays at the same time. That combined resolution would be 11520×2160.

Okay, can someone who isn't wrapped up in market-speak tell us what the practical benefit is here? The fact is that graphics cards are still designed around the concept of a frame; the rendering pipeline is based on that. 'vsync' doesn't have any meaning anymore; LCD monitors just ignore it and bitblt the next frame directly to the display without any delay. So this "G-sync" sounds to me like just a way to throttle the pipeline of the graphics card so it delivers a consistent FPS... which is something we can

For starters it reduces memory contention because the display device doesn't have to read and send the display data over the wire 60 times a second, while rendering the next frame. Theoretically, if there's nothing happening on the screen, such as an idle desktop, the display device won't consume x*100 Mbyte of bandwidth just to show a still image on the screen.
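To put a rough number on that scan-out traffic (my own back-of-the-envelope math, not from the post): a 1920x1080 desktop at 32 bits per pixel, scanned out 60 times a second, reads close to half a gigabyte per second from the framebuffer just to keep a static image on screen.

```python
# Rough scan-out bandwidth for a 1920x1080, 32bpp display at 60 Hz.
width, height = 1920, 1080
bytes_per_pixel = 4            # 32-bit RGBA
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel
bandwidth = frame_bytes * refresh_hz           # bytes per second

print(frame_bytes // 2**20)    # about 7 MiB per frame
print(bandwidth / 2**20)       # ~475 MiB/s, even for an idle desktop
```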

That's not quite the angle they're going for, but there are such solutions, involving a special "no refresh" signal (I assume) and an LCD controller with a framebuffer that is used to refresh the panel if there is no change, allowing the GPU to be idled.

Yeah, I was going to say that I've got all sorts of games that have tearing on side-to-side scrolling scenes. Whether it's a problem with my panel, the videocard, or the game, I'd like it to stop (and usually enabling vsync in the game stops it).

Input lag is how long a game takes to react to your manipulation of the controls, not how long it takes to display it on the panel or CRT you're looking at. Maybe you mean output lag? Since screens get updated 50 or 60 times a second with TFTs, and CRTs often get higher refresh rates, you're looking at 20ms or less for a screen refresh, when it comes to pure VSync delay. "Whoa dude, 20ms, my ping time is less than that!" I hear you say. Apart from the fact that most of the planet has ping times that are way high

I don't have LCD lag, either. At least, not anything appreciable. That's because my PC monitor is running at native resolution, and my TV is a SHARP Aquos and it has a spiffy scaler which can scale in a single frame cycle, and at high quality.

Believe it or not, enabling vsync, especially on 60 Hz LCD displays (about 16.7 ms per frame), still causes a very perceptible delay in fast-paced games (even without triple buffering). Disable it, and you no longer see the delay; movement feels much more instantaneous. Yes, scaling and adjusting refresh rates may introduce delay, but who runs their LCD in a non-native mode?

If you don't see the difference, then your game is too slow or doesn't render enough frames (>= 100) per second.

CRTs are only zero input lag at the top edge of the screen. CRTs even have input lag for the bottom edge of the screen, because they have a finite frame transmission time (scanning from top to bottom). Some gaming LCDs (certain BENQ and ASUS gaming LCDs) are the same way; the

The only advantage I see is that you see the image as soon as possible after it's done rendering. If your graphics card is done rendering the scene, but your monitor just refreshed 1 ms before that, you're going to have to wait another 0.016 seconds (assuming 60 Hz) to see that frame, because that's the next time the screen refreshes. If you can make the refresh of the monitor exactly match the rate at which frames are produced, there's minimal lag between the frame being rendered, and the frame being shown

Also it can save power. If the GPU is creating frames at less than 60 Hz, then that's less power it needs to spend to do it.

There's no "can" about it. It's a proven fact that it does save power! I know because I enable VSync at the global level in the nVidia Control Panel. Regardless of what game I play, the card isn't calculating into infinity and drawing over 100 watts of power (not including the CPU power savings). It's also quieter because the fan isn't working hard to throw off the extra heat that wo

Well, the video card of course will save power by not rendering a frame until the LCD is ready to take it.

I'm not keen on the low level details of HDMI, but I do know that HDMI and friends send the entire image per frame over the wire. The "RAMDAC" of HDMI is sending a full frame's worth of data over that wire regardless of any changes. It would save a bit of power to not send that if there are no changes.

Even if your videocard did nothing on a given frame, its framebuffer still got shoved over the HDMI

Indeed. I have a 144 Hz screen, and I noticed as soon as I went from 60 Hz to 144 Hz that even when the frame rate was below 60 fps, it was still smoother than before. It was obvious that it would be smoother when getting over 60 fps, but this surprised me. Thinking about it I came to the same conclusion.

I have also been wondering what the picture would be like when using a high refresh rate while the graphics card cannot render a frame for every refresh; what if it only rendered half the pixels every upd

VSync's problem is that it is dictated by your monitor and is a fixed rate, where graphics card frame rates are variable. This causes either stutter or tearing, depending on whether you want to wait for VSync before drawing.

This solution instead is controlled by the video card so it will never tear, and the monitor will update when told to rather than at a specific time. No more tearing and no more performance loss to deal with it... also no need to triple buffer, which will help reduce input lag (among other things).

No marketspeak here, but if you're not familiar with the technical details you might be a bit lost. First of all, in order to understand the solution, you need to identify the problem.

The problem is that, currently, refresh rates are STATIC. For example, if I set mine to 60Hz, the screen redraws at 60fps. If I keep vsync disabled to allow my gfx card to push out frames as fast as it can, my screen still only draws 60fps, and screen "tearing" can result as the screen redraws in the middle of the gfx card pushing out a new frame (so I see half-and-half of two frames).

As described, let's say my gfx card is pushing out 25fps. Currently the optimal strategy is to keep vsync off, even though this can result in screen "tearing", because at low fps an even bigger problem emerges with vsync on, even though the tearing is fixed.

Every time my gfx card pushes out a frame, since vsync is on, it waits to ensure it will not be drawing to the screen buffer while the screen is updating. Since it waits, the screen only draws complete frames. So at 60fps the screen updates in 1/60 second intervals, and the gfx card renders at 1/25 second intervals. So, at the beginning of a frame render, the gfx card renders... and the screen redraws twice, and then the gfx card has to wait for the third opportunity to draw before syncing up again. Since it is waiting instead of rendering, I am now rendering at 20fps (since I lose 2 of every 3 refresh opportunities) instead of the optimal 25fps. If I disable vsync, I get tearing, but now 25fps.
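That arithmetic can be sketched as a toy model (my own sketch, assuming a double-buffered GPU that blocks until the next refresh boundary):

```python
import math

def effective_fps(render_ms, refresh_hz):
    """Frame rate after vsync quantization: each frame occupies a whole
    number of refresh intervals, rounded up, before the next swap."""
    interval_ms = 1000.0 / refresh_hz
    intervals = math.ceil(render_ms / interval_ms)
    return refresh_hz / intervals

print(effective_fps(40, 60))   # 40 ms frames on a 60 Hz panel -> 20 fps
print(effective_fps(17, 60))   # just missing 16.7 ms drops you to 30 fps
```

Note how narrowly missing a refresh boundary (17 ms instead of 16.7 ms) halves the frame rate; that cliff is exactly what a variable refresh avoids.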

This "G-sync" claims to solve that issue by making refresh rates DYNAMIC. So if my gfx card renders at 25fps, the screen will refresh at that rate. It will be synchronized. No tearing or gfx card waiting to draw.

Exactly, and that is especially a problem for the Oculus Rift and other virtual reality headsets that are coming onto the market, because it becomes really noticeable when you move your head quickly. I think that that is what they're mainly targeting here, although according to John Carmack, G-Sync won't work on the Rift [twitter.com]. Anyway, for those interested in the technical details, graphics programming legend Michael Abrash (currently at Valve) wrote an excellent technical piece [valvesoftware.com] about the frame timing issues you

This scheme would only work for LCDs with a fully digital interface, and not the ones that digitize VGA signals, as it would mess up PLL timings.

Indeed, from the material I've read it's DisplayPort only (DP is a high speed packet interface). As you say, HDMI/DVI work more or less like digitized VGA.

Also, addressing another issue you mentioned, the photos of the monitor-side controller show that it has several DRAMs. This memory is almost certainly used to store the last frame so the monitor can refresh itself if/when the host refresh interval drops too low to keep a stable image on the panel.

G-Sync enforces a 30Hz minimum refresh rate (the monitor will never wait longer than 33ms to refresh; and on the 144Hz demo monitors, never less than 7ms), so your example wouldn't work, but apart from that, yeah :)
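The clamping described here can be sketched like this (using the 30 Hz floor and the 144 Hz panel's minimum interval mentioned in this comment as the bounds; this is an illustration, not an actual NVIDIA spec):

```python
def gsync_interval(render_ms, min_hz=30, max_hz=144):
    """Time between refreshes, clamped to the panel's supported window.
    The panel refreshes when the GPU finishes a frame, but never waits
    longer than 1/min_hz (it self-refreshes) or shorter than 1/max_hz."""
    lo = 1000.0 / max_hz   # ~6.9 ms minimum on a 144 Hz panel
    hi = 1000.0 / min_hz   # ~33.3 ms ceiling (30 Hz floor)
    return min(max(render_ms, lo), hi)

print(gsync_interval(25))   # 25.0: panel refreshes exactly when the frame is done
print(gsync_interval(50))   # 33.3...: panel self-refreshes at the 30 Hz floor
print(gsync_interval(3))    # 6.9...: a too-fast frame waits for the panel's max rate
```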

Currently the optimal strategy is to keep vsync off, even though this can result in screen "tearing"

No, the optimal strategy is to keep vsync on and throttle your redraws exactly to it. To make it work you must set up an event loop and a phase-lock timer (because just calling glFinish to wait for vsync will keep you in a pointless busywait all the time). Unfortunately, game programmers these days are often too lazy to do this and simply ignore vsync altogether. While this may result in smoother animation, i
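The kind of paced loop the parent means might look roughly like this (my own sketch in Python rather than real GL; it assumes a known fixed refresh interval, where a real implementation would phase-lock to vblank timestamps from the windowing system):

```python
import time

REFRESH_S = 1.0 / 60   # assumed fixed 60 Hz refresh interval

def paced_loop(render_frame, n_frames):
    """Render at most one frame per refresh interval, sleeping between
    frames instead of busy-waiting on vsync."""
    next_deadline = time.monotonic()
    for _ in range(n_frames):
        render_frame()                     # draw into the back buffer
        next_deadline += REFRESH_S
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)              # yield the CPU; no busywait
        else:
            next_deadline = time.monotonic()   # missed a slot; resync phase
        # here: swap buffers at (or near) the vblank
```

The point of the `else` branch is that a missed deadline resynchronizes the phase instead of letting the loop pile up debt and sprint to catch up.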

>This "G-sync" claims to solve that issue by making refresh rates DYNAMIC. So if my gfx card renders at 25fps, the screen will refresh at that rate. It will be synchronized. No tearing or gfx card waiting to draw.

Well, we already have Adaptive VSYNC (if you have bothered updating your drivers in the last year), which does in fact make your GPU refresh rates somewhat dynamic to avoid the annoying 60 -> 30fps hops.

G-SYNC looks even better, though. My only worry is that it will be horrendously overprice

LCD monitors absolutely do not ignore VSync. Now let's not forget that the primary function of a VSync signal is to tell the monitor (CRT or LCD) where the start of the picture begins. There's also HSync to break the picture into scanlines. VSync always takes a certain amount of time, during which the monitor will "take a breath" (a CRT will also move the gun back to the top). That is the perfect moment for the GPU to quickly swap its framebuffers in video memory. The "scratch" draw buffer is promoted to the final output image, and then the GPU can begin drawing the next one in the background. At the same time, the completed image is sent to the monitor in the normal picture signal when the monitor gets back to work to draw a frame. If the buffers are swapped while the monitor is in the middle of drawing a frame, the halves of two frames get shown together, which leads to the video artifact called "tearing".

If we are good citizens and swap buffers only during the VSync period, we get a nice tear-free (typically 60fps) image. However, if it instead takes more than the time of one picture (which is about 16ms) to draw the next one, we have to wait a long time for the next VSync, and that means we also slide all the way down to a 30fps frame rate. Now if the game runs fast at some moments but slower at others, the bouncing between 60fps and 30fps (or even 15fps) makes this annoying jerky effect. NVIDIA's G-Sync tries to solve this problem by making the frame time dynamic.

Well I don't want to be "that guy", but I am "that guy". The real reason for vsync in the days of CRTs is to give time for the energy in the vertical deflection coils to swap around. There is a tremendous amount of apparent power (current out of phase with voltage) circulating in a CRT's deflection coils.

Simply "shorting out" that power results in tremendous waste. They used to do it that way early on, they quickly went to dumping that current into a capacitor so they could dump it right back into the coil on the next cycle. That takes time.

An electron beam has little mass and can easily be put anywhere at all very quickly on the face of a CRT. It's just that the magnetic deflection used in TVs is optimized for sweeping at one rate one way. On CRT oscilloscopes they used electrostatic deflection and you could, in theory, have the electron beam sweep as fast "up" as "left to right".

So why didn't they use electrostatic deflection in TVs? The forces generated by an electrostatic deflection system are much smaller than a magnetic system, you'd need a CRT a few feet deep to get the same size picture.

So this "G-sync" sounds to me like just a way to throttle the pipeline of the graphics card so it delivers a consistent FPS...

Actually, it's the inverse: with G-sync, the monitor retrace tracks the instantaneous FPS delivered by the game. That way there is no stutter (or tearing) as a result of quantizing the display scans to a pre-determined arbitrary frame rate.

Actually, it's the reverse. Instead of forcing the GPU to wait for the screen's refresh rate (as is the case with V-sync), potentially causing some pretty bad frame drops, G-sync makes the monitor wait for the GPU's frames. Whenever the GPU outputs a frame, the monitor refreshes with that frame. If a frame takes longer, the screen keeps the old frame shown in the meantime.

Remember, V-sync forces the GPU to wait for the full frame's duration, regardless of how long it's taken to render the frame. If the GPU renders the frame in 3ms but V-sync is at 10ms per frame, the GPU waits around for 7ms. Flip side, if the GPU takes 11ms, it's "missed" a frame (lag) and still has to wait 9ms until it can start drawing the next frame. G-sync is supposed to make it so as soon as the GPU's done rendering a frame, it pushes it to the monitor, and as soon as the monitor can refresh the display to show that new image, it will.

In theory, this could effectively give the visual quality of V-sync (no screen tearing) with a speed similar to straight rendering without V-sync.
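The timing in the posts above can be put in a toy model (my own sketch; the 10 ms refresh interval is just the number from the example above):

```python
import math

def vsync_display_delay(render_ms, refresh_ms=10.0):
    """ms from the start of rendering until the frame appears on screen,
    when scan-out is locked to fixed refresh boundaries."""
    return math.ceil(render_ms / refresh_ms) * refresh_ms

def gsync_display_delay(render_ms):
    """With G-sync, the panel refreshes as soon as the frame is done."""
    return render_ms

print(vsync_display_delay(3))    # 10.0: the 3 ms frame sits for 7 ms until vblank
print(vsync_display_delay(11))   # 20.0: just missing vblank costs a whole refresh
print(gsync_display_delay(11))   # 11: shown the moment it's rendered
```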

You sync the panel's refresh rate to the application's. Say a frame gets delayed (40 fps instead of 60 fps, for instance); traditionally, the monitor is blissfully ignorant of that fact and just refreshes whatever it is given when the time comes. Nvidia's solution is to have the GPU signal the LCD's controller, telling it when to refresh. This allows the monitor to refresh when the frame is done rendering, instead of at a fixed point in time.

1) Reduce input lag to the lowest possible delay due to each frame being displayed immediately on the screen. With standard fixed refresh rate displays, there is almost always a delay between a frame being generated and being displayed on the screen and the delay is not constant.

2) Remove the need for vsync to eliminate screen tearing. Since the monitor's refresh cycles are controlled by the GPU, it can be guaranteed to avoid tearing without requiring the GPU to render fra

Oh, slashdot. Yet another ignorant "girlintraining" post modded up to 4+ interesting/informative/etc. for no discernible reason.

LCDs do not ignore vsync. They have never ignored vsync. How on earth did you get the idea that they ignored vsync? Same comment re: "bitblt the next frame directly to the display without any delay". The "next frame" isn't even in the display, you buffoon. It's either not computed yet, or sitting in buffers on the video card. The display can't magically pull those bits out o

'vsync' doesn't have any meaning anymore; LCD monitors just ignore it and bitblt the next frame directly to the display without any delay.

Ah, but vsync does still have meaning, because LCD monitors essentially emulate it. The video feed is still sent to the monitor as if it were a CRT: sequentially, left to right, top to bottom, at a set frequency. If a game finishes drawing a frame while the video stream is still halfway down the screen, then you get a tear, because the display frequency is fixed.

What this technology seems to do is allow the graphics card to send a complete frame to the monitor then tell the monitor to display it straight away. W

Now we just wait until they finally figure out to employ a smarter protocol than sending the whole frame buffer over the wire when only a tiny part of the screen has changed. It would do wonders for APUs and other systems with shared memory.

On one hand, I see where this could be a good idea. On the other hand, I kind of like dumb displays. Stuff like displays and speakers should really just display/play the signal sent to them. They should be as simple as possible, because they are expensive, and this allows them to last for longer and be cheaper. If TVs were smart, I would probably have to upgrade my TV every time they came up with a new video encoding standard. Luckily, TVs just understand a raw signal, and I can much more easily upgrade

A huge part of good remote desktop protocols is just that! Keeps the bandwidth down. If your graphics card could know "for free" that all changes were in a given rectangle, and I bet it often could, that doesn't even sound hard to do.

AIUI first generation Thunderbolt is basically equivalent to PCIe 2.0 x4, while second generation Thunderbolt is basically equivalent to PCIe 3.0 x4. AFAICT that is tolerable but suboptimal for running an external GPU.

Now we just wait until they finally figure out to employ a smarter protocol than sending the whole frame buffer over the wire when only a tiny part of the screen has changed. It would do wonders for APUs and other systems with shared memory.

They exist - they're called "smart" LCD displays and are typically used by embedded devices. These maintain their own framebuffer, and the LCD controller sends partial updates as it needs to then shuts down. It saves some power and offloads a lot of the logic to the scre

Now we just wait until they finally figure out to employ a smarter protocol than sending the whole frame buffer over the wire when only a tiny part of the screen has changed.

Wouldn't that depend on whether it's faster to just send the entire framebuffer over the wire, or do a pixel-by-pixel compare between the current framebuffer and the previous one to figure out which parts have changed?

This sort of streaming compression makes sense when bandwidth is limited, like back when people used dialup to acce

You wouldn't do a naive pixel-by-pixel compare, the operating system would tell the driver what to update since it directs all screen changes and always knows what has changed.
It's not about the bandwidth between the screen and the display device, but the memory bandwidth consumed from the video RAM, since the display device currently reads the screen memory 60 times per second even if there's nothing currently happening. By only sending the parts that have changed, you free up memory bandwidth for any ot
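What the parent describes is essentially the damage tracking that remote desktop protocols already do. A hypothetical sketch of the idea (the single bounding box is my own simplification; real systems track a list of damaged regions):

```python
def merge_damage(rects):
    """Union a list of damaged (x, y, w, h) rectangles into one bounding
    box, so only that region is re-read from VRAM and sent downstream."""
    if not rects:
        return None                      # nothing changed: skip the update entirely
    x0 = min(x for x, y, w, h in rects)
    y0 = min(y for x, y, w, h in rects)
    x1 = max(x + w for x, y, w, h in rects)
    y1 = max(y + h for x, y, w, h in rects)
    return (x0, y0, x1 - x0, y1 - y0)

cursor = (10, 10, 2, 16)     # blinking text cursor
clock = (40, 10, 60, 16)     # taskbar clock redraw
print(merge_damage([cursor, clock]))   # (10, 10, 90, 16): a tiny strip, not the whole screen
```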

For FPSes when you rotate all around, or for action movies where the camera moves quickly, all of the screen is updated.

Then make "scroll rectangle" one of the primitives in the screen difference protocol. If the camera turns, scroll the data in the frame buffer at the same speed that the camera turns. Sure, there'll be artifacts near the HUD, but overall, that should provide the illusion of less latency [slashdot.org]. MPEG-4 ASP (e.g. DivX, Xvid) uses this technique under the name "global motion compensation", but ultimately, the concept dates back to motion vectors way back in the H.261 era.

The fact is that the bandwidth between video card and monitor must be enough to handle the worst-case scenario or else it's not fit for purpose, and the hardware cost difference between fully utilizing this link and greatly under-utilizing this link is very, very small. There are power savings if you can under-utilize the link without sacrifice, but...

Meanwhile there are large up-front costs associated with performing real-t

In *that* particular case we're back to square one, where we are now, but in all other cases there are power and memory bandwidth savings. Not all use cases are full-screen FPS games, and typically those are not the users who are concerned with memory bandwidth and power usage.

In *that* particular case we're back to square one, where we are now, but in all other cases there are power and memory bandwidth savings.

Would there? The GPU actually requires more memory bandwidth, since it needs to retrieve the previous frame for a pixel-for-pixel comparison. And both the encoding and decoding require circuitry, which needs power - probably more power than just sending the raw frame over a 1-meter link in the first place. That's worth remembering: we aren't talking about a trans-A

That's not how it would be implemented. You would just let the operating system tell the driver what parts of the screen need to be updated, since it directs all drawing operations and knows exactly what has changed on the screen. The power savings and performance benefits come from not having to burden the video RAM by continuously reading x*100 Mbyte/s of screen memory to generate the display signal when there's little or nothing happening on the screen. It would reduce power consumption and increase perf

The power savings and performance benefits come from not having to burden the video RAM by continuously reading x*100 Mbyte/s of screen memory to generate the display signal when there's little or nothing happening on the screen.

And instead you would burden the main CPU and RAM, preventing them from ever entering power saving mode and adding lag to every screen update. And it wouldn't even work, because the OS doesn't know what parts of the screen have changed. The relationship between various buffers and the

This exact system is already in use in remote desktop software, so you can't really argue that it's stupid or that the people writing that software are stupid. Your replies clearly show you don't understand how modern computer screens, display devices and operating systems work, nor the concept of a shared memory system and why memory contention is a real issue.

It all boils down to these options:

1. Keep the current system where the display device continuously burdens video RAM or system RAM wi

I would feel pretty good about this if it were being proposed as some sort of standard, but from the blurb, it looks like a single-vendor lock-in situation. You will need an Nvidia graphics card to make it work, but your monitor will also need an Nvidia circuit board to regulate the framerate. The only value of this kind of variable framerate technology is for gaming. This means that the needed circuitry will appear only in monitors that are meant specifically for gamers. This means that they will be segmen

Interestingly, Nvidia will be providing the G-sync chips by themselves [anandtech.com], allowing people to mod their monitor to install the chip on them. I'm not sure just how compatible this would be, but it might allow you to upgrade your existing monitors with G-sync support or get someone to do it for you, depending on your capabilities and willingness to risk your monitor.

I don't see anything about them selling chips to end users, just stuff about them selling upgrade modules. I guess each module will be specific to one make/model of monitor and will require cooperation of the monitor manufacturer to produce.

You mean this quote? "Initially supporting Asus’s VG248QE monitor, end-users will be able to mod their monitor to install the board, or alternatively professional modders will be selling pre-modified monitors."

You're confusing HTPCs and using panels designed as TVs for computer monitors. We're talking about people who stick a 32" monitor (or larger) on the wall in front of their desk in their office, vs putting a computer under the TV in your living room. While the components are the same, the ergonomics are different.

An important detail there. Back then, as I recall, HD TVs (1280×720), or even lower res, were very common. While that's okay for watching movies or TV shows from the couch, that's awful for a large screen sitting within arm's reach. And even now, most TVs are only Full HD (1920x1080), no matter the size, while computer monitors often go higher; 27" monitors at WQHD (2560x1440) are getting quite popular, I heard.

G-sync (i.e. sync originated by the graphics card) seems like a good idea. It:

- allows for the ability of single or multiple graphics cards within a computer to emulate genlock for multiple monitors, so that the refresh rates and refresh times of those monitors interact properly
- allows for the synchronization of frame rendering and output, i.e. reducing display lag, which is important for gamers and realtime applications
- allows for a graphics card to select the highes

Sync has always been originated by the graphics card so no special assistance from the monitor would be needed to lock the framerates and timings of multiple monitors together.

The problem is that traditionally monitors don't just use the sync signals to sync the start of a frame/line, they also use them as part of the process of working out what geometry and timings the graphics card is sending. Furthermore some monitors will only "lock" successfully if the timings are roughly what they expect. So you can't

Because skilled directors and camera operators have learned in the last 100 years of movie making history which kind of camera movements work, and painstakingly avoid those which don't work with low framerates.

Because you have been trained by films for your entire life to think that blurry stuttery 24fps is smooth and cinematic. If you ever watch a movie where lots of action is happening on the screen at once, you'll probably get slightly lost because everything turns into an unrecognisable blur. For an example of this, watch any of the Michael Bay transformers movies and try to figure out which Transformer is on the scene during any random action shot.

Actually, the problem is even bigger. Somewhere around 200fps, you start flying into "uncanny valley" territory. 200fps is faster than your foveal cones can sense motion, but it's still less than half the framerate at which your peripheral rods can discern motion involving high-contrast content. When it comes to frame-based video, Nyquist makes a HUGE mess thanks to all the higher-order information conveyed by things like motion blur. That's why so many people think 24fps somehow looks "natural", but 120fps looks "fake". Motion-blurred 24fps video has higher-order artifacts that can be discerned by BOTH the rods AND cones equally. It's "fake", but at least it's "consistent". 120fps video looks flawless and smooth to the cones in your fovea, but still has motion artifacts as far as your peripheral rods are concerned. Your brain notices, and screams, "Fake!"

Sit closer, so the screen completely fills your field of vision and immerses you in the image. Your opinion will probably change.

If you sit back from the screen, you're using foveal cones to watch it. It's the rods along your vision field's periphery that cause the problems.

The "uncanny valley" problem affects mainly immersive videogames where you're either sitting really close to the screen, or have additional screens off to the side that are viewed mainly with peripheral vision.

I already spend most of my video watching time in front of a 103" DLP projection screen at 10'... and I prefer IMAX high frame rate to low frame rate films because the jitter drives me nuts on lower frame rates.

Almost spilled my coffee there, NVidia and VSync in the same sentence? The NVIDIA Linux driver has tearing artifacts on video almost no matter what you do; it's ridiculous. VLC, Dragon Player, Totem all have obvious tearing. mplayer looks better if you disable compositing and turn off all but one monitor, but still has some tearing if you look closely. I just tried XBMC yesterday, and it may be good.

Anyway, "GSync" seems like a good idea. Seems nice for videos with different refresh rates, like displaying

I've been expecting this ever since we brought out DVI-D and then HDMI and DisplayPort. I'm in fact a little shocked it's taken this long. It's really a simple concept: when the frame buffer is ready to be drawn, tell the monitor to refresh with that data, then work on the next frame. In fact, that's exactly how people think video output works already in most cases, but it's not.

A tangent, but frankly, given the choice between 4K monitors that I couldn't afford and a return to widespread availability of a 16:10 option at 1920x1200, I'd take the latter. 16:9 is less ideal to me.

This technology will be available soon on Kepler-based GeForce graphics cards but will require a monitor with support for G-Sync; not just any display will work. The first launch monitor is a variation on the very popular 144 Hz ASUS VG248QE 1920x1080 display

We can't have one of the largest purveyors of video hardware influencing display standards now, can we?

NVidia isn't some startup. They put GPUs into millions of devices; desktops, laptops, tablets, consoles, phones, etc. When they offer a new technique for syncing video the world is going to have a look. That doesn't mean it must be accepted, but it won't be dismissed out-of-hand.

Besides, given an advanced bus like DisplayPort I suspect this might amount to a simple video-chip-to-display negotiation with

Also, when you can have a gaming PC why would you ever want a glorified netbook with a laptop video card glued to it?

Because if you have more than one gamer in the household, you don't always want to have to buy two to four gaming PCs and two to four copies of each game. One console, one copy of each game, and two to four controllers are cheaper, even with console maker markup on the games. Even though console games are somewhat less likely to support same-screen multiplayer than they used to, I'm under the impression that console games are still more likely to support it than PC games. (And no, same-screen doesn't necess

Actually, Gabe Newell at last year's CES (last January) was talking about NVIDIA Maxwell architecture. He claims NVIDIA will allow GPU virtualization for gaming applications. In other words, one PC could power multiple netbooks or Roku-style Steam boxes.