
An anonymous reader sends this excerpt from Develop:
"Games developed for the next generation of consoles will still target a performance of 30 frames per second, claims id Software co-founder John Carmack. Taking to Twitter, the industry veteran said he could 'pretty much guarantee' developers would target the standard, rather than aiming for anything as high as 60 fps. id Software games such as Rage and the Call of Duty series both hit up to 60 fps, but many titles in the current generation fall short, such as Battlefield 3, which runs at 30 fps on consoles. 'Unfortunately, I can pretty much guarantee that a lot of next gen games will still target 30 fps,' said Carmack."

Would you rather have double the detail at 30 FPS, or half the detail at 60 FPS? Considering most people can't perceive frame rates faster than 30, it makes a bit of sense to push more polygons instead.

That's not really how it works. Every dev I've ever worked with in the games industry aims for 60fps. Given enough time and resources, that's what they'd all end up delivering. Since they are never given enough time or resources, by either management or their publishers, there comes a time when the only option is to drop to 30fps. It has nothing to do with laziness, and everything to do with money.

Not this again. This assumption is based on perceived motion from frames containing captured motion blur, and even in such (24/30Hz) frames, motion is NOT transparent to most people. With games there is no temporal data within frames, so it's VERY obvious. Even 60 is too low for many gamers, which is why they opt for 120Hz panels (real 120Hz, not the HDTV '120' interpolation, which looks terrible) and video cards that can push them.

Then there is input lag. The perceived turnaround time is very noticeable at 30fps, and if the rendering is not decoupled from the input polling/IRQ, the latter's latency actually does go up. id had to patch Quake 4 to make it acceptable to play because it was dropping inputs below 60Hz and looked choppy as hell compared to previous releases. Enemy Territory: Quake Wars, which is also idTech 4, was locked at 30 and was deemed unplayable by many. I think it was one of the reasons the game tanked. It was actually painful to look at in motion.

Console devs always push excessive graphics at the expense of gameplay because the publishers want wow factor over playability. This was true in the 8-bit and 16-bit days too. Some games suffered so badly they were deemed unplayable. This is why PC gamers value useful graphics configuration options in their games. Often what the publishers/devs thought of as 'playable' was not what the community thought was playable, not that this should shock anyone given today's 'quality' releases.

For a 60fps game there's about 16ms per frame, and on current-gen consoles about 8ms of that is lost to API call overhead on the render thread. Of course current-gen consoles are years behind and constrain rendering APIs to be called from a single thread, but I'd still be very surprised if there was a console that could support a triple-A game above 70fps in the next 10 years (at resolutions of 720p and above).
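The arithmetic above is easy to sanity-check. A minimal sketch (the 8 ms API-overhead figure is the parent post's claim, not a measured value):

```python
# Rough frame-time budgets for common target frame rates.

def frame_budget_ms(fps):
    """Total wall-clock time available to produce one frame, in milliseconds."""
    return 1000.0 / fps

API_OVERHEAD_MS = 8.0  # overhead claimed above for current-gen console render threads

for fps in (30, 60):
    total = frame_budget_ms(fps)
    left = total - API_OVERHEAD_MS
    print(f"{fps} fps: {total:.1f} ms/frame, {left:.1f} ms left after API overhead")
```

At 60fps that leaves well under 9ms for everything else on the render thread, which is why the overhead matters so much more there than at 30fps.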

You've barely scratched the surface of input-to-perception lag. Here's an answer by Carmack to people questioning another one of his tweets: http://superuser.com/questions/419070/transatlantic-ping-faster-than-sending-a-pixel-to-the-screen [superuser.com]

Of course most engines come from a single-threaded game mentality where they'd poll for input, apply input to game state, do some AI, do some animations, calculate physics, then render everything and repeat. Current-gen consoles have freed that up some, but most engines didn't go above 2 or 3 major threads, because it's a difficult problem to re-architect an entire engine while it's being used to make a game at the same time. Sadly, even the better games that gave user input its own thread just polled input every 15ms or so, queued it up, and then passed it on to the game thread when the game thread asked for it. Input wasn't lost as often, but it didn't get to the game any faster.
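The polled-input scheme described above can be sketched roughly like this (the names and the 15 ms interval are illustrative, not taken from any real engine):

```python
# A dedicated thread polls input on its own cadence and queues events;
# the game thread drains the queue once per tick. Events survive, but
# they still only reach game logic when the game thread asks.

import queue
import threading
import time

input_events = queue.Queue()
running = True

def poll_hardware():
    """Stand-in for reading the actual device; returns a fake event."""
    return ("button", time.monotonic())

def input_thread():
    while running:
        input_events.put(poll_hardware())
        time.sleep(0.015)  # ~15 ms polling interval, as in the post

def game_tick():
    """Drain everything queued since the last tick and apply it to game state."""
    events = []
    while not input_events.empty():
        events.append(input_events.get_nowait())
    return events

t = threading.Thread(target=input_thread, daemon=True)
t.start()
time.sleep(0.1)           # let a few polls happen
running = False
captured = game_tick()
print(len(captured), "events captured")
```

The point of the sketch: the queue stops events being lost between ticks, but latency is still bounded by how often `game_tick` runs, which is the poster's complaint.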

Yeah, I read about that. Some games/drivers/engines are absolutely terrible. I think I was spoiled by the earlier Quakes; of course they had bugs too, but today's games are terrible. I suppose not everything is a competitive shooter, but that doesn't mean it should drop or lag input. It makes the game incredibly frustrating to play.

Old-style arcade games and every game console prior to the Dreamcast forced the interlaced CRTs into a non-standard progressive mode called 240p by the retro-gaming community. And though the scrolling on these was at 60 Hz, the actual sprite animation was occasionally as low as 8 Hz because old 2D raster graphics systems didn't support real-time inbetweening [wikipedia.org] of sprite cels.

"Old-style arcade games and every game console prior to the Dreamcast forced the interlaced CRTs into a non-standard progressive mode called 240p"

240 frames progressive? I doubt that; the CRT hardware couldn't have done it. Did you mean 24 frames? Even if you did, CRT TV sets receiving a signal through the RF input would still have been doing 50/60Hz refresh.

How on earth do you translate 240p to "240 frames progressive" without making the [effectively] industry-standard terms "480i", "480p", "720p", "1080i", and "1080p" equally meaningless?

It means 240 scanlines progressive - old NTSC television sets normally like to run at 480i, but they're tolerant enough to handle video signals which don't have the extra half-scanline at the end of each frame and display it non-interlaced.

Rubbish. The hardware is built for interlaced; it has no way of knowing that it shouldn't skip a scanline because it's a progressive signal. All you'll see with a progressive signal is the screen flicking between each half of the picture spread across the whole screen with single-line blank gaps.

Easiest way to see it in action is to play a PSone game that does 240p (like PSone Diablo) on a PS2, using component cables connected to an HDTV. Some HDTVs, like mine, have trouble syncing to a 240p signal over component (I would have to toggle inputs until it synced). Play the same game over S-Video and it's fine.

But how do you do progressive when the hardware is built for interlaced?

The vertical sync pulse is delayed by half a scanline before odd fields, according to this diagram [sxlist.com]. Delay it, and the analog hardware will begin retrace half a scanline later, which produces an odd field. Don't delay it, and the TV interprets it as an even field.
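A rough timing calculation shows why this works, assuming the standard NTSC horizontal line rate of roughly 15.734 kHz: interlaced 480i scans 262.5 lines per field, while "240p" just repeats a whole 262 lines every frame, keeping the line rate the TV expects:

```python
# Back-of-the-envelope NTSC timing: keep the standard line rate but scan a
# whole number of lines per frame instead of the interlaced 262.5 per field.

NTSC_LINE_RATE_HZ = 15734.26   # standard NTSC horizontal scan rate

interlaced_field_hz = NTSC_LINE_RATE_HZ / 262.5   # ~59.94 Hz fields (480i)
progressive_frame_hz = NTSC_LINE_RATE_HZ / 262    # ~60.05 Hz frames ("240p")

print(f"480i field rate: {interlaced_field_hz:.2f} Hz")
print(f"240p frame rate: {progressive_frame_hz:.2f} Hz")
```

The line rate stays in spec, so the set locks on fine; the only non-standard part is that every field lands on the same scanlines, which is exactly what produces the visible gaps between lines on a 240p picture.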

The real reason is if you want to target 60, you have to aim higher because if you just take a bit too long, your framerate drops dramatically.

Target 30, and you can probably render everything in time and have time to spare. But target 60 and miss, and you'll stutter, visibly.

That's the real issue - it's also why PC gamers go for the fastest video card even though their monitors may only refresh at 60Hz or so - you need to be able to do 60+ fps constantly in order to hit 60 fps solidly. Dip below that and y

1. Argument from antiquity (it's old so it sucks).
2. Argument from inverse popularity (no one does it now so it sucks).
3. Appeal to realism (when did I say Quake was realistic? I said a higher steady framerate allows for better perception of action).
4. Ad hominem. I'm not butthurt.

Perhaps you prefer CoD et al. because you can't play something requiring more attention and a lower reaction time. It's alright, I'm not crazy at Quake either; I was only a bit above average as far as competent players go, but I enjoyed the fluid, fast gameplay much more than the tedious waiting and camping of CS, Action Quake and its subsequent 'realism' clones. There's no need for insults.

If anything, it's the dominant playerbase who reason like your post who are to blame for why so many games today lack actual gameplay learning curves. There's nothing to master and it's all about pressing the right button at the right time a la dragon's lair single player, or having a real time rendered backdrop for VOIP 'multiplayer' conversations...all of this while fumbling around with simplified gameplay mechanics despite the fact they were dumbed down specifically to make the pad workable at all. That's not what I got into gaming for, but to each their own.

Somewhere there is a study done on air-force pilots that showed they could perceive details at 1/500th of a second. The human eye certainly works at much higher "speeds" than that silly myth of 30 fps suggests.

The difference is very noticeable, but the "problem" is reduced by the enormous input lag present in most console setups. Also, in action-heavy scenes you will notice less that everything is moving less smoothly. The difference between 30 fps and 60 fps for cameras is less noticeable due to motion blur, unless you slow down playback. Sure, you can make 30 fps games look smoother by applying motion blur, but that only makes the end result blurrier.

Would you rather have double the detail at 30 FPS, or half the detail at 60 FPS?

It depends entirely on the game. In a twitch shooter like Quake where you expect constant feedback, things feel drastically wrong at 30fps. In a single-player shooter? These are rarely built for competitive players, and don't need quick response. I can handle 30fps if it has decent motion blur, like Crysis. In an RPG? 30fps is mildly annoying but playable.

But that's only what I can tolerate, without shelving the game for a future video card. If I had a choice? I'd pick 60fps over 30fps every time. It's one

Can we please stop with this falsity already? In an FPS, you most assuredly can tell the difference between 30 and 60fps. More frames mean smoother motion, which means higher accuracy. 30fps also looks a bit "juttery" with fast motions, especially with digital graphics, since there is no recorded motion blur to cover it up. Also, why all the brouhaha over The Hobbit being at 48fps instead of the standard 24, if no one could notice it?

Why do people complain about the lack of "warmth" in a CD versus vinyl?

Because they're accustomed to vinyl's distortions, such as groove noise and an overall loss of highs, combined with the different behavior of level compression caused by vinyl's New Orthophonic preemphasis curve. It's the same thing causing people to say The Hobbit looks like a soap opera at 48 fps: they associate 48 fps with storytelling conventions used in soap operas.

Digital amplification needs rectification and either symmetric or separate amplification with the two signs of polarity.

How is this true? It's possible to convert digital to analog with an unsigned DAC and rely on an analog high-pass filter later in the circuit to eliminate DC. Do you also have a problem with class-D amplifiers [wikipedia.org] in general?

And the nyquist limit is SOLELY to reproduce the FREQUENCY of the tone. Not the loudness and not the phase.

Loudness is covered under the noise floor measurement, and modern noise shaping techniques push this well under -100 dBFS for the frequencies to which the ear is most sensitive by moving more of the dither noise to the 16-22 kHz band. Phase is the reason that the sampling rate is twice as h

Everybody can perceive frame rates faster than 30 fps. In fact, almost everybody can perceive frame rates faster than 100. Check the linked article, this is really a tricky question. Some things to consider:

- Games have no motion blur, or, as many modern games are implementing now, they use a pathetic, fake imitation that looks worse than no motion blur at all. Hence, they need much higher frame rates to show fluid motion. At 60 fps with current technology (including so-called next-gen), motion will look much better compared to 30.

- Decades of cinema have trained most people to perceive low-quality, blurred animation as 'film quality', and smooth, crisp animation as 'fake' or 'TV quality'. Many, many people consider the 48fps Hobbit to be worse than the 24fps one. This is a perception problem. Games could have the same issues, except they've evolved much faster and most people didn't have time to get used to bad quality.

- Consider the resolution problem. Higher resolution requires higher fidelity: at higher resolution, you'll demand higher-quality textures and shading to reach similar levels of immersion, since details are now much more apparent. The same thing happens with animation and higher frame rates. This doesn't mean we should stay at low resolutions, 16 colors, black & white, or 30 fps. This just means we need to do better.

- And... a game is not film, and latency matters. A lot. At 30 fps, you need to wait twice as long to see any feedback from your input. In most games you will just train yourself to input commands in anticipation without even knowing a word about latency, but in action games, where your reaction time matters, latency is a problem. And many other sources of latency add to the sum, such as clumsy 'smart' TVs post-processing your images, or badly engineered 'casual' wireless motion controllers.

In other words, yes, I'd rather have half the detail and 60 FPS. Except if your game is no game at all, but just a 6-to-10-hour movie. Since most of the top videogame chart entries fit this description today, I can see why many developers will remain in the 30 fps camp.
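To put rough numbers on the latency point: if input is sampled once per frame and the render/display pipeline is a few frames deep (a simplifying assumption for illustration, not a measurement of any particular game), then every frame-quantized stage doubles in cost when you halve the frame rate:

```python
# Worst-case input-to-display latency when input is sampled once per frame
# and the pipeline holds `frames_of_pipeline` frames in flight.

def input_latency_ms(fps, frames_of_pipeline=3):
    frame_ms = 1000.0 / fps
    return frame_ms * frames_of_pipeline

print(f"60 fps, 3-frame pipeline: {input_latency_ms(60):.1f} ms")
print(f"30 fps, 3-frame pipeline: {input_latency_ms(30):.1f} ms")
```

Add a TV's post-processing and a wireless controller on top and the 30fps case gets well past the point where twitch aiming suffers.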

Yes, that 0.016 seconds of difference between 30 and 60 fps matters. Yup, that's some super-high latency there. It really throws off the shots. I mean, there I was, firing the gun, waiting that extra 0.016 of a second to see where the impact landed before firing another shot, repeating this action a few hundred times per second...

Oh, and a no-true-Scotsman fallacy too, in the form of a personal opinion that no recent game is a -REAL- game but really just a long movie.

It also depends on what they are doing with that extra processing power. Are you making a game that is more intuitive? That reacts and learns better? That has AI that is more intelligent and adds to gameplay?

Really, 30fps is within the range of reasonable quality. You get diminishing returns as you increase fps, especially if the rest of the game doesn't perform to the same standard.

Would you rather have double the detail at 30 FPS, or half the detail at 60 FPS? Considering most people can't perceive frame rates faster than 30, it makes a bit of sense to push more polygons instead.

Stop with this misinformation. Most people definitely CAN perceive framerates faster than 30.

The reality is, most people would like games to be programmed for actual quality, with the hardware being the limiting factor for 60FPS, rather than letting developers be lazy by aiming for a low bar. You don't get double the detail at 30 FPS; you get a quarter of the detail, because it's targeted at consoles.

I'm getting really tired of this myth being repeated all the time. Unless your vision is TERRIBLE, you can totally perceive FPS higher than 30. Just because people don't realize why something looks "weird" or "different" doesn't mean they're not perceiving the higher frames. Just look at all the reviews and posts complaining about the high frame rate version of The Hobbit. If people couldn't perceive those extra frames, they wouldn't be complaining that it "looked too real" or "like a soap opera."

Are you kidding? Not only do I notice the difference, the difference is huge. Oh, and remember CRT monitors, with very high refresh rates at lower resolutions? Not even 60fps is "more than enough and well beyond the threshold of noticing a difference". Small dips are noticeable, too.

Of course, I'm talking action games that require precision and aim. I am not talking about cutscenes, "mash button to trigger random crap" gameplay, puzzle games or

people who complain about higher framerates never seem to have a justification other than 'it's not what I'm used to'. What about the 48fps made it suck? Please avoid using 'audiophile-like' subjective/emotional terms.

Well, there's a bandwagon of snobbery out there about this issue. Kinda like people who say vinyl or VHS is superior to digital audio and video, I suspect this whole 'butt is it art' routine is more about social exclusivity and differentiation (and unhealthy doses of insecurity) than it is about their actual experience. I could understand if someone got motion sickness from the higher rate and didn't like that, but otherwise I cannot understand why someone would want animations to be deliberately choppy.

With today's style all about fast cuts and jerkycam, I think the higher framerate would help the viewer track the action.. It helps in games and I suspect it would help me in such scenes, esp when they pile on the blur and urinal tournamint style colored lighting..

The only time I found the 48fps showing to be uncomfortable and weird was during very fast action, jerky motion sequences. It suddenly feels like high-fidelity jerkyness, which makes it lose its tendency to portray "oh noez, stuff is blurry and out of control, even the camera", and just feels like "why is the dude shaking the camera so much?"

I guess my interpretation of jerkycam was always "why the hell is he shaking the camera so much?" It's annoying and distracting, especially when it's every other scene. If the sharpness of movement isn't sufficient, it's because the movements aren't sharp enough. The lower framerate just hid that.

It's just a way of doing action on the cheap. The special effects and stunts don't have to be as good because no one can see them clearly. A bit of low-budget CGI looks much better when blurred and out of focus and only on screen for 1/24th of a second.

Transformers invented a variation where the CGI has so much detail and is framed so poorly on screen that you can't make out where the characters' limbs are or what is actually going on anyway, so again it seems better than it actually is. If you step through the action sequences frame by frame there is a very clear disconnect between the CGI and the real objects that get thrown around by poorly hidden explosives and hydraulics. Terrible camera work hides a multitude of lameness.

Kinda like people who say vinyl or vhs is superior to digital audio and video, I suspect this whole 'butt is it art' routine is more about social exclusivity and differentiation (and unhealthy doses of insecurity) than it is about their actual experience.

What's your point? People are sometimes irrational in their choices, and, of course, sociological factors play a role in determining them. Otherwise a large part of high-end markets in all kinds of domains as well as most corporate branding would vanish overnight. Objective measures, e.g. whether people would fail a blind test or not, are fairly irrelevant if people do not consume blindly. The things we are talking about are meant to be interesting and primarily entertaining. Sure, you can spend a decent am

It's less blurry and doesn't give you headaches; why would ANYONE want to watch a movie that's NOT blurry and -- if seen in 3D -- doesn't give you headaches?

I do agree that it doesn't have the "cinematic" feel of standard movies, so it feels weird when you watch it -- different. But it's so clear, smooth and headache-free that it's worth losing that. In fact, I'd like to see a movie in 60 or 75fps someday.

people who complain about higher framerates never seem to have a justification other than 'it's not what I'm used to'. What about the 48fps made it suck? Please avoid using 'audiophile-like' subjective/emotional terms.

I ended up liking it by the end of the film, but the "in your face" realism was quite a shock at first. I went into it thinking that there would not be much difference but movements seem much more abrupt and real, and facial expressions seem more lifelike. I put this down to seeing every micro-expression, each twitch of the eye or slight tremble on a smile. I can see that some people wouldn't like it; probably a "Cal Lightman" would get sick of seeing the expression of fear in seeing an Orc was really hidin

Oh, I should add that there is a big difference between the Hobbit cinema HFR and the HD displays that I have seen. I don't know whether this is due to compression artefacts on fast motion or physical persistence in the monitors, but it is clearly very different.

I put this down to seeing every micro-expression, each twitch of the eye or slight tremble on a smile.

I agree. It's a new layer of realism that we're just not used to. It's similar to watching a Blu-ray at 1080p for the first time and being rather displeased by the sight of every pore, freckle and mole that you otherwise wouldn't notice on actors.

That's just conditioning -- you're used to seeing sitcoms in higher framerates than movies. If sitcoms were traditionally filmed in color and movies traditionally filmed in black and white, you'd be ranting about how much color sucks in movies.

You might want to put "Uncanny Valley" in quotes and capitalize it so that people don't think you're just posting random words...

I haven't seen The Hobbit yet so I can't comment but I don't see how it can be worse. Not really. Not unless you were going into the cinema thinking "Oh, I really MUST analyze the frame rate thing down to the last minute detail so I can have an opinion later".

I bet if it was the other way round, if we'd always had 48fps and Peter Jackson was experimenting with 24fps to give it an

It was really easy to tell the puppets apart from the CGI. I was actually disappointed by how reliant they were on CGI for the antagonists, especially given how awesome the masks and makeup looked in the LotR films.

Stupid AC, movies are not games, in games you want the highest framerate possible because this (usually) means quicker response times from keyboard/mouse/gamepad, increasing the feeling of immersion in the game.

This is especially so with the Oculus Rift type headgear being developed, the less lag between your input and the computer's visual output the more immersed you feel, with movies you're simply an outside observer.

Would you rather have double the detail at 30 FPS, or half the detail at 60 FPS? Considering most people can't perceive frame rates faster than 30, it makes a bit of sense to push more polygons instead.

When it comes to games, you can tell the difference between 30 fps and 60 fps. TV/movies? No, you can't. Video games? Yes, you can.

I should have mentioned the reason why.

When you shoot video you capture single pictures. When people are moving in these shots, they have motion blur. How much motion blur depends on how fast they are moving and how many shots per second you take. Our eyes see the motion blur and our mind fills in the rest, which is why we are okay with 24 and 30 fps for movies/videos.

With video games, each frame is rendered clean; it doesn't have the motion blur that real-life video would have. Granted, games have started adding motion blur, but it's not the same. This is why more frames per second generally makes games look better and play better.

I could most definitely tell the difference between The Hobbit at 48fps and a normal movie. I didn't actually like the effect much, but I could tell the difference.

When gaming and constantly monitoring my FPS, 30 was playable, 60 was nice.

I remember with Quake 1 experimenting with different resolutions on my 486 with software rendering - 320x240 actually looked very "realistic" to me simply because it was rendering so smoothly. It looked like live action through a low resolution camera. I usually played at

When it comes to games, you can tell the difference between 30 fps and 60 fps. TV/movies? No, you can't. Video games? Yes, you can.

Wrong, you can easily tell the difference in TV/Movies as well. I had my students do a test: display the same movie, side-by-side-by-side, running at 120/60/30 fps (on 120Hz monitors, naturally...the lower fps versions are made by dropping frames/duplicating the ones left, so they all ran at 120Hz, but different fps). They could ALL tell the difference.

You certainly never get anything as high as 120Hz anymore, unfortunately. I just checked, and the best I could find was 75Hz vertical at 27 inches. You might be able to do better than this if you stay small, but you really need a screen size of 26 inches for FPS gaming to pick people out at long distance if they are hiding behind stuff an

You can have a mouse and keyboard. You can have multiplayer. You can have no lag. But you can't have them all. Mouse and keyboard + multiplayer = online PC game with net lag. Mouse and keyboard + no lag = single-player PC game. Multiplayer + no net lag = same-screen multiplayer game with gamepads.

Neither DirectX nor OpenGL supports proper triple buffering to avoid tearing at variable frame rates. Because of that, if you want tear-free rendering but cannot keep up at 60 fps all the time, you must render at 30 fps or 15 fps, but not, say, 48 or 56 fps. You can render at any variable frame rate if you allow tearing (which most games do, avoiding the headache of v-sync altogether).

You don't need triple buffering to avoid variable frame rates; you just need variable levels of detail. Rage does exactly that. As it works through the scene it has a time budget for rendering different things, and it drops detail when it notices that it is behind. It works really well, the main complaint being that sometimes it is a bit too pessimistic and drops the detail level lower than it really needs to.
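The time-budget approach described for Rage can be sketched as a simple feedback loop (the thresholds and step sizes here are made-up illustration values, not id's actual ones):

```python
# Track how long the last frame took and move a detail knob to stay
# inside a fixed frame budget: shed work quickly when behind, recover
# slowly when comfortably ahead.

TARGET_MS = 1000.0 / 60   # 60 fps budget

def adjust_detail(detail, last_frame_ms, lo=0.25, hi=1.0, step=0.05):
    if last_frame_ms > TARGET_MS:
        detail = max(lo, detail - step)        # behind: drop detail now
    elif last_frame_ms < 0.9 * TARGET_MS:
        detail = min(hi, detail + step / 2)    # ahead: creep back up
    return detail

detail = 1.0
for frame_ms in (15.0, 18.0, 20.0, 16.0, 14.0):
    detail = adjust_detail(detail, frame_ms)
    print(f"frame {frame_ms:>4.1f} ms -> detail {detail:.3f}")
```

The asymmetric step sizes are the reason such schemes feel "pessimistic", as the poster notes: dropping fast and recovering slowly avoids oscillation, at the cost of sometimes running below the detail the hardware could actually sustain.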

So next year's consoles are going to be inferior to last year's PC?
Personally I think that between PC and mobile, the console is doomed. This will never happen with iDevices, but Android tablets already support HDMI out and input from Bluetooth controllers. All we need is for them to get a bit more powerful (Nvidia is advertising a six-fold power increase between Tegra 2 and Tegra 3) and a method of transferring large games (SD card), and they will become plug-in replacements for consoles.


It's a given that most will target 30fps, since more shinies look better in screenshots and YouTube videos than 60fps does. And most consumers can't tell the difference until you put 60 and 30 fps versions side by side and let them play.

The leaked/rumored PS4/XNext specs show them as equivalent to or slightly weaker than current mid-high gaming PCs, and those can't do 60 fps locked on all the recent shiny games at 1920x1080 with all effects on (except those like CoD MP that specifically target it), so it's unlikely the consoles will. Cheap components are the driver, especially for the PS4.

But there's no reason a fighting game or fps can't aim for 60fps on the new gen if it wants to. Use your shaders and effects wisely and no problem.

IMHO, Rage was a fun game, with a lot more driving segments than I expected in your normal FPS. Ammo was pretty scarce but I always had enough to get by. I'm still not sure why everybody seemed to hate it. Was it the lack of chest high walls?

None of these games NEED 60fps. They all look nice with a consistent 30, and 60 wouldn't hurt, but I'd rather have graphical fidelity than frame rate. ESPECIALLY with the law of diminishing returns kicking in to full effect th

A display (television or monitor) has a fixed refresh rate. Assuming vertical synchronization is turned on to avoid tearing, you're pretty much limited to a framerate which evenly divides into the true refresh rate of the display. If the refresh rate is 60 Hz, possible targets include 60 frames per second (providing 16.7 ms of computation time per frame), 30 FPS (providing 33.3 ms of computation time per frame), 15 FPS (providing 66.7 ms of computation time per frame), and so on. Anything below 30 FPS is kind of a joke, so nobody reputable would consider allowing more than 33 ms computation per frame in a shipping game.
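The divisor arithmetic above can be tabulated directly; a quick sketch for a 60 Hz display:

```python
# With v-sync on, achievable frame rates are the refresh rate divided by a
# whole number of refresh intervals per game frame.

REFRESH_HZ = 60

for intervals in range(1, 5):
    fps = REFRESH_HZ / intervals
    budget_ms = 1000.0 / fps
    print(f"{intervals} refresh(es)/frame -> {fps:.0f} fps, {budget_ms:.1f} ms budget")
```

Nothing between the divisor rates is reachable without tearing, which is exactly why "target 40 fps" isn't an option on a fixed 60 Hz display.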

Unless you use a technique called "triple buffering", in which case you can have tear-free rendering at any variable frame rate. Unfortunately, none of the major 3D APIs have provisions for this. I've always wondered why such a fundamental omission was made in a graphics rendering API.

TechReport analysed the nVidia 680 a bit after its release and had a piece on adaptive vsync [techreport.com] which should answer your question.

Quoted from an nVidia software engineer:

There are two definitions for triple buffering. One applies to OGL and the other to DX. Adaptive v-sync provides benefits in terms of power savings and smoothness relative to both.

- Triple buffering solutions require more frame-buffer memory than double buffering, which can be a problem at high resolutions.

- Triple buffering is an application choice (no driver override in DX) and is not frequently supported.

- OGL triple buffering: The GPU renders frames as fast as it can (equivalent to v-sync off) and the most recently completed frame is displayed at the next v-sync. This means you get tear-free rendering, but entire frames are effectively dropped (never displayed), so smoothness is severely compromised and the effective time interval between successive displayed frames can vary by a factor of two. Measuring fps in this case will return the v-sync off frame rate, which is meaningless when some frames are not displayed (can you be sure they were actually rendered?). To summarize: this implementation combines high power consumption and uneven motion sampling for a poor user experience.

- DX triple buffering is the same as double buffering but with three back buffers which allows the GPU to render two frames before stalling for display to complete scanout of the oldest frame. The resulting behavior is the same as adaptive vsync (or regular double-buffered v-sync=on) for frame rates above 60Hz, so power and smoothness are ok. It's a different story when the frame rate drops below 60 though. Below 60Hz this solution will run faster than 30Hz (i.e. better than regular double buffered v-sync=on) because successive frames will display after either 1 or 2 v-blank intervals. This results in better average frame rates, but the samples are uneven and smoothness is compromised.

http://www.anandtech.com/show/2794 [anandtech.com]
"So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article

Triple buffering with vsync allows you to decouple the frame rate of the renderer from that of the presentation.

Say your rendering is slightly slower than 60 fps, e.g. 58 fps. With double buffering and vsync you have to present at 30 fps. With proper triple buffering and vsync you can present at 58 fps.
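The snap-down behavior can be sketched as follows (a simplification that assumes a perfectly steady render rate):

```python
# Why a 58 fps renderer presents at 30 fps under double-buffered v-sync:
# each frame that misses a 60 Hz deadline waits for the next vblank, so the
# effective rate snaps down to the next divisor of the refresh rate.

import math

def double_buffered_present_fps(render_fps, refresh_hz=60):
    """Snap a steady render rate to the divisor of refresh it can sustain."""
    if render_fps >= refresh_hz:
        return refresh_hz
    intervals = math.ceil(refresh_hz / render_fps)  # vblanks waited per frame
    return refresh_hz / intervals

print(double_buffered_present_fps(58))   # 30.0, since every frame just misses a vblank
print(double_buffered_present_fps(61))   # 60, the renderer keeps up
```

With a proper third buffer the renderer never stalls waiting for scanout, so the 58 fps case presents at (nearly) 58 fps instead of collapsing to 30.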

Most games don't care about vsync and will present at the rate of the renderer, causing mid-frame tearing. If you're lucky, the tearing will occur at the top or bottom of the frame and won't be too bad.

Why the big jump from 30 to 60? How about you target 35 fps or 40 fps?....

LCD monitors and TVs tend to update the screen 60 times per second; that is, 60Hz. While 30 is okay because it divides evenly into 60, 60 is optimal because the framerate and the screen refresh (update) happen at the same time.

Now why would you want to keep that upgrade treadmill running? I for one quite enjoy the fact that I can play many of the latest games on a $100 video card and can focus on efficiency (just bought a Radeon 7750, which doesn't even need an additional power connector) instead of brute force... And the games look great. Does Battlefield 3 (the first PC game I've played that nearly *requires* a quad-core to run well) really look better than, say, Call of Duty MW3? MW3 feels like it needs about half the processi

PC gamers should have paid for their software more often if they wanted devs' attention.

Oh, hey there troll. I guess you missed the part that PC gaming will be outselling the entire console industry by the first quarter of next year.

Never mind that piracy is rampant on consoles, so rampant that it makes the stuff on PCs look like kid stuff. But hey, what do us elitist PC gamers know? Oh, I know what we know: the industry has turned from "taking risks" to "taking no risks."

I guess you missed the part that PC gaming will be outselling the entire console industry by the first quarter of next year.

How much of that is revenue from sales of multiple copies to one household? Major-label PC games are more likely to require a separate copy for each player [cracked.com], as opposed to a copy per household like Smash Bros. (4 players non-split), Mario Kart (4 players split), and Xbox 360 versions of Call of Duty series (2 players split) support.