What to Look For, Test Setup

Because of the complexity and sheer amount of data we have gathered using our Frame Rating performance methodology, we are breaking it up into several articles that each feature different GPU comparisons. Here is the schedule:

We are back again with another edition of our continued reveal of data from the capture-based Frame Rating GPU performance methods. In this third segment we are moving on down the product stack to the NVIDIA GeForce GTX 660 Ti and the AMD Radeon HD 7950 - both cards that fall into a similar price range.

I have gotten many questions about why we are using these particular cards in each comparison, and the answer is pretty straightforward: pricing. In our first article we looked at the Radeon HD 7970 GHz Edition and the GeForce GTX 680, while in the second we compared the Radeon HD 7990 (HD 7970s in CrossFire), the GeForce GTX 690 and the GeForce GTX Titan. This time around we have the GeForce GTX 660 Ti ($289 on Newegg.com) and the Radeon HD 7950 ($299 on Newegg.com), but we did not include the GeForce GTX 670 because it sits much higher at $359 or so. I know some of you are going to be disappointed that it isn't in here, but I promise we'll see it again in a future piece!

If you are just joining this article series today, you have missed a lot! If nothing else you should read our initial full release article that details everything about the Frame Rating methodology and why we are making this change to begin with. In short, we are moving away from using FRAPS for average frame rates or even frame times and instead are using a secondary hardware capture system to record all the frames of our gameplay as they would be displayed to the gamer, then doing post-process analysis on that recorded file to measure real-world performance.

Because FRAPS measures frame times at a different point in the game pipeline (closer to the game engine), its results can vary dramatically from what is presented to the end user on their display. Frame Rating solves that problem by recording video through a dual-link DVI capture card that emulates a monitor to the test system. By applying a unique overlay color to each frame the game produces, we can gather a new kind of information that tells a very different story.
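A minimal sketch of that post-processing idea (the function name and inputs are hypothetical illustrations, not our actual tooling): once the captured video is reduced to the sequence of overlay color IDs observed at the display, runs of identical IDs tell you how long each rendered frame was actually on screen.

```python
# Sketch of the post-capture analysis step. The overlay tints each rendered
# frame with a repeating color ID, so runs of identical IDs in the captured
# output reveal how long each frame was actually displayed.
# Names and the sampling model here are illustrative assumptions.

def frame_display_times(sample_ids, sample_period_ms):
    """Collapse consecutive identical overlay IDs into per-frame
    on-screen durations, in milliseconds."""
    durations = []
    run = 0
    for i, cid in enumerate(sample_ids):
        run += 1
        # A run ends at the last sample or when the next ID differs.
        if i + 1 == len(sample_ids) or sample_ids[i + 1] != cid:
            durations.append(run * sample_period_ms)
            run = 0
    return durations

# Example: frame 1 held for 3 sample periods, frame 2 for 2, frame 3 for 1.
print(frame_display_times([1, 1, 1, 2, 2, 3], 0.5))  # [1.5, 1.0, 0.5]
```

A real pipeline would first extract the overlay color from each captured scanline or refresh interval, but the run-length step above is the core of turning captured video back into frame times.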

The capture card that makes all of this work possible.

I don't want to spend too much time on this part of the story here as I already wrote a solid 16,000 words on the topic in our first article and I think you'll really find the results fascinating. So, please check out my first article on the topic if you have any questions before diving into these results today!

Summary Thus Far

Welcome to the second in our initial series of articles focusing on Frame Rating, our new graphics and GPU performance methodology that drastically changes how the community looks at single and multi-GPU performance. In this article we are focusing on a different set of graphics cards: the highest performing single-card options on the market, including the GeForce GTX 690 4GB dual-GK104 card, the GeForce GTX Titan 6GB GK110-based monster, as well as the Radeon HD 7990, though in an emulated form. The HD 7990 was only recently officially announced by AMD at this year's Game Developers Conference, but the specifications of that hardware closely match what we have on the testbed today: a pair of retail Radeon HD 7970s in CrossFire.

Will the GTX Titan look as good in Frame Rating as it did upon its release?

How Games Work

The process of testing games and graphics has been evolving for longer than I have been a part of the industry: 14+ years at this point. That transformation in benchmarking has accelerated over the last 12 months. Typical benchmarks run some hardware against some software and report the average frame rate that can be achieved. While access to frame times has been around for nearly the full life of FRAPS, it took an article from Scott Wasson at The Tech Report to really get the ball rolling on investigating how each individual frame contributes to the actual user experience. I immediately began researching how to test the performance actually perceived by the user, including the "microstutter" reported by many PC gamers, and pondered how we might measure these effects even more accurately.
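The shift that work prompted can be illustrated with a toy calculation (the helper functions and numbers below are invented for illustration, not from any of our test runs): two runs with identical average frame rates can feel very different once you look at the distribution of individual frame times.

```python
# Illustration: average FPS hides stutter that frame-time percentiles expose.
# All numbers are hypothetical, chosen only to make the contrast obvious.

def avg_fps(frame_times_ms):
    """Average frame rate implied by a list of frame times in milliseconds."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile(frame_times_ms, p):
    """Nearest-rank percentile of frame times (ms): the frame time
    below which p percent of frames fall."""
    ordered = sorted(frame_times_ms)
    k = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[k]

smooth  = [20.0] * 100       # steady 20 ms frames
stutter = [10.0, 30.0] * 50  # alternating 10 ms / 30 ms frames

# Both sequences average exactly 50 FPS...
assert avg_fps(smooth) == avg_fps(stutter) == 50.0
# ...but the stuttering run's slowest frames are 50% longer.
print(percentile(smooth, 99), percentile(stutter, 99))  # prints: 20.0 30.0
```

This is the same reasoning behind frame-time-based reporting: the 99th-percentile frame time captures the hitching that an average frame rate completely averages away.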

The result of that research is being fully unveiled today in what we are calling Frame Rating – a completely new way of measuring and validating gaming performance.

The release of this story for me is like the final stop on a journey that has lasted nearly a complete calendar year. I began to release bits and pieces of this methodology starting on January 3rd with a video and short article that described our capture hardware and the benefits that directly capturing the output from a graphics card would bring to GPU evaluation. After returning from CES later in January, I posted another short video and article that showcased some of the captured video and stepping through a recorded file frame by frame to show readers how capture could help us detect and measure stutter and frame time variance.

Finally, during the launch of the NVIDIA GeForce GTX Titan graphics card, I released the first results from our Frame Rating system and discussed how certain card combinations, in this case CrossFire against SLI, could drastically differ in perceived frame rates and performance while giving very similar average frame rates. This article got a lot more attention than the previous entries and that was expected – this method doesn’t attempt to dismiss other testing options but it is going to be pretty disruptive. I think the remainder of this article will prove that.

Today we are finally giving you all the details on Frame Rating: how we do it, what we learned, and how you should interpret the results we are providing. I warn you up front, though, that this is not an easy discussion, and while I am doing my best to explain things completely, there are going to be more questions going forward and I want to see them all! There is still much to do regarding graphics performance testing, even after Frame Rating becomes more common. We feel that continued dialogue with readers, game developers and hardware designers is necessary to get it right.

Below is our full video that features the Frame Rating process, some example results and some discussion on what it all means going forward. I encourage everyone to watch it, but you will definitely need the written portion here to fully understand this transition in testing methods. Subscribe to our YouTube channel if you haven't already!

If you are going to set up a multi-monitor display at 5760x1200 or 5040x1050, but only have a single GPU or a pair of low-powered ones, just what kind of performance can you expect? That is the question Techgage wanted to answer, and to that end they tested frame rates at those resolutions with NVIDIA's GTX 680 and two different 660 Tis in SLI, as well as an HD 7970 and two different 7850s in CrossFire. As you might expect, the game tested makes a lot of difference in the results, with many putting the SLI'd 660 Tis in the lead while other memory-hungry games preferred the larger frame buffers of the Radeons. Check out the individual results for your favourite games in the full article.

"Considering next-gen cards are still months away, we didn't expect to bring any more GPU reviews until the second quarter of 2013. However, we realized there was a gap in our current-gen coverage: triple-monitor gaming. In fact, it's been almost two years since we last stress tested games at resolutions of up to 7680x1600.

We're going to mix things up a little this time. Instead of using each camp's ultra-pricey dual-GPU card (or the new $999 Titan), we're going to see how more affordable Crossfire and SLI setups handle triple-monitor gaming compared to today's single-GPU flagships."

When they first tried ASUS' new GTX 670 DirectCU II with 4GB of memory on its own, [H]ard|OCP had difficulty recommending it over a 7970, but they planned to try two cards in SLI to see if that would improve the comparative performance. The competitors are a pair of 2GB 670s, a pair of 3GB HD 7970s, a pair of 2GB 680s and of course two 4GB 670s, all powering a system at 5760x1200. Unfortunately the quote from the conclusion spells out the results: "It's like putting beefy off-road tires on a Yugo." So while the extra memory will let you use some higher graphics settings, overall you are still better off with HD 7970s or GTX 680s.

"We review two ASUS GeForce GTX 670 DirectCU II 4GB video cards in SLI under NV Surround resolutions. We'll answer the question as to the value and validity of 4GB of RAM on a GeForce GTX 670 GPU video card in SLI. Far Cry 3, Hitman Absolution, and all our other games will be taken to the extreme to get to the bottom of 4GB GTX 670 cards."

For the tests they ran, [H]ard|OCP used the latest Catalyst beta, 12.6, and ForceWare 301.42 WHQL, as both drivers proved able to provide proper multi-GPU performance in Max Payne 3. In AMD's case it provided improvements to single-card gaming as well. The game's graphics options provide a nice tool that displays how much VRAM your configuration will require, so you can get an idea of whether your card can handle the settings before you even play the game. SLI did scale better than CrossFire, but even so, both multi-GPU rigs could handle max settings at 2560x1600, and the cards used singly could still sit around the 60fps mark. Check out the full review here.

"HardOCP is on top of Max Payne 3 to find out what graphics options it supports and how a GTX 680 and a Radeon HD 7970 perform. We also wanted to know if SLI and CrossFireX worked, and how performance scales. In this preview of performance and image quality we take a look at all of this in the first chapter of this game."

In our launch review of the GeForce GTX 670 2GB graphics card this week, we had initially mentioned that these $399 graphics cards would support SLI, 3-Way SLI and even 4-Way SLI configurations thanks to the pair of SLI connections on the PCB. We received an update from NVIDIA later on that day that in fact it would NOT support 4-Way SLI.

The message from NVIDIA was pretty clear cut:

"As I’m sure you can imagine, we have to QA every feature that we claim support for and this takes a tremendous amount of time/resources. For the GTX 680 and GTX 690, we do support Quad SLI and take the time to QA it, as it makes sense for the extreme OC’ers and ultra-enthusiasts who are shooting to break world records."

My reply:

But with the similarities between the GTX 680 and the GTX 670, is there really any QA addition required to enable quad for 670? Seems like a cop-out to me man...

I saw it mostly as a reason to differentiate the GTX 670 and the GTX 680 with a feature since the performance between the cards was very similar; maybe too similar for NVIDIA's tastes with the $100 price difference.

Well this afternoon we received some good news from our contact at NVIDIA:

"Change in plans.....we will be offering 4-Way SLI support for GTX 670 in a future driver."

So while the 301.34 driver will not support 4-Way configurations with the GTX 670, 4-Way SLI will in fact be enabled after all in a future version. We'll be sure to keep you in the loop when that happens and the super-extreme enthusiasts can rejoice.

This does go to show the fundamental differences between AMD's license-free and seemingly more "open" CrossFire technology and NVIDIA's for-fee SLI technology. With enough feedback and prodding in the right direction, NVIDIA can and does do the right thing; just look at the success we had convincing them to support SLI on AMD CPU platforms last year.

[H]ard|OCP has assembled a review of the two best GPUs on the planet, in triplicate. It got off to a rough start, as there is a serious issue with the last several Catalyst drivers that prevents you from using Eyefinity on Tri-Fire systems, so they needed to revert to the release candidate that appeared back in January. The NVIDIA machine was easier to configure once they realized that for triple surround they had to connect one monitor per card. The PCIe lanes were provided by the ASUS P8P67 WS Revolution, which allowed these cards to really show off their stuff. Make sure you check out the power consumption page; you may be very surprised at how little power the GTX 680s needed to run.

"What do you get when you install three GeForce GTX 680 cards for 3-Way SLI and then three Radeon HD 7970 cards for Tri-Fire? You get insanely fast gaming performance and a gameplay experience that begs to be compared, delivered by both. We find out which multi-display configuration is better for gaming in Eyefinity and NV Surround."