Sleeping Dogs – HD 7970 versus GTX 680

Our Sleeping Dogs testing shows that the game is quite a bit more GPU intensive (at least at our quality settings) than I had originally thought. The initial test results show the HD 7970 with a solid lead over the GTX 680, and HD 7970s in CrossFire with an even bigger advantage over GTX 680s in SLI.

But, just as we witnessed with BF3 and Crysis 3, there is definitely a problem brewing for AMD. At 1920x1080 the frame times are very consistent for the single cards and NVIDIA’s SLI solution, but for the AMD cards in CrossFire the experience is a mess, filled with runt frames.

The result is an observed frame rate average well below what FRAPS reports, and essentially no faster than a single HD 7970 graphics card. Interestingly, the spikes of higher observed frame rate match up perfectly with the very few “tight” areas of our frame time map.
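For readers who want to see what that gap means in practice, here is a minimal sketch (Python, with entirely made-up data) of the idea behind the observed number: frames too small to be visible simply are not counted before averaging.

```python
# Minimal sketch: recompute an "observed" FPS by dropping runt frames.
# Hypothetical inputs: per-frame heights in scan lines from a captured run
# and the length of the run. A frame below RUNT_LINES scan lines is treated
# as contributing nothing the viewer can actually see.

RUNT_LINES = 21         # runt cutoff discussed later in the comments
TOTAL_SECONDS = 60.0    # assumed length of the captured run

frame_heights = [1080, 1080, 14, 1066, 9, 1071, 1080]   # made-up sample data

fraps_fps = len(frame_heights) / TOTAL_SECONDS
observed_fps = sum(1 for h in frame_heights if h >= RUNT_LINES) / TOTAL_SECONDS

print(f"FRAPS-style FPS: {fraps_fps:.2f}")
print(f"Observed FPS:    {observed_fps:.2f}")
```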

The good news for AMD is that the HD 7970 is consistently faster than the GTX 680 in a single-card test scenario, and those results are very consistent. The very bad news, of course, is that two Radeon HD 7970s in CrossFire are only as fast as a single card when looking at your perceived frame rates. That gives the GTX 680s in SLI the win almost by default.

Our ISU-based stutter graphic tells a very similar story, with the CrossFire frame times easily pulling away from the rest of the group by the 80th percentile.

At 2560x1440 the story starts once again with the HD 7970 outpacing the GTX 680 and CrossFire running much faster than the GTX 680s in SLI.

The plot of frame times tells the whole story though, revealing the inconsistent frame times and runts that are plaguing CrossFire.

Thus, the observed FPS we see here is much, much lower than what FRAPS reports and in fact is just barely faster than a single card!

The minimum FPS percentile graph shows another angle of the same problem for AMD: the frame rates are essentially identical for the single and dual GPU configurations. Even though NVIDIA’s GTX 680 is slower than the HD 7970, with SLI working correctly and efficiently it is able to scale Sleeping Dogs from a 24 FPS average across the entire run to about 46 FPS.
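If you want to build this kind of percentile curve from your own FRAPS frame time logs, here is a rough sketch of one way to do it (illustrative only, with made-up numbers, and not necessarily the exact method behind our graphs):

```python
# Sketch: a "minimum FPS by percentile" curve from per-frame times in ms
# (for example, a FRAPS frametimes export). At the Xth percentile we report
# the frame rate implied by the frame time that X percent of frames beat.

def fps_percentile_curve(frame_times_ms, percentiles=(50, 75, 90, 95, 99)):
    ordered = sorted(frame_times_ms)                 # fastest to slowest
    curve = {}
    for p in percentiles:
        idx = min(len(ordered) - 1, int(len(ordered) * p / 100))
        curve[p] = round(1000.0 / ordered[idx], 1)   # ms -> FPS
    return curve

# Made-up run: mostly ~21 ms frames with a handful of slow ones at the end.
sample = [21.0] * 95 + [35.0, 40.0, 45.0, 60.0, 80.0]
print(fps_percentile_curve(sample))
```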

Our ISU graph actually shows that once the runts are removed, the frame time variance from the CrossFire cards is fairly minimal up until the 90th percentile, after which it skyrockets.
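The ISU graphs essentially boil down to how much consecutive frame times differ from one another; here is a hedged sketch of that "variance by percentile" idea (illustrative only, not our exact computation, with made-up numbers):

```python
# Sketch: frame-to-frame "stutter" by percentile. Take the absolute change
# between consecutive frame times, sort those deltas, and see how large
# they get at higher percentiles. Only an approximation of the ISU idea.

def stutter_by_percentile(frame_times_ms, percentiles=(50, 80, 90, 95, 99)):
    deltas = sorted(abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:]))
    return {p: deltas[min(len(deltas) - 1, int(len(deltas) * p / 100))]
            for p in percentiles}

smooth      = [20.0, 20.5, 19.8, 20.2] * 25    # consistent pacing
alternating = [5.0, 35.0] * 50                 # runt-like long/short pattern
print(stutter_by_percentile(smooth))
print(stutter_by_percentile(alternating))
```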

We couldn’t get usable results from our 5760x1080 testing with the HD 7970 CrossFire setup in Sleeping Dogs, so instead we are looking at just another three-card set of graphs. At this setting we have turned the AA down from Extreme to High, which explains how these GPUs are able to run at this resolution at all. The GTX 680 is once again slower than the HD 7970, though the 680s scale correctly and quite well, taking performance from a 30 FPS average to 51 FPS or so.

There is a bit more frame time variance than I would like to see from the GTX 680s in SLI, with a few dozen hitches clearly visible in the image above. Single-card results continue to be level and reliable for both AMD and NVIDIA.

The observed frame rate matches the FRAPS metrics here.

The minimum FPS percentile graph shows us the value of consistent frame times on the GTX 680 – it is nearly a straight line across the graph, starting at 30 FPS at the 50% mark and only dropping to 27 FPS at the 99% level.

The frame time variance graph (our ISU) shows a very flat pattern for the GTX 680 and HD 7970 cards all the way through the test runs, though the SLI configuration does see more potential for stutter, with a rising line as we hit the 99th percentile.

I have a hard time trying to grasp exactly how erratic input would affect the results. I have a feeling, based on my constitution (I get simulator sickness with poor latency), that the best case is whichever has the lowest worst-case interval times.

...But then you have occasional latency increases. Of course those increases are there to remove redundant frames, and once increased, they probably don't need much adjustment most of the time.

This whole topic always gets me going back and forth, but my instinct is that overall, even if latency is considered, even spacing matters more because it adds more useful points of input, assuming it adds only marginal or occasional increases in latency.

So true. MechWarrior Online is such a CPU-driven game that it makes a good benchmark for this, which I can reproduce.

For one example, if I set my mouse polling rate to 1000 Hz, which requires more CPU cycles to process, the game will stutter in heavy team fights where the CPU (my i7 920 @ 3.8 GHz, 200 BCLK) needs more free cycles to render the data smoothly.

On the same map and in similar fight situations, with my mouse at 125 Hz, the game does not suffer the way it does at 1000 Hz.

Another anomaly is with sound: if I use 24-bit/192 kHz on my Essence ST, I will have similar issues as with my mouse.

But if I use a 16-bit/44.1 kHz setting, the game has more CPU cycles free to render in a smoother fashion.

By the same reasoning, I've also found some games react better with a gamepad rather than a mouse and keyboard combination.

Can you address the visual quality differences between the two cards? Specifically in Sleeping Dogs, the 660 Ti seems to be missing some lighting sources outside - most noticeable are the cafe/shop lights before you go down the stairs, and then the store across the street right at the end of the video.

Fascinating article. I think it'll take a few reads to fully comprehend everything that is being said. Thank you indeed, I found it fascinating. Certainly, as a 7970 owner, I'll be holding off on a potential second 7970 purchase for the time being.

I run 2x GTX 460s in SLI, and while I dislike screen tearing, I've noticed that options such as vsync, adaptive vsync and frame rate limiters actually make the experience less smooth, as appears to have been highlighted in this article.

I've considered getting a 120Hz monitor just so I can run without any of those options at a decent frame rate, but with sufficiently high settings so as not to exceed 120 FPS and so incur screen tearing.

Thinking further, I'd like Nvidia to develop a variation of their GPU Boost technology that would actually downclock the GPU to prevent frame rates from exceeding the monitor's refresh rate... I think this would give the benefits of no screen tearing without the negatives of vsync and the like.

Actually, using GPU Boost dynamically to both under- and overclock the GPU to achieve a target frame rate could be a very nice way of producing a smoothed experience without any of the negatives of other methods, since it occurs directly at the GPU instead of somewhere in the game-engine-to-display timeline.
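Purely to illustrate the idea being suggested here (this is not how GPU Boost actually works), a toy feedback loop that nudges the clock toward a target frame time might look like:

```python
# Toy sketch of the suggested idea: nudge a GPU clock up or down each frame
# to chase a target frame time (16.7 ms for a 60 Hz display), rather than
# capping the frame rate in the game engine. Entirely hypothetical; a real
# boost algorithm also weighs power, temperature and voltage.

TARGET_MS = 1000.0 / 60.0   # target frame time for a 60 Hz display
STEP_MHZ = 13               # size of one clock adjustment step

def adjust_clock(clock_mhz, last_frame_ms, lo=700, hi=1100):
    if last_frame_ms > TARGET_MS * 1.05:     # too slow: clock up
        clock_mhz += STEP_MHZ
    elif last_frame_ms < TARGET_MS * 0.95:   # too fast: clock down
        clock_mhz -= STEP_MHZ
    return max(lo, min(hi, clock_mhz))

clock = 1000
for frame_ms in (12.0, 13.5, 15.0, 18.0, 20.0, 16.5):   # made-up frame times
    clock = adjust_clock(clock, frame_ms)
    print(f"{frame_ms:5.1f} ms -> {clock} MHz")
```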

CF sucks just like SLI. What's your poison, input lag or frame metering? Do people understand what "runts" are? CF is actually rendering the frames, you just don't benefit as they're too close together. One frame renders the top 1/4 of the screen when the next frame starts. Your top ~200 lines are the fastest on your screen. ;)

I don't see how. Jumping between 30 and 60 FPS or whatever is not an enjoyable or smooth experience.

So, if you can enable vsync AND allow the game to sweep through a normal range of available framerates, does this negate the increased frame times of constantly switching back and forth between high fps and low fps?

V-sync, even with triple buffering, still jumps back and forth between 16 ms and 33 ms, but it does it between frames. A 120Hz monitor helps here, as you can have 25 ms frames too, so there is less variance.
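To make the 16/25/33 ms point concrete, here is a tiny sketch (assuming a fixed refresh rate and no early flips) of how v-sync rounds a render time up to the next refresh interval:

```python
# Sketch: with v-sync, a finished frame waits for the next refresh, so the
# displayed frame time is effectively rounded up to a multiple of the
# refresh interval. Assumes a fixed refresh rate and no early flips.
import math

def displayed_frame_ms(render_ms, refresh_hz):
    interval = 1000.0 / refresh_hz
    return math.ceil(render_ms / interval) * interval

for render in (15.0, 17.0, 20.0, 30.0):
    print(f"{render} ms render -> "
          f"{displayed_frame_ms(render, 60):.1f} ms @ 60 Hz, "
          f"{displayed_frame_ms(render, 120):.1f} ms @ 120 Hz")
```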

Furthermore, is playing a game without vsync enabled REALLY an option?

Are you sure gamers all over the world disable it to be rid of the latency issues? I'm not so sure.

I'll happily take a little latency in a heated round of Counter-Strike rather than end up dead, or missing my shot because 50% of the screen shifted 8 feet to the right (screen tearing).

Pretty much all games are unplayable without the use of vsync, and I'm not convinced it's a personal preference either - if you enjoy your experience while frames are tearing, I'd just call you a mean name that insinuates you're not telling the truth.

While I can see the potential in this kind of testing, and some of the issues you have mentioned are valid, you have drawn quite a bold and one-sided conclusion using the competitor's software. I'll save my judgements for when this becomes open source.

About various vsync methods:
They're not the same code, nor are they available in the same ways.
But they are the same methods and pursue the same results.
SLI and CrossFire are not the same thing...
But...

"You take the red pill - you stay in Wonderland, and I show you how deep the rabbit hole goes."
- Morpheus, The Matrix

"As a rule, human beings don't respond well when their beliefs are challenged. But how would you feel if I told you that the frames-per-second method for conveying performance, as it's often presented, is fundamentally flawed? It's tough to accept, right? And, to be honest, that was my first reaction the first time I heard that Scott Wasson at The Tech Report was checking into frame times using Fraps. His initial look and continued persistence was largely responsible for drawing attention to performance "inside the second," which is often discussed in terms of uneven or stuttery playback, even in the face of high average frame rates."

This is a fascinating and quite informative bunch of articles.
Still, I'm having some doubts, but as "bystander" stated above, it's hard for a person to be challenged this hard in their beliefs.
This will make it hard for other sites to do reviews as deep as yours, though, and I hope you eventually open source the code used, so everyone who thinks you are paid by either "team" can check it out and even run some of the tests themselves, with some modifications depending on the hardware used.

One thing I would like to know is whether three cards would make any difference at all. I know it's even more rare for people to have three cards, and if your conclusion holds it probably wouldn't change much.
And this splitting you use - is it passive, or is it somehow doing something to the stream?

OK, you have flooded us with a ton of charts and stats, but can you put in a paragraph or two explaining what the gaming experience FEELS like? Does a game play better with or without SLI/CrossFire, vsync on/off, etc.? In the end the gameplay experience is what matters MOST.
That is something hardocp.com does well.

"Another stutter metric is going to be needed to catch and quantify them directly."

EXAMPLE: If Average FPS (over 60 seconds) is 100, then Total Frames observed over 60 seconds is 6000.

If ONE SINGLE FRAME is above 100ms, then for the y-axis value '100' (milliseconds), the x-axis value will be '99.9833' (percentile), i.e. one minus (1/6000).

If FOUR FRAMES are above 30ms, then for the y-axis value '30' (milliseconds), the x-axis value will be '99.9333' (percentile), i.e. one minus (4/6000).

If TEN FRAMES are above 20ms, then for the y-axis value '20' (milliseconds), the x-axis value will be '99.8333' (percentile), i.e. one minus (10/6000).

So instead of PERCENTILE on the X-AXIS, you can put NUMBER OF FRAMES on the X-AXIS. For the y-axis value of '100' (ms), the x-axis value will be '1' (frame), for y-axis '30' (ms), the x-axis will be '4' (frames), and so on.
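A small sketch of the metric being proposed here (made-up frame time data): for each threshold, count the frames above it, so a single long hitch stands out instead of hiding in a percentile.

```python
# Sketch of the suggested metric: for each frame-time threshold, count how
# many frames in the run exceeded it. Counts make a single 100 ms hitch
# visible even in a 6000-frame run, where percentiles would bury it.

def frames_over_threshold(frame_times_ms, thresholds_ms=(20, 30, 50, 100)):
    return {t: sum(1 for ft in frame_times_ms if ft > t) for t in thresholds_ms}

# Made-up 60-second run at ~100 FPS average with a handful of slow frames.
run = [10.0] * 5985 + [25.0] * 10 + [35.0] * 4 + [120.0]
print(frames_over_threshold(run))   # {20: 15, 30: 5, 50: 1, 100: 1}
```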

You have my deepest sympathies - awesome job you've got there. Thank you for sharing and introducing us to this new benchmark system; it is for sure more trustworthy than any other way the majority uses to measure FPS nowadays.

Keep it up. You have my support, and probably the whole community is on your side as well.

I find your VSYNC tests to be invalid, because the game needs to be able to run at a minimum of 60 FPS for VSYNC to work properly; FPS cannot be allowed to drop below 60. So you should actually fiddle with the game settings until you get a minimum of 60 FPS and only then enable VSYNC and test the results, because that's what a knowledgeable player would do. Nobody plays a game at full settings with VSYNC enabled if they can't hold a minimum of 60 FPS - that's just stupid.

FRAPS can manipulate what it receives, so if this software sits in the same place, then it too can manipulate anything. Also, having yet another layer WILL slow everything down - once the frame is finished, it is being intercepted before being sent to the RAMDAC.

So basically this NVidia software, which you've had for a year, has helped NVidia 'silently' attempt to fix their own SLI frame syncing issues, but now you've 'come out' against AMD.

NVidia's payroll that good now? They do need help, since Tegra 5 will be old before it's out, given that ARM has now sampled X64 V57 on 16nm.

Thank you very much for finally exposing this AMD CrossFire scam of using runts and ghost frames to artificially inflate frame rates and sell products based on false data. I applaud you for giving us a clear picture of AMD's dirty runt-and-ghost-frame games. AMD plays many benchmark games; that's why AMD fled BAPCo in shame. Nothing AMD claims is to be believed, and if it wasn't for honest independent websites like yours that expose AMD for what it has become, AMD would still be selling their inferior products based on false viral marketing and propaganda-pumping message board bullying.

Being the underdog doesn't give AMD the right to blatantly pump false benchmarks as fact. If AMD has to cheat to sell its products, then AMD needs to be exposed as a cheater and people need to be made aware of it. Thank you for doing so.

Thank you for a tremendous amount of work, diligence, and integrity...

I thoroughly enjoyed the video with you and Tom Petersen. Although, I have to mention, I have been very disappointed in Nvidia since my purchase of a group of GTX 480s, believing from day one that I had thrown away more than $1500 on three unmanageable 1500-watt hairdryers marketed as graphics cards, which were subsequently relabeled GTX 580s once the bugs were worked out - kind of like Microsoft's Vista to XP, kind of like scamming people. No, definitely scamming.

I have always been an enthusiast of Nvidia since the days of 3dfx and have likewise always enjoyed anticipating and buying Nvidia's new products, and the GTX Titan is awesome.

With memories of ATI, Matrox, and Nvidia (I still have my RIVA 128), a home has been found within my memories, and that is why I am excited about what you and the rest of PC Perspective have done and are going to do. With collaboration, you all are moving a beloved industry onward toward a better future for us and for the companies we want to succeed.

Using a competitor's product, where they get to define the capture data and define the test results, is not accurate scientific method.

Has anyone asked the question as to why Nvidia defined a runt as 21 scan lines?

Has an analysis been done to see how many frames an Nvidia product has produced that are not considered runts only because they are above the arbitrary 21 scan lines, versus how many on an AMD product just happen to be at 21 or below?

I am suspicious, as you should be too as reviewers and analyzers, that the 21 scan line cutoff was chosen because, for whatever reason, Nvidia products produce frames that are 22 scan lines or greater, and therefore 21 scan lines was picked as the definition of a runt.

This would even seem to be supported by your own data, where Nvidia products do not produce any runts in any test, which would be a remarkable result unless you consider the fact that they get to define the metric that determines what a runt is.

Is it possible to rerun these tests with the runt defined as, perhaps, 28 scan lines instead? 35 scan lines? Greater? How about reporting only fully rendered frames as the frame rate?

These would be much more accurate tests, as they would show whether there is a noticeable jump in AMD-related runts and what the number of Nvidia-related runts becomes.

Would it surprise anyone that Nvidia produced 22 scan line runts vs AMD 21 scan line runts and that is why Nvidia chose 21 scan lines as the break point?
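For what it's worth, the sensitivity check being asked for is easy to express: given the raw per-frame scan-line heights from the capture, re-run the runt classification at several cutoffs and compare. A hedged sketch (hypothetical data, assuming those heights are available):

```python
# Sketch: re-classify runts at different scan-line cutoffs to see how
# sensitive the "observed FPS" number is to the chosen threshold. Assumes
# the raw per-frame heights (in scan lines) from the capture are available.

def observed_fps(frame_heights, runt_cutoff, total_seconds):
    kept = sum(1 for h in frame_heights if h > runt_cutoff)
    return kept / total_seconds

heights = [1080, 18, 1062, 25, 1055, 9, 1071] * 500   # made-up capture data
for cutoff in (21, 28, 35):
    print(f"cutoff {cutoff} lines -> {observed_fps(heights, cutoff, 60.0):.1f} FPS")
```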

I am new to the forum and have been following this discussion for a while now and thought I would register. I currently own nvidia (690) and I have also owned ATI (7970 CF).

In regards to your comment, sir or ma'am, I think you make an extremely valid point. I have been looking through these articles, and every time I see the graphs I almost feel like there is a slight bias, maybe not intentional, as you mentioned, but I do think it is there. Looking at some of the CF results, I just don't find them accurate in regards to user experience. I remember playing on one 7970 GHz in Crysis 3, then upgrading to a CF setup, and I could immediately tell the difference: the gameplay was a lot better. There was occasional stuttering, but nothing as bad as these graphs make it out to be... This is a somewhat subjective view, but I do think we need testing methods that are more thorough and, as you say, closer to the scientific method.

I own Nvidia, and I think anyone who does will get excited at what these graphs are saying; AMD owners will probably be a bit disheartened. From everything I have read, I can only conclude that Nvidia has had a big hand in this testing (mostly indirect), and therefore I cannot take these results too seriously.

There is definitely truth in what these graphs say, but I think it is blown a little out of proportion, especially with the testing parameters set to be Nvidia-optimized.

The important thing here is to note that ALL setups are BELOW 20 ms... what does that say? All are playable.

So in my opinion, from what I just saw, if I had to choose between a Titan or two Sapphire Vapor-X 6GB cards in CF, I would choose the latter. Unless you want to SLI with the Titan, then it is Titan FTW! But that is just what I would do based on the results.

Again, what is shocking to me is how different these results are to PCP's... Tom's Hardware has been around longer than any site I know of, and they have always been held in the highest regard (not an opinion).

It will be interesting to see what the AMD runt fix coming in July will do to these results. I am thinking they might be more than just back on the board after that (speculation).

With Tom’s Hardware disclosing, from the link you provided, that they do not consider their dual-card and dual-GPU results as accurate and, with their tests showing the Radeon HD 7950’s time variance at 23.8 in the 95th percentile, how is it you see PC Perspective’s research as invalidated?

I first went to Maximum PC and Tom’s Hardware to try to understand this complicated subject, but they are both behind on the subject of why we are spending so much money and getting unpredictable quality on our screens.

Ultimately, good research is fundamental, and it is obvious PC Perspective has committed a great deal of resources in order to be helpful and do a good job. They have worked to present a balanced perspective for us to consider, and it is still being translated into tangible empirical value. However, I am certain, with collaboration of Tom’s Hardware and others, PC Perspective’s results are accurate, significant, and will benefit us all.

I don't see this huge problem - I guess I am blind, or it's because I only run one screen - but I have played all the titles listed and got better FPS in all of them using my GTX 680s and my 7970s.

I still prefer my AMD cards for now, for these reasons:

1) In benchmarks, my AMD cards in CrossFire, overclocked, kill my 680s.

2) Graphics just look a lot nicer on the AMD cards.

3) The biggest problem is that Nvidia cards can't mine!!! (That's the big killer there for me), as I'd rather make $500.00 off my cards a week if I feel like it and be able to game as well.

I am not an Nvidia or AMD fanboy, as I have 680s in one of my builds and 7970s in a couple of others, plus I have bought many of both cards in between.

I think maybe there is a problem for those running triple screens, which hopefully AMD fixes - you have to admit they did a hell of a lot on drivers recently that gave huge performance boosts. But at 1920x1080, 60 Hz, I have no issues, and there is definitely a big difference when I add a second 7970; I have swapped a card to other builds and put it back in due to the loss of performance. The other thing I use my cards for is overclocking and benchmarking, and they definitely show huge performance there. I had my 680s over 1300 MHz and they couldn't come close to two 7970s at only 1225 MHz.

So I'd have to say that for working computers I will run my 7970s until there is no more money to be made, and I will game on the 680s.

Pretty much no games need more than one card anyway, unless you're running multiple screens and high resolutions. Hell, an APU can max most of the console port games on the market, except the very few true PC games we actually have.

Yes, I agree AMD should fix the damn problem. But I also don't think Nvidia fanboys should be rooting for this, because if they do fix it, the 7970 will be a nasty card all around that's capable of a lot more than just playing games.

Just my opinion and experience from the too-many-to-count cards I have owned in my life, from both brands.

If only we had a tool such as RadeonPro to tweak CrossFire and make it operate properly. If only it existed...

The Crysis 3 results are a bit questionable, as the in-game vsync was causing havoc for Nvidia (input lag) and AMD (stuttering). Also, CrossFire was not engaging properly unless you alt-tabbed (this still occurs half the time). Not to mention the weird fix of opening an instance of Google Chrome to solve some of the frame rate problems people were having with AMD setups.

My 7970 CF with RadeonPro and other fixes works perfectly for me in Crysis 3, but there are some people still having issues. It also depends on when the testing was done, as when patch 1.3 was first released it caused massive problems for AMD cards that were later fixed.

There might be a typo or grammar error in the second-to-last paragraph of the vsync section. Anywho, have you ever considered testing triple buffering on AMD solutions as well, and that config on 60Hz vs 120Hz? Input latency will surely be an issue, but it'd be nice to know if it's better with 120Hz monitors.

I have read this article four or five times and I find it intriguing. I must admit to not understanding most of it, though. However, I must say that being the owner of THREE 7970s (MSI Lightning BE, MSI GHz OC Edition & Club3D RoyalAce) at a cost of around £1300.00 GBP, just shy of $2000.00 USD, I feel somewhat cheated. I hope AMD's forthcoming "fixes" will redress these issues. Brilliant article and I am looking forward to all the follow-ups.

Suppose it got a smooth 120+ FPS,
was on a 60Hz display,
and in addition to the regular 'vsync' spot, it would 'vsync' at the halfway spot as well?
In other words, update the display at the top and middle, updating at these same two spots every time, and only updating at these two spots.

Would the middle 'vsync' spot be annoying? helpful? noticed? informative? etc...? (This sounds like a good way to see how important fps is)

Nice review. I'm interested to see how this tech evolves.
But now I'm curious - I've read some of your test methods, but I may have missed something. I've seen mostly games that are more single-player/first-person. Is that part of your methodology? I'm thinking of titles with more intensive object rendering, like Rome Total War II, that have to render myriads of objects and stress memory more. Have you considered something like that?