Frame Rating: Eyefinity vs Surround in Single and Multi-GPU Configurations

Discovered Problems with Eyefinity

Hopefully you followed (or just read) our previous Frame Rating stories and learned about the problems that AMD CrossFire had with runt frames and artificial performance results. As it turns out, though, in a CrossFire + Eyefinity system the issues are very different and likely not as simple to fix. I found three individual concerns when running our AMD Radeon HD 7970s in CrossFire at 5760x1080: dropped frames, frame interleaving, and stepped tearing.

Dropped Frames

This is the easiest problem to understand and demonstrate. With our overlay, which is applied to game frames as they leave the game engine but before the graphics subsystem takes hold of them, we are able not only to accurately measure performance on the screen but also to discover missing frames. This is possible simply because we know the expected pattern of colors and can easily detect when one of them is missing.

Our test pattern follows a sequence of white, lime, blue, red, teal, navy, green, aqua, maroon, silver, purple, olive, grey, fuchsia, yellow, and orange. These colors then loop, and we can use the scan-line height of each color bar to measure frame rates very accurately. But if a color is missing from the order, say white followed directly by blue, we know that lime was completely dropped from the user's point of view.
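The detection logic is simple enough to sketch. This is a hypothetical illustration of the idea, not PCPer's actual analysis tool (the function name and structure are our own): walk the known 16-color cycle between each pair of observed overlay colors, and anything skipped along the way is a dropped frame.

```python
# The overlay colors repeat in a fixed 16-color cycle, so any gap in the
# observed sequence reveals a frame that was rendered but never displayed.
PATTERN = ["white", "lime", "blue", "red", "teal", "navy", "green", "aqua",
           "maroon", "silver", "purple", "olive", "grey", "fuchsia", "yellow",
           "orange"]

def dropped_frames(observed):
    """Return the colors (i.e. frames) missing between consecutive observed bars."""
    missing = []
    for prev, cur in zip(observed, observed[1:]):
        # Walk the cycle from the color after `prev` until we reach `cur`;
        # every color we pass over was a dropped frame.
        j = (PATTERN.index(prev) + 1) % len(PATTERN)
        while PATTERN[j] != cur:
            missing.append(PATTERN[j])
            j = (j + 1) % len(PATTERN)
    return missing

# Seeing white then blue means lime was submitted but never shown:
print(dropped_frames(["white", "blue"]))  # -> ['lime']
```

Run against a capture where every other bar is missing, the same function would return the full list of skipped colors, which is exactly the pattern described below.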

These successive captured frames from Bioshock Infinite skip the lime, red, navy, aqua and silver colors, indicating that every other frame being rendered by the game in this sequence is NOT being displayed. With single-display testing we would see RUNT frames in those places, but it appears that with Eyefinity the problem is worse: the frames the game engine submits are never displayed at all.

Why is this important? Tools like FRAPS, which measure performance at the same point our overlay is applied, are essentially DOUBLING the frame rate in this instance, giving unfair and unrepresentative results. In our Observed FPS metric those missing frames are accounted for correctly, producing lower frame rates.
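The arithmetic behind that doubling is worth spelling out. This is purely illustrative, with made-up numbers: a FRAPS-style counter tallies every frame the engine submits, while an observed count excludes frames that never reached the display.

```python
# Illustrative only: the two ways of counting frames described above.
def fraps_style_fps(frames_submitted, seconds):
    """FRAPS-style: counts every frame the game engine submits."""
    return frames_submitted / seconds

def observed_fps(frames_submitted, frames_dropped, seconds):
    """Observed: excludes frames that never made it to the display."""
    return (frames_submitted - frames_dropped) / seconds

# If every other frame is dropped, the FRAPS-style figure is exactly double:
print(fraps_style_fps(120, 1.0))   # 120.0
print(observed_fps(120, 60, 1.0))  # 60.0
```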

Interleaved Frames

Dropped frames were easy to understand, but the discussion is about to get more complicated. Look at this:


What you are seeing is one frame being displayed at the same time as another, interwoven together. We are dubbing this new problem "interleaved frames", and it is unique to Eyefinity. I have witnessed it happening in every game we have tested, though games do shift between dropping frames completely and interleaving them together in this way.

This problem not only causes lower perceived frame rates but also can cause some visual anomalies.


This image from Skyrim shows the problem at work once again. If you look at the vertical lines of the structure you can clearly see the result of an interleaved frame: portions of the image alternate back and forth between positions rather than shifting once, as with typical Vsync tearing.

After seeing this specific issue I spent a lot of time making sure it was not caused or influenced by the overlay itself. To double-check, we captured the output of the screens without the overlay enabled and, through simple frame grabs, found that the interleaving continued.


This animated image shows, in very slow motion, how the frame interleaving appears in gameplay.

As it turns out, the interleaved frames also result in another visual anomaly we are calling stepped tearing. Take a look at the image below:


Not only are there intermixed frames in this screenshot, but the horizontal tears are NOT level; instead they appear to grow or shrink. As a result, and combined with the interleaving, these tears are more noticeable to the end user than standard vertical sync tears, and they also affect observed frame rates.


Look at this same Skyrim screenshot from above. On the left-hand side you can clearly see five different frame slices (looking at the left wooden strut), yet the color bars of the overlay on the far left show only three frames. The first "slice" of that wooden support doesn't start until somewhere in the middle of the mountain range in the background.

I was also concerned that our capture equipment might be responsible for the errors and problems seen in these screen grabs and recordings. To confirm the problem, I picked up a camera that could record 120 FPS at 1280x720 resolution and filmed the Eyefinity configuration directly. In the image below you can still clearly see the frame interleaving as well as the stepped tears.


Clearly this problem is real and isn’t a result of our testing methods. It was just nearly impossible to find before the creation of this overlay and the testing methodologies at work here.

We have developed a working theory of what is happening and why we are seeing these interleaving and stepped pattern issues. When running in a multi-GPU configuration of any kind the contents of the buffer from each of the secondary GPUs must be copied back to the primary card that has the display physically connected to it. That card’s frame output buffer is then scanned out to the displays.

I believe that the interleaving is caused by incomplete frame buffer copies from the secondary GPU to the primary GPU. If the copy is slow or stalls at any point, the current scan-out process (being drawn on the screen at that specific moment) starts to re-read the primary GPU's frame buffer rather than the secondary GPU's, as the copy simply didn't make it in time. When you see a back-and-forth pattern of frame interleaving, it indicates that the copy is "catching up" a bit but then falling behind yet again. This may happen a few times before a third frame (from the primary GPU) is ready and is pushed to the buffer cleanly.
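This theory can be illustrated with a toy model. To be clear, this is entirely our own construction with invented rates, not anything from AMD's driver: scan-out reads the display buffer line by line while the secondary GPU's frame is still being copied in, and any line the copy has not yet reached still shows the previous frame.

```python
def scan_out(copy_increments):
    """Toy scan-out model. Each element of copy_increments is how many buffer
    lines the secondary GPU's frame copy completes while one scan line is
    read out. Returns which frame ('new' or 'old') each displayed line shows."""
    displayed, copied = [], 0
    for line, increment in enumerate(copy_increments):
        # The beam shows the new frame only if the copy has passed this line.
        displayed.append("new" if copied > line else "old")
        copied += increment
    return displayed

# A steady, fast copy stays ahead of the beam after the first line:
print(scan_out([2, 2, 2, 2, 2, 2]))  # ['old', 'new', 'new', 'new', 'new', 'new']
# A copy that stalls and catches up repeatedly yields alternating bands of
# old and new content, the back-and-forth interleaving seen in the captures:
print(scan_out([2, 0, 0, 3, 0, 0]))  # ['old', 'new', 'old', 'old', 'new', 'old']
```

In this sketch it makes no difference whether the stalls come from bandwidth or from synchronization; either produces the same banding, which is consistent with both explanations discussed here.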

This is likely also the reason that we see dropped frames – the copy of data from the second GPU to the primary is slow enough that the next frame from the primary GPU is ready before a copy takes place.

It is also possible that the interleaving is an issue of synchronization and not bandwidth, but without more data and information from AMD it is hard to tell as an outside observer.

Before anyone asks this question in the comments: no, enabling Vsync in the game or in the control panel does NOT fix or change these problems.

The stepped tearing issue is caused by a different but related property: AMD does not appear to synchronize frame buffer updates to scan-line boundaries. Rather than waiting for each scan line to finish outputting before updating the frame buffer, AMD allows the buffer to update at any pixel within a line, which would cause the variable-height frame results we are showing you above.
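A small sketch makes the difference concrete. Again this is our own toy model with invented numbers: if the buffer write can overtake the read-out anywhere within a line, the crossover point drifts from line to line, whereas an update locked to scan-line boundaries keeps the tear level.

```python
WIDTH = 100  # pixels per scan line in this toy model

def tear_positions(start, drift_per_line, lines, per_line_lock=False):
    """x-position on each scan line where the displayed image switches from
    the old frame to the new one. With per-line locking the switch can only
    happen at a line boundary (x = 0), i.e. a conventional level tear."""
    positions, x = [], start
    for _ in range(lines):
        positions.append(0 if per_line_lock else x % WIDTH)
        x += drift_per_line
    return positions

# Unlocked: the tear boundary drifts across the screen, a "stepped" tear.
print(tear_positions(30, 7, 5))                      # [30, 37, 44, 51, 58]
# Locked to scan lines: the tear stays level, like conventional tearing.
print(tear_positions(30, 7, 5, per_line_lock=True))  # [0, 0, 0, 0, 0]
```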

I asked AMD for more information on why or how this is happening but they decided not to comment at this time. When I approached NVIDIA they said only that they have enabled “locks and syncs” for their frame copies and scan line outputs on SLI.

UPDATE: We did finally get some feedback from AMD on the subject after quite a bit of back and forth. Essentially, AMD claims the problems we are seeing are due only to synchronization issues and NOT to bandwidth limitations. Since the frame copies are done over the PCIe bus, only near-100% utilization of that bus would cause degradation, and the only instances of that would come from heavy system memory access. However, if the GPUs in your PC are hitting main system memory that heavily, other performance bottlenecks are going to creep up before CrossFire scaling does.

If that is the case, then AMD should be able to fix the CrossFire + Eyefinity issues in the coming weeks or months. A bandwidth issue would be much harder to deal with and could have meant a fix never arriving for HD 7000-series users.

Hi Marc, I can assure you that I and several other senior executives and engineers at AMD are deeply committed to making sure you get the experience you deserve.

We believe that Ryan treats this subject fairly, reports accurately, and works closely with us. We are grateful for his recognition of our leadership where we have it, even if I don't personally agree about our competitors' "sexiness"!

AMD is always all about selling defective and inferior products. They did it before with their "Tri core" CPUs. Those were just Quad cores that had a 4th core that they couldn't get to work. AMD GPUs and drivers are shit. You get what you pay for.

You do realize that binning is a key part of the silicon industry? By your logic, the Titan is crap because it only has 14 of the GK110's 15 SMX units activated.
When a product line is announced, there are actually only a few different dies being produced. The 3820, 3930, 60 and 70 are all 8-core "Xeons" with cores disabled due to yield issues.
EVERYONE does this.

And please, stop spreading the false claim that the drivers are bad. Maybe they were back in good ol' '04, but that is long gone. AMD has actually had better drivers than Nvidia this generation...

By AMD's admission, no. This problem listed in the Catalyst 13.10 notes only affects Crossfire configurations that do not use a bridge adapter across the graphics cards. This coming from AMD's own Andrew Dodd.

If this were a fix for our problems AMD would surely be crowing about it.

You guys rock, PCPer. Been here for years. Love AMD, but they need to get their act together. Emphasis on the good faith part, ya know. Don't become the crappy option. AMD has so much great IP; if they could only get their software side together they would be SIGNIFICANTLY more competitive.

I hope this fix AMD says they are working on will help with my 6970's too. I know they aren't worth that now, but I paid $700 for the pair, a couple years ago, to power my 3 screens. I'm about to sell them on craigslist and get an nvidia card, if it doesn't.

Keep hitting them until they fix this. I bought 2x 7970s in January 2012 and I noticed the problem immediately. It's not like AMD has only known about this since the beginning of this year; thousands of users were reporting the problem a year before that. It really put me off dual cards until I got a pair of 680s and found that the grass was much greener on the Nvidia side with SLI.

We need this fixed so we have some competition in the high end of GPUs.

I'm glad to see AMD is putting focus on 4K and hope they have a 4K eyefinity solution soon.

The only way NVIDIA will ever support anything other than 3x1 Surround is if AMD turns up the heat. NVIDIA, if you are listening: you need 2x1 and 2x2 Surround support at any resolution to stay competitive on the consumer side. No one is dropping thousands of dollars on Quadros just to get that one feature.

I think technically YES you can support 3 4K monitors like the ASUS PQ321Q with AMD today...but the main issue is going to be PERFORMANCE. Even without frame pacing and interleaving problems how much horsepower would you need for that?

I don't think they were doing it knowingly, no. That doesn't excuse them though; they should have been able to see these problems before and been trying to fix them before this all started to happen this year.

Hi Ryan, I hope 4K connectivity is scheduled to be included in all future reviews of hardware, like laptop reviews. Back in June I needed to buy something quick when my system crashed and it would have been great to know if any low to mid range laptops could at least drive a 4K display. I am not expecting benchmarks of a Chromebook running Metro Last Light at 4K but it would be nice to know if I could display Sheets on a Seiki display with an A10 based laptop.

1. You need to state whether you're using AMD Beta Driver 13.8A (8-1-2013) or AMD Beta Driver 13.8B (8-19-2013). If you're using 13.8A during a discussion/benchmark on Surround and 1600p, multiple viewers could come to the conclusion that you did this on purpose to make AMD look bad. AMD Beta Driver 13.8A doesn't have 1600p support; it only addresses the issues for the DX11 API. 13.8B addresses 1600p and Surround, if I am not mistaken. A possible upcoming 13.8C may address the DX9 API, or that could have already been done in the new 13.9 WHQL update.

2. Personally, I can't take your discussions seriously. In your conclusions, you state things that give me the impression that you don't fully understand graphs, or have a poor view of AMD graphics cards. At the very least, it leads me to believe that you are biased towards Nvidia. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product. It doesn't help you seem serious, experienced, or reasonable to both bases (AMD and Nvidia users). It only tells readers that you pander to one side and talk crap about the other brand's shortcomings. AnandTech doesn't do it, Guru3D doesn't do it, techpowerup.com doesn't do it either, and they all come out with really good benchmarks of computer-based products. Both bases read their benchmarks because they aren't biased. Mr. Shrout, you are biased either because you are letting people know of your hatred towards AMD, or because you want to cater discussions and benchmarks that make AMD look bad to the Nvidia base. Those are reasonable conclusions. If I don't see a benchmark on here discussing why the GTX 600, 700 and Titan series don't fully support DX11.1 (software support only, not hardware), you are only going to prove me right.

3. Looking at the Frametime Variance Graphs that you posted, AMD 7970 will have a lower minimum band because the cards push lower latency to produce batches of frames. Problem with it, and it's true, is somewhere along the way, they will produce "runt frames." Frames that aren't one whole frame. It could be like 0.8 frames, or 0.9, or 0.7. On the other hand, it takes less time for AMD video cards to produce those batches of frames. Nvidia takes longer to produce the batch because, hardware wise, the system probably calculates whether it needs to spend more time producing an extra "whole" frame. That's why their minimum frame time band is higher than AMD. The hardware is always trying to push 1.0 frames times x amount of frames to a batch.

1. You are incorrect in this assumption. No beta or full driver release from AMD addresses Eyefinity.

2. I honestly don't understand the relevance of the DX11.1 reference here. This story isn't about API support but rather multi-display + multi-GPU gaming. As to the bias question, this is something that gets targeted at people all the time when their results clearly show an advantage to one side or the other. Throughout our Frame Rating series of stories I have continued to tell the truth: AMD cards are fantastic for single-GPU configurations but need work on the multi-GPU side. You can have your opinion, but obviously we disagree. As do many of the readers commenting here.

3. Sorry, I'm going to need more explanation on what you are saying here. Frames are not produced in "batches" at all. I think you are trying to describe the runt frame problem maybe?

2. Personally, I can't take your comment seriously. In your post, you state things that give me the impression that you don't fully understand reviews, or have a poor view of NVIDIA graphics cards. At the very least, it leads me to believe that you are biased towards AMD. Having favoritism, or a biased point of view toward one company over the other, isn't a good way to approach a discussion or benchmark of any product.

Sucks how that works, doesn't it? Oh, for your point 3, it doesn't matter how fast a card can batch process *anything*, so long as what's presented to the user is inferior to the competition. The result is all that matters. Rolling back to point 1, your statements are moot as they are made without the far greater level of knowledge Ryan has - as he speaks with AMD about these various beta versions on an almost daily basis.

AMD has just stated that their Hawaii GPUs are smaller and more efficient but not intended to compete with the "ultra extreme" GPUs from Nvidia (aka the 780/Titan), as that segment will be addressed by AMD's multi-GPU cards. That makes it all the more essential that AMD sorts these problems out properly and completely; otherwise their product/business model is as badly flawed as their multi-GPU performance.

Someone always brings this up, and both ATI and NV have said for years that AFR brings the most performance in the least complex way (unless a game engine has inter-frame dependencies).

In the past, ATI had tiled CF as an option, and also scissor mode.

But think about this: let's say you're doing 2 tiles with a horizontal split. You may end up with one card rendering an empty sky and the second rendering a ton of detail, basically resulting in a useless solution that doesn't scale.

On top of that, you have to synchronize the tiles to display a final image at the same time, but the cards can't physically render at the exact same time, so you'll introduce lag or artifacts (which Eyefinity does see).

I would say AFR is good enough and the way to go for multiple cards, but I would want to see a new paradigm... do you remember the first Core 2 Quad? It was two duals stitched together. Imagine if 2 GPUs were stitched together (no more mirroring the VRAM, just adjust the socket connections).

Will you be taking a look at the Phanteks Enthoo Primo case? According to Hardware Canucks it might be the "Case of the year", not bad for such a small company entering the case market. I would be interested in what you think about it.

How are your displays connected? I was having this issue until I connected all of my displays via DisplayPort. I know this is not ideal but it has eliminated the issue for me. I have 2 HD 7970s in crossfire and 3 Dell U2410 displays.

If the Asus PQ321 supports DisplayPort 1.2 and the HD 7970 supports DP 1.2 as well, and DP 1.2 can do 4k at 60Hz, then why is 4K necessarily a "dual head" affair? Is that simply due to the way the Asus was designed?

"Currently there are no timing controllers that support 4K@60p. In order to drive the asus/sharp at 4K@60p, two separate TCONs are used. This is why this monitor has the unique capability of supporting dual HDMI. Each HDMI port feeds into its own TCON.

There is no 4K display that can do 60Hz without tiling. 4K@60p TCONs are supposed to start shipping in small amounts this year and in mass quantities in 2014."

Well, currently rolling with 2x 7970s on a 1920x1200 triple-display setup. Can't say I've ever really been personally bothered by the various issues raised in the article regarding the frame interleaving and stepped tearing, at least not enough to stop playing, though I trust the guys over at PCPer to give it to me straight. I noticed the stuttering with CrossFire more than anything else you guys brought up with your new testing methodology. I think most of us gamers at least gained a better understanding of the various issues involved. Sometimes my benchmarking application (be it FRAPS or Dxtory) would say I was getting a certain frame rate but the game just felt too jittery, whereas if I disabled CrossFire the game felt smoother even with a lower framerate.

That is not to say I haven't thoroughly enjoyed my 7970s/Eyefinity setup. When I've been able to play at Eyefinity resolutions I've done so; when I haven't, I've just adjusted my quality or resolution settings until I could get a smooth enough playing experience.

Do I hope that AMD is able to smooth out those circumstances where I can't play at a given resolution/quality due to micro-stuttering with CrossFire? Yeah, that would be awesome. I think a lot of us out here still don't have a full appreciation for the phenomenon, not having been able to test multi-GPU solutions side by side, so it just comes down to "the game doesn't feel fluid enough at my current settings, so I'll dial them down until it does", which I'm sure people have different sensitivities to. Keep up the good work, PCPer crew.

At first I thought this article may have been over-egging the problem with Eyefinity + CrossFire. Having now disconnected my second HD 7970 and played a few games in Eyefinity, I have seen that it is not. RadeonPro may tell me that I'm getting half the FPS that I was, but my eyes see the same low-FPS experience.

Not impressed AMD, I feel like a chump for spending £300 on a card whose only additional effect to my system has been extra heat and noise.

Still, at least I can go back and play Far Cry 3 now without the giant oversized HUD problem.

A damned good read, thanks Ryan. AMD owners should be pleased that these issues are highlighted, keeping AMD on their toes. Like the FCAT article, it was good to see AMD address the issue and get it fixed, and again it was PCPer who made AMD aware of the issues (like they didn't already know!) and forced them into sorting it out for their users.

I think a lot of hardware makers define "CPU" differently than Microsoft does. I wrote a bug report about this, and today tried the 13.10 beta. If I recall, message signaled interrupts (MSI) and the extended variant (MSI-X) were implemented for consumers in Vista; we know how Vista was received, so this might be one overlooked good thing. In my case, MSI was enabled in the registry, but no MSI message count was set (sadly, I can't enable it for some reason). If it isn't set, doesn't it default to one MSI per socket? But I have 4 CPUs in my i5 2500K (well, only one physical CPU as Microsoft counts it), so imagine an 8-core AMD FX stuck with 1 MSI/MSI-X message. I think this could be the cause; sadly, on my system none were set. I normally tweak, but from what I saw on Microsoft's site it isn't a simple 0-or-1 setting; they recommend a hex value, which is a bit too complex for my knowledge. But you guys know a lot of hardcore tweakers. If I'm right, am I the only one that used Vista?