Sapphire Radeon R9 Fury 4GB Review with CrossFire Results

Fiji brings the (non-X) Fury

Last month was a big one for AMD. At E3 the company hosted its own press conference to announce the Radeon R9 300-series of graphics cards as well as the new family of products based on the Fiji GPU. It started with the Fury X, a flagship $650 graphics card with an integrated water cooler that was well received. It wasn't perfect by any means, but it was a necessary move for AMD to compete with NVIDIA at the high end of the discrete graphics market.

At the event AMD also talked about the Radeon R9 Fury (without the X) as the version of Fiji that would be taken by board partners to add custom coolers and even PCB designs. (They also talked about the R9 Nano and a dual-GPU version of Fiji, but nothing new is available on those products yet.) The Fury, priced $100 lower than the Fury X at $549, is going back to a more classic GPU design. There is no "reference" product though, so cooler and PCB designs are going to vary from card to card. We already have two different cards in our hands that differ dramatically from one another.

The Fury cuts down the Fiji GPU a bit with fewer stream processors and texture units, but keeps most other specs the same. This includes the 4GB of HBM (high bandwidth memory), the 64 ROPs, and even the TDP / board power. Performance is great and it sets up an interesting comparison with the GeForce GTX 980 cards on the market. Let's dive into this review!

The Fiji Pro GPU differs from the Fiji XT found on the original Fury X in a couple of important ways. First and foremost, the stream processor count drops from 4096 down to 3584, a loss of 512 or 12.5%. That means AMD's Fury card includes 56 compute units compared to the 64 found in the Fury X, and along with the stream processor change comes a matching decrease in texture units: from 256 down to 224, the same 12.5%.
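For those who like to check the math, here is a quick back-of-the-envelope sketch (in Python, purely illustrative) of how the cut-down works out, assuming the standard GCN layout of 64 stream processors per compute unit:

```python
# Fiji XT (Fury X) versus Fiji Pro (Fury) shader math - illustrative only.
fury_x_sps, fury_sps = 4096, 3584
fury_x_tmus, fury_tmus = 256, 224

sp_loss = 1 - fury_sps / fury_x_sps      # 0.125 -> 12.5% fewer stream processors
tmu_loss = 1 - fury_tmus / fury_x_tmus   # 0.125 -> 12.5% fewer texture units

# GCN groups 64 stream processors into each compute unit.
print(fury_x_sps // 64, "CUs vs", fury_sps // 64, "CUs")    # 64 vs 56
print(f"{sp_loss:.1%} fewer SPs, {tmu_loss:.1%} fewer TMUs")
```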

The "up to" rated clock speed on the Fury GPU is 1000 MHz though we already know of some overclocked models that are shipping on day of release (which is July 16th). Those overclocks appear to be minimal (~40-50 MHz) so don't expect a lot of variance quite yet. That 1000 MHz reference speed is just 50 MHz shy of where the Fury X ran with the water cooler integration; I know that some users will be surprised that there aren't more performance gains and headroom found in that original Fury X product.

The memory sub-system remains unchanged from the Fury X. We are still looking at the stacked memory plus interposer combination dubbed HBM, or high bandwidth memory. This is another fairly complex topic worth reading up on if you are unfamiliar with it; we have an editorial on the dramatic changes HBM brings to GPUs that is worth combing through. The memory clock remains at just 500 MHz over a 4096-bit memory interface, resulting in 512 GB/s of total memory bandwidth, the same as on the Fury X. The 64 ROPs (raster operators) also carry over unchanged from the Fury X.
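That bandwidth figure falls straight out of the interface width and clock. A minimal sketch, assuming HBM's usual two transfers per clock (which is where the 1 Gbps-per-pin figure comes from):

```python
# HBM bandwidth on Fiji: 500 MHz clock, double data rate, 4096-bit interface.
memory_clock_hz = 500e6
transfers_per_clock = 2          # DDR signalling
bus_width_bits = 4096

bytes_per_second = memory_clock_hz * transfers_per_clock * bus_width_bits / 8
print(f"{bytes_per_second / 1e9:.0f} GB/s")   # 512 GB/s, identical to the Fury X
```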

Interestingly, the TDP of the Fury is identical to that of the Fury X, despite the fact that it runs at a slightly lower clock speed and with 12.5% fewer stream processors in operation. We are likely seeing the result of AMD binning Fiji GPUs to put the best silicon in the flagship Fury X. That better silicon will often run at lower voltage for the same performance level, and thus the "slightly worse" GPUs that make their way into retail Fury cards need a bit more juice to keep their 3,584 SPs running stable. Our real-world testing echoes this: the Sapphire R9 Fury card uses just 6-7 watts less power than our Fury X sample.

Peak theoretical performance drops from 8.6 TFLOPS down to 7.2 TFLOPS with the decrease in clock speed and shader count, but that still exceeds the rating of the fastest GM200 GPU. As we know from our Fury X review, though, TFLOPS ratings do not always correlate with actual gaming performance.
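Those TFLOPS numbers come from the usual peak-throughput formula of two floating point operations (one fused multiply-add) per stream processor per clock. A rough sketch, using 1050 MHz for the Fury X and the 1000 MHz Fury reference clock:

```python
# Peak single-precision throughput: 2 FLOPs per SP per clock (one FMA).
def peak_tflops(stream_processors, clock_ghz):
    return 2 * stream_processors * clock_ghz / 1000

print(f"Fury X: {peak_tflops(4096, 1.05):.1f} TFLOPS")   # ~8.6
print(f"Fury:   {peak_tflops(3584, 1.00):.1f} TFLOPS")   # ~7.2
```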

So now we know what is the same - what is different? It's all in the implementation.

These are better than I expected, but at this point I'm just waiting to see the price and performance of the Nano. With a Fiji XT core on something that tiny and using only ~175W, it's got to be clocked super low.

Compared to the 980, a 10% increase in price for a 5-30% increase in performance isn't bad at all. If you think there is no point to these cards, then you obviously don't have to buy one. I am debating getting a Fury to replace my 7950 for 1440p gaming.

That is an interesting prospect. The Hawaii GPUs get oddly efficient when downclocked to 900 MHz, so running CrossFire with my MG279Q could yield a very nice upgrade for a couple of years at around $240 for the 290.

"NVIDIA still has a fighting chance with the GeForce GTX 980 of course: it is $50 less expensive, includes a free game (currently at least) and has the advantage of more frequent and usually more reliable drivers behind it. But AMD has stepped up its game and released a high-end GPU that should make NVIDIA worry. And that's great for everyone."

While I agree that Nvidia has had more frequent driver releases, I don't know if the claim that their drivers are more stable can be substantiated.

That's false. GCN used to be slower than Kepler; right now, GCN has overtaken Kepler in performance. History has shown that, as with all new GPU architectures, performance does increase over time as drivers mature.

Great card, but not worth upgrading from my Sapphire Vapor-X R9 290. I might have to wait for the next iteration... That and the MG279Q I just bought extend longevity even more! 1440p max settings is pretty attainable at 35 fps with the 290.

Strangely, Anandtech did the same thing. Maybe because they were rushing to get it out first? I am finally on a 1440p monitor so I am satisfied, but overall the Fury X struggled at 1080p compared to the 980 Ti, so I would be curious how this does.

I am hesitant to promote other sites, but since PCPer has no 1080p testing anymore, I can say that Sweclockers does.

And at 1080p, this card gets beaten by the 980 by a few percentage points consistently, and that's without any overclocking. And we all know about Maxwell's massive overclocking capability.

So at 1440p or above, the Fury is a great card. At 1080p it gets beaten and as you said, that is where most people still are (and will remain for the foreseeable future). Maxing out Witcher 3 on ultra at 1080p means that even the 980 Ti dips below 60 fps.

And there still is no card that is 4K-ready. Look at GTA V. Even Big Pascal probably won't breach 60 fps on ultra at 4K on GTA V.

Why would they include 1080p benchmarks? These cards aren't meant for 1080p players; it just wouldn't make sense to pair a $500+ card with a $100 monitor. I think PCPer is targeting the right audience with these benchmarks.

Variable refresh rate monitors like G-SYNC are smoother at lower frame rates, so keep that in mind for high-end gaming.

Aiming for 144FPS with good quality settings is always going to be problematic... cards will get better but games will also get more demanding.

For 1080p benchmarks use TechPowerUp, though I don't quite agree with the scores since you should use less AA at higher resolutions to have a more apples-to-apples comparison of visuals. Roughly speaking, 30 to 40% higher frame rates is what I'd estimate (i.e. same settings except 8xMSAA at 1080p vs 4xMSAA at 1440p). http://www.techpowerup.com/reviews/MSI/GTX_980_Ti_Gaming/24.html

Agreed, 1080p is still very relevant and it's sad they don't test it anymore.

For those saying high-end cards like this aren't meant for 1080p, I couldn't disagree more. If you like to crank up the eye candy with loads of AA, 1080p testing is relevant.

Before anyone starts saying "get a higher resolution panel," a lot of people game in their living room. I have a 1080p projector at 130" and will never go back to a tiny monitor. And don't even suggest a 4K one, as they are very expensive.

I agree the price is too high for me, but as for exhaust space at the back... barely any heat ever comes out of those vents on an aftermarket GPU like this, especially considering most use vertically aligned heatsink fins. I think keeping the shorter PCB, versus what Asus did, was a smart move because the fans blow the air right through the heatsink and it doesn't get trapped anywhere but in the case.

I thought the big selling point with these Fury cards was that they are small enough to fit in small cases. So having to extend the PCB and/or cooling takes away that advantage and just shows how poor heat/power is on AMD cards.

Why do no reviews take on the image quality (IQ) issue with NVIDIA's cards? Just giving FPS isn't enough. They should set everything to maximum IQ and find a way to get the same IQ on NVIDIA's cards, because it's already claimed that NVIDIA cripples its cards' IQ to get more FPS. For those of us who intend to game at 4K and/or buy an expensive card, IQ matters more than FPS. If the benchmark is all about FPS, then post video of each test on the review pages so we can see the difference and choose fairly.

Very good job on the review, liked it :) What it tells me is that 4K gaming is going to take a lot of money, SLI or CrossFire. To me it looks like AMD should let its vendors/partners make an air-cooled Fury X at $599.99, lol.

Hi Ryan.
With everyone wanting "MOAR" and AMD pushing close to the limit of this architecture to compete, have you considered trying some under-volting? It seems the Asus card on a few sites is drawing 30 to 50 watts less than the Sapphire...

I agree. My big thing for the past 5+ years has been undervolting and finding a chip's sweet spot.

The 290X is a great example, as it shatters all the TDP results we see in reviews... if you don't mind running it at a 5% to 10% lower clock.

AMD has great chips, but doesn't seem to read tech site reviews and make at least one product that won't get destroyed by the press.

"Its to hot", "use to much power", "Not recommended"
Then a few month later an nvidia card comes out that use more power, it hotter, but its receives awards after awards...

No one seems to be looking at price/performance, where AMD 'used' to be a clear leader.
With this Fury release, AMD has now eliminated its only advantage... Who in the world is running this company to be so out of touch with simple concepts?

I totally agree. I feel like NO ONE at AMD really takes advice from normal everyday gamers when it comes to GPUs. AMD's only advantage was, as you said, price/performance, which they still have with the 290/290X. These new cards are great performers, but the price just isn't going to work for AMD.

I think the obvious question is: since the Sapphire version uses what appears to be a Fury X PCB, is it possible to unlock the core with a BIOS flash? More than likely it's a Fury X with some shaders disabled via the BIOS, and seeing as the card has a dual BIOS, it shouldn't be that hard to test.