Finally, AMD’s 300 series of graphics cards has hit the shelves, offering a modest performance boost over the previous 200 generation thanks to mild architectural improvements and higher clock speeds. This review focuses on the R9 390 8GB, which AMD are targeting at the 1440p crowd.

The Radeon 390 features 2560 shaders, 64 ROPs and 160 TMUs, so there’s certainly plenty of shader power to be had. The GPU is still built on the same 28nm process you’d have found in the previous generation. This compares rather nicely to the R9 290X and 390X, which have a slightly higher shader count (2816) and 176 TMUs.

The Sapphire runs at a core clock of 1010MHz, with memory speeds at 1500MHz (6000MHz effective) on a 512-bit bus, providing a rather nippy 384 GB/s of memory bandwidth, the exact same amount as found in the R9 390X. It’s not quite up to the memory bandwidth insanity of Fiji’s High Bandwidth Memory, but it’s enough to ensure that the GPU’s shaders won’t be starved for data in virtually any scenario. It’s also a nice improvement over even the 290X, which shared the 512-bit bus but ran its memory at a lower 1250MHz (5000MHz effective), leaving 320 GB/s of bandwidth.
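As a sanity check, that 384 GB/s figure follows directly from the effective memory clock and the bus width. A quick sketch in Python, using the numbers from this review:

```python
def bandwidth_gbs(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s:
    effective clock (transfers/s) * bus width (bits) / 8 (bits per byte)."""
    return effective_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9

# R9 390: 6000MHz effective on a 512-bit bus
print(bandwidth_gbs(6000, 512))  # → 384.0
```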

The Sapphire Nitro also demands two eight-pin power connectors, running a TDP of 275W. The card is fairly quiet too, with its tri-fan configuration keeping the GPU cool. Even kicking the fans up to 90 percent (which, in our testing in the heart of the British summer, is considerably higher than the 50–60 percent the fans reach under load if left to their own devices), you’ll find the noise levels only vaguely intrusive.

Contents of the box are rather slim, however, containing no PCI-E power cables or adapters: only a driver DVD, an HDMI cable, a manual and warranty information. Not a big deal, but a little ‘lighter’ than some packages.

Benchmarking

We’re running the latest available drivers (15.15.1004) for the R9 390, and they are slightly different to those for the 200 series. Because our testing with the R9 290X was conducted with earlier drivers, the performance values are likely thrown off a little, but you’ll see that the R9 390 is still within a hair of AMD’s once top dog. All games are at their latest patched versions; because of limited time, we’ve included only Mantle tests with Thief.

1080P Benchmarks

1080P hardly presents a challenge for the R9 390 (as you’d expect of any card in this price range), with every game easily playable at well over 60 FPS at max settings, with the exception of Metro: Last Light when using SSAA (for obvious reasons).

1920×1080 (average) | Radeon R9 390 | Radeon R9 290X | GTX 970
FireStrike (GPU test only) | 12,146 | 11,336 | 11,522
Metro: Last Light (SSAA) | 47.33 FPS | 46.65 FPS | 46.21 FPS
Tomb Raider (FXAA) | 82.8 FPS | 77.9 FPS | 80.1 FPS
BioShock Infinite | 97.4 FPS | 96.4 FPS | 109 FPS
Thief | 82.1 FPS (Mantle) | 97 FPS (Mantle), 65.0 FPS (DX11) | 72.1 FPS (DX11)
Sleeping Dogs | 70.8 FPS | 67.5 FPS | 68.3 FPS

1440P Benchmarks

AMD are targeting the R9 390 at those with 1440P screens and above, so the question is how the card handles these games. Fairly well, as it turns out, with all of the titles we tested hitting comfortably playable frame rates.

2560×1440 (average) | Radeon R9 390 | Radeon R9 290X | GTX 970
FireStrike Extreme (GPU only) | 5,404 | 5,058 | 5,407
Metro: Last Light (no SSAA) | 51.6 FPS | 53.3 FPS | 54.2 FPS
Tomb Raider (FXAA) | 56.3 FPS | 58.2 FPS | 55.1 FPS
BioShock Infinite | 69.35 FPS | 67.44 FPS | 74.2 FPS
Thief | 71.8 FPS (Mantle) | 76 FPS (Mantle) | 51.7 FPS
Sleeping Dogs | 81.5 FPS (high AA) | 82.3 FPS (high AA) | 83.3 FPS

Overclocking the R9 390 Sapphire Nitro

As I mentioned earlier in this review, it’s currently the heart of the British summer, which obviously means higher temps and lower overclockability; even so, the R9 390 managed around a 10 percent increase in core clock. We could probably have got a little more out of the card had we pushed and tweaked things, but we were quite happy with the core running at 1104MHz and the RAM at 1554MHz. Your fortune in the silicon lottery may differ, but we were able to hit these clocks without pushing additional voltage or raising fan speeds. The result was a FireStrike Extreme score of 5,826, a 400-ish point increase over the default clock speeds.
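For reference, the “around 10 percent” figure comes straight from the core clocks quoted above, while the memory overclock is a more modest step:

```python
def pct_gain(stock_mhz: float, oc_mhz: float) -> float:
    """Percentage increase from stock to overclocked clock speed."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(round(pct_gain(1010, 1104), 1))  # core: 1010MHz -> 1104MHz → 9.3
print(round(pct_gain(1500, 1554), 1))  # memory: 1500MHz -> 1554MHz → 3.6
```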

AMD Vs Nvidia Technology

For the 300 series, AMD implemented “Frame Rate Target Control”, which reduces the GPU’s power consumption by letting you set a frame rate target. If you’re only running a 60Hz monitor, having your GPU render at 300 FPS doesn’t help any, so capping it at 60 FPS will ‘throttle’ the GPU, producing less heat and sucking less juice from the wall socket. It’s not something we’ve had much time to test during this review, but it’s an intriguing idea.
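The mechanism is easy to sketch: render a frame, then idle away whatever is left of the frame budget. This toy Python loop illustrates the concept only; AMD’s implementation lives in the driver, and the `render_frame` callback here is a stand-in:

```python
import time

def run_capped(render_frame, target_fps=60, frames=12):
    """Render `frames` frames without exceeding `target_fps`.
    Sleeping out the rest of each frame's budget is where the power saving comes from."""
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < budget:            # frame finished early: idle instead of rendering more
            time.sleep(budget - elapsed)

# A fake 1ms "render" that would otherwise run near 1000 FPS
t0 = time.perf_counter()
run_capped(lambda: time.sleep(0.001), target_fps=60, frames=12)
total = time.perf_counter() - t0        # ~0.2s: 12 frames at ~16.7ms each
```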

The GPU market has changed significantly in the past 12 months, and not just because of the imminent release of DirectX 12. You’ll likely know Nvidia and AMD are both pushing their own technologies. Nvidia have G-Sync and, of course, hardware PhysX. AMD, meanwhile, are pushing their Mantle API, TrueAudio technology and FreeSync, their counter to Nvidia’s G-Sync.

If you’re interested in virtual reality, AMD are also releasing their LiquidVR SDK, which claims to reduce the latency of VR hardware; then there’s Nvidia’s GameWorks technology, which will (supposedly) also be moving to virtual reality support.

It’s tricky to ‘predict and then buy’ for the future, and despite Mantle’s very impressive performance numbers in BF4, Hardline, Dragon Age: Inquisition and Thief, it’s still a work in progress. There’s also the question of how much support it’ll have once DX12 is fully launched. There’s currently a reasonable smattering of Mantle API titles, so it’s still a very nice bonus if nothing else. TrueAudio’s future is less certain. Thief (for example) uses it and it does reduce CPU usage, but not enough games currently have the feature to sway a decision over a graphics card.

Meanwhile, Nvidia’s G-Sync has been released and does a great job of both eliminating screen tearing and reducing the latency associated with V-Sync. Those who have used the technology are reluctant to go back to their old screens, but the selection of screens it’s available on is currently limited, and it adds to the price of a new monitor. With that said, FreeSync is gaining a lot of traction in the market and does a pretty similar job, cheaper too. It’s hard to argue with “cheaper”, and it’s not as though the monitor range is poor either. As much as G-Sync is awesome, FreeSync is possibly better for the average customer, simply because the monitors cost less.

I really do like Nvidia’s hardware PhysX technology; in certain games it adds a lot to the atmosphere, such as the wisps of smoke in Assassin’s Creed 4: Black Flag, or the debris and dust in Metro: Last Light. For someone like myself who regularly makes graphics comparisons it’s a nice extra, but most gamers are willing to do without it.

It’s worth taking a moment to discuss downsampling: rendering at a higher resolution than your monitor can handle and having the GPU ‘scale’ the image down to your monitor’s native resolution. It provides greater visual detail, and was certainly a feather in Nvidia’s cap, as it was extremely tricky to do on AMD cards. Nvidia adopted the feature and called it ‘Dynamic Super Resolution’; AMD have now added the much-requested feature under the label ‘Virtual Super Resolution’.
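Under the hood the idea is simple: render at a multiple of the target resolution, then filter each block of pixels down to one. A toy NumPy sketch of a 2× box-filter downscale (an illustrative assumption; the drivers use more sophisticated filters):

```python
import numpy as np

def downsample_2x(image: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one (simple box filter).
    image: (H, W, channels) array with H and W divisible by 2."""
    h, w, c = image.shape
    return image.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# "Render" at 3840x2160, then scale back down to a 1920x1080 display
hi_res = np.random.rand(2160, 3840, 3)
lo_res = downsample_2x(hi_res)
print(lo_res.shape)  # → (1080, 1920, 3)
```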

While AMD have their Raptr application and Nvidia their GeForce Experience, tweaking your settings using these applications is a roll of the dice; gamers with a reasonable understanding of the basic settings are better served handling things themselves. If you do want to record your gameplay, GeForce Experience has ShadowPlay, which I personally find quite useful. AMD do have alternatives built into Raptr, but the technology isn’t as far along as ShadowPlay, so I have to give the subtle nod to Nvidia on this point.

All in all, there’s certainly an argument over whose cards offer the best exclusive technology... but if the DX12 rumors are true (and they’re unconfirmed at the time of writing), you’ll be able to pair an Nvidia and an AMD card in the same machine and gain the benefits of both GPUs.

R9 390 Conclusion – is it worth buying?

The decision to purchase one GPU over another is rarely clear cut. While much of the focus on AMD’s new range of GPUs has of course been on the Fiji line, there are still a lot of good things to be said about the 300 series. While it’s technically very much a “rebrand” of the 200 series, it’s not just a “copy and paste” job: the amount of VRAM has been cranked up, there’s more bandwidth, and higher core clocks help to improve performance. The architecture has also seen slight tweaks, with slightly lower voltages (if the rumors are right) and better memory timings also contributing to the improved performance.

If you’re currently the owner of a 290 (or similar), then you’ll probably wish to give the 390 a miss, but if you’re coming from a lower-end or older card, the 390 represents great value for the performance. At the time of writing there’s a considerable price difference between the two (around £100, depending on vendor and shop), meaning the 390 is probably the better buy for many customers, and two cards in CrossFire would eat 4K alive, with a huge frame buffer and about 10 TFLOPS of compute performance.
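That “about 10 TFLOPS” claim checks out against the specs earlier in the review: peak single-precision throughput is shaders × 2 FLOPs per clock (one fused multiply-add) × clock speed:

```python
def peak_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS: each shader retires 2 FLOPs per clock via FMA."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

one_card = peak_tflops(2560, 1010)  # R9 390 at stock clocks
print(round(one_card, 2))           # → 5.17
print(round(2 * one_card, 2))       # two in CrossFire → 10.34
```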
