Ever since AMD launched the Hawaii family of GPUs (R9 290 and R9 290X), the company has maintained a bifurcated structure. Certain features, like TrueAudio and the new XDMA engine for CrossFire support, have been present on some GPU models but not on others. Today, AMD is launching a new 28nm GPU, codenamed Tonga and formally known as the R9 285. This new card takes Hawaii’s superior feature set and brings it down to the $250 price point, then layers new features of its own on top. Unofficially, Tonga appears to be the debut of the GCN 1.2 architecture.

The R9 285 carries an MSRP of $250 and it’s openly positioned as a GTX 760 killer. Can AMD deliver on that promise? Let’s find out.

What’s new about the R9 285?

First, the R9 285 supports TrueAudio, AMD’s superior CrossFire scaling solution (XDMA), and the modified GCN front end that AMD introduced with the Hawaii family. That means eight Asynchronous Compute Engines (up from two) and support for four primitives per clock cycle (every card outside the R9 290 family tops out at two primitives per clock). AMD claims this will give the R9 285 a huge tessellation performance advantage. Overall, this sure looks like GCN 1.2 (though AMD reminded us that such nomenclature was created by the press; it’s not an official designation).

The R9 285’s new features are as follows:

Support for 16-bit floating point and integer values. This is advantageous in low-power contexts, where 16-bit precision is more than good enough. Until now, AMD simply used 32-bit values for all calculations; switching to 16-bit in specific contexts will save power and internal bandwidth without noticeably degrading image quality. AMD also notes that Tonga supports sharing data between SIMD lanes and has an improved task scheduler.
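To make the bandwidth point concrete, here’s a minimal sketch in generic CUDA (not GCN ISA; the kernel and its name are purely hypothetical) of the pattern this enables: store values as 16-bit halves, which halves per-element memory traffic, while still performing the arithmetic at 32-bit precision.

```cuda
#include <cuda_fp16.h>

// Hypothetical illustration: data lives in memory as 16-bit halves
// (half the memory traffic of float), while the math itself is still
// performed in 32-bit. Generic CUDA, not AMD GCN code.
__global__ void scale_half(__half *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = __half2float(data[i]);     // 16-bit load, widen to FP32
        data[i] = __float2half(v * factor);  // compute in FP32, store FP16
    }
}
```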

Color compression. Frame buffer data is now stored in a lossless compressed format and the GPU can read and write compressed data. This leads to substantial bandwidth efficiency gains and allows AMD to shrink the memory bus from 384 bits to 256 bits.
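The principle behind this kind of lossless compression is easy to sketch. The toy encoder below is a conceptual illustration of delta encoding, not AMD’s actual (undocumented) tile format: keep one anchor pixel per tile and store every other pixel as a difference from its neighbor. Neighboring pixels are usually similar, so the deltas cluster near zero and pack into far fewer bits, yet the round trip is bit-exact.

```cuda
#include <cstdint>
#include <cstddef>

// Toy delta encoder for a tile of 32-bit pixels (conceptual only).
void delta_encode_tile(const uint32_t *pixels, int32_t *deltas, size_t n)
{
    deltas[0] = (int32_t)pixels[0];  // anchor pixel stored verbatim
    for (size_t i = 1; i < n; ++i)   // remaining pixels become small deltas
        deltas[i] = (int32_t)(pixels[i] - pixels[i - 1]);
}

// Exact inverse: reconstructs the original pixels bit-for-bit.
void delta_decode_tile(const int32_t *deltas, uint32_t *pixels, size_t n)
{
    pixels[0] = (uint32_t)deltas[0];
    for (size_t i = 1; i < n; ++i)
        pixels[i] = pixels[i - 1] + (uint32_t)deltas[i];
}
```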

Such efficiency gains would normally result in a smaller die, but that’s not the case here. The R9 285 has a 359mm² die and 5 billion transistors, compared to 352mm² and 4.3 billion transistors for the old Radeon HD 7970/R9 280X. AMD has confirmed to us that the new chip has 32 CUs total (with 28 functional), implying that AMD has the option to bring a new version of the core out if it chooses to do so — but even the additional CUs don’t explain why a GPU with a smaller memory bus is both larger and more dense than its predecessor.

The new die is about 14% more dense than the original R9 280X/HD 7970, but in line with the R9 290X family.
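That 14% figure falls straight out of the published numbers; here’s the trivial check:

```cuda
#include <cstdio>

// Transistor density from the die sizes and counts quoted above.
int main()
{
    double tonga  = 5.0e9 / 359.0;  // ~13.9M transistors per mm^2
    double tahiti = 4.3e9 / 352.0;  // ~12.2M transistors per mm^2
    printf("Tonga is %.0f%% denser than Tahiti\n",
           (tonga / tahiti - 1.0) * 100.0);  // prints ~14%
    return 0;
}
```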

New video decode block. The R9 285 also includes a new video decoder block for full hardware decode of 4K H.264 streams. H.264 baseline, main, and high profiles up to Level 5.2 are all supported, which means the new block can handle decoding 4096×2304 streams at 60 fps. There’s also a new fixed-function video encoder (VCE) that supports full hardware encoding to H.264, but AMD didn’t provide additional details on features or capabilities that distinguish this new unit from previous-generation hardware.

They also said the 285 had the same GCN features as the 290 on an AMD stream I was watching on Twitch; I asked them the question directly.

Blockchains

Same *features* (TrueAudio and some other stuff), not necessarily the same core design; the new bandwidth compression technique, for instance, isn't available in Hawaii. When you're tweeting at an AMD rep on Twitter, you're talking to someone from the social media department, not an engineer.

What on earth is AMD doing here? I like the card, but they have no excuse whatsoever for somehow managing to go backwards/flat in perf-per-watt, particularly with Maxwell looming overhead. AMD's constant neglect of efficiency is the main reason they've hopelessly lost to Intel, and now it looks like they're applying the same approach to their GPUs as well. Nvidia learned their lesson with Fermi, but AMD seems to have ignored that bit.

Joel Hruska

I suspect price/performance ratios may be off somewhat, because there's an increasing amount of evidence suggesting that the R9 285 is a cut-down version of a larger design, and that design would pack significantly better performance.

Fused-off chips typically aren't as power-efficient as their full-fat versions, even if the full configuration still consumes more power overall.

As for AMD? I still get the feeling they've got something coming at 20nm. But I'm betting this chip's architecture is what we'll see in Puma+.

pelov lov

They've generally followed an n or n-1 node for their new APUs, so you're likely spot-on regarding their 20nm cat-family followup. If Carrizo comes early next year, then it'll likely have the same GPU as well (especially if both are in fact designed for 28nm).

But if Maxwell is coming this year on 28nm and it has the same perf-per-watt improvements we’ve seen from the 750(Ti), then I fear it may be a bloodbath until the 20nm GPUs come full force. If AMD’s latest and greatest still can’t match up with Kepler on the same 28nm node, what’s it going to look like when Maxwell gets here?

Joel Hruska

Well, let's look at this from a historical perspective. Typically NV will launch a high-end GPU with one SKU below it. That means we'd expect to see a GTX *80 at a price point between $400 and $500.

The next-gen GPU they launch will target the R9 290X and R9 290. Typically NV follows up with midrange cards within 3-6 months.

This raises the really excellent question of how far off 20nm is, and whether AMD will have anything more to compete with. But the likelihood that NV will challenge the R9 285 in a few weeks is pretty low.

craig calkins

As a GTX 760 owner and a fan of this website, I feel like you did not test enough games, and all the ones you did test tend to favor AMD. Although the only game I play on this is Metro LL, and not very often. And with my gameplay all maxed out I get ~35 fps, with a damn 95-watt Phenom II X4 945.

http://www.mrseb.co.uk/ Sebastian Anthony

Heya! Thanks for taking the time to comment.

Can you suggest some games for us to test, to paint a more even picture? (In this case though, it really does sound like the R9 285 is better than the GTX 760 — but don’t worry, I’m sure NV has something planned in retaliation!)

Cards up and to the left are the best GPUs. The R9 285 wins out against the GTX 760 on both price and performance and against the GTX 770 on price/performance ratio (though the GTX 770 is a touch faster).

AMD published their own BF4 results showing the R9 285 beating the 760, so I didn't bother running it.

jdwii

It's not like this is the only site. Look at other sites; it's quite clear that a 285 is faster (by a small amount) overall compared to the 760.

My main gripe is power efficiency: it uses 10-15% more power than a 770 while being 10-15% slower. So overall AMD is still about 20% worse than Nvidia in power efficiency. I know people think this isn't that big of a deal, but it is when it comes to APUs with limited power budgets. Plus, wow, are they behind in power consumption during video playback.
One of the main reasons I bought a 770 over a 280X is power consumption; I didn't want a 330-watt card that performs the same as a 220-watt card.
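That ~20% is rough math on my part; here's the back-of-envelope version as a sketch (midpoints of my 10-15% ranges, not measured data):

```cuda
#include <cstdio>

// Back-of-envelope perf-per-watt comparison using the rough figures
// above (midpoints of the 10-15% ranges), not measured data.
int main()
{
    double rel_perf   = 0.875;  // ~12.5% slower than a GTX 770
    double rel_power  = 1.125;  // ~12.5% more power than a GTX 770
    double efficiency = rel_perf / rel_power;  // relative perf per watt
    printf("Relative perf/watt: %.2f (~%.0f%% worse)\n",
           efficiency, (1.0 - efficiency) * 100.0);  // ~0.78, ~22% worse
    return 0;
}
```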

AlbiteTwins

I thought the R9 280X only had a TDP of 250 watts? Still, you're right, the GTX 770 beats it in performance per watt.

Guest

Quite a fail IMO. And too late. If Nvidia keeps up the 750 Ti trend, then the GTX 960 will blow this 285 out of the water.

Casecutter

That's the question we've all been wondering: are Maxwell's efficiencies really going to scale up?

Zunalter

Okay, great news, but what hashrate does it pull bitcoin mining? :P

Joel Hruska

No one mines BTC anymore with GPUs and even LTC mining has fallen by the wayside as far as I’m aware.

Zunalter

Alas, you are correct…

Phobos

Not a bad video card, but it gives very little to no reason to move if you're coming from an R9 280.

Casecutter

Sorry, this is a minor side-step: the die size appears roughly the same, there's no big improvement in perf/watt, and given TSMC 28nm pricing... it goes nowhere big! It could easily have just been named the 280R (as in re-spun).

Though there are upsides, like the ability to achieve 40% higher memory bandwidth efficiency with a third less bus width and just 2GB. There are the new video decode (UVD) and encode (VCE) engines to allow for higher throughput, though that's no big thrill to gamers. One thing not getting press: five billion transistors versus Tahiti's 4.3 billion. Both chips are made at TSMC on a 28nm process. How is it that Tonga's die is roughly the same size as Tahiti's despite carrying more transistors? Lastly, there are all the expected extras, like PowerTune dynamic voltage, TrueAudio, XDMA for CrossFire transfers via PCI Express, and updated display outputs with support for the latest DisplayPort standards.

Something in Tonga isn't adding up. I think there's more to this chip, and it might be found in the full XT iteration.

Joel Hruska

One theory is that the full 32 CU implementation actually has a 384-bit bus. That would explain the die size and transistor count.

Joel Hruska

Casecutter,

I thought I'd drop back in and add this. If you look at the low-level improvements, a lot of what Tonga packs is very impressive. But when Nvidia debuts Maxwell in a few weeks, I expect it to arrive at a very different price point, not in direct competition with this GPU. (That is just speculation, to be clear.)

I don’t expect AMD to attempt to take this tech and respin a Hawaii yet again on 28nm — but I *do* think we’ll see further evolution of the GCN GPU design on 20nm and sooner rather than later.

Casecutter

After checking all the reviews, I agree there are a lot of improvements in things like minimum fps, frame times, and fps by percentile. I also came across huge discrepancies in power consumption numbers across sites. It depends on the model of card and whether they use Furmark, Unigine, Crysis 3, or an average of five titles like [H] does. In their tests, though, a reference 285 versus a spec 760 shows Tonga drawing something like 10-15% more power while being perhaps 6-9% better in performance. With similarly overclocked AIB customs, I think Tonga still uses more power than the 760 but appears to make good use of it, though the gap amplifies, and not in Tonga's favor. I think custom 285s are a smart re-spin of the Tahiti LE, though certainly not an upgrade (if you're on a 270X, 760, or 280). Looking at similarly OC'd AIB customs of various flavors (a 760 or a 285), the 285 is the right buy today (at current Nvidia/AMD pricing). In the end, I see AMD being extremely aggressive with prices/rebates by Black Friday/Christmas; these will sell at $200 USD, and there's nothing Nvidia can do about it, as they won't/can't truly compete with GK104 at such low margins continuously unless it's EoL.

I think AMD will pitch this to the mainstream, and yes, Nvidia will be there to pull $400-500 enthusiast sales; the real struggle will be the $300-400 range. If AMD has a Tonga XT ready for when Nvidia has its next x60 part ready for release... there's the next real play.

Winston Smith

After the farce of Mantle (still stuttery in BF4), the throttling problems, the heat, as well as the sheer noise of the 280X, I'll be going Nvidia next time I get a card. Sick of this crap.

Joel Hruska

I didn't have any stuttering problems with Mantle on BF4 the last time I tested it, though that was several months ago. And the third-party coolers introduced by companies like Asus keep the card quiet: 6-7 dB quieter than the initial stock cooler.

SumGuy954

256-bit vs. 384-bit. So is there no point in going for the 384-bit cards?
I always assumed the higher-bit cards were better.

I know things change as time goes on, but the model numbers suggest not much time has passed between these cards. I am a bit lost here...

AlbiteTwins

I'd imagine it's a reflection of the new architecture. It's for the same reason a 384-bit GTX 580 performs worse than a 256-bit GTX 680. The naming scheme being under R9 2xx does make it confusing, but this is very much a different card than the non-OEM Radeon HD 7000 series or the rest of the R7 and R9 series. :)

The same is true for the GTX 750 and GTX 750 Ti when compared to the rest of the GTX 700 series.

Joel Hruska

Sum,

Generally speaking, higher numbers equal more bandwidth, and more bandwidth is good. However, this has always been a trend more than an absolute fact. Historically, both AMD and NV have tended to release the first iteration of an architecture with very large busses, then pull those busses down somewhat (often with minimal performance impact) as the architecture was refined.

The original G80 from Nvidia had a 384-bit memory bus but the later G92 had a 256-bit bus. It was just as fast as G80, save in a few memory tests where the older card’s bus was important.

Both AMD and Nvidia tend to use busses between 256 bits and 384 bits wide for their high-end parts.
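To put rough numbers on it, theoretical bandwidth is just bus width times data rate. A quick sketch (the 5.5 GT/s and 6.0 GT/s memory clocks below are the reference specs as I recall them, so treat this as an illustration rather than gospel):

```cuda
#include <cstdio>

// Theoretical GDDR5 bandwidth = bus width (in bytes) x data rate (GT/s).
double bandwidth_gbs(int bus_bits, double data_rate_gts)
{
    return (bus_bits / 8.0) * data_rate_gts;  // GB/s
}

int main()
{
    printf("R9 285  (256-bit @ 5.5 GT/s): %.0f GB/s\n",
           bandwidth_gbs(256, 5.5));  // 176 GB/s
    printf("R9 280X (384-bit @ 6.0 GT/s): %.0f GB/s\n",
           bandwidth_gbs(384, 6.0));  // 288 GB/s
    return 0;
}
```

The 285 gives up roughly a third of its predecessor's raw bandwidth; that's the gap the new color compression is meant to help cover.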

SumGuy954

I am just disappointed my 384-bit card is not any faster than the 256-bit card. I am due for an upgrade. I hate to think I am just tossing my money out the door.

I was considering the 280X not too long ago, but the numbers were not enough of an increase to replace my HD 7970. I am also getting the impression Nvidia is supposed to look better for similar specs. Not sure if there is any truth to that, but my fellow Corp Mates claim it is so. Ubisoft was recently exposed for this, so at least there is a little shred of truth to the claim.

You are the expert. Might be a good idea to make an article on Nvidia vs. AMD in the same game, with screenshot comparisons of overall detail at a given fps. Not sure your sponsors would allow that, but the truth of whether it is better to spend more on Nvidia for the same specs would be useful.

Joel Hruska

Sumguy,

Give me an idea what your price point is, what resolution you play in, and how often you typically upgrade — I’ll tell you whether or not you’ve got any good options on the table.

(Also, tell me a bit about your system — CPU and RAM, whether or not you have an SSD).

Regarding image quality: There was some stuff related to Nvidia’s GameWorks program (I can get you some links for that), but the problem with GameWorks had to do with performance penalties more than image quality. Nvidia and AMD generally both work hard to ensure that a game renders identically on both their cards.

Where you sometimes get differences that relate to a game’s “look” is in frame rate smoothness and speed. That may be more of what your friends are talking about.

I have built a few newer systems for customers. I have not seen a huge improvement over my setup when I ran benchmarks before releasing them. I no longer work in that field.

My price point is about $300 to $350. I usually wait for flash sales to get a $450 card for $300 or so. My fps is about 45 in BF4; everything else is 60 to 140, depending on the game and whether it has a frame lock. I don't play BF4 anymore, so it's not a concern, considering all my other games are over 60 fps. The games I play most are SimBin titles, shooters, and space shooters like Star Conflict.

I run at 1080p, max everything except shadows if they give me issues, plus anti-aliasing and anisotropic filtering, which I run at 2x/2x to 4x/4x. I tried 16x/24x but only saw a drop in fps, not an improvement in clarity.

I am just wondering if I am missing out on clarity by being loyal to AMD GPUs, when I see things like I did with Watch Dogs, which I do plan to buy.

My friends' cards are Nvidias. I have not seen them in person, but one had the HD 7950 before he got his 660 Ti. He swears it looks better. That seems like a lower-rated card, but I have heard this many times over the years. I would like to see this myth busted or confirmed. I don't have the resources to test out cards like that.

Thanks for replying, sorry for long winded reply.

Joel Hruska

Sumguy,

I’ll try to find you some image quality comparisons but I’ll be honest — I suspect any difference would come from the game settings and nothing else.

It’s true that on occasion, one GPU family or the other will render a game slightly differently, but that’s just not a normal problem.

Regarding your system: The QX9650 may be getting old, but it's still a strong CPU for gaming. You're losing *some* performance by not upgrading to a newer Core i7 system, but not a huge amount.

Nvidia will be announcing new products in a few weeks. I recommend you wait until then no matter what.

Ray C

This is all good until the next thing Nvidia comes out with.

AA

Great card from AMD in reducing power consumption while maintaining performance. But Nvidia is proving to be a real threat in performance per watt. I mean, there's no point if your product takes the performance crown but uses enormous power.

Joel Hruska

Actually, I’d disagree with that. On desktop, many people don’t care — provided that the product is *quiet.*

Sure, if Product A and Product B are the same speed, and Product A draws 100W while Product B draws 500W, I’ll probably go for Product A. But that almost never happens — and it’s the noise that people hate more than the idea of a GPU that draws 40-50W more electricity at load.

Unless you run the GPU all-out 24/7, 50-100W simply isn’t noticeable on the final power bill.
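Quick back-of-envelope math (the electricity rate and gaming hours below are assumptions, purely for illustration):

```cuda
#include <cstdio>

// Rough annual cost of an extra 75W of GPU draw while gaming.
// The 3 hours/day and $0.12/kWh figures are illustrative assumptions.
int main()
{
    double extra_watts   = 75.0;  // midpoint of the 50-100W range
    double hours_per_day = 3.0;   // assumed daily gaming time
    double kwh_per_year  = extra_watts * hours_per_day * 365.0 / 1000.0;
    printf("~%.0f kWh/year, about $%.0f at $0.12/kWh\n",
           kwh_per_year, kwh_per_year * 0.12);  // ~82 kWh, ~$10
    return 0;
}
```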

AlbiteTwins

I'm glad AMD has made some advancements in its power-consumption-to-performance ratio, but it seems like they have only just caught up to Kepler. While the R9 285 performs noticeably better than a GTX 760 in most cases, it also has slightly higher power consumption (190-watt TDP versus 170 for the GTX 760, a difference of about 12%). Some games have a performance increase of 10% or more to compensate for this; some don't. I'm worried for AMD when Maxwell comes through. Hopefully AMD will find a niche with lower prices, though.

Michael

When will it be on Amazon for purchase?

eric

I don't get this. I just replaced my GTX 660 with the R9 285, so you can't say I am biased one way or the other.
Here's what I don't follow.
I read a number of reviews, a couple of them panning the R9 285 and giving results on various games with frame rates lower than a GTX 760's.
Then I kept browsing and found a couple more reviews that showed the GTX 760 being severely caned by the R9 285.
That's what I don't get. Which reviews are accurate, or are they all biased either towards Nvidia or towards AMD?

Andon M. Coleman

Something about “Frame buffer data is _now_ stored in a lossless compressed format and the GPU can read and write compressed data” rubs me the wrong way.

Color/stencil/depth data has actually been losslessly compressed in GPUs since the days of the GeForce FX and Radeon 9700, when they began processing framebuffer memory as a bunch of specially encoded tiles. Delta color compression is a new technique, but saying that the color buffer is _now_ stored in a lossless compressed format implies that it was not before.

Northsidelunatic

2GB of VRAM? No thanks. Gonna have to wait if I want something with the full DirectX 12 feature set that can run 1080p at 60 fps on the AMD side, it seems.
