The AMD Radeon RX 480 Review - The Polaris Promise

The Polaris Architecture - 4th Generation GCN

Polaris marks the 4th generation of AMD’s GCN architecture, an evolution of a design put in place with the HD 7000 series of graphics cards in 2011. This iteration adds some crucial new pieces to the puzzle, though it does so without drastic fundamental changes that might have caused problems on a new process node. Additions and changes include improvements in geometry processing, variable resolution rendering, memory controller and compression changes, asynchronous compute modifications, and expanded display output support.

From a high level, the Radeon RX 480 looks very similar to previous GCN-based chips. The block diagram above details what we already know: 36 Compute Units, 2304 stream processors, 1 Graphics Command Processor (GCP), 4 Asynchronous Compute Engines (ACEs) and 2 hardware schedulers (HWS). There are four geometry processors, one for each shader engine, 144 texture units and 32 ROPs (render output units). If you have seen a diagram of the Radeon R9 290X Hawaii GPU, this will look familiar. The inclusion of a second hardware scheduler should help asynchronous compute capability, and is made even more interesting by the fact that it apparently existed in secret on Hawaii as well.

One of the areas where GCN has continually fallen behind NVIDIA’s GPU architectures is geometry processing. AMD is hoping to improve that situation with Polaris by increasing throughput. The architecture adds primitive discard acceleration, which removes triangles that are hidden or zero pixels in size from the pipe before processing occurs. The performance benefit of this discard acceleration scales up as multi-sample antialiasing levels increase. A new index cache for small, instanced geometry lowers the overhead of data movement on the chip as well, helping to improve total geometry throughput in workloads that instance large quantities of objects.
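A toy illustration of the discard idea (a sketch of the geometric test only, not AMD's hardware implementation): a triangle whose vertices are collinear has zero area, covers no pixels, and can be culled before it ever consumes rasterizer or shader resources.

```python
def signed_area(p0, p1, p2):
    # Twice the signed area of a 2D triangle via the cross product.
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def should_discard(p0, p1, p2, eps=1e-6):
    # A degenerate (zero-area) triangle contributes no pixels,
    # so it can be thrown out before rasterization sees it.
    return abs(signed_area(p0, p1, p2)) < eps

# Collinear vertices -> zero area -> discard:
print(should_discard((0, 0), (1, 1), (2, 2)))  # True
# A real triangle survives:
print(should_discard((0, 0), (1, 0), (0, 1)))  # False
```

Real hardware also discards back-facing and sub-pixel triangles; the zero-area case above is just the simplest example of work that never needs to reach the shader engines.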

Though the operations per clock of Polaris are identical to those of Hawaii, AMD did put in work to improve the efficiency of the shaders to decrease power consumption. Things like instruction prefetch changes, larger buffer sizes per wave of instructions, a slightly tuned L2 cache and support for native FP16 and Int16 all work in favor of lower power. Though they do not sound significant, these changes should result in a net shader efficiency improvement of 15% for the RX 480 / Polaris when compared to the R9 290 / Hawaii.

Shader intrinsic functions are a feature that is common in the console space but is only now making its way to the PC. The idea is that content developers can insert assembly directly into their code without having to write the entire application in assembly. This allows the specific cases where a coder knows the most efficient way to do something to be implemented that way without adversely affecting the rest of the application. With shader intrinsic functions you’ll be able to access low-level shader operations that wouldn’t otherwise be available, and loop operations without overhead penalties.

The memory controller has been improved with Polaris in order to support GDDR5 memory up to 8 Gbps resulting in 256 GB/s of bandwidth. I did ask specifically: the Polaris 10 GPU does NOT support GDDR5X. Other changes in the memory system result in higher effective and relative memory throughput including updated DCC (delta color compression) and a doubling of the size of the L2 cache when compared to the R9 290.
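As a sanity check on the quoted bandwidth figure, the arithmetic is straightforward (a sketch assuming Polaris 10's 256-bit memory bus, which is what makes the stated 256 GB/s work out):

```python
bus_width_bits = 256   # assumed Polaris 10 memory bus width
data_rate_gbps = 8     # GDDR5 effective data rate per pin, per the article

# Each of the 256 bus pins delivers 8 Gb/s; divide by 8 to convert bits to bytes.
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)  # 256.0
```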

As you should expect, AMD continues to push its technological advantage in asynchronous compute and was open to discussing the differences between preemptive, concurrent and prioritized compute models. While NVIDIA’s Pascal definitely improved on that company’s implementation of asynchronous capability, AMD still has the advantage in some key areas, including the addition of the Quick Response Queue, which allows dynamically shifting priority to be assigned to in-flight workloads, adjusting compute performance as the application demands. This is an area that can be utilized by late time warp functions in Oculus VR.

Part of what makes this granularity possible is the inclusion of dedicated hardware schedulers in the silicon that offload scheduling and are able to administer real-time prioritized queues. With Polaris, AMD has now enabled two HWS units. According to AMD, two are already in place on Fiji-based products, but they were only used internally, to validate in testing that they could be used in tandem. The result is a dual HWS implementation in Polaris.

I think that, unlike the green team, with proper cooling we may be able to see higher clocks. I like what RTG is doing, and if you leave everything on Auto, for most gamers it should be OK.

ATI has always been about damn temps and power draw; give me power!, which I have always been about. I don't mind 80% fan speed, that is what closed-back headphones are for =)

I still see the possibility of upping performance in this architecture. What remains to be seen is what the AIB partners do with this chip. I am hopeful, given proper cooling and what I have been seeing from reviews, that there is headroom to get higher clocks.
#MakeAMDgreatAgain

The RX 480 comes in at an excellent price and sets a new standard for its relative performance level. Even if you were leaning towards the added Nvidia technologies (PhysX, G-SYNC), the 970 is simply an older card that doesn't have some of the features current-gen cards have (also true with the last-gen AMD offerings). It would be, in my opinion, a bad decision to buy a GTX 970 over an RX 480, even if the price was identical.

But comparing it to a GTX 1070 and saying "can't justify the cost of entry" really makes no sense. Either you need the power for your intended usage that the 1070 provides or you don't. If you do, the RX 480 is a bad choice because it won't do what you need it to do. If you don't, the GTX 1070 is a waste of money. If you need an Nvidia card, at least wait until the GTX 1060 comes out and hope it will have a similar price/performance.

I am wanting to do 4K, I am not picky on frame rates, things don't have to be 60+ all the time, I don't need every setting sky high (especially can't imagine needing AA at 4K), but I haven't seen any reviews so far with any games at 4K.

Between you and other websites (one major one) only doing/focusing on the DX11 benchmarks, folks can see where the money is going to FUD up on AMD while spinning positive towards Nvidia.

Watch the websites that practice those lies of omission by only testing on the older graphics APIs and not even attempting any DX12/Vulkan games. Once fully optimized DX12/Vulkan games are out, there can be more benchmarks done.

When all the RX 480 features are tweaked and more and better games make use of explicit GPU multi-adaptor and the new graphics APIs, then 2 RX 480s may just be a very good deal for getting GTX 1080 levels of performance at a very nice price (even for 2 RX 480s). DX11 is not the way forward for gaming; DX12/Vulkan are out and being developed for! And programming of the Polaris HWS units is in microcode, so the HWS units can be re-programmed and their scheduling algorithms improved over time with new microcode/firmware updates.

At least Charlie over at S/A is doing a point-by-point comparison of each of the Polaris execution units' new feature tweaks/improvements, for shaders, tessellation, compression, sound, scheduling, etc.

Nvidia has more money to hire in astroturf land, and it sends those turfing squads out in force! Nvidia is sure making DX11 its focus still, but DX11 is now on the way out.

You, arbiter, are looking for that spin against AMD and for your green masters, including one other prominent Nvidia-favoring poster on one tech website in particular going over to another prominent Linux OS/Linux test suite based testing website and spinning for Nvidia there.

DX12 has been tested, it's just that not many titles exist. Of the few that do, AotS may be heavily optimized towards AMD's RX-480 (thus not representative of most DX12 titles) and most of the others if not all only have some DX12 code tacked on.

DX11 is on the way out?
Sort of, but how many games will the average person have in their Steam library that use DX12 by the end of 2017?

In fact, how many DX12 titles will ship relative to DX11 in the next year?

It's not tough for me to recommend cards now though (aside from waiting for prices to get closer to MSRP). If you have about $260 then get an RX-480 8GB.

If you have $240 or so and can't swing any more get the 4GB RX-480. (after-market like Asus Strix or similar with 8-pin or 2x6-pin recommended for RX-480).

In the $400 plus it's simply GTX1070 or GTX1080, again once prices stabilize.

There's really no overlap, nor any great DX12 data except that you should avoid NVidia 900 series if your budget is in the RX-480 range.

DirectX 11 on the way out? LOL. Why are video cards backwards compatible to DirectX 9? Blizzard still makes games DirectX 9/10 compatible, such as StarCraft 2, Diablo 3 and World of Warcraft, and they are a very profitable company. DirectX 12 isn't viable yet with only a 350 million user base, for Windows 10 only; all other Windows platforms number a billion or two or more. So yeah, there's that and other hurdles to jump. If Microsoft makes DirectX 12 available to Windows 7/8 users via a patch, then you can talk declining. Games sometimes take a few years to make, and you're not going to scrap one and go with a new DirectX right away; true DirectX 12 games are still a year or two away and will still have a DirectX 11 version. When you see mostly DX12-only versions of games coming out, you can make your statement confidently.

"The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though the pricing on them puts them in a very different class of product. AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of 36%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (Global Foundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history and we knew that would likely result in some delta, I just did not expect it to be this wide. Is it issues with Global Foundries or with AMD’s GCN architecture? Hard to tell and neither party in this relationship is willing to tell us much on the issue. For now."
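The quoted clock-speed gap is easy to verify; a quick sketch of the arithmetic (it works out to just under 37%, consistent with the review's rough figure):

```python
amd_boost_mhz = 1266   # RX 480 boost clock, per the quote
nv_boost_mhz = 1733    # GTX 1080 boost clock, per the quote

# Relative clock advantage of the GTX 1080 over the RX 480.
delta = (nv_boost_mhz - amd_boost_mhz) / amd_boost_mhz
print(f"{delta:.1%}")  # 36.9%
```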

14nm is more densely packed than 16nm, and maybe AMD was going for more cores per die, with a higher-density design library tweak to get more dies per wafer and price them lower to grab that mainstream market share. Maybe AMD went with more layers and a denser circuit structure that cannot be clocked as high. But maybe with a little better cooling solution the part's clocks can go higher with AIO coolers and custom boards.

There are higher-density design/library variations even among the different GPU SKUs, with some designs made to achieve smaller dies at the expense of higher clocking ability, for more dies per wafer and better pricing metrics. With other designs AMD will be able to have the circuit pitch increased, or GF will get a high-performance tweak of the 14nm process it licensed from Samsung; Samsung is sure to be tuning that 14nm process for higher performance in future offerings, so GF can license any newer Samsung 14nm LPP processes.

By only a marginal amount: these processes from TSMC and GloFo are a bit misleadingly named. They've each picked one particular element of the patterning process at which they perform exceptionally well and chosen that as their naming metric. In practical terms they are almost identical in feature size, as can be seen from the Chipworks comparison of the two A9 dies.

Yes, BUT GPUs use high-density automated design/layout libraries and have more layers than CPUs, which use low-density automated design libraries with fewer layers and pack transistors less densely to allow CPUs to be clocked higher. At 14nm, using higher-density layout libraries to achieve more massively parallel processing units per unit area, GPUs cannot be clocked as high as CPUs! And there are even variations among the GPU-style high-density design libraries that allow some GPUs to be clocked higher than others.

The smaller a process node gets, the fewer substrate atoms per unit area there are to absorb heat phonons and transfer them efficiently away from the transistors. Even Intel had problems at 14nm, but CPUs are laid out less densely by design; GPUs are designed/laid out denser, and that means less heat can be allowed to be generated. When you cut a process node size in half (28nm to 14nm) the density goes up by 4 times, and depending on the process node/circuit pitch and the overall layout (done by the automated design libraries/software), GPUs can be very densely packed and unable to be clocked very high. So the design engineers and the yield engineers, along with the bean counters, design the GPUs for the intended market, and the trade-offs are calculated well in advance of the final design being frozen, certified and brought to market.
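The scaling claim in the comment above is just geometry: halving the linear feature size quadruples areal transistor density. A quick sketch:

```python
old_node_nm = 28
new_node_nm = 14

# A process shrink scales features in both dimensions,
# so areal density grows with the square of the linear shrink.
linear_shrink = old_node_nm / new_node_nm   # 2x in each dimension
density_gain = linear_shrink ** 2           # 4x transistors per unit area
print(density_gain)  # 4.0
```

Note this is the idealized figure; real node names no longer track feature sizes exactly (as the Chipworks comparison mentioned below shows), so actual density gains from "28nm" to "14nm" are smaller than 4x.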

I forgot to add: AMD's GPUs have all the async compute implemented in their hardware, so those extra circuits add to the heat budget but increase performance relative to any GPU designs that implement async compute in software/firmware and less in hardware.

Why do you think Nvidia went with 16nm and higher clocks? To make up for some of that async-compute disadvantage that they have!

Designing for high clock targets is a bit of a risk. Designs aimed at higher clock speeds require deeper pipelines, and the number of pipeline stages unfortunately cannot be changed easily, so those decisions may have been made a very long time ago. Making the pipelines too deep can result in a huge amount of extra power consumption, and so can too high a clock speed target for your process technology. Nvidia certainly spent a lot of man-hours optimizing power and clock speed for the 1080/1070. That isn't much of an issue for a $700 card, but it isn't exactly optimal for a mid-range $200 card. The 480 design can probably be tweaked a lot considering it is both a new design and a very new process. We might get a more optimized design later for a 480X or something with higher clocks. I think the 480 represents a good value as is, though, especially with DX12. I don't think Nvidia's DX12 hardware support is equivalent yet; AMD has worked on DX12-like hardware and drivers much longer due to their invention of Mantle, which is very close to DX12.

Bah, maybe it's the new process; the design is not a great departure from the prior GCN iteration.
Also, Polaris should be compared to Pascal before deciding how good it is; of course this will only be possible when NVIDIA releases a card with a similarly sized GPU.

This card is being sold wholesale. I think AMD is doing this to boost up their fab while gaining press and market share.

Developing the fab technologies is an ART. In order to ramp up production to produce cheaper you have to spend money in one way or another so that the fab can get the practice and expertise it needs in order to meet future targets.

One way is to produce parts and just trash them only to bundle in their cost to products that pass spec. The other is to sell lower spec parts in bulk.

AMD chose a lower frequency part that matches the previous generation from nVidia at a low price point to help their fab master the art of production. They are logically going where AMD always goes, to the value market where most of the money is to be made.

If AMD wanted, they could easily produce a GTX 1080-class card and likely do have internal models. It would just cost two to three times what nVidia charges.

The practice AMD gives their fab now will also help with their future CPU runs.

Without the proper testing mule there is no way of knowing if that is the GPU/die itself or the card's other power-drawing features, so it may not be the GF 14nm process node's standard performance that is to blame. And remember, AMD's async compute is going to keep more execution resources running with fewer remaining idle because of better scheduling, so expect more power usage simply because the execution units are being fully utilized. That smaller 14nm die is going to get hotter, and if the tested SKU is a reference design the cooling may not be there to stop some of the heat-related extra power draw.

Hot circuits have higher leakage and if the cooling solution is not on top of things then the heat feeds back into the circuits causing more leakage leading to more heat. It's a vicious feedback cycle sort of thing.

How much larger is the 1070's die? Is the 1070 a binned 1080 part with some units disabled, with more dead/unpowered silicon available to absorb and dissipate heat relative to the RX 480's die size? The RX 480 on the 14nm node is about 14% smaller than it would be on the 16nm node, and at 14nm it is more densely packed over the same unit area. And what is the circuit pitch on TSMC's 16nm node compared to GF's 14nm node? The RX 480's die is much smaller, so less heat transference can happen over a unit of time compared to a larger die on a larger process node.

Maybe the AIB RX 480 boards will have better cooling; more testing needs to be done, including with fully optimized DX12/Vulkan games, and maybe there need to be some driver tweaks over the next few weeks as well. It's a brand-new card, so teething problems come with any new GPU release.

"Competing" is not the right term here. They wanted to reclaim a big portion of the market share they have lost in the past few years, as well as build hype for their future cards. By putting out such a huge value card they are cementing their name out there again and grabbing the biggest part of the market. Contrary to what everyone thinks, flagship cards make up very little of the market share as a whole. For a sub-$250 card there is no comparison to the 480; it's a no-brainer, a complete wash, not a competition. Nobody in their right mind would buy anything but a 480 if they have sub-$250 right now for a video card.

Yep, depends what you want out of a graphics card. For me, VR bare minimum will not do; my GTX 970, which I have had for over a year, is not enough for demanding games in VR. I need more POWER, so it's not for me.

So it has about the same power target as an Nvidia GTX 970, and performance-wise it's trading blows with it, usually coming out on top. Given that there are massive discounts now on the 970 (I even got mine for 220 euros 3 weeks ago), I fail to see how AMD can win the graphics market. Sure, it has DP 1.3 & 1.4, but for most people that will not matter as much as the Nvidia branding that has been on most people's minds for the last year.

I really wanted AMD to have a win here but I fail to see how they can conquer the market for the time being. Maybe I'm not seeing something but in my eyes this is a flop in terms of excitement in the way it's positioned now.

It will win with VR games and newer games using async compute, and the RX 480 has more async-compute future-proofing ahead. There will be more fully optimized DX12/Vulkan titles putting more gaming compute onto the GPU in the future, so look for systems with less powerful CPUs to benefit even more from Polaris. Let's start testing games with weaker CPUs, and as games become able (via DX12/Vulkan) to do more gaming-compute acceleration on async-compute-enabled Polaris GPUs, let's see how the RX 480 improves with time. Those GTX 970/Maxwell GPUs are not going to do well on future fully DX12/Vulkan-enabled titles. Let's test for 6 months to a year with all manner of CPU SKUs and with the RX 480 and GTX 970; Nvidia cannot program their way out of that async compute deficiency in their GPUs' hardware.

This is merely "OK", not the big win I was hoping for from AMD. It's certainly the card to get right now at $200-250, but it probably won't be after the 1060 launches.

It looks to me like either GloFo 14nm is very limited, or GCN just doesn't have the legs to clock much past this without considerable changes to the architecture. Perhaps if Vega is done on TSMC we'll have something to compare to, but it's alarming that Nvidia can get 2 GHz clocks while AMD's chips have a hard time reaching 1.3 GHz.

Doesn't look too good in terms of AMD gaining much market share back with this. Given AMD's resources, perhaps it's wishful thinking that they could surpass their competitors at this point.

AMD needs 14nm process to match what nvidia did with 28nm two years ago? Where I live the requested prices of RX480 seem to be on par with 970 custom designs. So great job AMD you have done well... Trololollololooo.

Sincerely, the Internet. (some of us liked your ads though - We give you that much.).

Is that GTX 970 at stock?
Because that's a tad low.
I own both an R9 290X and a GTX 970.
In The Witcher 3, the GTX 970 butchered the R9 290X by 10-15fps at 1080p.

Honestly, at this price point it's quite a good upgrade for people who own a card that is LESS powerful than an R9 290 or GTX 970. Remember, this is a $199 card, folks; don't get your expectations too high or you'll wind up with a broken heart :D

I'll be slapping one of these in to replace my venerable 7970 GHz. It's such a steal, I'd be crazy not to.
Later I'll buy an AIB Vega part when those are available, since things are looking good for Vega: 2nd-gen GloFo 14nm and a shitload of ROPs/CUs. Bring it, AMD!

True to my word. Barely better than a stock 970. A partner 970 or an overclocker's reference one will beat it comfortably for comparable or less wattage. ROFL. Who was right, anonymous? You can suck one. Must cite source? I can give you a bunch if you want, but we both know it isn't necessary. I'm not the a**hole Nvidia fanboy you claim I was. It is everything I said it would be, except possibly worse. It's a good value if you only look at price. In fact, I feel sorry for AMD. It is what it is. Maybe I should change my user name to "RX 480 GTX 1080 killer not". The hype was over the top for this. Who told everyone not to get excited over it? If you're an AMD fanboy this is a good buy for you; everyone else, not so much. I should rub it in more, but Nvidia might raise prices even more because of this fail.

Anyone who declares a winner this early in the game with the new DX12/Vulkan fully enabled titles on the way, and the benchmarking software still needing to fully catch up with the New gaming/graphics API ecosystems, is truly an egregious fool, or a paid astroturfer, or in your case both!

Never trust any reviewer who declares an ultimate winner this early in the competition, especially with the new DX12/Vulkan APIs, games, gaming engines, and benchmarking software needing the time to fully be able to properly adapt and measure the hardware/games after all this rapid software/hardware technological change in such a short amount of time.

That's rich. I'm paid by no one. I wish I could start my own tech site, though. I call it like I see it. I've been involved in PC gaming since 1990; I've seen quite a bit and maybe know a thing or two. By the time any volume of DX12/Vulkan games hits the market, these cards will be obsolete, as Nvidia will have designed a card that well supports these APIs. That's assuming DX12 doesn't die from lack of support. No company is going to risk its financial livelihood by only making a DX12 version of a game. I notice you don't mention OpenGL, which is as viable. Is that because AMD cards don't do it as well?

So you would rather be stuck with the underperforming DX11 version of games for the next 2 years? Regardless of budget, I wouldn't buy any Nvidia card right now; they are behind in hardware support. Although, I consider paying any more than about $300 for a video card to be foolish. DX11 will die quickly. DX12 is here to stay and will be adopted very quickly due to the console market. Both major consoles can support DX12 titles, and DX12 is strongly favored due to the improved multi-threading capabilities. Both have low power 8-core CPUs. I hope you aren't recommending to your friends that they buy obsolete Nvidia cards.

I sure don't want the larger power draw of DX12 either. Basically the CPU's overhead decreases a little and its wattage drops a little; however, the GPU's consumption increases beyond what the CPU loses, and this is with or without asynchronous compute enabled. This should not be happening. A lot of sites tout that CPU wattage can drop 50%, which is nothing compared to the video card wattage increase these sites don't tell you about. For the little performance gain, I'd rather not have DirectX 12 or asynchronous compute.

Awww, yaaaay! The little fanboy liar is back! I'm so glad to see you, I missed slapping you around the comments.

And you've already started right in with the lies! "I'm not the a**hole Nvidia fanboy you claim I was." And yet every comment you've made since you came back proves that, yes, you really are.

Now, in that big long comment thread a week ago where you consistently embarrassed yourself and didn't even know it, the only claim you made about the RX 480 was that two of them in Crossfire wouldn't come close to a single 1080. But, wouldn't ya know it, they do a pretty decent job keeping up at least. And the difference in price is bigger than the difference in performance. Oh, and those are reference cards, too. A pair of AIB cards with significantly better cooling and power delivery will only get better.

No, two reference RX 480s do not beat a single 1080, as things stand right this minute. (See what I did there? That's called "accepting the facts". I hope you learn to do that soon, because you're wrong REALLY OFTEN.)

You're still wrong about your claim that FreeSync only works up to 90Hz, and so far everything you've shown to support that claim has been on one 144Hz monitor with a FreeSync range up to 90Hz. Show me something that doesn't rely on the Asus MG279Q that supports your claim and we'll talk.

Then there was this gem: "There is a lot bigger difference between AMD review samples and retail samples than 1-1.5%." And then you pointed to two completely different review sites, using two completely different hardware platforms, and referenced the overall FireStrike scores instead of the graphics scores, I guess because you somehow thought that was proof enough.

What about your claim that buying the less expensive AMD card would wind up being more expensive once you include the added power cost on one's electric bill? Hint - it would take about 18 to 20 years to make up the difference on one's electric bill. Are you going to admit you were wrong about that? (No, of course you're not.)

Remember saying this? "Do you really think a $200 video card is going to clock at 1500 mhz when their Furyx enthusiast card maybe could reach what 1150-1200 mhz. A 380 core clock is 970 mhz. The base of Rx 480 is presumed to be 1266 mhz is 30% boost and 1500 mhz is 55% boost. Doesn't seem too likely."

We'll see when the AIB cards come out - if some of them can get over 1500MHz, will you admit you were wrong? (Or will you turn around and cling to, "nuh uh, that's not $200!" instead?)

I also remember this little nugget of cowpie: "AMD fanboys use all the excuses in the world. I've read so much BS from them such as I still get 60fps with my ancient HD (insert model number of your choice). No need to upgrade yet. Well you fanboys are the reason AMD is in such dire straits. Buy a product once in a while instead of bragging."

Amusing that someone trying to power a 4k monitor with his GTX 760 would think his own argument didn't apply to him. I'm shocked, I tell you. Shocked.

Oh, don't forget about this one: "Nvidia cards have more built in limiters and protections in their cards versus AMD." I'm still hoping you're actually going to talk about what "limiters and protections" Nvidia has that AMD doesn't have. Because it sounds more like a claim that you pulled out of your bottom and stated it as a fact in the hopes that everyone would think you knew what you were talking about and accept it. I think you made it up.

In fact, I know you made it up. You said so yourself - you were "assuming" that AMD's marginally higher failure rate was "probably" related to temps and "possibly" less protections in place. Just admit that it was just your fanboy bullshit and move on. Clinging to a lie when you've been proven wrong is just sad.

Oh I'm so glad you're back, princess. I'm gonna have so much fun with you.

Yeah you haven't proven me wrong either with facts and cites. Yours is largely opinion as well. You cherry pick things and a lot of stuff I posted was right but you didn't address them at all. Such as Radeon's horrible power consumption in 1080 video playback. You glossed over most and picked a few out and said I was only right about maybe one thing. OK. I could post more but don't really care to.

About FreeSync, you may be right about it going up to 144Hz, but its most effective range is between 40Hz and 90Hz, meaning it performs best there. Yes, its supported range can be 9Hz to 200Hz(?), but thus far it only goes down to about 30Hz. I don't think either FreeSync or G-Sync has to do much beyond 90Hz, because you need one or two powerful cards to tap that.

You are entitled to your opinion as well. Is going over the PCI Express power spec limits fake, then? AMD has admitted to the problem already. I think this is more serious than Nvidia's DVI not working above a certain range on overclockable Korean monitors. So I'm an a**hole for trying to spare AMD fanboys from harming their systems. OK.

About review samples versus retail, there isn't much to go on, as most tech sites do not buy retail samples to compare to the press samples, so no surprise you won't find much. It was disappointed purchasers who said they didn't get anywhere near the numbers reviewers got. Although that may be due in part to the Fury X performing better with a stronger CPU ($1000+) that most people don't have. But shopping a few cards around to reviewers because of "limited" supply doesn't look on the up and up. As far as I know Nvidia hasn't done this, but both companies probably cherry-pick cards for review.

Electricity costs about $0.07 per kilowatt-hour where I live. This is cheap compared to Europe and other places. A difference of only 60 watts over 18 years at 8 hrs a day is going to cost $220 more, assuming the rate holds the same, which it won't. That's around $12.25 a year. Want me to prove my math? A comparable Nvidia card doesn't cost that much more. LOL. It's $37 to $61 during a card's average lifetime of 3 to 5 years; more time playing or a higher rate, as well as rate increases over time, affects this, maybe drastically. Wattage of a comparable AMD card versus Nvidia can go 100+ watts more for a few more frames or with asynchronous compute enabled. Is asynchronous compute "free" performance, then? If it's worth it to you for the average increase of 5-10%, then OK, but it isn't free. Another point you glossed over. So do you exaggerate much?

Exactly. The $239 (8GB) reference hits near 1400 MHz at best so far. The $200 (4GB) model is supposed to be weaker, with slower RAM. Not confirmed yet, however.

As for the GTX 760, I bought that when it was new 3 years ago, and I bought my 4K monitor less than 6 months ago. I gave my old system away to a needy person at work and gave my 1080p monitor to him as well. My card will do 4K at higher details than the console does at 720-900p, at least 30 frames. It's usually well over 30fps but doesn't hit 60 unless I compromise settings a bit. The only exception is AC Unity, where I get the same frame rate at higher settings, as it uses my card's entire 4 gigs even at 1080p. Older games, of course. I always have the option to play at 1440p or 1080p as well. The monitor is future-looking and has amazing detail. I am also looking to upgrade my card to play newer games better. No rush; it's adequate for now.

"More" may have been a less than optimal choice of word. Maybe it implied quantity. I should have said "better" protection.

What do you presume the higher failure rate correlates to? Maybe cheaper parts on a cheaper-quality video card from a deep-in-debt company possibly cutting corners. You usually get what you pay for. I say that because it could be the reason, or something else entirely. I'm not an engineer. You are entitled to your opinion as well. "Lie" is a strong word; tell me 100% that I'm wrong because you know the real reason. I'm waiting for your proof. Get real.

Resorting to childish name calling? Who is the one having fun? I'll just have to use better word selection and prove the littlest things to your nitpicking.

"Microsoft’s Chas Boyd was on-hand at AMD’s editor’s day for Polaris and previewed ideas that MS is implementing to help improve multi-GPU scaling. The best news surrounded a new abstraction layer for game developers to utilize for multiple GPU support that MS would be releasing on GitHub very shortly. According to Microsoft, with only “very little” code adjustment, DX12 titles should be able to implement basic multi-GPU support."

The way to go is with the DX12 and Vulkan (more open than DX12) graphics APIs in charge of multi-GPU load balancing. Let this multi-adaptor work be done in the graphics APIs/OSs, with more folks involved in creating the multi-graphics-adaptor load-balancing algorithms. This is the way multi-GPU load balancing and support should have been done in the first place: with the OS/graphics APIs in charge of sending the work to the GPU(s), of any make or model, that are plugged into any PC/laptop or other computing device.

Keep the hardware drivers simple and close to the metal, move most of the multi-GPU load-balancing support into the graphics APIs/OS, and standardize the way a computing system accesses its available processing resources. Vulkan lets the GPU ODMs register extensions to the API to handle any new feature sets, with the graphics API more in charge of the workloads given to each GPU/processor, and it's probably the same for DX12. As far as load balancing between multiple processors (GPUs, CPUs and others), it's better to have the entire computing industry in on developing the multi-adaptor load-balancing algorithms, instead of just the companies that make the GPU/other processor hardware.

So M$ is releasing some middleware, but I'm more in favor of standardizing things more formally in the Graphics APIs and in the OSs, for the proper management of any processing hardware installed on a computing system.

Not amazing, even a little disappointing, considering it's 14nm with 2304 SPs and it's pretty close to the 970 (28nm, 1664 SPs) in almost everything (power usage and performance).
Also, the 970 overclocks like a champ and will leave the RX 480 (which doesn't overclock well) far behind...

BUT as a $200 card it's pretty decent, a huge improvement over the old stuff like the 960 for sure...
Also, as a first GPU at GloFo, it looks pretty OK I guess? It can only get better!?

Anyway, I hope the best for AMD, and I'm looking forward to the 470 and 460, since those would probably be more suitable for me.

Hey Ryan, nice review. It's a pity there isn't a complete VR comparison, especially since the card was advertised for that use by AMD.
I can understand why there isn't a GTX 1070, but why put in a GTX 960 with only 2GB? I see that on Amazon it costs $10 more and is actually the very same price with a rebate. I ask this because in some games the difference (both in performance and frame times) between the GTX 970 and 960 looks a bit weird, and I was wondering if it could be due to video memory saturation.

Look at this objectively. With this performance level and the advantages of async compute, this becomes the first recommendation for upgrading your rig for VR. It targets what is 80+% of the actual card market and, for the price, offers massive performance for 1080p, which is still (BY A HUGE MARGIN) the largest gaming segment.

For a company that needs a bottom-line win for revenue and long-term sustainability, this is a home run.

Is HDMI 2.0 no longer backwards compatible with DVI? You should be able to get an HDMI-to-DVI cable to use this card with DVI monitors, without an additional adapter from DP, as long as this is still supported. I use one on my HD 7950 and it works just the same as a DVI cable as far as clarity and performance.

Has anyone noticed the ranking of this card in Passmark? Their chart tells a very different story about this card's performance. RX 480 scores 6369 G3D marks, just a hair above GTX 770 (6145) and GTX 960 (5925). It doesn't even come close to the GTX 970 (8661). How do you explain the large 2300 G3D mark discrepancy, when your chart claims that the RX 480 beats the 970?

Thanks, but I would advise that comparing results from different sources is not good practice. Now I'm really curious about the GTX 1060; it looks to be a really interesting card: SMP, power consumption, clock speeds... looks like it will be a hot summer ;)

Comparing results from other sources is called corroboration; it validates your findings. Anyone can find one biased source. The more sources you can find to prove your point, the better.

Yeah, the 1060 should be a good card for the masses. I'm guessing the price will be $250 or less. I figured Nvidia would keep an ace in the hole. They have the budget to sit on things until they need to reveal them.

You are lucky. These cards draw more wattage than the PCI Express spec allows. That could possibly fry the motherboard connector or the PCI Express power connector. They also run hot, around 84C. If you're set on getting one, wait a week or two for a non-reference version with better cooling and/or power delivery.

That's because they don't want Polaris to be shown losing to the GTX 970. HardOCP dropped it from their benches because they said they didn't have time, but added Hitman 2016. They said they plan to bring it back when Battlefield 1 comes out. Of course, draw your own inferences here.

And the power efficiency doesn't seem much better, if at all, compared to a GTX 970 ?

Nvidia seem to be the kings of GPU architecture, optimization and design.
Nvidia's 28nm design beats AMD's 14nm FinFET! This is not looking good for Vega, because the RX 480 is half the speed of the 1070, yet consumes about the same power.

When Nvidia releases the smaller Pascal chip for the 1060/1050, AMD's window of opportunity will be closed.

I think $240 is too high a price for an R9 290-class card, but $199 seems just about OK for the 4GB model.

The card is drawing more power from the PCI-E slot than the slot is designed for. Plus, the RX 480 is using more power than its listed TDP. If you have a cheap OEM motherboard, I bet the card will burn out its PCI-E slot. Classic AMD, making promises they can't keep when it comes to power usage.

That is some pretty desperate trolling. In PCPer's very precise power measurements, it draws almost exactly 150 watts, which is the rated TDP. This is also exactly what the PCI-E slot and a 6-pin power connector are specified to deliver.
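The spec numbers behind that reply can be laid out in a short sketch. The 75 W slot and 75 W 6-pin limits come from the PCIe Card Electromechanical spec; the 150 W draw is the figure the comment cites from the review:

```python
# PCI Express power delivery limits (per the PCIe CEM specification):
PCIE_SLOT_LIMIT_W = 75   # max sustained draw through the x16 slot
SIX_PIN_LIMIT_W = 75     # max rated draw of one 6-pin aux connector

# Total a card with one 6-pin connector may draw in spec:
board_limit_w = PCIE_SLOT_LIMIT_W + SIX_PIN_LIMIT_W  # 150 W

measured_draw_w = 150  # total board power cited in the comment above

within_spec = measured_draw_w <= board_limit_w
print(f"Limit {board_limit_w} W, measured {measured_draw_w} W, "
      f"{'within' if within_spec else 'over'} spec in total")
```

Note the caveat both commenters are circling: the *total* can be in spec while the split between slot and connector is not, which is why the slot-draw measurements were the controversial part.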

I understand it isn't a direct competitor to this card by design, but I would have really liked to see the 1070 added to the charts here. I am currently debating spending the extra money (once the prices come down to anywhere near MSRP) and getting a 1070 instead of a 480 and visualizing the performance difference would have been handy.

I understand why they didn't add the GTX 1070 to the review, but if you think for a moment that the GTX 1060 is the natural competitor of this card, does this mean we will not see a GTX 1070 vs 1060 review? Seems odd...

It would be nice to be able to have a discussion with out all of the obvious FUD that occurs in any Nvidia/AMD story. This card is about what I expected, and is the clear winner in this price segment, at least until Nvidia has a new card in this segment. Nvidia does not offer any competition to this card right now. The absolute performance and performance per dollar graphs paint a very clear picture unless you are a troll.

Anyway, I find it interesting that this card seems to have 4 ACEs instead of 8. The Xbox One has 2, I believe, and the PS4 has the full 8 used in most GCN cards after the first implementation. I missed the interview, so I don't know if Ryan discussed this with Raja. I guess this was probably done to save power? Although, it is unclear how the ACE units interact with the hardware scheduler. It would be great if someone could write an article about this, if the information is available.

The AIB cards are coming, and this is coming from Kyle Bennett at [H]: they will comfortably clock in the 1490-1600 MHz range on the core, with 1600 being a golden sample and 1500 MHz+ being very common.

That's 970/980-level max OC. The $$ for the better coolers etc. are yet to be seen, but AIB 480s are going to be a lot better than a 970, even AIB vs AIB, as the 970 now loses in DX11 and badly in DX12.

Even in this review it shows the boost clock is throttled, under the max boost by quite a margin. It's mainly due to the cooler and the power limits in the BIOS, I would think.

Power draw is reduced with better cooling, so that will help a little. I don't expect better-than-970 power consumption after OC, however, which is way higher than Pascal, which is a bummer. So what you are basically getting with an AIB card is 8GB of RAM vs 4 (3.5), much better performance (particularly in DX12) and reportedly very good overclocking.

>480 AIBs will be out before 1060 is also.
Lmao.
There will be no stock of the reference 480 before the 1060.
For a perfect launch AMD needed 100k cards, not 10k.
AIB cards: did you mean AIB cards with custom coolers? There are only two teased atm (MSI/Asus), and zero info about specs and availability.

No stock? Lol, I can buy them right now, even in NZ. Not any cheaper than a 970 here, mind you, but the value is still better on the 480.

You missed one: leaks on the Sapphire Nitro as well.

I'm 100% sure we will see AIB 480s before the 1060. It's a paper launch for the 1060 on the 7th for sure; wait till the end of the month at least before you see a reference 1060 at the earliest. Most reports put the AIBs at 2 weeks out, so mid-July, maybe sooner. And if they have anything like the 1070/1080 shortages, then yeah, it's an utter fail for NV.

So my pick is that you will have AIB 480s up against the reference 1060 on the 1060's launch day, so it could be quite a battle at that performance level.

Still, good to see the RX 480s are selling in large numbers, even the reference models.

Some might stay up half the night fiddling with WattMan trying to optimize power use, then have nightmares... on either company's card. Hopefully someone will share good tips as they get better at it.

"AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, and difference of 36%."

This "review" needs a complete rewrite...;) Comparing the $600-$700 GTX 1080 with the $239 RX 480 is idiotic--but I see that doesn't stop you from doing it...;) AMD hasn't released its competitor to the 1080 yet...it's called Vega and should be released around Christmas.

But you could of course buy *two* 8GB RX 480s for X-fire if you were of a mind to, have comparable performance and 2x the VRAM for D3D12 games, and still be ~$200 less than a single 1080, etc. But if that's the kind of power you want, you'd do better to wait for Vega, imo.

I think the comment was just comparing the clocks, not the performance. I think there is an expectation that with the same or similar process nodes, AMD should have been able to match Nvidia on clocks but they can't.

This can mean either that GlobalFoundries' 14nm sucks, that their engineers couldn't make an efficient architecture, or that to keep costs low they are letting low-grade chips through to get volume.

Considering clocks are the way consumers get a boost from their cards for free, it's pretty disappointing how little you can overclock these and how much heat they produce in the process.

The comment makes perfect sense. They simply questioned what the deal was with the frequency being much lower than NVidia's. Pretty simple to me...

AMD has already explained this at PCPER. They started the GPU almost three years ago and were targeting mobile first so the design was optimized for lower frequencies. They switched to get a VR READY product available so were forced to raise the frequencies which is why it barely beats the GTX970 (which is the minimum VR READY GPU).

Buy two RX 480s for CrossFire and have comparable performance? Multi-GPU is not really the way to go yet. It's going to improve, though it will take about two years or so to get proper support into game engines.

If we use $650 for the GTX 1080, and $275 for the RX 480 8GB (which I think will be realistic once prices stabilize), then there is a $100 difference in favor of a solution that may barely beat the GTX 1080 at times, while at other times the GTX 1080 will be up to 2x as fast in some titles.

In fact, we don't even know if AMD has the single-pass optimization for VR that can increase FPS by 1.6x. If not, and we use 85%, then the GTX 1080 will be 3x faster in some titles (or have much better visuals, as both should run at the same 90Hz).
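The price math in the comment above works out like this. The $650 and $275 figures are the commenter's estimates, not MSRPs:

```python
# Price comparison from the comment: one GTX 1080 vs. two RX 480 8GB
# cards in CrossFire. Prices are the commenter's estimates, not MSRPs.
gtx1080_price = 650
rx480_8gb_price = 275

crossfire_cost = 2 * rx480_8gb_price       # $550 for the pair
savings = gtx1080_price - crossfire_cost   # $100 in favor of CrossFire

print(f"CrossFire pair: ${crossfire_cost}, "
      f"${savings} cheaper than a single GTX 1080")
```

Whether that $100 gap is worth the uneven multi-GPU scaling described in the surrounding comments is exactly the point under debate.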

They improved their own power metrics hugely vs the 28nm generation, not so much against NV. You would think, by the way some people here go on about power consumption, that it matters more than features, gaming performance and value.

It's of very low importance on my own scale of important metrics. Still, I guess when you can't win on perf/$$ or best-in-segment points, that's all you have left to make a negative point about.

I guess some will pay more for the 1060 to have better perf/watt, then spend years catching up that value difference in their power bill, lol.

Waiting for the arguments in the next gen over 10-watt differences... with every generation that goes by, this will matter less and less than it already does now.

Folks are tools and do not see the whole story. The RX 480 has a lot more under the hood than comparable Radeon or GeForce cards; it uses very new, top-of-the-line power circuitry on every CU which they have not had a chance to optimize for EVERY scenario, the drivers are NOT optimized, etc. etc.

Why is the GTX 970 "supposedly" drawing less power but performing faster in Nvidia-biased titles? I WONDER WHY. Maybe because they're Nvidia-biased, maybe because the 970 is a much older card they have had time to optimize for, maybe because the power circuitry Nvidia used/uses is older and so more "known", etc.

The RX 480 is a terrific design. I was looking at the Radeon R9 380/380X about a year ago, give or take, and knew this was around the corner, so I waited. I currently use a 7870, having owned two of them and kept the "better" one. This costs ~$100+ less than my 7870s did, uses ~40W less (not counting overclocking) and is also ~2-2.5x faster with double to quadruple the amount of memory. IMO that is a massive, nice jump.

Keep in mind that ALL electrical anything can have spikes unless you put a limiter on it, and in the case of high tech, those limiters can cause instant crashes if you are too severe with their cut-in and cut-out of power. Obviously we do not want this, so there will be some "play" in when the chip decides to regulate or not.

Anyway, the point is, basing conclusions on just one or two data points is BS when these things are amazingly complex. And while the GTX 1070/1080 are "faster" than the 900 series, they also chopped some more away and ramped up clock speeds to "make them fast": fewer parts to power, less power required. Run them at lower clock speeds and see how much performance they lose. Give them multi-GPU capability via DX12... oh wait, they chopped that away as well.

Long story short, I know myself and many others are quite happy with the results we see here from the RX 480, considering the performance/power/price delivered, and we know it WILL get better in time, not held back by the proprietary BS Nvidia does, and not suffering the driver performance castration NV has done for decades now. It takes time to fine-tune things like this, especially when your development team is MUCH smaller than the direct competition's. And if we went by how the NV fanboys act, Radeon would not have been competitive at all for decades, which is simply far from the truth. It's just disgusting.

I know one thing: Radeons have not used under-spec components, pushed massive raw amperage that intentionally shortens component life, or chopped things away and put limiters in place to hold back performance on purpose while still overcharging. Can you say that about NV? Nope.

Anyway, to the one above me: they didn't improve the architecture like NV did with Maxwell? ROFL. Go do some reading instead of trollbaiting, and you will see Radeon did a great deal of optimization and improvement with Polaris compared to what Nvidia did with Pascal, which for all intents and purposes is just a highly overclocked Maxwell (which itself was more or less an overclocked/optimized Kepler).

Your comment about frequency is meaningless to most people. BENCHMARKS are what matter most. AMD could not get higher frequencies because the GPU was intended for mobile, so it couldn't clock well (their words, not mine).

Who cares WHY NVidia is faster? They are, so I'll buy their product.

I am recommending an after-market RX 480 to people on that budget, however. I'll rethink that when NVidia has a competitor to Polaris.

Above $200 I only recommend the RX-480, GTX1070, or GTX1080 and none until prices drop.

This card isn't a revolutionary rainbow-poopin' wondercard, but it's a step up for AMD. It feels a bit like AMD is still one step behind nVidia, but at least they are still chasing in some areas.
The only concern I have is that the market is already saturated by nVidia's 970. I mean, look at the Steam presence of that card.
But there may be enough folks who haven't upgraded to this performance level yet, and for them the 480 really is a no-brainer in my opinion.

I have a feeling that with its 8GB and DX12 capabilities it will age much better than the 970, and that it hasn't shown its full potential yet.
(There are still major driver issues with GTA V and with the power states at idle, leading to about 7 watts more power draw than there should be.)
Some nVidia-biased guys on YouTube benchmarked it against an overclocked custom 970 while the stock 480 wasn't overclocked at all.
I'm sure some "greenhorns" have already seen those benchmarks as proof of the superiority of nVidia over AMD. 8)
But anybody arguing that the 970 now has the same or a similar price point most likely forgot that the new 970 price only came into existence because of the upcoming release of the RX 480.
(A GTX 1060 might also very well be influenced by it, price AND performance wise.)
Also, it will be more future-proof with its 8GB, DX12 and modern display output support.

I myself was a bit disappointed by the benchmarks after all the hype, but if you think it over, it is still a very good card overall.
So I decided to upgrade from my R9 280, which with OC is about 25% less powerful than the 480.
Mainly because of the newer tech inside, and because I need FreeSync and a bit more performance now for my new 34" UWQHD curved monitor.
And I'd rather play at lower detail until Vega than buy a more expensive card from nVidia, who refuse to support FreeSync.
Otherwise I might even have bought the 1070, at an about 65% higher price.

Question coming from older hardware, still running an HD 7970 GHz Edition: will it run well with an i3-3220, so I can carry on for another few months to a year till I can save up for upgrades? My game collection mostly consists of DX9-11 titles, so DX12 is not a concern for me atm.

Not sure if it will run well with your i3 processor, as most sites only bench with the latest Skylake or extreme Intel processors. Nvidia seems to get more frames when a game is CPU-limited, or in DirectX 11 in general, as well as at 1080p, where processor power is more relevant. The power draw over PCI Express spec for the RX 480 might be a concern for your older hardware. If you can wait a month or so, prices will be coming down because of the flood of new cards from AMD and Nvidia. If you want a stopgap card, Maxwell or even Fury or 300-series AMD cards should start being discounted soon. Don't buy this at full retail, as the resale value may be poor later; if you're getting by now, keep saving.

I think it's a good strategy from AMD to target the 85% of the GPU market in that price bracket and win some dGPU market share from Nvidia. If not overclocked, it keeps the power draw down, and people can play all their games at 1080p at full settings fine on budget to mid-range PCs.

Let's be honest, that is where MOST PC gamers are at with PC specs and 1080p displays, so good move by AMD!

I would return the card before it passes the retailer's warranty period. This power issue is turning serious. You can wait until either AMD rectifies the problem or a partner fixes it with an 8-pin connector and better power delivery.

Quality control may have been overlooked to meet demand on a seemingly rushed card. Gotta love AMD Robert's response: it only affects a few out of the hundreds of reviews, but he fails to mention that most of those didn't test the same way as the ones that found the issue. I'd call that trivializing and spinning at its finest.

Maybe the bad cards have poor ASIC quality and shouldn't have been released in the first place.

Poor AMD. Maybe it was part of their master plan to fry your motherboard so that you can just upgrade to the soon-arriving Zen, with the shiny new motherboard required.

@Ryan: if I grab the 480 8GB, is there a way to force my Windows 10 64-bit to use the VRAM as its main video memory? MS has an annoying tendency to favor system RAM (at all costs) over it. Would this force the OS to use the 480's VRAM instead, or do gamers have to beg till we're on our deathbeds?