The Sapphire RX 580 Nitro+ Limited Edition is a highly overclocked custom variant of the just-launched AMD Radeon RX 580. Performance now beats the NVIDIA GTX 1060, and Sapphire has bundled two user-replaceable semi-transparent fans with blue LEDs in the box, in case you want a little more bling.

Consumes 100 W more to achieve 5% more performance than a 1060. Guess which chip you won't see in a laptop... again.
Not to mention that it's trading blows with an equally priced custom 1060, and those cards have been out for a while.

Wanted to say I'm disappointed, but my expectations were extremely low to begin with. And boy were those expectations met.

From a first glimpse it seems like a good mid-range GPU (and it is a good mid-range GPU).
BUT....
1) Extremely high power consumption compared to the competition (GTX 1060) or even its predecessor (RX 480).
2) It's 5% ahead of the reference GTX 1060 on average, which means it will only be equal to the aftermarket GTX 1060s.
3) On top of point 2), if we add the fact that NVIDIA will be launching a refreshed line of GTX 1060s with faster memory, then......
The conclusions that follow from points 1), 2), and 3) aren't very encouraging for AMD.
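The efficiency argument in point 1) boils down to simple perf-per-watt arithmetic. Here's a quick sketch; the wattage and performance figures below are illustrative placeholders, not measured values, so plug in the review's actual numbers:

```python
# Rough perf-per-watt comparison. The numbers here are assumptions for
# illustration (105 = "5% faster" from the comment; board powers are
# placeholders), not measurements from the review.

def perf_per_watt(relative_perf, power_w):
    """Relative performance points per watt of board power."""
    return relative_perf / power_w

rx580 = perf_per_watt(relative_perf=105, power_w=220)    # assumed draw
gtx1060 = perf_per_watt(relative_perf=100, power_w=120)  # assumed draw

print(f"RX 580:   {rx580:.3f} perf/W")
print(f"GTX 1060: {gtx1060:.3f} perf/W")
print(f"Efficiency ratio: {gtx1060 / rx580:.2f}x in favour of the 1060")
```

With those placeholder figures the 1060 comes out roughly 1.7x more efficient, which is why a small lead in raw framerate doesn't save the comparison.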

I don't believe the power consumption is an issue with this class of card unless you're running a tiny ITX build or something. It's not 290/390 territory.

It is a nice selling point to say it's more efficient, but in reality I don't think people care when the card is under gaming load. People will whine about it on the internet, but it shouldn't be a deal-breaker for someone who just wants to game. The price is my main issue in terms of making it competitive with the 1060; NVIDIA can easily afford to drop the price knowing the 1070 and beyond are basically untouchable until Vega.

The power consumption is not an issue if the difference is up to 40 W or so. But when you consume almost twice the power to achieve the same performance as your competitor... you've got to ask yourself where you went wrong.

I'm just wondering how you got such low numbers in the 1070 Doom 1080p benchmark. I constantly hit the frame limit with mine on Nightmare settings throughout the entire game (yes, I finished it, lol).

It's funny how people obsess over power consumption all of a sudden. I only see that as a problem if it affects actual heat and overall noise (which is usually connected with heat). Considering this card is one of the quietest cards, who really cares? And it overclocks decently; I mean, 1480 MHz is quite a nice clock for Polaris. And now someone will throw a micro-ATX or ITX build in my face. How many people seriously use those compared to normal mid or high towers? They are a tiny minority no one cares about.

Design choices. AMD has kept the compute resources intact over GCN revisions, while Nvidia was able to dial some of it back, along with the memory subsystem, when they introduced the new compression algorithms with Maxwell. If you go back and look at Kepler and earlier, power vs. performance was darn close from both companies. AMD hasn't found an equivalent "magic bullet" to Nvidia's delta compression, so they're stuck with primarily process gains to this point.
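The bandwidth savings from delta compression can be illustrated with a toy example. The real Maxwell/Pascal scheme is proprietary and far more sophisticated, so treat this as a sketch of the principle only: neighbouring pixels in a framebuffer tile are usually similar, so storing small deltas instead of absolute values takes fewer bits:

```python
# Toy sketch of delta colour compression (the principle only, not
# NVIDIA's actual algorithm): store the first pixel absolute and the
# rest as deltas to their predecessor, then count the bits needed.

def delta_encode(tile):
    """First value absolute, remaining values as deltas."""
    return [tile[0]] + [b - a for a, b in zip(tile, tile[1:])]

def bits_needed(value):
    """Bits to represent a signed delta."""
    return max(1, abs(value).bit_length() + 1)

tile = [200, 201, 201, 203, 202, 202, 204, 203]  # smooth gradient: tiny deltas
encoded = delta_encode(tile)

raw_bits = len(tile) * 8  # 8 bpp uncompressed
packed_bits = 8 + sum(bits_needed(d) for d in encoded[1:])
print(f"raw: {raw_bits} bits, delta-packed: ~{packed_bits} bits")
```

On smooth gradients (most of a rendered frame) the deltas are tiny and the tile shrinks to a fraction of its raw size, which is where the memory-bandwidth headroom comes from.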

Yeah, AMD promised but didn't deliver. It's not exactly an effect of the new process, but they supposedly added intermediate power states that should be active in multi-monitor, for example.

If you look at the temps page, in the table at the end, you'll see that the card still runs full memory clocks in multi-monitor. Compare that to Blu-ray playback, where the clocks are lower and look like they reflect what AMD intended to do.

The power draw of this card is high, but that should be no surprise, as it's clocked much higher than Polaris 10's optimum:

The reference card ran at 0.980–1.070 V; this one runs at 1.125 V. It surprises me, however, that they only increased the GPU clock but kept the RAM speed at the same level. I thought RAM speed was the bottleneck for Polaris 10...
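The clock and voltage bumps compound: dynamic power in CMOS logic scales roughly with f × V². A quick back-of-the-envelope estimate, using the voltages quoted above (the 1340 MHz reference clock is an assumption for illustration):

```python
# Back-of-the-envelope dynamic power scaling: P is roughly proportional
# to f * V^2. Voltages are from the comment above; the reference clock
# of 1340 MHz is an illustrative assumption.

def dynamic_power_ratio(f1, v1, f2, v2):
    """Relative dynamic power going from (f1, v1) to (f2, v2)."""
    return (f2 * v2**2) / (f1 * v1**2)

ratio = dynamic_power_ratio(f1=1340, v1=1.070, f2=1480, v2=1.125)
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power for the GPU core")
```

Even this rough model puts the core at roughly a fifth more dynamic power, before accounting for leakage, which also rises with voltage, so the high board power is exactly what you'd expect from those settings.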