The question is what is the 580? A rebranded 480 with an unlocked gf100 and a new cooler?

The chip is basically a GF100 with full-speed FP16/64-bit texture filtering and improved Z-culling technology (mostly geared toward improving tessellation performance). It also sports around 6% more CUDA cores, since no SMs are disabled, unlike the GF100.

These features slightly improve perf per clock vs GF100 in most situations.

Of course the card itself also sports higher clocks and a different style of cooler that seems to be quieter, and perhaps performs better. It has a few million fewer transistors, I'd imagine because they eliminated some Tesla-only features from this chip, but that's just a guess.

Oh, and it also has this power-limiting technology described in this post.

Why is this? AMD believes that the instruction streams generated by OCCT and FurMark are entirely unrealistic. They try to hit everything at once, and this is something that they don’t believe a game or even a GPGPU application would ever do. For this reason these programs are held in low regard by AMD, and in our discussions with them they referred to them as “power viruses”, a term that’s normally associated with malware. We don’t agree with the terminology, but in our testing we can’t disagree with AMD about the realism of their load – we can’t find anything that generates the same kind of loads as OCCT and FurMark.

Yup. Thanks.

Quote:

This brings us to Cypress. For Cypress, AMD has implemented a hardware solution to the VRM problem, by dedicating a very small portion of Cypress’s die to a monitoring chip. In this case the job of the monitor is to continually monitor the VRMs for dangerous conditions. Should the VRMs end up in a critical state, the monitor will immediately throttle back the card by one PowerPlay level. The card will continue operating at this level until the VRMs are back to safe levels, at which point the monitor will allow the card to go back to the requested performance level. In the case of a stressful program, this can continue to go back and forth as the VRMs permit.

By implementing this at the hardware level, Cypress cards are fully protected against all possible overcurrent situations, so that it’s not possible for any program (OCCT, FurMark, or otherwise) to damage the hardware by generating too high of a load. This also means that the protections at the driver level are not needed, and we’ve confirmed with AMD that the 5870 is allowed to run to the point where it maxes out or where overcurrent protection kicks in.

The Legion FurMark results tell me that the 5970 is doing exactly what it was designed to do. There are protection measures in place that kick in when thermal or power levels exceed maximum permitted levels, so the card was taking the correct actions to protect both itself and the motherboard.

Now, I would also like to note that this particular benchmark (FurMark) was coded primarily as a stress test. It is certainly not selective in how it utilizes the GPU, as the application basically lights up the entire card to create or emulate maximum power draw scenarios. No real world app or game does that for an extended period of time. FurMark was lighting up everything, with average power draws exceeding the requirements of any standard app by 20-40 percent.
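The throttling behavior the quotes describe is a simple hysteresis loop: drop back one PowerPlay level whenever the VRMs hit a critical condition, hold there, then step back up toward the requested level once they recover. Here's a minimal sketch of that logic in Python; the level clocks and temperature thresholds are made-up numbers for illustration, not AMD's actual values.

```python
# Illustrative sketch (NOT AMD's firmware): the hysteresis loop described in
# the quoted text. On each monitoring tick, throttle back one PowerPlay level
# if the VRMs are in a critical state; step back up once they are safe again.

POWERPLAY_CLOCKS = [400, 700, 850]  # hypothetical core clocks (MHz), low -> high
VRM_CRITICAL_TEMP = 110.0           # hypothetical "dangerous" threshold (deg C)
VRM_SAFE_TEMP = 100.0               # hypothetical "safe again" threshold (deg C)

def next_level(current: int, requested: int, vrm_temp: float) -> int:
    """Return the PowerPlay level index for the next monitoring tick."""
    if vrm_temp >= VRM_CRITICAL_TEMP and current > 0:
        return current - 1          # critical: throttle back by one level
    if vrm_temp <= VRM_SAFE_TEMP and current < requested:
        return current + 1          # VRMs recovered: step back up
    return current                  # in the hysteresis band: hold steady

def simulate(vrm_temps, requested=2):
    """Feed a sequence of VRM temperature samples through the monitor."""
    level = requested
    trace = []
    for temp in vrm_temps:
        level = next_level(level, requested, temp)
        trace.append(level)
    return trace
```

Under a FurMark-like sustained load, `simulate([105, 112, 112, 99, 99])` bounces down to the lowest level and back up (`[2, 1, 0, 1, 2]`), which is the back-and-forth behavior the quote says the monitor allows "as the VRMs permit".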

It is perfectly fine for Nvidia to do this for esoteric apps like Furmark. I do have a problem if the over-current protection is triggered in games though; that means the card was engineered inadequately.

Perhaps future reviews should test power consumption with and without over-current protection to give readers a better understanding of the product.

Neither company designs its cards (especially the top-end ones) to run FurMark. I have no doubt they have the engineering skill to achieve that if they wanted to, but it would come at the expense of consumers, since the price of the cards would be driven higher. I doubt many end users would want that.

I think neither AMD nor Nvidia expects such a game in the foreseeable future. Not only would a game have to surpass the limit, it would need to sustain that power draw for "an extended period of time". A few seconds? Maybe. Ten minutes or more? Hardly.
And even if one did, Nvidia says the limit can be adjusted with an updated driver.