Both AMD and NVIDIA have introduced elaborate power-limiting systems with their latest high-end graphics cards. While the two implementations work at fundamentally different levels and nothing seems set in stone yet, it is clear that both companies are actively looking for ways to cap the power draw of their products in workloads that fall outside their intended usage scenarios, such as synthetic stress tests.
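Neither company has fully documented its mechanism, but power capping of this sort generally boils down to a feedback loop: sample board power, and if it exceeds the cap, step the clocks down until it doesn't. Here is a minimal sketch of that idea; the power model, cap, and clock numbers are made up for illustration and do not reflect either vendor's actual implementation:

```python
import random

POWER_CAP_W = 300                      # hypothetical board power limit
CLOCK_STEP_MHZ = 25                    # clock adjustment per iteration
MIN_CLOCK_MHZ, MAX_CLOCK_MHZ = 500, 900

def simulated_board_power(clock_mhz):
    """Toy model: power grows with clock, plus workload-dependent noise."""
    return 0.4 * clock_mhz + random.uniform(-20, 20)

def power_cap_step(clock_mhz):
    """One iteration of the feedback loop: throttle if over the cap."""
    power = simulated_board_power(clock_mhz)
    if power > POWER_CAP_W:
        # Over the cap: throttle down one step.
        clock_mhz = max(MIN_CLOCK_MHZ, clock_mhz - CLOCK_STEP_MHZ)
    elif power < 0.95 * POWER_CAP_W:
        # Comfortably under the cap: recover clocks.
        clock_mhz = min(MAX_CLOCK_MHZ, clock_mhz + CLOCK_STEP_MHZ)
    return clock_mhz, power

clock = MAX_CLOCK_MHZ
for _ in range(50):
    clock, power = power_cap_step(clock)
print(f"settled near {clock} MHz at ~{power:.0f} W")
```

The key point for the debate below is that the cap only bites on workloads heavy enough to push power over the limit, which in practice means stress-testing tools more often than games.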

No, I wouldn't want my card limited to 300 W or whatever they choose. I'm no expert, but I've never understood why TDP needed to be limited in the first place - why did AMD use the same GPUs for the HD 5970 as for the HD 5870 but underclock them to keep the TDP down?

I'm guessing they want to look pro-environment or whatever, but then don't limit TDP - let us choose. After all, someone running a mega-PC on solar panels/wind turbines is doing the environment more of a favour than someone running a much weaker card on conventional power.


So that the card doesn't melt your PCI-E slot or cause the PCI-E power connectors to overheat and burn.
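For reference, the PCI-E specification rates the slot itself for roughly 75 W, each 6-pin connector for 75 W, and each 8-pin connector for 150 W, which is where the 300 W figure for a typical 8-pin + 6-pin card comes from. A quick sketch of that budget arithmetic:

```python
# In-spec PCI-E power sources: the slot plus auxiliary connectors.
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def board_power_budget(six_pins=0, eight_pins=0):
    """Maximum in-spec board power for a given connector loadout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

# Typical high-end card with one 8-pin and one 6-pin connector:
print(board_power_budget(six_pins=1, eight_pins=1))  # -> 300 W
```

Draw much more than that through the same pins and you're relying on headroom the connectors were never guaranteed to have.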

As long as it doesn't affect overclocking and never kicks in while overclocked with extra voltage in a game, I don't see much of a problem. The fact that it only kicks in during stress-testing programs is kind of annoying, though. I must admit I always use ATITool to find artifacts, as it always catches them before things like FurMark and Kombustor do for me.

The only possible problem I can see is if a card is put under water and pushed to the max safe voltage with a massive overclock - could it reach the power limit then? Although I assume it would only kick in with hard volt-modding and sub-zero cooling methods.
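You can sanity-check that worry with a back-of-the-envelope estimate: dynamic power in CMOS scales roughly with frequency times voltage squared, so P ≈ P_stock · (f/f_stock) · (V/V_stock)². A rough sketch, with the stock and overclocked numbers below invented purely for illustration:

```python
def scaled_power(p_stock_w, f_stock, f_oc, v_stock, v_oc):
    """Rough dynamic-power estimate: P ~ f * V^2 (ignores leakage)."""
    return p_stock_w * (f_oc / f_stock) * (v_oc / v_stock) ** 2

# Illustrative numbers only: a 250 W stock card pushed hard under water.
print(f"{scaled_power(250, f_stock=850, f_oc=1000, v_stock=1.10, v_oc=1.30):.0f} W")
# -> ~411 W, well past a 300 W cap
```

So by this crude estimate, yes: a max-voltage watercooled overclock could plausibly hit the limit even without sub-zero cooling, since voltage enters the equation squared.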

I think it's OK in principle if the user keeps control, but I feel it will lead to them cutting corners and putting cheaper power circuitry on the PCB, which would kill overclocking, and I'm not OK with that. Unless they don't go that way, or the power components are cheap enough that there's little to save in that area, and they don't start designing too much around the limiter and shrinking the actual GPU, causing a slip in competitiveness.
So it depends on the circumstances, the knock-on behaviour, and part prices.

This would've done well as a multiple-choice poll. I could imagine someone picking all of these:

No, I know what I'm doing
No, this is just to fudge data in reviews
I'm afraid it will interfere with overclocking

I, for one, am extremely annoyed by the overclocking aspect, and in general by how dismissive others seem about how large a problem this is. If you can't stress test, you can't have confidence in your overclocks, so you shouldn't even bother overclocking. And no, playing every game for hours until you get a failure is not a practical solution.

This has been a long time coming, too. There hasn't been a be-all-end-all stress-testing program for graphics cards in years. I had a GTX 260 pass OCCT and hours of Crysis, but it crashed consistently on hour 5-6 of UT3 gameplay until I put it back to stock. Now add in power throttling nuking the few stress-testing options left, and cheaper VRMs making cards weaker for overclocking - in a few years they might as well BIOS-lock all clock speeds. We should be in an uproar about this as an enthusiast community, pleading with the more skillful programmers among us to make a modern stress-testing app.
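The core idea behind an artifact-hunting stress test is simple enough to state: run a deterministic, heavy workload repeatedly and flag any run whose result differs from the first, since an unstable overclock shows up as silent computation errors long before a crash. Here is a minimal CPU-side sketch of that consistency check using NumPy; a real tool would push the workload onto the GPU through OpenGL/CUDA/OpenCL, which is exactly the part that needs the skilled programmers the poster is asking for:

```python
import numpy as np

def heavy_workload(seed=42, size=512, iters=20):
    """Deterministic number-crunching; same inputs must give same output."""
    rng = np.random.default_rng(seed)
    a = rng.random((size, size))
    for _ in range(iters):
        a = np.tanh(a @ a.T)  # keep values bounded so runs stay comparable
    return a

reference = heavy_workload()
for run in range(1, 11):
    result = heavy_workload()
    if not np.array_equal(result, reference):
        print(f"run {run}: mismatch -> unstable, back off the overclock")
        break
else:
    print("10 runs consistent -- no errors found (not proof of stability)")
```

As the poster's GTX 260 story shows, passing any finite test is never proof of stability; a consistency check like this can only ever demonstrate instability, not rule it out.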