VGA Power Consumption

For power consumption tests, Benchmark Reviews utilizes an 80-PLUS GOLD certified OCZ Z-Series Gold 850W PSU, model OCZZ850. This power supply unit has been tested by Chroma System Solutions and delivers over 90% typical efficiency. To measure isolated video card power consumption, Benchmark Reviews uses the Kill-A-Watt EZ (model P4460) power meter made by P3 International.

A baseline measurement is taken without any video card installed in our test computer system, which is allowed to boot into Windows 7 and rest idle at the login screen before power consumption is recorded. Once the baseline reading has been taken, the graphics card is installed and the system is again booted into Windows and left idle at the login screen before taking the idle reading. Our final loaded power consumption reading is taken with the video card running a stress test using FurMark. Below is a chart with the isolated video card power consumption (measured system total minus the baseline without a video card) displayed in Watts for each specified test product:
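The subtraction described above can be sketched in a few lines. This is a minimal illustration of the method, not the reviewer's actual tooling, and the meter readings below are hypothetical values chosen for the example:

```python
# Minimal sketch of the isolated-power calculation described above.
# The Kill-A-Watt readings used here are hypothetical, not measured values.

def isolated_power(total_with_card_w, baseline_without_card_w):
    """Isolated video card draw = system total with the card installed
    minus the baseline measured with no video card present."""
    return total_with_card_w - baseline_without_card_w

# Hypothetical readings for one test pass (idle, then FurMark load):
idle_w = isolated_power(total_with_card_w=98.0, baseline_without_card_w=74.0)
load_w = isolated_power(total_with_card_w=307.0, baseline_without_card_w=74.0)
print(idle_w, load_w)  # 24.0 233.0
```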

Video Card Power Consumption by Benchmark Reviews

VGA Product Description (sorted by combined total power)    Idle Power   Loaded Power

NVIDIA GeForce GTX 480 SLI Set                                  82 W        655 W
NVIDIA GeForce GTX 590 Reference Design                         53 W        396 W
ATI Radeon HD 4870 X2 Reference Design                         100 W        320 W
AMD Radeon HD 6990 Reference Design                             46 W        350 W
NVIDIA GeForce GTX 295 Reference Design                         74 W        302 W
ASUS GeForce GTX 480 Reference Design                           39 W        315 W
ATI Radeon HD 5970 Reference Design                             48 W        299 W
NVIDIA GeForce GTX 690 Reference Design                         25 W        321 W
ATI Radeon HD 4850 CrossFireX Set                              123 W        210 W
ATI Radeon HD 4890 Reference Design                             65 W        268 W
AMD Radeon HD 7970 Reference Design                             21 W        311 W
NVIDIA GeForce GTX 470 Reference Design                         42 W        278 W
NVIDIA GeForce GTX 580 Reference Design                         31 W        246 W
NVIDIA GeForce GTX 570 Reference Design                         31 W        241 W
ATI Radeon HD 5870 Reference Design                             25 W        240 W
ATI Radeon HD 6970 Reference Design                             24 W        233 W
NVIDIA GeForce GTX 465 Reference Design                         36 W        219 W
NVIDIA GeForce GTX 680 Reference Design                         14 W        243 W
Sapphire Radeon HD 4850 X2 11139-00-40R                         73 W        180 W
NVIDIA GeForce 9800 GX2 Reference Design                        85 W        186 W
NVIDIA GeForce GTX 780 Reference Design                         10 W        275 W
NVIDIA GeForce GTX 770 Reference Design                          9 W        256 W
NVIDIA GeForce GTX 280 Reference Design                         35 W        225 W
NVIDIA GeForce GTX 260 (216) Reference Design                   42 W        203 W
ATI Radeon HD 4870 Reference Design                             58 W        166 W
NVIDIA GeForce GTX 560 Ti Reference Design                      17 W        199 W
NVIDIA GeForce GTX 460 Reference Design                         18 W        167 W
AMD Radeon HD 6870 Reference Design                             20 W        162 W
NVIDIA GeForce GTX 670 Reference Design                         14 W        167 W
ATI Radeon HD 5850 Reference Design                             24 W        157 W
NVIDIA GeForce GTX 650 Ti BOOST Reference Design                 8 W        164 W
AMD Radeon HD 6850 Reference Design                             20 W        139 W
NVIDIA GeForce 8800 GT Reference Design                         31 W        133 W
ATI Radeon HD 4770 RV740 GDDR5 Reference Design                 37 W        120 W
ATI Radeon HD 5770 Reference Design                             16 W        122 W
NVIDIA GeForce GTS 450 Reference Design                         22 W        115 W
NVIDIA GeForce GTX 650 Ti Reference Design                      12 W        112 W
ATI Radeon HD 4670 Reference Design                              9 W         70 W

* Results are accurate to within +/- 5W.
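The "combined total power" ordering used in the chart can be reproduced from the two readings per card. A small sketch with only a few rows from the table included:

```python
# Rank cards by combined (idle + loaded) draw, as in the chart above.
# Only a handful of rows from the table are included here.
cards = [
    ("NVIDIA GeForce GTX 480 SLI Set", 82, 655),
    ("NVIDIA GeForce GTX 590 Reference Design", 53, 396),
    ("ATI Radeon HD 6970 Reference Design", 24, 233),
    ("ATI Radeon HD 4670 Reference Design", 9, 70),
]

# Highest combined total first.
ranked = sorted(cards, key=lambda c: c[1] + c[2], reverse=True)
for name, idle_w, load_w in ranked:
    print(f"{name}: {idle_w + load_w} W combined")
```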

As we previously mentioned in the Radeon HD 6970 Temperatures section, the Cayman GPU was originally designed for 32nm but was ultimately built at 40nm. This increased the die size and raised the operating temperature to levels AMD isn't generally known for. Judging from the chart of results above, however, the 40nm Cayman GPU may not be the power monster we anticipated. The AMD Radeon HD 6970 requires one eight-pin and one six-pin PCI-E power connection for proper operation. Resting at idle with no GPU load, the Radeon HD 6970 consumed only 24W of electricity. Compensating for a small margin of error, this falls roughly in line with the idle power draw of the ATI Radeon HD 5870. These noteworthy idle results were actually 7W less than the competing GeForce GTX 570 video card, though not quite as efficient as the 20W Radeon HD 6870. But what about under full 3D load?

Once 3D applications begin to demand power from the Cayman GPU, electrical power consumption climbs to 233 watts. Measured at full throttle under FurMark's 3D torture load, these results were 8W lower than the GeForce GTX 570 (241W maximum power draw) and 7W less than the ATI Radeon HD 5870. Overall, the 40nm Cayman GPU seems fairly efficient, especially considering the 2.64 billion transistors it feeds. Graphical performance more or less matched the GeForce GTX 570, so it's nice to see the Radeon HD 6970 shaving a few watts off the power consumption.

Wow! After all the hype the 6970 is pretty disappointing. That's why competition is great. Without AMD we wouldn't have the GTX 580. Now AMD had better get themselves in gear and come up with another champ, or Nvidia will get lazy.

Ohhhhhh...... The disappointment. Can't wait to hear the AMD camp's "It's all about price/performance" and "But our cards aren't as hot as a Fermi Nuclear Reactor in breach mode". Except that according to the review, the 6970 is about as hot under load as the old GTX 480.... Looks like the AMD camp is going to have the temperatures they were joking about on the original Fermi come back and bite them on the ass, so they will have to revise their comments this time around. Nevertheless, competition in this industry is great for everyone, as it always results in aggressive pricing, so a big welcome thanks goes out to both AMD and Nvidia in this round for giving all of us such powerful cards at more affordable prices...... In the end, isn't that what we all want?

They used Catalyst 10.11 for the HD 6970. They should have benched with 10.12 at least. Notice how HD 5870 stomps the HD 6970 in Battleforge? Certainly an obvious sign of an unoptimized driver for the HD 6970. A correct optimized driver can make all the difference.

I did some research, and it appears that you are right. Sorry about that, and thank you for the correction.

Catalyst 10.11 seems to be an unoptimized driver for HD 6970. Would you agree, or do you believe that it is working at "full capacity"? If it is working at "full capacity", then why does the HD 6970 lose so badly to HD 5870 in Battleforge? It doesn't seem to make sense. Perhaps when AMD releases Catalyst 10.13 (fully supporting HD 6900 series) we'll see an appreciable improvement? Let's hope!

Here is AMD's response, received only an hour before launch, which I will post to the article in a moment:

"We are aware that there are some abnormal performance results in BattleForge with our new AMD Radeon HD 6900 Series graphics card. Keep in mind this is a new VLIW4 shader architecture and we are still fine tuning the shader compilation. We will be able to post a hotfix for Battleforge shortly that will provide a noticeable increase in performance."

Disappointing? It's marginally faster than the GTX570! The games where it looked 20-30% behind are obviously driver issues. Also, after all what hype? Which hype was that? Virtually everybody I read was saying they were waiting to see what kind of performance this would deliver, and it produces better performance than even the unexpected NVidia card released just days before, while consuming less power. How's that disappointing? NVidiot, are we?

Besides, I think you're a little glossy-eyed to think the 6970 is "marginally faster" than the GTX 570 simply because of "driver issues". Even more so when you consider how Battlefield: Bad Company 2 and BattleForge are both AMD-sponsored game titles. Even 3dMark Vantage was co-developed with AMD/ATI.

Once drivers are more mature, you can expect to get some performance back. But will it be 20-30%? That might be asking a bit much.

For the prices, the new cards are decent. Obviously the 580's temps were taken at the throttled speed, so I don't think they will be as low as the reported temp during normal use. But I gotta agree: had I needed to upgrade, now is really a great time to upgrade or build a new computer, since prices are very competitive.

I for one will not accept any form of the "it's a driver issue" argument. If that were the case, AMD would never discount their card at launch at the levels we are seeing. They knew how it would perform with the new shader architecture and they priced it accordingly. The change to the new architecture doesn't automatically guarantee that it will be better than the 5870 all the time.... The 5870 was and still is a great card.

The way I see this card:
- Performs slightly better than the GTX 570 (save for obvious beta-driver performance problems, such as in Battleforge)
- Draws less power than its competitor, the GTX 570
- Is as quiet as the GTX 580 (according to TPU)
- Is priced slightly (~$20) higher than the GTX 570 (going by egg prices)

I'm not sure what there is to be disappointed about. The card does what it's supposed to.

Guys, I agree with both arguments, and being an avid AMD fan because of the price/performance AMD has to offer, I still think it may be too early to tell what the true performance of this card can be. I agree a 20% to 30% increase based on drivers is a bit much, but it is possible; some Nvidia drivers in the past clearly stated so-and-so % performance gains in a given game and so on. And maybe a better driver like the 10.12 will increase Battleforge etc. by 15% or slightly more, matching the card's price/performance against the 580/570. Don't get me wrong, I am trying to be as impartial as possible; hell, I had a GTX 280 and loved it. Ever since the 5xxx series I have had only NV cards, even back when NV and AMD were friends. Point is, both the GTX 5xx and HD 6xxx series are sick cards, and personal preference will be the determining factor.

Some other review sites also gave synthetic test result numbers that surprised and confused me a bit. The 6970 pumped out around 18k while the 570/580 shelled out around 25-28k. Synthetic as it may be, other sites indicate the 6970 tops out close to the 570 and comes close to the 580 in DX11 synthetics and games. It would seem that older DX10 apps lag with this card, and just maybe a newer driver will provide a 15% or greater increase. IDC though, FPS are still awesome with this card, as is CrossFire scaling.

I thought 6970 was gonna be only 5% behind GTX 580 - turns out to be 20% behind, with even GTX 570 beating it. No wonder AMD has been FORCED (by their own FAILURE) to slash prices at launch.

Add to that the fact 6970 is hotter than 5870, much hotter than GTX 580 and only slightly cooler than GTX 480; then add much higher power consumption and higher noise level than 5870 -

All of this adds up to one thing: if you own AMD shares, sell now, as Nvidia is set to stomp on AMD at least until we see 28nm. AMD's fortunes are headed south for the rest of 40nm, especially given that Nvidia has the dual-card GTX 595 waiting in the wings to hand the 6990 its hat in January.

Looks like there's gonna be a GTX 570 in my foreseeable future. I'm pretty disappointed in ATI after all the hype about this card, and the million and two people telling everyone to save their money and wait for it. I wonder whose prices will drop first, the 570 or the 6970.

Great review as always, but I think there's a typo on the Closer Look page, fourth paragraph: "This video card measures slightly shorter than the 11" long Radeon HD 5870, but longer than the 9.75" Radeon HD 6970."

Slightly shorter than itself, eh? Haha. I'm glad I'm not the only one typo'ing AMD's new model numbers every now and then.

Given the results, the Radeon 6970 is targeting the same market space as the GTX 570. This will push a price drop of the latter, which is very good news for everyone interested in upgrading to a CUDA-supporting card (which is all 3D designers willing to speed up rendering time by using iRay technology in 3DS Max or other 3D software).

The 6970 was getting 30-50 FPS on ULTRA, all high, with all AA etc. on Catalyst 10.10 and/or 10.11 on my system. The minute I slapped on the 10.12a hotfix driver I started getting 60-170 FPS!!!!! Nvidia has lost, in my opinion!

I'm looking for a replacement card, so I'm doing some 'layman research'. One thing caught my attention: ATI's best still uses a 256-bit-wide memory bus, like it did on the good ol' 9700 (if my memory serves well). This 256-bit bus limits the theoretical memory bandwidth of any ATI card, because on this 6970 they are already using the fastest(?) GDDR5 chips (6 GHz effective clock). Nvidia, on the other hand, uses only 4 and 5 GHz chips, but their memory bus width is variable (scaled with the GPU's capabilities/performance); they have used anything between (64?) 128 and 512 bits, and their 580 gets more memory bandwidth from slower RAM chips than ATI/AMD. Until now, the speed of new GPUs has always depended on the evolution of RAM chips (NV 6800 -> 8/9800, 280/285: bandwidth doubled, performance too). Future ATI chips will need more bus width or they will be bottlenecked even with the fastest RAM. Nvidia, on the other hand, if they built a '580' with the 285's 512-bit-wide bus plus Hynix's best 6 GHz effective RAM chips (by the 400/500 specs, an NV card with 2 GB of RAM would use a 512-bit bus), would have a whopping 384 GB/s of memory bandwidth.
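The bandwidth arithmetic in the comment above is easy to check: theoretical GDDR5 bandwidth is the bus width in bits, divided by 8 to get bytes, times the effective data rate per pin. A quick sketch (the clock figures follow the comment's own numbers, so treat them as approximations rather than confirmed specs):

```python
def gddr5_bandwidth_gbs(bus_width_bits, effective_gbps_per_pin):
    """Theoretical memory bandwidth in GB/s: (bus bits / 8) * Gbps per pin."""
    return bus_width_bits / 8 * effective_gbps_per_pin

# 256-bit bus with 6 Gbps effective chips (the comment's HD 6970 figures):
print(gddr5_bandwidth_gbs(256, 6.0))  # 192.0
# Hypothetical 512-bit bus with the same 6 Gbps chips:
print(gddr5_bandwidth_gbs(512, 6.0))  # 384.0
```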

I've noticed on several graphics cards lately, that it is not possible to crank up the memory speed to the stock rating on the GDDR5. The board crashes way before the rated speed for the memory chips. That tells me that the memory controller in the GPU is the weak link.

So, yes, a wider data path is sometimes a cheaper and more reliable way to get bandwidth, particularly when you are on the hairy edge...

I think the lowest, average, and highest FPS for each of the graphics cards should be benched, as I have read that the 6000 series, although not leading in average FPS, does excel in minimum FPS, which is often the most important in gaming.

There is no consistency in the benchmark applications for reporting MIN-AVG-MAX FPS. What I really like is the graph that is provided by benchmarks like METRO 2033 and the old Far Cry 2 benchmarks. Sometimes there is a tiny stutter in the game that takes the MIN number down, and it's really not the fault of the video card, as the game does it on every card. It doesn't really matter if one card dips down to 8 FPS and another dips to 10 FPS, the user experience will be exactly the same.
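The stutter effect described above is easy to demonstrate: one single-frame dip drags the raw MIN way down while barely moving the average, which is why a low-percentile figure often tracks the actual experience better. A rough sketch with made-up frame-rate samples (the percentile method here is deliberately crude and for illustration only):

```python
def summarize(fps_samples):
    """Return min/avg/max plus a crude 1st-percentile low."""
    s = sorted(fps_samples)
    p1_index = int(len(s) * 0.01)  # crude: skip the bottom 1% of samples
    return {
        "min": s[0],
        "avg": round(sum(s) / len(s), 1),
        "max": s[-1],
        "p1_low": s[p1_index],
    }

# 100 samples: a steady 60 FPS run with one stutter frame at 8 FPS.
samples = [60] * 99 + [8]
print(summarize(samples))  # {'min': 8, 'avg': 59.5, 'max': 60, 'p1_low': 60}
```

The raw MIN reports 8 FPS, yet the 1%-low and average barely register the stutter, matching the point that two cards dipping to 8 FPS and 10 FPS for one frame feel exactly the same.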