People in the first few months of a relationship like to mention anniversaries a lot, despite the (rather obvious) point that the word denotes a yearly period. "Milestone" would be more appropriate, though it does sound less glamorous and perhaps a bit pessimistic (well, in the case of relationships, anyway). Might even seem cynical.

In case it's not clear, since we have a price comparison chart at the bottom, the purpose of the prices up top is to help describe the cards. The fact that the 7970GE is listed for $500 next to the $550 7970, for example, is to make it clear that it's launching at a lower price than the 7970. It helps offer some perspective on its capabilities and the market segment it's designed for.

I doubt any AIB will actually release GE cards with reference cooling. Most likely they will be custom cooled, so the loudness of the reference card is a bit of a moot point.

It's good to see some decent driver improvements from AMD. I'm still quite happy with 7970 performance at 5760x1080, and it's enough for most games when OC'd. It would be interesting to see, though, whether the GE has improved the max OC. Most likely it's no better, and you'd be better off buying an old custom 7970 for a good price and OC'ing it to the same levels as the GE.

So we can deduce that the prior 7970 overclocks were drawing even more power, since those chips come from a lower bin.

I guess we need an overclocked power draw chart with an extended table for the AMD housefire 7970.

Any savings on card price, or a few frames at a resolution almost no one owns, will be gobbled up by your electric bill every month for years. You save 1 watt or 9 watts at extended idle, but when you game it's 100+ watts and beyond with the overclocked 7970. Maybe they should be $300 with 3 games.
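To put a rough number on that electric-bill claim, the back-of-the-envelope math is easy to sketch (the wattage delta, daily hours, and electricity rate below are assumed figures for illustration, not measurements from the review):

```python
# Rough yearly cost of a GPU's extra power draw. All inputs are assumptions.
def extra_cost_per_year(extra_watts, hours_per_day, price_per_kwh=0.12):
    """Estimated extra dollars per year for a given wattage delta."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * price_per_kwh

# ~100 W extra while gaming 2 hours a day:
print(round(extra_cost_per_year(100, 2), 2))   # ~8.76 dollars/year
# ~9 W saved at extended idle the other 22 hours a day:
print(round(extra_cost_per_year(9, 22), 2))    # ~8.67 dollars/year
```

Interestingly, under these assumed habits the idle savings and the gaming penalty roughly cancel out; the bill only tips heavily one way for someone gaming many hours a day.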

Well, it works both ways. You won't always be gaming, and in addition there's all that compute hardware which, if properly harnessed, would save you money over competing solutions because you'd get the job done quicker. It used to be pointless to consider using anything for compute that wasn't a Quadro, Tesla or even a FirePro, but those days are coming to an end.

Having a 7970 will make sense for compute if that's your bag (there's a reason for the die size plus the extra memory and bus width), but this time NVIDIA enjoys a performance-per-watt advantage which might go unchallenged for a while. Unless, of course, that extra hardware on the 7970 is properly leveraged; future games, perhaps?

Nvidia isn't holding GK110 up its sleeve waiting for something. It is unfinished in the first place, and there's no manufacturing capacity to produce such a large chip. Nvidia still struggles to fix the GK104 design to get good yields. GK110 would be impractical to produce since it is twice as big and as such will have at least four times lower yield. The server market is not only much more profitable, it also operates on a contract basis. Nvidia will start to produce Tesla K20 in Q4 2012.

IF(?) a desktop card based on GK110 hits the market, it won't be sooner than Q1 2013. And that is not something you can really change.

"Of course this isn’t the first time we’ve had a hot & loud card on our hands – historically it happens to NVIDIA a lot – but when NVIDIA gets hot & loud they bring the performance necessary to match it. Such was the case with the GTX 480, a notably loud card that also had a 15% performance advantage on AMD’s flagship. AMD has no such performance advantage here, and that makes the 7970GE’s power consumption and noise much harder to justify even with a “performance at any cost” philosophy."

Very true; however, the power consumption and heat difference between the 5870 and the 480 was definitely more pronounced.

The 680 is an incredible card, no doubt about it. It may not win in some titles, but it's hardly anywhere near unplayable either. AMD being right there at the very high end is fantastic, but unless titles truly make use of GCN's compute ability, the extra power and noise are going to be hard to swallow. Still, I'd own either. :P

Tom's added a custom cooler (Gelid Icy Vision-A) to theirs, which reduced heat and noise noticeably (about 6 degrees C and 7-8 dB). Still, it would be cheaper to get the vanilla 7970, add the same cooling solution, and clock it to the same levels; that way, you'd end up with a GHz Edition-clocked card which is cooler and quieter for about the same price as the real thing, albeit lacking the new boost feature.

Would it be possible to drop the 1920x1200 resolution for tests? 16:10 is dead, and 1080p has been the standard for high definition on PC monitors for at least 4 years now; it's more than time to catch up with reality... Sorry for the rant, I'm probably nitpicking anyway...

I went over this already with the AMD fanboys. For literally YEARS they have had harpy fits over five and ten dollar card pricing differences, declaring AMD the price/perf queen.

Then I pointed out NVIDIA wins at 1920x1080 by 17+% and only by 10+% at 1920x1200 - so all of a sudden they ALL had 1920x1200 monitors, they were not rare, and they all had hundreds of extra dollars of cash to blow on them, and had done so at no extra cost to themselves; and everyone else (who also has those) of course also chooses such monitors because they all love them the mostest...

Then I gave them Newegg counts - might as well call it 100 to 1 on availability if we are to keep to their own hyperactive price/perf harpying - and the lowest-priced higher-rez monitor was $50 more, which COST NOTHING because it helps AMD, of course...

I pointed out that Anand noted in the prior article that it's an ~11% pixel difference, so they were told to calculate the frame rate difference... (that keeps AMD up there in the scores, winning a few they wouldn't otherwise).

1920x1080: "The GeForce GTX 680 is on average 17.61% more efficient than the Radeon 7970. Here, the performance differences in favor of the GTX 680 are even greater."

So they ALL have a 1920x1200, and they're easily available, the most common, cheap, and they look great, and most of them have like 2 or 3 of those, and it was no expense; or if it was, they're happy to pay it for the red harpy-from-Hades card.

Your comparison article is more than a bit flawed. The PCLab results, in particular, have been massively updated since that article. It looks like they've edited the original article, which is a bit odd. Still, AMD goes from losing badly in a few cases to not losing so badly after all, as the results in this article go to show. They don't displace the 680 as the best gaming card of the moment, but it certainly narrows the gap (even if the GHz Edition didn't exist).

Also, without a clear idea of specs and settings, how can you just grab results for a given resolution from four or five different sites for each card, add them up, and proclaim a winner? I could run a comparison between a 680 and a 7970 in a given title with the former using FXAA and the latter using 8xMSAA; it doesn't mean it's a good comparison. I could run Crysis 2 without any AA or AF at all at a given resolution on one card and then turn every bell and whistle on for the other - without the playing field being even, it's simply invalid. Take each review on its own merits, because at least then you can be sure of the test environment.

As for 1200p monitors... sure, they're more expensive, but it doesn't mean people don't have them. You're just bitter because you got the wrong end of the stick by saying nobody owned 1200p monitors then got slapped down by a bunch of 1200p monitor owners. Regardless, if you're upset that NVIDIA suddenly loses performance as you ramp up the vertical resolution, how is that AMD's fault? Did it also occur to you that people with money to blow on $500 graphics cards might actually own good monitors as well? I bet there are some people here with 680s who are rocking on 1200p monitors - are you going to rag (or shall I say "rage"?) on them, too?

If you play on a 1080p panel then that's your prerogative, but considering the power of the 670/680/7970, I'd consider that a waste.

16:10 snobs are seriously getting out-of-touch when they start claiming that their aspect ratio gives better color reproduction. There are plenty of high-quality 1080p IPS monitors on the market -- I'm using one.

That being said, it's not really important whether it's benchmarked at x1080 or x1200. There is a negligible difference in the number of pixels being drawn (one of the reasons I roll my eyes at 16:10 snobs). If you're using a 1080p monitor, just add anywhere from 0.5 to 2 FPS to the average FPS results at x1200.
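For what it's worth, the pixel math behind that estimate is easy to check (a sketch; the 60 FPS baseline is an arbitrary illustration, not a benchmark result):

```python
# Pixel counts of the two contested resolutions.
px_1200 = 1920 * 1200   # 2,304,000 pixels
px_1080 = 1920 * 1080   # 2,073,600 pixels

ratio = px_1200 / px_1080   # 1.111... -> x1200 draws ~11.1% more pixels

# Upper bound: if a game were purely fill-rate bound, a 60 FPS result
# measured at x1200 would scale to about 60 * ratio at x1080.
print(round(60 * ratio, 1))   # 66.7
```

The fact that the real-world delta tends to be closer to 0.5-2 FPS than to that ~11% upper bound suggests most of these games are not purely pixel-bound at these settings.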

Disclaimer: I have nothing *against* 16:10. All other things being equal, I'd choose 16:10 over 16:9. However, with 16:9 monitors being so much cheaper, I can't justify paying a huge premium for a measly 120 lines of vertical resolution. If you're willing to pay for it, great, but kindly don't pretend that doing so somehow makes you superior.

They can justify it; they are the AMD fanboys. EVERY dollar counts when it comes to card pricing; five or ten bucks makes AMD the WINNER!!!!!!!! and the greatest card value ever for enthusiasts!!!!!!!!!!!

But then, moments later, the nearly unavailable and much more expensive monitor is all theirs, clutched to their bosom (moments before, they harped that AMD wins in high-rez triple screen no matter the data) - now suddenly they all have a 1920x1200 IPS or whatever...

Here's why...

1920x1080: "The GeForce GTX 680 is on average 17.61% more efficient than the Radeon 7970. Here, the performance differences in favor of the GTX 680 are even greater."

1920x1200: "The GeForce GTX 680 is on average 10.14% more efficient than the Radeon 7970. At the slightly higher resolution, the new card appears to perform slightly worse (compared to 1920x1080)."

That's an over 7-point performance difference overall... NVIDIA still kicks AMD's lousy second-placer, but it's not SO embarrassing as at 17%+...
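Taking the two quoted averages at face value (they are that site's numbers, not mine), the gap can be read two ways:

```python
lead_1080 = 17.61   # quoted average GTX 680 lead at 1920x1080, in percent
lead_1200 = 10.14   # quoted average lead at 1920x1200, in percent

# Simple difference, in percentage points:
points = lead_1080 - lead_1200
print(round(points, 2))          # 7.47

# Relative change in the two cards' speed ratio between resolutions:
relative = (1 + lead_1080 / 100) / (1 + lead_1200 / 100) - 1
print(round(relative * 100, 1))  # 6.8
```

Either reading supports the same point: the 680's average lead shrinks noticeably going from x1080 to x1200.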

See, now they all love 1920x1200 and, as hyper-harpies, will DEMAND that Anand keep the monitor rez as is...

In the end it will just be Anand "listening" to its fan base... R O F L

Dude, they COULD just run their 1920x1200 monitors at 1920x1080 for the benches - it's not hard at all - but you know... AMD doesn't look better than crappy as heck then...

I couldn't care less which it is, as long as the image is good. I do think you're downplaying the framerate advantage of 1080p over 1200p, though, as we're talking an extra 11% screen area going from one to the other.

1200p used to be far more common, and Apple is one of the manufacturers keeping it alive (along with 4:3 ratios).

You said that you may use some extra settings for Skyrim... How about using a popular extra-large texture pack upgrade? It would be more punishing to use those larger texture maps, and in Skyrim, like Oblivion before it, those texture packs are quite popular among users of more powerful graphics cards!

"There’s a silver lining on this one, though. Ahead of this review, I let AMD know about our acoustic concerns and the company claims that most partner boards will employ third-party cooling, not its reference configuration."

So noise is not an issue at all. Cards like Sapphire's Dual-X, Gigabyte's WindForce and PowerColor's PCS+ have good cooler designs. Power draw will be higher, but the Radeon HD 7970 GHz Edition frankly more than makes up for that with its performance at 1600p and in multi-monitor setups.

If you are on a 1080p monitor and want perf/watt, price/perf and a cooler setup, go for a custom GTX 670. For the rest who have 1600p or multi-monitor setups, frankly there is only one option: a custom Radeon HD 7970 card or a custom Radeon HD 7970 GHz Edition.

These features would require a new BIOS. As far as I'm aware, AMD does not support flashing their cards with a new BIOS. Anyway, there's nothing there that you can't achieve via normal overclocking (aside from the slightly better chip binning).

Did you try reaching AMD to comment on the rather low performance ceiling in Skyrim? It looks as if their drivers are way more CPU-hungry than NVIDIA's, and that's why they are getting capped at a lower rate. Maybe that's what usually hinders performance in other CPU-limited titles like WoW?

Ryan, as you mentioned Dirt Showdown will take the place of Dirt 3 in your test suite, I would like to suggest that a few more games be changed. Max Payne 3 and Alan Wake are good candidates: maybe Crysis Warhead could be replaced by Alan Wake, and Portal 2 by Max Payne 3. Another very demanding game which could find a place is The Witcher 2: Enhanced Edition. Focusing the test suite on games released in the last 12 months helps prospective buyers and gamers decide based on performance in the recent titles they will most probably be playing.

This time they didn't moan and complain about fairness, because AMD got beat anyway - and they wouldn't have if it had won, which it did not, I must point out, feeling the overwhelming need to state it again, and again.

AMD LIED with their false advertising about this card - their hot, loud, slow housefire...

AMD is an evil corporate monster who lies to the little children they sell their products to (and to the soccer mommies who actually pay the card price to keep the lies going - granted, they pay less than Abu Dhabi oil sheiks).

It is ALSO more efficient. How clueless are you still? Why do clueless Clouseaus respond?

Look, if you ever decide to click the link and take a gander for an hour or two (my estimate of how long it would take for you to form a rounded opinion of the massive database of the most popular reviewers concerning these two cards), don't get back to me.

A gigantic thank you would be nice but I'm not expecting it.

Maybe silverblue needs a friend too; then you can spew name-calling together, and giggle. That is likely the extent of your mental capacities, so have at it.

Yet amusingly, you failed to point out the error of the author's ways before somebody here pulled you up on it... I doubt that efficiency is a word that can be mis-translated; the author just used the wrong term. The very fact that you quoted two lines with the same incorrect term proves that you were happy enough to treat it - as is so often in your case - as factual. If anything, the 680 is probably something in the region of 10-15% more efficient per frame than the 7970 based off the collated results on that article, notwithstanding the fact that drivers have been significantly revised for both architectures since then.

You also stated that the article was '"their opinion" though, so "it's not wrong"' but you slate everybody else's conflicting opinions as wrong. Am I the only person seeing an issue with this approach?

I'm really confused as to why you even bother to visit here except to be a class-A troll, and I'm going to take some of my own advice and flat out ignore you from now on unless you actually say something of any use. Ordinarily, I wouldn't tell others what to do but on this occasion, I implore them to follow suit. We should put you in a room with Beenthere just for the hell of it.

You're gone in the head, dude. The article, which you still obviously never looked at (as it would crush your AMD fan heart), collates reviews from around the web, including this site's.

It's not an opinion, it's FACTS, as best we can get them, in one BIG mathematically deduced pile, and the word is meant to be FRAME RATES, which of course is all you AMD fanboys claim you care about - unless of course you were spewing about Eyefinity without 3 monitors and the $100 adapter that took a year and a half to come down to $35 not being available...

In this case, the housefire AMD card sucks the low end of everything, an epic fail on every single metric, and AMD has crap for compute software support, so they lose there as well, just like any card loses when its driver software sucks in games. Worse yet, AMD often takes years to fix anything, if ever... then drops support.

A 10-12% perf lead. So it's not as simple as you think. The fact of the matter is the NVIDIA GTX 680 is losing the majority of games: Deus Ex, Alan Wake, Anno 2070, Crysis 2, Witcher 2, Witcher 2 EE, Dirt 3, Skyrim, Dirt Showdown, Crysis Warhead, Metro 2033. Also, the margins in some of the games are very big: Dirt Showdown (50%), Crysis Warhead (15-25% depending on resolution), Metro 2033 (20%), Witcher 2 (20%), Anno (15%). NVIDIA clearly wins Shogun 2. BF3 is not a consistent win when you compare many reviews. Even Batman: AC, which runs better on the GTX 680 with FXAA, runs faster on the HD 7970 GHz Edition with 8x MSAA. So it's clearly a case of you being in denial mode; go on, keep ridiculing others if that makes you happy.
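One way to turn a pile of per-game margins like that into a single defensible number, rather than arguing title by title, is a geometric mean of the performance ratios. A minimal sketch, using the margins claimed above as illustrative inputs (the Shogun 2 figure is my assumption; none of these are measured data):

```python
import math

# 7970 GHz Edition vs GTX 680 performance ratios, per game.
# Values follow the rough margins claimed above; 1.50 = "50% faster".
# Shogun 2's 0.90 is an assumed figure for a clear NVIDIA win.
ratios = [1.50, 1.20, 1.20, 1.20, 1.15, 0.90]

# Geometric mean: the appropriate average for ratios, since it treats
# a 20% win and a 20% loss symmetrically and damps single outliers.
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(round(geo_mean, 2))   # 1.18
```

Note how the 50% Dirt Showdown outlier gets damped: the aggregate lead comes out well under the biggest single-game margin, which is exactly why single-title comparisons mislead.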

From your link, Dirt Showdown, where you have just claimed a 50% lead for 7970...

"While the GeForce GTX 680 equals the Radeon HD 7970 at 1080p without advanced lighting, its performance dives when that option is activated; NVIDIA did not get access to this patch soon enough to propose specific optimizations. It will probably take one to two weeks for this to be corrected."

Let's see: 0%, or a tie, = AMD ahead by 50%!!!! according to raghu.

LOL - I guess it's all in your heads; not even the reviewer's own words can rattle the fantasies out of AMD's out-of-control fanboys.

The 7970GE isn't a card to buy with the reference cooling solution, obviously. With custom cooling solutions, the noise/temps won't be an issue. It's doubtful you'll see many, if any, manufacturers even release this card with the reference cooler.

In the normal clock tests you test 5760x1200, which is a very good thing. Could you not do the same resolution in your overclock tests as well? I would really like to see what triple-monitor performance is like when overclocked.

Another thing I was wondering: does running triple monitors at 5760x1200 increase the power usage of the card or make it run hotter?

1) Obviously it's a bit too late for that in this article, but we can certainly look into doing that for future articles.

2) Generally speaking, no. Unless a card is already operating well under its PT limit (in which case the game is likely CPU-bound), increasing the resolution doesn't significantly shift the power consumption. The actual display controllers are effectively "free" at these power levels.

If any of these people had been paying any attention at all in between articles (meaning checking on the net), they would already know it takes about 1250 MHz on the 7970 core to equal an overclocked 680.

1000 doesn't do it. 1050, nope. 1150, nay.

Hexus already proved that at the same core speed the 7970 ends up behind. That's already been linked in replies... so here it is, because the AMD fans will descend, calling names and declaring me a liar (though they likely saw it before and, being the fanboys they are, just can't remember, as most of their brains use the delete key a lot).

I think we can find a 7970 GHz Edition BIOS, put it on a regular 7970, and achieve the same performance. I also assume that with a non-reference 7970, like my Gigabyte WindForce, you can get a lower temperature and 680-like performance. I just hope that BIOS is universal.

So, since the 7970 GE is essentially a tweaked, OCed 7970, why not include a factory-overclocked NVIDIA 680 for fairness? There's a whole lot of headroom on those 680s as well that these benches leave untouched and unrepresented.

Standard graph colouring on Anandtech is that the current product is highlighted in green, specific comparison products in red. The graphs on page 3 for driver updates aren't a standard graph for video card reviews.

Compute performance is a big factor in deciding a purchase as well, and I am disappointed not to see any mention of this in the conclusion. AMD blows NVIDIA out of the water when it comes to compute performance, and this should not be taken lightly, seeing as games are implementing more and more compute capabilities, among many other uses. Compute performance has been growing at a rate higher than ever, and it is very disappointing to see no mention of this in Anand's conclusion.

I use AutoCAD for work all the time, but I also enjoy playing games, and with a workload like this, AMD's GPUs provide a huge advantage over NVIDIA simply because NVIDIA's GK104 compute performance is nowhere near that of AMD's. AMD is the obvious choice for someone like me.

As far as the noise and temps go, I personally feel that if you're spending $500 on a GPU, and obviously thousands on your system, there is no reason not to spend a couple hundred on water cooling. Water cooling completely eliminates any concern about temps and noise, which should make AMD's card the clear choice. The same goes for power consumption: if you're spending thousands on a system, there is no reason to worry about a couple of extra dollars a month on your bill. This is just how I see it. Now don't get me wrong, NVIDIA has a great card, but for gaming only. AMD offers the best of both worlds, gaming and compute, and to me this makes the 7000 series the clear winner.

" CAD Autodesk with plug-ins are exclusive on Cuda cores Nvidia cards. Going crossfire 7970 will not change that from 5850. Better off go for GTX580."

" The RADEON HD 7000 series will work with Autodesk Autocad and Revitt applications. However, we recommend using the Firepro card as it has full support for the applications you are using as it has the certified drivers. For the list of compatible certified video cards, please visit http://support.amd.com/us/gpudownload/fire/certifi... "

NVIDIA works out of the box, AMD does NOT - you must spend thousands on a FirePro.

Welcome to reality, the real one that AMD fanboys never travel in.

The CAD you refer to is AutoCAD by Autodesk, and it doesn't have any CUDA-dedicated plugins. You are thinking of 3ds Max's rendering method, called iray, and even that is fairly new, from a 2011 release.

There isn't anything else that uses CUDA processors on a dedicated scale unless it's a 3rd-party program or plugin. But not in AutoCAD; AutoCAD barely needs anything. So get it straight.

R-E-V-I-T (with one T) requires more, as there's a rendering engine built in, not to mention it's mostly worked in as a 3D application, unlike AutoCAD, which is mostly used in 2D.

Going Crossfire won't help, because most mid-range and high-end single GPUs (AMD and NVIDIA) will be fine for ANY surface modeling and/or 3D rendering. If you use the application right, you can increase performance instead of increasing the card count.

All Autodesk products really work with any GPU; there are supported or "certified" drivers and cards, usually only "CAD" cards like FirePros or Quadros.

NVIDIA's and AMD's cards both work right out of the box; it just depends on the add-in board partner and build quality, NVIDIA fanboy. If you're going to state facts, then get your facts straight where it matters, instead of making your self-satisfied cute remarks.

Do more research, or don't state something you know nothing about. I have supported CAD and engineering environments and the applications they use for 8 years now, and before that had 5 more years of IT support experience.

Please put up a graph of the 680 overclocked to its maximum potential versus this card at its maximum OC; that would be a different story, I believe, though I'm not sure. Please do it, because your 680 review has no OC testing. :/

- AMD's boost assumes the stock heatsink. How is this affected by custom/3rd-party heatsinks? Will the chip think it's melting, whereas in reality it's cruising along just fine?

- A simple fix would be to read out the actual temperature diode(s) already present within the chip. Sure, it's not deterministic... but AMD could let users switch to this mode for better accuracy.

- AMD could implement a calibration routine in the control panel to adjust the digital temperature estimation to the actual heatsink present; this might avoid the problem altogether.

- Overvolting just to reach 1.05 GHz? I don't think this is necessary. Actually, I think AMD has been generously overvolting most CPUs and some GPUs in recent years. Some calibration against the actual chip's capability would be nice as well - i.e., test whether MY GPU really needs more voltage to reach the boost clock.

- 4-digit product numbers, and they only fully use 2 of them, plus the 3rd to a limited extent (only 2 states to distinguish: 5 and 7). This is ridiculous! The numbers are there to indicate performance!!!

- Bring out cheaper 1.5 GB versions for us number crunchers.

- Bring out an HD 7960 with approximately the same number of shaders as the HD 7950, but ~1 GHz clock speeds. Most chips should easily do this... and AMD could sell the same chip for more, since it would be faster.

How can you write a review like this, specifically to test one card against another, and then only overclock one of them in the "OC gaming performance" section? Push the GTX 680 as far as you can too; otherwise those results are completely meaningless for comparison.

I think that's the way people do every review. Ordinarily I'd recommend looking back at the 680 review, but as we've seen with the new Catalyst drivers, performance can vary over a relatively short period of time. So a future article such as "AMD's Radeon 7970 and NVIDIA's GTX 680: How Much Difference Can A Few Months Make?" might be very nice *hint hint*. ;)

For simplicity, the OC data should be put up on this graph for reference purposes and ease of use. Who on earth wants to trawl through several reviews and collect this data manually? At the very least, include a link to the previous article that covers the NVIDIA 680 and provides the OC scores.

Also, instead of a conclusion write-up, why not have a results summary showing all the performed tests and the cards that were used as references, and provide a tabular view clearly showing the top runner of each test (or the top 3)?

If you're going to use WinZip to game, and support evil proprietary corruption in software by AMD while claiming to back open source, great; hypocrisy and lying to stone-cold-stupid AMD fans for years works well! Fluid sim: not a game. DX11 DC: not a game. SmallLux: not a game.

Oops! "Empty" suddenly applies to AMD when it wins any benchmarks that are never real-world for end users.

I guess empty crap no one uses, fraudulently declared a "win", sways the dark hollow spaces in the hearts and minds of the little AMD fans. It's sad.

Hello, I have a 3960X, a DX79SI, and an ASUS HD7970-DC2T-3GD5 graphics card, and I am not able to boot the computer. When I boot, the two-digit LED on the motherboard shows "00" (double zero), the screen shows "0_" and stops, but I can reboot the computer using Ctrl+Alt+Del and I am able to operate the BIOS. That means the computer is not hung.

I have the XFX 7970 GHz Edition and I really am not sure what the big deal is with the noise. My card is not that loud. Honestly: power control settings at +20%, GPU core at 1175 and memory at 1600, completely stable. The games I play are at 1080p with everything maxed, and my GPU rarely gets above 70C, which is only around 40% fan speed. At 40% fan speed I literally cannot hear the GPU fan unless I have the speakers completely turned off, and even then I have to listen carefully to discern that the noise I hear is coming from the video card. In my gaming experience, GPU fan noise is absolutely NOT an issue. When I'm running synthetic GPU benchmarking apps like FurMark, the card will ramp up to around 70% fan speed and you can hear it, but even then it is really not an issue.

I am using the latest Catalyst beta driver, 12.11, which has added a 15% increase in BF3 FPS and a 10% increase in Dirt 3, basically taking NVIDIA's crown in virtually every game. I do lots of video transcoding, and the OpenCL domination this card produces is amazing. Yesterday I transcoded a 1080p, 5.3 GB .mkv file to .mp4 with Nero 11; using AMD's APP acceleration codec, the transcode took 20 minutes, compared to 60 minutes when I used Nero's .mp4 codec at the same output settings. During the transcoding the GPU stays at (I believe) 300 MHz at 20% average load, and hovers around 111F with the fan at something like 5%. I love this card.

My computer has three states: idle 60% of the time, gaming and transcoding 40% of the time. At idle, with AMD's ZeroCore, this video card uses 10 watts less than NVIDIA's 680; in gaming it's beating the 680 in almost every game now; and when it comes to encoding, OpenCL and OpenGL, it's basically a blowout, averaging 75% more than the 680. If you're an NVIDIA fan (I formerly was) and OpenCL is important to you, go with the Fermi cards, because in most GPGPU processing they outperform the Kepler cards.

If you question anything I've said, do some Google homework. Catalyst 12.11 actually does what they say; I can attest to it, at least when it comes to encoding and playing BF3 and Dirt 3.