AMD has unveiled its first graphics card based on its Graphics Core Next architecture, which The Reg told you about in excruciating detail this summer. According to AMD, the card – the Radeon HD 7970 – is also the only GPU to be built using a 28-nanometer process.
"This graphics card represents a revolution in the graphics …

Yup....

Interesting times ahead (hopefully)

Well, AMD did it to Intel back in the '90s, and the whole industry (including Intel's offerings, which I presently use) is a good deal better as a result. It will be fun to see if they can kick the graphics industry up a notch or two in the same way!

GPU

Based on the cursory overview provided, it actually sounds like a nice GPU. Granted, given that the improvement is quoted in "performance/sq mm", it leads me to think there's only a small ~10% speed bump over the last-gen 6970. Even so, the feature set will be nice. If ZeroCore works as one would hope, perhaps we'll have a GPU that fares better than 120W at "idle" (rendering the desktop only). Incorporating a Turbo Boost-style mode is an interesting ploy too, as it automates what OCers have been doing manually/semi-automatically for some time: cranking the GPU into OC mode during game play, and reverting to normal/underclock for desktop use (especially considering that the GPU in some cases sucks more watts than the rest of the computer combined). Wasn't that the point of leaving the Intel HD graphics core on Sandy Bridge strapped to your monitor with a bit of Virtu magic to (hopefully, though it didn't work very well) put your graphics card in idle outside of games?
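The boost-in-games, underclock-on-the-desktop behaviour described above can be sketched as a trivial governor loop. This is purely illustrative (all clocks and the power cap are hypothetical numbers); real boost logic lives in firmware and weighs thermal and power headroom far more carefully:

```python
# Toy sketch of a dynamic GPU clock governor: boost under load when
# there is power headroom, drop to an idle clock on the desktop.
# All figures below are hypothetical, for illustration only.

IDLE_MHZ = 350    # hypothetical desktop/idle clock
BASE_MHZ = 925    # hypothetical stock 3D clock
BOOST_MHZ = 1010  # hypothetical boost clock

def pick_clock(gpu_load_pct, board_power_w, power_cap_w=250):
    """Return a target core clock given load and measured board power."""
    if gpu_load_pct < 10:                    # desktop / idle: underclock
        return IDLE_MHZ
    if board_power_w < power_cap_w * 0.9:    # headroom left: boost
        return BOOST_MHZ
    return BASE_MHZ                          # near the cap: stock clock

print(pick_clock(3, 40))    # idle desktop -> 350
print(pick_clock(95, 180))  # gaming, under the cap -> 1010
print(pick_clock(99, 240))  # gaming, near the cap -> 925
```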

meh same old same old.

AMD did it to intel in the 90's...

Well, if history repeats itself in that regard, with AMD being AMD and Nvidia being Intel, AMD will hold the lead for a while and massively publicise it; then Nvidia will take the lead and romp off into the distance and, if Nvidia continues to follow Intel's current trend, actually scale back their release schedule to let the competition catch back up.

But I don't think so. AMD have the lead, then Nvidia will overtake, then AMD will retake the lead.

Ad infinitum. Not that this is a bad thing; I just think the usual marketing guff of "a new era in graphics processing" is a little bit of a stretch.

So it does what we do now a bit faster. Whoopee-doo. How about showing us the way to do stuff in the future? That's the difference between evolutionary and revolutionary.

I'm sorry, but AMD has been on the back foot since getting to 1GHz first and showing us how to do 64-bit nicely.

Wasn't the ATI/AMD "greengrass" - or whatever it was called - graphics architecture supposed to stomp all over the competition, but didn't?

You're missing the point...

....AMD have (again) realised they can't win on how many processors you can cram into a space or how fast you can clock something, so they are doing something they are good at: looking at the whole PC and going, "Right, how can we rework this without breaking everything?" (As Intel have such a huge advantage over AMD, they usually tend to try and force their technologies through - USB3 vs Thunderbolt, anyone?)

So as with the Athlon, the x64s and now Fusion, they have gone, "OK, let's see how we can improve the whole machine. Let's get the CPU and GPU working together nicely, doing what each is best at."

Haha, no

There are already benchmarks at AnandTech.

Don't even think the GPU division is similar to the CPU division.

AMD's CPUs have been fairly uncompetitive for a while, and Bulldozer is absolutely terrible.

AMD's GPUs on the other hand, are absolutely fantastic, and represent high performance at price points that make NVIDIA often look like a bad deal. The best thing to do for a year now has been to get a 6950 2GB, unlock the shaders to 6970, and enjoy the absolute sweet spot in the GPU market for high end gaming.

The bulldozer design...

The Bulldozer design itself isn't bad; it's the socket that AMD needs to improve. The Bulldozer Opterons perform far better than the desktop versions.

I have always applauded AMD for sticking with the 940-pin AM2/3+ socket design to promote upgradability, but not this time: they need to dump that design and go with something bigger. Since they like to promote long-term product evolution, they need to design a new desktop socket with lots and lots of headroom. It's OK to build a socket with, say, 1944 pins and only use three-quarters of them for now; the material to make these sockets is cheap, but the future upgradability is invaluable. Hell, they could even make an expanded socket that accepts the current AM3+ CPUs, i.e. make the centre of the socket pin-compatible and have the socket extend out the sides to accommodate a larger chip, or something along those lines.

Also, as for the Bulldozer design: looking at this product release, I think there are great things in store for future Fusion products.

won't be surprised to see some poor benchmarks

Not because I would expect a new architecture to perform badly, but rather because it usually takes a while to iron out programming for new paradigms and new ways of working. If this is as revolutionary as claimed, it will be a while before driver writers learn to use the new instruction set properly.

What do we use it for?

I don't want to sound negative about this card - I actually get a warm glow reading about it - but I'm not really a gamer (I've played about two games in the last ten years). I get the impression that games developers write primarily for the consoles and then port an equivalent-ish version across to the PC. And consoles are less powerful than a high-end PC plus graphics card, so are games really making use of the power available in these cards? Correct me if I'm wrong.

Re: What do we use it for?

Consoles only have to shift enough pixels for a telly (not even an HD one, in the case of the Wii). PC games need to push around enough for a monitor (or two, or three), which can be a lot more than that.
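The gap is easy to put rough numbers on. The resolutions below are just common examples (a 720p telly versus typical PC monitor setups of the time), not figures from the article:

```python
# Rough per-frame pixel counts a GPU has to shift.
# Example resolutions only, for illustration.
resolutions = {
    "console @ 720p":         (1280, 720),
    "PC @ 1920x1200":         (1920, 1200),
    "3x 1920x1200 Eyefinity": (3 * 1920, 1200),
}

for name, (w, h) in resolutions.items():
    # Megapixels per frame; multiply by frame rate for fill-rate demand.
    print(f"{name}: {w * h / 1e6:.2f} Mpixels")
```

A triple-monitor PC setup is pushing roughly seven times the pixels of a 720p console frame, before frame rate even enters into it.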

Also, the main point of AMD's architecture reworking is that you can offload more general computing tasks to the GPU (which is especially important if you have a Bulldozer CPU!). I expect this sort of card will be lapped up by OpenCL users.

"I get the impression that games developers write primarily for the consoles and then port across an equivalent-ish version to the PC. And consoles are less powerful than a high-end PC + Graphics Card, so are games really making use of the power available in these cards? Correct me if I'm wrong."

Yep, you're wrong. It seems to be a popular myth that PC gaming is dying, but there are still plenty of games developed specifically for PC, along with an awful lot more that are developed for all platforms at the same time rather than just being ported later. Plus, even the worst ports usually have much, much better graphics on the PC version, even though their interface often ends up sucking donkey balls.

It's also worth bearing in mind that the current generation of consoles is obsolete and will likely be replaced within a couple of years (sooner for the Wii, but it's not really worth talking about hardware for that). Not only will that mean PCs need to keep working to stay ahead, but this sort of new technology is exactly the sort of thing that future consoles will be built out of. Remember, pretty much all improvements in computing have been made incrementally, and without the constant push for more powerful PCs, consoles would never be able to improve either.

Slightly more on topic - from the Nvidia GTX580 website linked:

"Swift. Stealthy. Deadly."

I think my money will go to AMD, since at least their hardware doesn't seem to be threatening to kill me.

sage

But will they ship it?

Same performance as a GTX 580 but costs more?

Despite its price tag and the claim to be "The World's Fastest GPU", it still falls just short of the Nvidia GTX 580 on some benchmarks. The performance tests show there isn't much difference between the two cards - they appear almost evenly matched - and yet the GTX 580 is only £300-£400, whereas this retails for £400-£500.

It also falls well below the Nvidia GTX 590, and in some tests fell below the 6990. Those are dual-GPU cards, admittedly, but that raises the question of why the price tag is so high when it isn't anywhere near as powerful.

I hope this trend doesn't continue, because if it does there is nothing "next generation" about this card when, for a cheaper price, I can get better performance from what is supposed to now be an obsolete card.

Re: Same performance as a GTX 580 but costs more?

The benchmarks it doesn't win tend to be game-specific (e.g. Metro), and it sometimes takes a while for the drivers to catch up - there are often patches specifically to get the best performance in particular games.

Generally, the overall difference between a 7970 and a 580 is similar to the difference between the 6970 and the 580, thus justifying the launch price. Note that you're comparing an old card's "settled" price with a new card's launch price (which is only 10% more than the 580's launch price); expect both to come down.

>>there is nothing "next generation" about this card

It's called bleeding edge! I think it's the only card that supports DX11.1, and it will be some time before we see the under-the-covers benefits of that (large cbuffers, for example), and it scales much better in CrossFire than the 580 does in SLI.

Sounds good, but is it powerful enough to perform a 90 degree rotation on a subset of pixels in the framebuffer, and thus allow a PLP (portrait-landscape-portrait) Eyefinity setup? Cos by all accounts that's still an insurmountable problem for both ATI and Nvidia...

is not a new metric

Perf per area directly translates into "bang per buck." All else being equal, if you can squeeze a given amount of oomph into a smaller area, you're going to improve both your yield and dies per wafer. Of course hardware people have tracked this for years, it just normally isn't presented as a marketing term because it doesn't necessarily translate to a change in sales price (might be an improvement in margins for the manufacturer instead). If price doesn't move, frankly, consumers don't care about this, but management clearly does.
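A back-of-the-envelope illustration of the yield/dies-per-wafer point. The die sizes below are hypothetical, and the formula is the standard textbook approximation (gross dies on a round wafer, with an edge-loss correction), not anything from AMD:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic approximation: candidate dies on a round wafer,
    minus an edge-loss correction term for partial dies at the rim."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Same performance target, two hypothetical die sizes (mm^2):
for area in (389.0, 352.0):
    print(f"{area} mm^2 -> ~{dies_per_wafer(area)} dies per wafer")
```

Shrinking the die even ~10% buys noticeably more dies per wafer (and smaller dies also catch fewer defects, so yield improves on top of that), which is exactly why the saving can show up in margins rather than sticker price.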

The mothership has landed

While it is a fast card for gaming, it is slower than the GTX 580 in non-gaming programs, from what I have seen in the reviews.

I am now just waiting for the next Nvidia card, and then we will see a true comparison, because at the moment there are no Nvidia PCI-E 3.0 cards to compare it to, so we cannot see how well this really performs.

Compared to the GTX 580, though, for games this is a smart choice for people who need or want to upgrade at the moment.