AMD held a press briefing today on their upcoming 8000M graphics chips, which they are calling the "second generation GCN architecture" parts. We'll have more on that in a moment, but while we were expecting (dreading) a rebranding prior to the call, it appears we were at least partially mistaken: there will be at least one completely new GPU in the 8000M lineup. (If you want additional background material, you can see the previous generation high-end 7000M announcement from April 2012 for reference.)

I'm not going to get too far into the marketing aspects, as we've heard all of this information before: AMD has improved Enduro Technology, they're continuing to improve their drivers, and APP Acceleration has a few more applications. There have been a few major titles released in the past couple of months with AMD Gaming Evolved branding (Far Cry 3 is arguably the most notable of the offerings, with Hitman: Absolution and Sleeping Dogs also scoring well amongst critics and users), and Bioshock Infinite is one upcoming release I'm looking forward to playing.

Cutting straight to the chase: AMD has released limited information on the core specifications for some of their 8000M GPUs, but they coyly note that at least one more GPU announcement is forthcoming in Q2 2013 (the 8900M, by all appearances). Today is a soft launch of high-level details, with more architectural information and product details scheduled for January 7, 2013 at CES. AMD did not share any codenames for the newly announced mobile GPUs beyond the overall family name of "Solar" for the mobile chips (replacing the outgoing "London" series), but we do know from other sources that the 384 core part is codenamed "Mars" while the larger 640 core part is codenamed "Neptune". Here are the details we have right now:

AMD Radeon HD 8500M, 8600M, 8700M, and 8800M

                    Radeon HD 8500M   Radeon HD 8600M   Radeon HD 8700M   Radeon HD 8800M
Stream Processors   384               384               384               640
Engine Clock        650MHz            775MHz            650-850MHz        650-700MHz
Memory Clock        2.0GHz/4.5GHz     2.0GHz/4.5GHz     2.0GHz/4.5GHz     4.5GHz
Memory Type         DDR3/GDDR5        DDR3/GDDR5        DDR3/GDDR5        GDDR5
FP32 GFLOPS         537               633               537-691           992
FP64 GFLOPS         33                39                33-42             62

Obviously there are a lot of missing pieces right now, but what we immediately notice is that the core count on the 8500M/8600M/8700M means we're definitely looking at a new GPU. The only other time we've seen AMD ship 384 cores is with Trinity, but that's a VLIW4 architecture, so it's not what we're seeing here. Given that the currently shipping Southern Islands chips ("London" on the mobile side) top out at 640 cores for Cape Verde, 1280 for Pitcairn, and 2048 for Tahiti, AMD has likely created a fourth SI derivative that drops down to two CU arrays, each with three CUs. (You can read more about the GCN/SI architecture in our earlier GPU coverage.) Performance is something of a wildcard with the new 384 core parts, and the choice of DDR3 or GDDR5 memory will also influence the final result. We'll find out in the coming months how the 8500M/8600M/8700M stack up against NVIDIA's midrange "GT" offerings, which interestingly also use 384 cores.
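As a sanity check on the GFLOPS figures in the table, GCN's peak FP32 rate works out to 2 FLOPS (one fused multiply-add) per stream processor per clock, with double precision at 1/16 that rate on Cape Verde-class parts. A quick sketch of the arithmetic (the boost-clock inference at the end is our own reading of the numbers, not something AMD stated):

```python
# Peak GCN throughput: each stream processor retires one FMA (2 FLOPS) per clock.
def fp32_gflops(stream_processors, clock_ghz):
    return 2 * stream_processors * clock_ghz

# Cape Verde-class GCN runs double precision at 1/16 the FP32 rate.
def fp64_gflops(stream_processors, clock_ghz):
    return fp32_gflops(stream_processors, clock_ghz) / 16

# 7870M: 640 SPs at 800MHz -> matches AMD's 1024 GFLOPS figure exactly
print(fp32_gflops(640, 0.8))   # 1024.0

# The 8500M's quoted 537 GFLOPS implies roughly 700MHz, above its 650MHz
# engine clock -- a hint that the quoted figures bake in some boost headroom
print(537 / (2 * 384))         # ~0.70 (GHz)
```

Interestingly, the 7700M/7800M table further down matches this formula at the listed engine clocks, while the 8000M numbers only line up if you assume slightly higher effective clocks.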

Also worth a quick note is that AMD is not discussing TDPs at this point in time—which is common practice for both AMD and NVIDIA. We expect the new "Mars" parts to be more power efficient than the outgoing Thames/Turks cores, thanks to the shrink to a 28nm process. However, AMD and NVIDIA typically stick to common power targets for laptops that are dictated by their OEM partners, which often means they'll play with clock speeds in order to hit a specific TDP. That's why all of the clock speeds listed in the above table have a qualifying "up to" prefix (which I omitted).

The final announced card is the one where we appear to have more of a rebrand/optimization of a previous generation chip. The 8800M has the same 640 core count as Cape Verde/7800M, only with modified clocks this time. The earlier 7800M chips could clock as high as 800MHz, so the maximum core clock is actually down a bit, but those parts only ran their memory at up to 1GHz (4GHz effective) GDDR5, where the 8800M moves to 4.5GHz. If AMD determined that memory bandwidth was more important for this particular GPU than shader performance, the new 8800M would make sense. Also note that AMD isn't including boost clock speeds in the above chart; under the right circumstances, all of the new chips can run at higher than their reference clocks.
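To put numbers on that bandwidth-versus-shader tradeoff: assuming the 8800M keeps the 128-bit memory bus of the mobile Cape Verde parts (our assumption; AMD hasn't confirmed the bus width), the move from 4.0GHz to 4.5GHz effective GDDR5 is worth about 12.5% more peak bandwidth:

```python
# Peak memory bandwidth in GB/s: effective data rate (GT/s) times
# bus width (bits), divided by 8 bits per byte.
def bandwidth_gb_s(effective_ghz, bus_width_bits=128):
    # 128-bit bus assumed, matching the mobile Cape Verde (7800M) parts
    return effective_ghz * bus_width_bits / 8

print(bandwidth_gb_s(4.0))  # 64.0 GB/s -- 7800M-class GDDR5
print(bandwidth_gb_s(4.5))  # 72.0 GB/s -- 8800M at 4.5GHz effective
```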

Radeon 7800M (left), Radeon 8800M (right)

AMD isn't calling the 8800M a rebrand, but we're looking at the same core count as Cape Verde and the same 28nm process technology, so we wouldn't expect a substantial change in performance. There's also the above chip shot as a point of reference: if the 8800M were substantially different from Cape Verde, then the images provided in AMD's slides must be incorrect, as the new and old chips look identical. Minor tweaks to power use, caching, or other elements notwithstanding, we're probably dealing with a die respin at most. But there's nothing inherently wrong with rebranding; AMD and NVIDIA have both been doing it for some time now. Don't expect every "upgraded" GPU to be better: a 7400M isn't faster than a 6700M, and likewise we expect the 7700M and 7800M to be faster options than the 384 core 8500M/8600M/8700M and competitive with the 8800M. Here's a quick recap of the same core specs as above for the current 7700M/7800M parts:

AMD Radeon HD 7700M/7800M Specifications

                    Radeon HD 7730M   Radeon HD 7750M   Radeon HD 7770M   Radeon HD 7850M   Radeon HD 7870M
Stream Processors   512               512               512               640               640
Engine Clock        575-675MHz        575MHz            675MHz            675MHz            800MHz
Memory Clock        1.8GHz            4.0GHz            4.0GHz            4.0GHz            4.0GHz
Memory Type         DDR3              GDDR5             GDDR5             GDDR5             GDDR5
FP32 GFLOPS         589-691           589               691               864               1024
FP64 GFLOPS         36.8-43.2         36.8              43.2              54                64

I’ll refrain from commenting too much more about performance of an unreleased part, but AMD indicated their 8870M should be substantially faster than NVIDIA’s current GT 650M GDDR5 (which isn’t too surprising considering clocks and core counts), and the 8770M should likewise be a healthy 20%+ bump in performance relative to the 7670M. I’d rather see comparisons with GTX 670MX and HD 7770M, respectively, but I suspect those wouldn’t be quite as impressive. Anyway, you can see AMD’s comparison charts in the complete slide deck gallery below. Availability of the new GPUs is slated for Q1 2013.


Correct me if I'm wrong, but shouldn't this be the very first AMD GPU with *full* Linux driver support from day one? I remember long ago it was promised to have the very same feature support on Windows and Linux, starting from the 8xxx family.

Like that is ever going to happen. As long as they don't share most of the code base between their drivers, this won't happen. And I am not aware of any plans to support Enduro on Linux. NVIDIA at least has a plan and a working community-made workaround.

No company in their right mind that is facing changing industry trends and experiencing financial problems has extra resources to spend on an OS that comprises just 1.25% of the entire market. Your expectations for Linux support at this point in time are completely unrealistic. The payoff for the investment is just not worth it.

Actually, with Steam coming to Linux it wouldn't be a terrible idea to bet on the gaming market share of Linux going up a small amount. If they could become the de facto GPU in the Linux space, I'd say that's worthy of putting effort in. AMD needs to be making bets right now; playing it safe isn't going to save them from bankruptcy in the future.

Think about this: NVIDIA historically has been terrible at catering to the Linux community. If AMD were to do something incredibly bold like open-sourcing their (currently) proprietary drivers, that would all but guarantee them exclusivity in the Linux world. Competing for 1.25% ain't much, but having a monopoly over 1.25%? That's a different story.

Imagine if they did that. Gamers, and other enthusiasts building their own systems from scratch, might then completely overlook Windows and use the free option. Who knows what skills the community could offer them in improving the drivers?

TRUST ME, I'm no Penguin hugger, I FIGHT with Linux, not use it.

With the arrival of Steam, the take-up of Linux would be seriously reconsidered if the drivers went open source... and given the Win 8 release (stealing my Start menu, when everyone seems to want it!), it might force MS to actually listen to customers once again.

I am surprised that there is so little porting going on from PlayStation 3 titles to Linux, given the fact that the PS3 already runs a Linux derivative.

At the same time, about 80% of big game titles coming to the PC are already ports of, or co-developments with, the Xbox 360 versions.

Of course, a Linux market share of 1.25% probably means that only 2.5% of all users care one way or another what OS they have. Whatever comes with the PC (or tablet) and runs their software is fine for most people.

PS3 games aren't built for Linux or any derivative thereof -- early PS3s could *run* Linux, and I suppose it might even be possible that the dash interface is derived from Linux, but games themselves take complete control of the system, much like an OS does. The PS3 has an optional, OpenGL-like API, but I'd wager that nearly all AAA games write commands directly to the GPU hardware, at least for the meat of the rendering. At minimum, porting a PS3 game to Linux would require substituting Linux API calls for any bare-metal coding, porting any PPC/AltiVec assembly to x86/SSE, porting SPU code to SSE or OpenCL as appropriate, substituting OpenGL calls for any GPU control statements (and similarly for audio), and finally auditing the code base for any statements that rely on how the PS3's particular CPU operates, because the behavior of certain operations (floating-point accuracy and rounding mode, shift behavior, others) can differ, and this is visible even to high-level languages like C and C++. Then you might have fundamental differences between memory models (strongly vs. weakly ordered) and cache coherency... All that just to get it working, and *then* you get to re-tune things for performance, because consoles tend to have a few other tricks up their sleeves that PCs simply don't -- there are quite a few reasons why a PS3 or 360 game can hold its own against PC games with far better hardware.