The new laptop processor promises "double digit" performance and battery life improvements.

AMD has unveiled some of the first details of its Carrizo system-on-chip. The processor is the latest iteration of AMD's accelerated processing unit (APU) concept, pairing a CPU with a tightly integrated GPU.

The CPU portion of Carrizo is AMD's latest iteration of its Bulldozer family. This version is called Excavator, with Carrizo having two Excavator modules providing four cores. Not much is currently known about Excavator beyond its larger caches: the level 1 data cache has been doubled in size to 32KB per core. Compared to the previous iteration of the design, Steamroller, performance is up about 5 percent at the same clock speed.

The GPU portion has similarly been updated; Carrizo uses 8 cores using the Tonga design, which made its debut in the Radeon R9 285. This is version 1.2 of AMD's Graphics Core Next architecture. It supports the forthcoming DirectX 12 and AMD's own Mantle API, as well as Heterogeneous System Architecture (HSA) 1.0.

HSA is designed to allow developers to more easily divide computation between different kinds of processor. The common case will be spreading work between a CPU and a GPU (though in principle, HSA could include, say, cryptographic accelerators or other kinds of special-purpose compute unit). Traditional GPU-based computation requires data to be copied back and forth between the CPU and the GPU, making it expensive to split workloads between the two; to run efficiently, a task needs to run almost entirely on the CPU, or almost entirely on the GPU. With HSA, fine-grained interleaving is much more efficient, affording greater flexibility in how tasks are spread between both processing units.
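
As a rough illustration of the difference, here is a minimal sketch in C++. The gpu_* functions are hypothetical stand-ins, defined as stubs so the snippet compiles; they are not a real HSA or driver API, and only the data-movement pattern is the point.

    // Hypothetical device interface: stubs standing in for a real runtime.
    #include <cstddef>
    #include <cstring>
    #include <vector>

    void* gpu_alloc(std::size_t bytes) { return ::operator new(bytes); }
    void  gpu_free(void* p) { ::operator delete(p); }
    void  gpu_copy(void* dst, const void* src, std::size_t n) { std::memcpy(dst, src, n); }
    void  gpu_run_kernel(float* data, std::size_t n) { (void)data; (void)n; } // device work

    // Traditional discrete-GPU offload: every handoff pays for a copy across
    // the bus, so offloading only wins if the work stays on the GPU a while.
    void offload_with_copies(std::vector<float>& host) {
        const std::size_t bytes = host.size() * sizeof(float);
        float* dev = static_cast<float*>(gpu_alloc(bytes));
        gpu_copy(dev, host.data(), bytes);      // host -> device
        gpu_run_kernel(dev, host.size());
        gpu_copy(host.data(), dev, bytes);      // device -> host
        gpu_free(dev);
    }

    // HSA-style shared virtual memory: CPU and GPU dereference the same
    // allocation, so the kernel runs on the data in place and the CPU picks
    // up the result immediately; fine-grained interleaving becomes cheap.
    void offload_shared(std::vector<float>& host) {
        gpu_run_kernel(host.data(), host.size()); // no copies, same pointers
    }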

The GPU also has a new codec module, providing support for H.265 decoding in hardware and enough power to transcode nine 1080p streams simultaneously.

The big revelations about Carrizo, however, concerned its layout and power efficiency. As with its predecessor, Kaveri, Carrizo is built on a 28nm process. Carrizo uses a lot more transistors than Kaveri, too: 3.1 billion, compared to 2.4 billion. But the total die size is approximately the same, and the Excavator cores are about 23 percent smaller than Kaveri's Steamroller cores.

With Carrizo, AMD has switched from a high-performance transistor design to a high-density one. As the name implies, this increases the density of the transistors, thereby reducing the die size. Perhaps less expected is the implication for performance: at low power levels, the high-density design supports slightly higher frequencies. With larger power budgets the situation is reversed (the high-performance design supports higher frequencies in a given power envelope), but for a mobile part such as Carrizo, the high-density design has the edge.

These advantages extend to the GPU: for a given power level, the high-density design supports a 10 percent higher frequency, and for a given frequency, it can cut power by as much as 20 percent.

Carrizo also includes a new power management feature called Adaptive Voltage and Frequency Scaling (AVFS). To run stably at a given frequency, a processor needs a certain minimum voltage. The power regulators cannot deliver perfectly smooth voltage; there is noise, and occasionally the voltage will drop below the level needed to support a particular frequency. To account for this, excess voltage is used, typically around 10 percent more than needed to run at the desired frequency. This means that even if there's a brief dip, the chip is still getting enough juice. However, this wastes power: power consumption is proportional to voltage squared, so a 10 percent voltage excess implies roughly a 20 percent power excess.
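
To put numbers on that claim (a back-of-the-envelope check using the article's 10 percent figure, not an AMD-published calculation): with switched capacitance $C$ and frequency $f$ held fixed, dynamic power scales with the square of the supply voltage, so

    \[
    P_{\text{dyn}} \propto C V^{2} f
    \qquad\Rightarrow\qquad
    \frac{P_{\text{guardband}}}{P_{\text{nominal}}} = \left(\frac{1.10\,V}{V}\right)^{2} = 1.21,
    \]

about 21 percent extra power for a 10 percent voltage margin.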

With AVFS, Carrizo can eliminate much of that excess power. Through careful monitoring of the power, frequency, temperature, and voltage, Carrizo can near-instantaneously reduce the operating frequency of the chip whenever a brief voltage drop is detected. This means that the chip no longer needs the over-voltage just to handle these momentary glitches. AMD claims that this can provide power savings of up to 10 percent in the GPU and almost 20 percent in the CPU, above and beyond the reductions from the switch to the high-density design.
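
Reduced to a sketch, the AVFS idea is: watch the supply, and stretch the clock whenever it droops. The sensor and clock-control interfaces below are invented for illustration (Carrizo does the equivalent in dedicated on-die hardware within a handful of cycles), but the control flow captures the trade: tolerate dips by slowing down briefly instead of carrying a standing voltage guardband.

    // Toy model of adaptive clocking on voltage droop; all interfaces are
    // hypothetical stand-ins, not AMD's implementation.
    constexpr double kDroopThresholdVolts = 0.97;  // assumed droop threshold

    double read_supply_voltage() { return 1.0; }   // stub: on-die voltage sensor
    void   set_clock_divider(int divider) { (void)divider; } // stub: clock control

    void avfs_tick() {
        if (read_supply_voltage() < kDroopThresholdVolts) {
            set_clock_divider(2);  // stretch the clock through the dip
        } else {
            set_clock_divider(1);  // full speed at the leaner voltage
        }
    }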

Carrizo has some 500 frequency-sensing modules scattered around the chip, reporting to 10 AVFS control modules. These allow each individual chip's silicon to be characterized to determine the necessary voltage at a fine-grained level, so the frequency can be pushed as hard as possible for a given voltage and temperature.

Altogether, AMD reckons that Carrizo will deliver "double digit" increases in both performance and battery life. While we don't expect Carrizo to surpass Intel's Broadwell designs, especially at the high end, the improvements in Carrizo could well make AMD's processors a compelling option in the sub-$500 segment when they eventually reach the market.

AMD claims that this can provide power savings of up to 10 percent in the GPU and almost 20 percent in the CPU, above and beyond the reductions from the switch to the high-density design.

The speed-reduction seems like a smart move; a drop in supply power would offer MANY cycles in which to slow down the clock.

But I wonder: is this really state-of-the-art, to simply blow off excess voltage resistively? Aren't power supplies very carefully switched to maintain steady voltages in capacitors or inductors, so that simply less power is actually drawn from the battery?

Compared to the previous iteration of the design, Steamroller, performance is up about 5 percent at the same clock speed.

Woah. I'm floo... no wait I'm not.

As for power reduction, has AMD implemented, or is it planning to implement, the same changes Intel made with Haswell to give it such great battery life?

Edit: from AnandTech (wth, Ars?): In our pre-briefing call, AMD confirmed that the Southbridge/FCH is no longer a separate chip and is being moved onto the CPU from its previously separate package. In fact, not only is the south bridge going to be part of the CPU with Carrizo, but it's being fully integrated into the APU die itself. This is a first for AMD, and even Intel by comparison still uses two separate dies on the same package for their similar Broadwell-Y/U processors. As a result, AMD explained, this advances the Southbridge from the older 65nm/45nm processes to 28nm and 28SHP, reducing power consumption and operating voltage.

Anandtech mentioned the significance of additional graphics units. It certainly suggests that the power savings are very real. I still question the net result at such a low power level, but it's encouraging to see the progress at the same node. One has to wonder what AMD could do with a good process shrink.

But I wonder: is this really state-of-the-art, to simply blow off excess voltage resistively? Aren't power supplies very carefully switched to maintain steady voltages in capacitors or inductors, so that simply less power is actually drawn from the battery?

Yes, power supplies very carefully try to keep the voltage as stable as possible. But no matter how good the power regulator is, it isn't perfect: there's a small ripple in its output, and it doesn't respond instantaneously to changing current requirements. On top of that, because the current drawn by the logic cells is constantly (violently) changing over time, the voltage drop between the regulator and the logic cells in the chip also changes.

You can alleviate this by adding more decoupling capacitance, but outside the chip that only goes so far, and inside the chip it takes precious area.

You can further improve this by putting the power regulators in the chip itself, as Intel has done with Haswell.

But even after all that filtering and regulation is accounted for, you need to design the logic to operate correctly in a "worst case" scenario, where a dip in the voltage makes it slower than normal.
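
To make the physics here concrete (a textbook relation, not specific to Carrizo): the droop seen at the logic is set by the impedance of the whole delivery path,

    \[
    \Delta V \;\approx\; I \cdot R_{\text{path}} \;+\; L_{\text{path}} \frac{dI}{dt},
    \]

where $R_{\text{path}}$ and $L_{\text{path}}$ are the resistance and inductance between the regulator and the logic cells. Decoupling capacitors help because they supply the fast $dI/dt$ transients locally, which is also why the closer they sit to the load, the better they work.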

Also, the PSU has to deal with unpredictability on the high side, or the load and balance of the house circuit, and the power factor from the utility itself. It's amazing how well it all can work, really. Even more amazing that we can keep making it better.

After Kaveri was a no-show, I can't really be excited about this. (No laptops used the FX-7600P, one or two used the FX-7500, and the other ULV chips were seen only on rare occasion.) EVEN if it can manage higher clock rates, EVEN if it does have double-digit increases in battery life, it doesn't do me much good if I cannot buy it.

I also worry about the Excavator cores. My current laptop has an A10-4600M. The GPU is 40% faster on paper, and in GPU-only tests, than the HD 4600 from Intel's Haswell, and yet in actual games the Intel chip is slightly faster, because AMD's CPU cores are so anemic. Battery life is a similar story, with the i5 lasting longer on a 48 Wh battery than the A10 can on a 62 Wh battery (the comparison laptop was a Lenovo L440 with an i5-4300M). Yes, the Intel machine is a 14-inch, but that cannot fully explain why it gets 4 hours of battery life while the AMD 15-inch only gets 1:30, even when it was brand new, on a LARGER battery. Both were cleanly wiped as well, so there was no bloatware. AMD is simply too inefficient.

With Broadwell doing the same with a 15-watt TDP that Haswell does in 37 watts, a 37-watt Broadwell may be too much for AMD. AMD will need to pull a rabbit out of a hat with Carrizo if they expect OEMs to use it, when battery life is a major selling point and, in real use, AMD offers zero advantage outside of a lower price.

Like probably everyone else, I'm rooting for AMD because competition is good. However, I can't help but acknowledge that we won't see any real competition for a while. On process alone, we have AMD releasing a 28nm chip (not their first at 28nm, but still) while Intel has been selling a 14nm one.

I'm looking to build a new PC, but I might have to hold off for a couple of months to see what AMD has planned in the desktop APU space going forward, as well as what's going on with Intel.

I'm curious, and I'm sure the answer is out there, I just haven't looked yet, but I'm hoping that with the AMD APU you can get a secondary GPU and have it run as a CrossFire setup. The new series of GPUs from AMD is looking quite powerful.

Like probably everyone else, I'm rooting for AMD because competition is good. However, I can't help but acknowledge that we won't see any real competition for a while. On process alone, we have AMD releasing a 28nm chip (not their first at 28nm, but still) while Intel has been selling a 14nm one.

Even if it's just on the low end, that'll be better than Intel-only, especially in terms of graphics capability.

Even if it's just on the low end, that'll be better than Intel-only, especially in terms of graphics capability.

The market for PCs that need a good GPU, but not a discrete one, is very small. Either you do 3D games (and need a discrete GPU) or you don't. "3D games from 10 yrs ago" is really, really niche.

The market for PCs that need a good GPU, but not a discrete one, is very small. Either you do 3D games (and need a discrete GPU) or you don't. "3D games from 10 yrs ago" is really, really niche.

With DX12, though, won't that help lower-end gaming? Especially with the app store?

And I was happily playing Star Trek Online on my old A4 laptop, which didn't have a discrete GPU.

I'd also like to point out that what you need, what you think you need, and what you want are usually different things for consumers. So having a Radeon built in might help sell an APU system.

Even if it's just on the low end, that'll be better than Intel-only, especially in terms of graphics capability.

Except for the fact that with every tick and every tock of the Intel product line, the integrated video gets closer and closer to AMD's performance, and that's without having ATI.

I have been very happy with my first-gen A8-based laptop. It let me game at 1080p on the go relatively cheaply, so long as I did not want all the bells and whistles turned on.

Now that I am in the market for a new laptop for this purpose, I hope they get these to market soon. I would really love one with a nice 1080p screen and good construction. I do not want one of those ultrabook-like systems that sacrifices everything for thinness. So, a nice fat laptop with a solid build and a good screen for less than 800 USD. Is that too much to ask for?

"Carrizo uses 8 cores using the Tonga design, which made its debut in the Radeon R9 285."

People seem to flip-flop between calling a shader a core and calling a cluster of them a core. I think the proper term would probably be compute unit in this instance, since AMD would also call the 285 a 1,792 stream processor or shader core design.

As an aside, a similar concept confuses on the Intel side: everyone cites 40 EUs for Iris Pro, for instance, but then compares that to the shader cores in Nvidia and AMD chips, which would lead one to believe that a mere 40 cores compete with chips that have far more. But each of those 40 EUs has 8 of what AMD and Nvidia would call a core, so 320 shaders.

"Carrizo uses 8 cores using the Tonga design, which made its debut in the Radeon R9 285."

People seem to flip-flop between calling a shader a core and calling a cluster of them a core. I think the proper term would probably be compute unit in this instance, since AMD would also call the 285 a 1,792 stream processor or shader core design.

Careful: they are calling them Compute Cores, not shader cores. Compute Cores have their own resources, local to a grouping of shader cores. They can also run processes independent of other Compute Cores, whereas the shader cores within that group all run identical code, and I'm pretty sure they all still have to run the exact same instructions, not just the same code.

Also, the PSU has to deal with unpredictability on the high side, or the load and balance of the house circuit, and the power factor from the utility itself. It's amazing how well it all can work, really. Even more amazing that we can keep making it better.

The regulators at the CPU are on the mobo. This is not the off-line switcher.

We could really use a "next gen" in power-supply ferrite. Faster switching yields better regulation and lower ripple. Advances in the controller chips and power FETs have progressed faster than improvements in the magnetic material.

The regulators at the CPU are on the mobo. This is not the off-line switcher.

Haswell has its own regulators on the CPU as well. Not quite sure how that works at that level, but they do have "Haswell Ready" power supplies now.

Woah. I'm floo... no wait I'm not.

As for power reduction, has AMD implemented, or is it planning to implement, the same changes Intel made with Haswell to give it such great battery life?

Edit: from AnandTech (wth, Ars?): In our pre-briefing call, AMD confirmed that the Southbridge/FCH is no longer a separate chip and is being moved onto the CPU from its previously separate package. In fact, not only is the south bridge going to be part of the CPU with Carrizo, but it's being fully integrated into the APU die itself. This is a first for AMD, and even Intel by comparison still uses two separate dies on the same package for their similar Broadwell-Y/U processors. As a result, AMD explained, this advances the Southbridge from the older 65nm/45nm processes to 28nm and 28SHP, reducing power consumption and operating voltage.

Unfortunately, Carrizo is competing with Atom's performance and power levels.

The market for PCs that need a good GPU, but not a discrete one, is very small. Either you do 3D games (and need a discrete GPU) or you don't. "3D games from 10 yrs ago" is really, really niche.

Except for Minecraft, which isn't niche and runs on an integrated Intel GPU.

Haswell has its own regulators on the CPU as well. Not quite sure how that works at that level, but they do have "Haswell Ready" power supplies now.

The off-line switchers had a problem with no-load conditions, which the newer Intel chips can reach. The Haswell Ready supplies don't have this problem.

I'm still holding out on getting an AMD graphics card. I built a new desktop 18 months ago and am still using onboard graphics. The lack of really new graphics products from Nvidia and AMD is annoying. The press makes it sound like nothing really new will show up until Q3 at the earliest.

Unfortunately, Carrizo is competing with Atom's performance and power levels.

Oof... I'll probably go and cry in a corner later on tonight. Isn't Intel subsidizing Atom just so OEMs will use it? A defensive move against ARM? That can't be making things any better for AMD.

The speed-reduction seems like a smart move; a drop in supply power would offer MANY cycles in which to slow down the clock.

But I wonder: is this really state-of-the-art, to simply blow off excess voltage resistively? Aren't power supplies very carefully switched to maintain steady voltages in capacitors or inductors, so that simply less power is actually drawn from the battery?

I think that was just describing another possibility, or what used to be done; the next paragraph addresses what they do now, i.e.:

Peter Bright wrote:

With AVFS, Carrizo can eliminate much of that excess power. Through careful monitoring of the power, frequency, temperature, and voltage, Carrizo can near-instantaneously reduce the operating frequency of the chip whenever a brief voltage drop is detected. This means that the chip no longer needs the over-voltage just to handle these momentary glitches.

I'm looking to build a new PC, but I might have to hold off for a couple of months to see what AMD has planned in the desktop APU space going forward, as well as what's going on with Intel.

I'm curious, and I'm sure the answer is out there, I just haven't looked yet, but I'm hoping that with the AMD APU you can get a secondary GPU and have it run as a CrossFire setup. The new series of GPUs from AMD is looking quite powerful.

There is no possibility of connecting a discrete GPU via CrossFire to any APU. CrossFire works when dealing with GPUs of identical capabilities. The APU's GPU is only slightly more powerful than what current smartphones use, which would mean limiting the discrete GPU to the speed of the APU's. Ain't gonna happen.

There are a lot of commenters here condemning the chip without any experience with it. If the specs work out as intended, it looks to be an excellent value for HTPC, laptop, and corporate duties, which should give it broad appeal. Those who say integrated graphics suck should take a closer look: you can get 30fps at medium quality at 1080p in a large number of recent games, and that's with current APUs. This one will have DX12, Mantle, HSA, H.265 decoding, and the Southbridge baked in. I'm eagerly awaiting the reviews.

With Broadwell doing the same with a 15-watt TDP that Haswell does in 37 watts, a 37-watt Broadwell may be too much for AMD. AMD will need to pull a rabbit out of a hat with Carrizo if they expect OEMs to use it, when battery life is a major selling point and, in real use, AMD offers zero advantage outside of *a lower price*.

emphasis mine

I think I may have found your elusive, hat-dwelling rabbit. That is, unless manufacturers have ceased being price sensitive.

I'm curious, and I'm sure the answer is out there, I just haven't looked yet, but I'm hoping that with the AMD APU you can get a secondary GPU and have it run as a CrossFire setup. The new series of GPUs from AMD is looking quite powerful.

There is no possibility of connecting a discrete GPU via CrossFire to any APU. CrossFire works when dealing with GPUs of identical capabilities. The APU's GPU is only slightly more powerful than what current smartphones use, which would mean limiting the discrete GPU to the speed of the APU's. Ain't gonna happen.

AMD marketing is very confusing on these points: yes, a form of CrossFire does exist that allows an APU to work with a GPU, but it doesn't "work" (yet? ever?) in the way that d0x is interested in.

Basically, if you pair a top-end APU with a very low-end GPU, you can get performance that is slightly better than either one of them separately. Of course, at the same time, you introduce all of the compatibility and frame-pacing problems of CrossFire. You throw out power efficiency as well.

Currently, using this form of CrossFire with any GPU better than an R7 250 actually gives worse performance than using the GPU alone.

It's a great idea, and I wish that it worked effectively, but its only real use for now is in duping uninformed shoppers into thinking they are getting a much more powerful system than they really are.

The common case will be spreading work between a CPU and a GPU (though in principle, HSA could include, say, cryptographic accelerators or other kinds of special-purpose compute unit).

Considering the past few months of security news, cryptographic accelerators at the CPU level would be welcome. Also, I wonder whether AMD will be ready for DDR4. And finally, maybe the desktop market will get a taste of what the portable market has dealt with, with HSA?

Considering the past few months of security news, cryptographic accelerators at the CPU level would be welcome. Also, I wonder whether AMD will be ready for DDR4. And finally, maybe the desktop market will get a taste of what the portable market has dealt with, with HSA?

Yeah, the lack of DDR4 is unfortunate, though hopefully the reduction in memory bandwidth needed, thanks to Tonga's compression support, will help.

The market for PCs that need a good GPU, but not a discrete one, is very small. Either you do 3D games (and need a discrete GPU) or you don't. "3D games from 10 yrs ago" is really, really niche.

Not really true if you're playing at standard laptop resolution (1366x768). For instance, Skyrim runs fine on my outdated HD 3000 and would run even better on a more modern chip.

Basically, if you pair a top-end APU with a very low-end GPU, you can get performance that is slightly better than either one of them separately. Of course, at the same time, you introduce all of the compatibility and frame-pacing problems of CrossFire. You throw out power efficiency as well.

The problem is you don't really save any power, because the difference between the lowest idle state and lightly loaded for the discrete GPU is essentially nil.