AMD Carrizo next-generation APU details leak

AMD's next-generation Kaveri chips are to be succeeded by Carrizo, a source has claimed, extending the lifespan of the company's FM2+ socket.

Details of the successor to AMD's next-generation Kaveri accelerated processing unit (APU) products have been leaked, pointing to the codename Carrizo and backwards compatibility with the FM2+ socket format.

AMD may not be talking, but that isn't stopping the usual raft of anonymous industry sources from waxing lyrical about the chipmaker's potential future plans. According to details leaked to VR-Zone China, Kaveri's successor is codenamed Carrizo and will be backwards-compatible with Kaveri's FM2+ socket and A88X chipset.

Further claims made by the site include support for DDR4 - already rumoured for Kaveri, although likely to come in the form of soldered-down BGA chips for original equipment manufacturers (OEMs) rather than retail-ready user-swappable memory modules - and thermal design power (TDP) ratings ranging up to 65W. The latter is of particular note: AMD's current crop of APUs hits 100W at stock settings, yet Carrizo is expected to offer significantly improved performance - suggesting AMD expects significant gains to come from efficiency improvements and a reduction in process node size.

For those considering an APU-flavoured upgrade in the near future, there's better news still: the site's source suggests that Carrizo will be fully compatible with the motherboards launched for Kaveri, slotting straight into the FM2+ socket and running happily on the A88X chipset - and, the source further claims, the existing A78 as well. All Carrizo models will also adhere to the Heterogeneous System Architecture (HSA) standard, blurring the line between resources designed for CPU or GPU use.

The first Carrizo APUs are expected to roll off the production line in August of next year, with mass production beginning in December for a 1H 2015 launch. In addition to the Carrizo desktop chips, the company is also tipped to launch a system-on-chip model dubbed Beema for low-end mobile devices and a BGA range dubbed Nolan for higher-end devices. All models will include next-generation Radeon graphics, the site claims, although final core count, clocks and memory specifications have yet to be detailed.

AMD, as is expected, has refused to confirm or deny the claims, stating that it does not comment on speculation regarding unannounced products.


23 Comments

As sad as it is, I don't really expect anything from AMD any more in the CPU area.

Intel will be releasing its first 14nm Broadwell chips by the end of 2014, with an improved iGPU and an even lower TDP than the current 22nm Haswell.

And don't even start talking about CPU performance...

AMD should concentrate all their efforts on GPUs or platforms like the new consoles; in the desktop CPU market they've shown nothing of interest for the last five years, although their iGPU is better than Intel's offerings. The iGPU doesn't matter for a desktop though, as you can buy a cheap PCIe GPU any time.

@jrs77
AMD's APUs are miles ahead of Intel's. Until Intel comes up with an integrated GPU as fast as AMD's, there is no comparison to discuss between the two.

As for general use, AMD CPUs are as good as Intel's, unless someone is encoding video all day, or is bothered by a 1-2 fps difference in games and is paying a hefty price for it. A good old 1090T can cut anything today as well as any new Intel product.

It's a shame that the mainstream gaming industry is still stuck on single-core processing and DX9, mainly because of Intel and the consoles.

Originally Posted by Panos
@jrs77
AMD's APUs are miles ahead of Intel's. Until Intel comes up with an integrated GPU as fast as AMD's, there is no comparison to discuss between the two.

As for general use, AMD CPUs are as good as Intel's, unless someone is encoding video all day, or is bothered by a 1-2 fps difference in games and is paying a hefty price for it. A good old 1090T can cut anything today as well as any new Intel product.

It's a shame that the mainstream gaming industry is still stuck on single-core processing and DX9, mainly because of Intel and the consoles.

I'm not a gamer. I'm using my machine for work most of the time, and Intel is the only choice here currently from a performance point of view.

And about the iGPU... nobody cares about it, as long as it's capable of HD video playback. Software doesn't make any noticeable use of GPU acceleration to this date, and where it's needed people simply use a cheap dedicated GPU like the HD 7750, which is way more powerful than the HD 8670D used in the A10 APU.

The only thing of interest is CPU performance vs. CPU power draw, and Intel beats AMD here by a mile.

Oh, and the GT3e iGPU found in Crystal Well chips is as powerful as the HD 8670D, and an Intel i7-4xxxHQ uses only half the power of an AMD A10-6800K... just saying.

Originally Posted by Harlequin
not really - Intel are so far behind AMD and ARM they're not even in the same market - Intel iGPs are a joke, they STILL have the hardware bug (for four generations now) which affects video playback.

and the GT3e? have you actually seen one yet? nope - it's vapourware - and it has a bigger power draw than a GTX 460

Intel, even though they are the market leader, seem to be the lumbering giant of the industry. They are akin to Microsoft in the software world: they have brought their cards to the table too late in the game. A lot of the underdogs are innovating and chipping away at their dominance.

Complacency costs you dearly in the long run, and some of Intel's products lately have been shocking for the price they command.

Just one thing to point out though, Intel is significantly ahead in CPU performance and power draw. AMD is moderately ahead in GPU performance at some levels. Intel is closing the gap in GPU performance pretty significantly from generation to generation. AMD is if anything losing ground from generation to generation in CPU and power consumption, or holding their losing position.

In the sub-20W range Intel dominates AMD in CPU, GPU and power consumption, by a VERY long shot. AMD's only real "ULV" processor on the market brings to bear roughly a third to half the CPU performance of an i3-3117U, about 60% of the GPU performance, and still generally tends to use more power. Even AMD's 25W processor can't hit the same CPU performance and offers only roughly similar levels of GPU performance, in a package that consumes a lot more power.

It is only in the 35-45W range that AMD has processors with GPUs that actually manage better performance than Intel's, and especially now with Haswell, a lot of AMD's GPU line-up is not significantly ahead - but again with much higher power consumption in general workloads, and also much lower CPU performance.

Ignoring the workloads I do on my desktop (because I do a lot of video transcoding, plus some rendering, a lot of photo editing and some light gaming), with my laptop there is no way I could live with an AMD chip. I do light gaming and browsing on it, but I also do a lot of on-the-go photo editing using Lightroom and Photoshop CS6. I need a relatively thin and light laptop, at least 5-6hrs of light-use battery life and 3+hrs of pretty-heavy-use battery life, in roughly a 4lb or less package. The 14" laptop I have fits that perfectly with the i5-3217U in there. The compromise in CPU and GPU performance to get a similar size/weight laptop (and battery run time) from AMD would make the machine unusable for what I needed.

I don't just sit there all day running video transcodes on it (rarely - I bring the big boys to bear with my OC'd i5-3570 desktop for that sort of thing).

Ignoring my dig at AMD, hopefully AMD is managing some big improvements. I am tired of seeing Intel dominate the processor market; some healthy competition would be really nice. Haswell seems great both for GPU improvements and general battery life/power consumption, but it is a complete disappointment on CPU performance. I know you often can't have your cake and eat it too, but now that battery life is VERY impressive for mobile with Haswell, I am really hoping Intel will focus at least equally on CPU and GPU improvements with Broadwell and future successors. A healthy 15-20% bump in CPU performance would be very welcome.

Sub-20W? Intel like to play the numbers game - 8.5W suddenly doubles when you start to actually use that CPU...

and ARM are at 1W where Intel is championing 5W with their latest and greatest

AMD's APUs are far, far ahead of Intel's - everyone knows this, no matter how much you want to polish a turd - and Intel's GPU drivers are truly that bad (yes, you can polish a turd; I do watch MythBusters)

Intel's drivers, even the latest, are awful for anything other than desktop display.

GPGPU is another area where Intel are playing catch-up - when an A10 kills an i7 with GPGPU in encoding, it shows how far behind Intel are.

AMD is weak in non-gaming laptops and high-end desktops, and pretty much non-existent in tablets. Intel is weak in netbooks, tablets, gaming laptops, and mainstream desktops. While their products are good for mainstream desktops, their price point is not. AMD products are perfectly reasonable for the average user and would make fine laptop products if they were more power efficient. If AMD can just fix their TDP, I'm sure they'll do a lot better in the mobile market. This is probably why they're investing time in ARM.

Here is my issue with these APUs, and indeed the same for the last few generations of Intel's CPUs as well. I personally have zero interest in on-die graphics. Even in a HTPC I would still choose a case large enough to house a decent dedicated GPU. Therefore I cannot help but feel that I, and many others, are being ripped off by having to pay for something that we will never use. Yes, there is a market for APUs (at what point are we going to start calling Intel CPUs APUs, as they now include on-die GPUs?) but there is also a large market that doesn't want the extra GPU. I know you can get a very limited range of Intel CPUs without the GPU, but they are just non-activated parts and the cost saving is minimal compared to a full-blooded version.

Although you wanted an English site, which I can't show you unfortunately, just look at the slides and pictures in this review of the Schenker S413 to see how big/small the gap is between an AMD A10-6800K (HD 8670D) and an Intel i7-4750HQ (GT3e).

And I might add that the Intel i7-4xxxHQ is a 47W package, while the AMD A10-6800K is 100W.

Originally Posted by SchizoFrog
Here is my issue with these APUs, and indeed the same for the last few generations of Intel's CPUs as well. I personally have zero interest in on-die graphics. Even in a HTPC I would still choose a case large enough to house a decent dedicated GPU. Therefore I cannot help but feel that I, and many others, are being ripped off by having to pay for something that we will never use. Yes, there is a market for APUs (at what point are we going to start calling Intel CPUs APUs, as they now include on-die GPUs?) but there is also a large market that doesn't want the extra GPU. I know you can get a very limited range of Intel CPUs without the GPU, but they are just non-activated parts and the cost saving is minimal compared to a full-blooded version.

I know exactly what you mean and would definitely have to agree. However, with GPGPU software becoming more common, you can often use an IGP for things like that. But Intel's IGPs aren't really good enough for that, so I would rather pay, say, 10% less and not have the IGP. With AMD, anyway, the FX-4000 series are really the only mid-range processors that don't include an IGP. On the other hand, they're probably the worst-value AMD processors.

For their top-of-the-range iGPU, Intel are quite a bit ahead of AMD in performance. Performance/£ might still be in AMD's favour, though: Iris is only included in Intel's more expensive chips, regular old GT3 is pretty much on par, and GT2 is soundly trounced at the same price point.

Originally Posted by edzieba
Performance/£ might still be in AMD's favour, though: Iris is only included in Intel's more expensive chips, regular old GT3 is pretty much on par, and GT2 is soundly trounced at the same price point.

This is the part people seem to forget when jumping on AMD, and I'm most glad you brought it up.

Is the Intel Iris-equipped Core chip faster than AMD's top-end APU? Absolutely. It's also only available in trays of 10,000 to Intel's OEM partners, and comes in a BGA package requiring a wave-soldering machine to attach it to the (also OEM-only) motherboard. So, if you're an enthusiast, you ain't going to be buying one - thus it can be discounted from the comparison.

Are the GT3-equipped Core chips significantly faster in CPU performance and roughly the same in GPU performance as AMD's top-end APU? Absolutely. They're also significantly more expensive, bumping the cost of your build up considerably and meaning you're going to have to either spend more or make sacrifices elsewhere - like forgoing a decent SSD in favour of a mechanical drive, which for everyday tasks will likely make your Intel-powered computer feel slower than the AMD-powered version regardless of the difference in CPU horsepower.

Are the similarly-priced Core chips somewhat faster in CPU performance on single-threaded tasks than AMD's top-end APU? Absolutely. They're also significantly slower in GPU performance, and typically have fewer cores - meaning that, for properly multithreaded or asynchronous multiprocess work, they're going to be completely outmatched by AMD's offering. Add in the extra GPU oomph, and AMD has itself a winning proposition for certain market segments.

We've seen plenty of people above pointing out how an AMD APU wouldn't be suitable for their workloads; that's fine. There's no such thing as a one-size-fits-all solution: if you need the absolute maximum compute performance from your system, spend a couple of grand and get a top-end Intel chip and two or three graphics cards; if you need to minimise your power use, an ARM chip is the way to go; but if, like me, you need reasonable general-purpose performance, decently low power draw, a bit of gaming and four cores on a tight budget, then you're going to be looking AMD-wards.

Originally Posted by rollo
AMD need a server chip that can fight Intel on performance/watt; at the moment they are so far behind it's crazy. There's a reason Intel is above 80% server market penetration.

Actually, they did have one. AMD's Magny-Cours Opterons were overall better than their Intel counterparts in nearly every way except single-threaded tasks. They were cheaper, faster clock-for-clock, more power efficient, and had QUAD-CHANNEL memory. The current Bulldozer/Piledriver architecture was also designed around AMD's server chips. They're decent performers in servers, but for whatever reason companies would rather go for Intel - I guess because Intel is the easiest way out. Intel has proven multiple times throughout most of their history that they are not a cost-effective option in servers (when you include all CPU architectures). I feel x86 overall doesn't belong in most servers.

Quote:

It would also help if they made a quick low-power laptop chip to go in ultrabooks, another segment that is high in profits.

Agreed, but with their current architecture, that isn't going to be a reality. This is probably why they're investing in ARM. I just hope they do the same thing with ARM as they did with x86-64.

Quote:

In desktop chips AMD is only a player if you're on a budget or a major fan. Intel's top-end chips are better for every workload you care to list.

Better doesn't mean necessary, or cost-effective. Intel's i3 and i5 series are pretty overpriced (except in laptops), and the average user hardly has a need for anything much better than the best i3. The thing to consider, too, is that most enthusiasts hardly *need* an i7. While there are undoubtedly good reasons to own one, I'm sure the majority of i7 users are just bad at managing their tasks, care about bragging rights, and don't have the patience to wait an extra few minutes to render or encode something.

Originally Posted by Harlequin
given Intel has its own method for reporting TDP, I take anything they say with a bucket of salt - look at the issue with IB: but don't worry, thermally that chip was fine!

Intel are not catching up with the game - everyone is moving on to GPGPU - look at the encoding results to see that.

Okay... use Quick Sync. The Handbrake beta has support for it, and especially with Quick Sync enabled (for the bit that it can do), Intel SMOKES AMD, by a good 30-40%.

If you look at the latest Haswell benchmarks, Intel smokes ALL AMD APU options when it comes to GPGPU compute. It even beats the pants off some of the low/mid-range Nvidia discrete and mobile graphics cards in GPGPU compute. You know, like the GT 650M, which has a 45W TDP all on its own; the new Haswell Iris 5200 GPU beats it in, I think, all or most GPGPU benchmarks with a 47W TDP for the entire CPU/GPU. The GT 650M beats the 5200 in most games (with one or two tiny exceptions)... but often not by a huge amount (sometimes the 5200 gets spanked pretty badly)... but again, that is comparing a 45W discrete graphics option to a CPU/GPU combo with 47W between them.

Physics only allows you to get away with so much.

Compared to 7000-series APUs, Intel's Haswell IGP is significantly better. Compared with 8000-series APUs, the only thing AMD can beat it with is DESKTOP APUs. The mobile APUs - where you are probably actually going to CARE about how powerful your IGP is - get spanked in most or all similar-TDP comparisons.

As for Intel's TDP numbers... no, they are pretty darned accurate. The SDP stuff you can take with a massive shaker of salt, but TDP is pretty much what it says. Intel tends to be pretty far below it most of the time.

Also, not sure if you've checked lately, but most ARM 1W CPU figures... are for a SINGLE core, not the entire SoC, which is generally a lot higher (even in phone SoCs). The SoC in the Nexus 10 can run as high as 8W for short periods of time, and averages 4W under load. The new Haswell 15W ULT chips peak at about 15W under heavy load, and that is generally with both CPU and GPU loaded. Just loading the GPU or CPU, with very light work on the other, isn't going to see 15W. Even with the CPU at full turbo, you are probably only going to see around 6-10W (Ivy Bridge ULV's 17W chips hit about 10W with both cores fully loaded at max turbo).

If you bothered to read, a couple of places have done "mobile" workloads on new Haswell 15W ULT ultrabooks and normalised for battery capacity... the laptops BEAT things like the iPad 4 under some workloads and came up short by only a small amount in some others. That means the 15W ULT and the platform as a whole use LESS power than something like the iPad 4. Sure, that isn't comparing heavy load to heavy load... but you know what, that ULT Haswell chip is also punching out probably 4-5x the compute power under heavy load, and even then probably only comes up short by maybe 50% of the battery life under heavy load, normalised for battery capacity.

I am very interested to see what the new Silvermont Atom chips are able to do.

AMD has some interesting gear coming up in ultra-low-power mobile, but for standard mobile and regular low-power mobile they do NOT have a compelling offering right now, nor in the near future. Their CPU performance just sucks, their power consumption sucks, and their GPU performance often is NOT as good as a similar-TDP Intel mobile chip's.

At the low end of desktop, if you want a light-to-medium gaming rig and only need your machine to do light-to-moderate CPU tasks, AMD DOES have compelling offerings. They don't in mobile.

Quote:

Better doesn't mean necessary, or cost effective. ... I'm sure the majority of i7 users are just bad at managing their tasks, care about bragging rights, and don't have the patience to wait an extra few minutes to render or encode something.

I'd dispute the "majority" claim. I am sure some fall into all three; at best you might be looking at one of those. I don't have an i7 (though I do have a 3570), but I fall into the last category, though not the "don't have the patience" part. I do a lot of photo editing work, and with three young kids plus a full-time job I don't often have a lot of spare time. So being able to save 30-50% of my time on CPU-compute-dependent photo editing tasks, compared to what a mid-grade AMD Trinity processor could manage, is pretty important. A baseline average workload in a month saves me roughly 45-60 minutes (based on roughly three photo editing "sessions", which is about what I average). My time hasn't stopped being money. Even if my spare time were only worth $10 an hour (and I make a lot more than that)... ten months would pay back the difference between my i5-3570 and a mid-grade Trinity processor.
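The back-of-the-envelope payback maths above can be sketched out like this (the $90 price gap is a hypothetical round figure chosen for illustration, not an actual street price; the minutes-saved and hourly-rate numbers are the ones from my post):

```python
# Rough payback period for a faster CPU: months until the value of the
# time saved covers the price difference between the two chips.
# All input figures are this poster's estimates, not official prices.

def payback_months(price_gap, minutes_saved_per_month, hourly_rate):
    """Months of time savings needed to cover the CPU price difference."""
    value_per_month = (minutes_saved_per_month / 60) * hourly_rate
    return price_gap / value_per_month

# ~45-60 minutes saved per month (call it 54), spare time valued at
# $10/hour, and a hypothetical $90 gap between the two chips:
months = payback_months(price_gap=90, minutes_saved_per_month=54, hourly_rate=10)
print(f"Payback in roughly {months:.0f} months")  # roughly 10 months
```

Obviously the answer scales linearly with whatever you think your spare time is worth, which is the whole point of the argument.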

That doesn't include the time saved in a lot of other tasks, though I won't include Encoding tasks, because I simply set that stuff up to run and walk away.

For me and a lot of people - though certainly only a large minority - having a really fast CPU is still very, very important. Especially when it comes to mobile: I require a lightweight laptop, but I also need a fairly fast CPU in it for on-the-road photo editing, and the AMD option would likely mean either a good 40-60% hit in CPU performance, which would be unacceptably slow, or going up in package size significantly, which would also be unacceptable.