Leaked AMD roadmap shows Excavator arriving in 2015 – and possibly the end of AMD’s big-core x86 business

A new, unofficial roadmap from AMD has shed light on the company’s plans for the 2014-2015 timeframe. There are specific reasons I’m uncertain whether this roadmap is legitimate — but since it’s also rather interesting, let’s dive into what it shows. If accurate, AMD’s FX series of products is essentially moribund, the next mainstream CPU upgrade won’t arrive until 2015, and AMD is planning to launch a new socketed Kabini part for mainstream systems.

In fact, if this roadmap is accurate, it’s possible AMD is evaluating an exit strategy for its high-end, big-core x86 business. That’s an explosive claim to drop, so let me explain my reasoning before you jump me. First, if this roadmap is true, the FX-9590 was essentially AMD’s swan song for the high-end. The company has no public plans for an eight-core Steamroller part for the AM3+ segment or Opteron.

This roadmap suggests that Carrizo, Kaveri’s replacement, will integrate a new CPU but keep the same GCN architecture that Kaveri uses. That makes sense if AMD intends to tape the chip out in the next few quarters. AMD may be adopting a similar cycle to Intel, where it improves on the CPU architecture in one update, then the GPU in the following cycle.

The small core section of the roadmap — the Kabini/Beema portion — is data we’ve covered recently. We already know that these chips are tweaked versions of Jaguar with better power consumption, and that they’re built on 28nm at TSMC. The really interesting question is what process Carrizo will be built on.

CPU efficiency and the foundry question

Right now, all of AMD’s “big-core” x86 CPUs are built at GlobalFoundries. GF isn’t rolling out multiple types of 20nm this time around, but is pursuing a unified strategy of offering one type of silicon that can stretch to cover multiple targets. This 20nm LPM node may indeed be able to hit a wide range of targets — but it’s unlikely to stretch far enough to accommodate the full range of AMD’s product line. That’s probably part of why we see TDPs yanked down to 65W for Excavator.

Intel can hit wide TDP ranges on its process nodes partly because it enforces extremely strict design controls on the engineering team. GF, being a foundry, can’t exert the same amount of control over its partners, which means AMD has to find ways to fit within the 20nm process if it wants to build cores at GloFo. But that throws the question back to the CPU team — can they design a Bulldozer-derived part efficient enough to fit within the power envelope?

Up until now, AMD’s big-core business has relied on pushing TDPs higher to hit better performance targets. The company’s six- and eight-core products all draw more than 65W. Moving to a 20nm low-power process at 65W means Excavator’s performance-per-watt needs to leap forward relative to Kaveri. This, in turn, may explain part of why AMD is keeping the GPU core as a GCN derivative — it may make sense to retain that architecture as a known quantity while negotiating a die shrink and architectural shift.
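To put a rough number on that leap, here is a minimal sketch, assuming Kaveri’s roughly 95W desktop TDP as the baseline (the 65W ceiling comes from the leaked roadmap; the 15% uplift is a hypothetical target):

```python
# Rough sketch: the perf/W gain Excavator would need inside a 65W cap.
# Assumes Kaveri's ~95W desktop TDP as the baseline.
kaveri_tdp_w = 95.0
excavator_tdp_w = 65.0

# Just to match Kaveri's performance at the lower TDP:
required_gain = kaveri_tdp_w / excavator_tdp_w
print(f"Equal performance: {required_gain:.2f}x perf/W")         # ~1.46x

# To also deliver a hypothetical 15% generational uplift:
print(f"With +15% perf:    {1.15 * required_gain:.2f}x perf/W")  # ~1.68x
```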

All of this assumes AMD is keeping its upper-end x86 business at GlobalFoundries, mostly because there’s no reason to suspect that business could shift elsewhere. Still, I think it’s clear that AMD is strategically reevaluating the product segments it participates in. We don’t see a 20nm version of Kabini, Beema, or Mullins on this roadmap because AMD is ramping up an ARM core instead. If that core does well, it may become the poster child for AMD’s new push into different market segments. The 14nm and 20nm tape-outs that Lisa Su discussed at the company’s last earnings call could refer to designs for the PS4 and Xbox, not merely the PC side of the business.

AMD can’t afford to ditch the x86 business altogether at this point, but we know it has huge plans for embedded and semi-custom, which Rory Read has talked about as accounting for up to 50% of the company’s revenue in future years. This roadmap reflects some of those plans — with limited cash to spend, AMD is going to put more funds into emerging markets, shore up its overall x86 position, and push forward with second-generation Kabini hardware while keeping the entire lineup standardized on the GCN architecture.

Marcus Mendez

I think APU-only makes sense. The trick is whether they can make it configurable. A six-core Carrizo could be interesting even for enthusiast gamers and power users. Intel’s best silicon has gone in this direction.

Joel Hruska

Bulldozer was built on the promise of high clock speed. If AMD has to rearchitect the chip to fit into a 65W maximum TDP range, that’s going to hurt. They’d have to completely rebuild the architecture to drastically improve IPC.

I don’t think we’ll see a six-core CPU component there. Not in a 65W envelope with a GPU attached.

john

You make some unfounded claims about a node that isn’t in production and a product that hasn’t been launched… yet you claim that lowering the TDP automagically means lowering performance, even though it’s a different arch on a different node and a different technology… Tell us how you came to this conclusion?

Joel Hruska

12 years of work and months of research, behind-the-scenes conversations, and in-depth discussions with foundry engineers, as well as a decent working knowledge of what is and isn’t possible on any modern foundry node.

Here’s what I’m claiming, to be clear:

1). GloFo has one process node at 20nm. They told me so. Today.

2). That node is going to run the gamut from very low power to very high power.

3). TSMC and GF went with multiple types of silicon at 28nm and 40nm specifically to optimize for particular use-cases, before it became cost-prohibitive to do so.

4). High performance silicon is tuned for relatively high leakage current and high frequency scaling. Low power silicon is tuned for low leakage and relatively low frequency. You can target both of those simultaneously, but you lose out on the benefits of specializing.

5). AMD is already hauling clock speeds and TDPs backwards on Kaveri.

6). As dies shrink, companies struggle to hold TDP constant. Intel gave up frequency scaling at the high-end desktop with Haswell to hit better mobile targets. This is physics at work. AMD is not immune to the process.

7). In order to deliver equivalent performance in a maximum 65W envelope, AMD will have to drastically improve the performance per watt of Excavator. This follows from #4 and #6.

8). The primary benefits Intel gained at 22nm compared to 32nm were from FinFET, not the die shrink. TSMC and GloFo are both using planar silicon at 20nm. That means the benefits of these processes will be relatively small — 15-20% performance, 20-25% power.

9). Given the constraints of 20nm LPM at GloFo, the chances that AMD will be able to field a 4-4.5GHz processor on a new 20nm process are very small. Not unless GloFo tunes a 20nm line specifically for AMD, which they *could* do, but have not announced.

john

You still didn’t show us the thought process by which you got there.

In the slide we see a 2x density increase with a 1.5x perf increase; that means you get 1.5x better TDP… that translates to AMD’s claim almost linearly. Meaning they may hit the same performance, or incrementally better, with 50 percent better TDP.

So tell me, what contradicts GF’s claim?

Now, I have been in the field for close to 20 years. Does it have any relevance? No, because we both talk about the future and operate on the same base info. If you have added info, please share! That’s exactly what I was asking for in the first place. It’s not because I don’t trust you; I trust all internet info equally (not)… but not having other means of comparison, I rely on what is out there and common sense filtered by what I know, obviously. But when I claim something, I usually expose the thought process behind it.

Joel Hruska

Current low-power processes target operation in a sub-2GHz envelope. Sometimes a sub-1GHz envelope, but let’s assume that GF is talking about a 1.5GHz, Kabini-like chip.

1.5GHz * 1.5x performance = 2.25GHz.

AMD can’t build a 4GHz core on an LP process. They’ll have to haul in frequency, big time. And that means they have to dramatically improve performance per watt.
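A minimal sketch of that arithmetic (the 1.5GHz baseline and the 1.5x multiplier are the assumptions under discussion, not GF specifications):

```python
# Minimal sketch of the frequency argument above.
lp_baseline_ghz = 1.5     # assumed low-power design target (Kabini-like)
node_perf_gain = 1.5      # GF's claimed 20nm performance multiplier
big_core_ghz = 4.0        # where AMD's current big cores clock

achievable_ghz = lp_baseline_ghz * node_perf_gain    # 2.25 GHz
deficit = big_core_ghz / achievable_ghz              # ~1.78x

print(f"Achievable on LP silicon: ~{achievable_ghz:.2f} GHz")
print(f"Per-clock throughput must rise ~{deficit:.2f}x to break even")
```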

john

You yourself said that GF offers anything from LP to HP (apparent also in the slide)… meaning they have a mature enough process to trust it to deliver on both ends. GF always touted 20nm as performance, and 16nm FinFET & 20nm FinFET as XM (extreme mobility)… My guess is that the process is now mature enough to accommodate the entire spectrum. What makes you think that it’s an LP process scaled up and not the other way round? (Which makes far more sense, given that GF first taped out 20nm more than a year ago; closer to two years now.)

BTW, your computation is wrong:
2.6GHz * 1.5 ≈ 4GHz… let me remind you the chip in the Note 3 runs @ 2.3GHz… that is “LP” for you right there :) … There are ARM chips running in the 3GHz region in the fanless tablet area… so your frequency point is not valid. Mind you, they use a 28nm process, not 20nm…

Joel Hruska

There are no 3GHz mobile chips for ARM. Certainly none being built at GF.

Slides from GF suggest that the gains on 20nm are going to be quite small compared to 28nm, while the cost is quite a bit higher. That slide, in fact, pretty much sums it up. Small gains, significant cost.

Think what you like. You won’t see 4GHz+ Piledrivers on a 20nm LPM process unless GF does custom work for AMD to make that happen.

john

My point is they don’t need 4GHz, not even 3GHz… just 2.6GHz, which is very well within today’s capacity, even though the highest phone CPUs are 2.3-2.5GHz, with 3GHz announced.

The idea is that with the increased density and efficiency they could ship a 20nm Excavator in Q4 2014 or H1 2015; by then the prices will have come down and mass production won’t be an issue.

With the added perf increase we might see much lower power and much denser dies beating Kaveri while sitting at much lower TDPs… that was the entire point of the discussion anyway, wasn’t it?

Joel Hruska

Ah. No, I doubt it. Not from any Bulldozer chip. I’d be more likely to believe a scaled-up Kabini than a BD overhaul that drastic in just a few months. (It would need to tape out in 3-5 months to be ready a year later).

john

Well, that fits, doesn’t it? If Excavator tapes out in H1 2014, it might hit the market in H1 2015.

Kaveri is already a big change to the BD design. If context switching is to come with this chip, it might mean much smaller x86 cores and, with that, better scaling in the low-power area. I get your point that BD is primarily designed for high frequency, but I think they have time to adapt it by Q2 2015. Plus, I have yet to see aggressive resonant clock use; I guess it didn’t turn out that good after all, or it doesn’t scale very well…

Joel Hruska

John,

On the contrary. I think the resonant clock mesh and the hard-edge flops are precisely what gave Piledriver a significant clock increase over Bulldozer.

The first-generation FX-8150 chip didn’t spend much time running at peak Turbo speed unless you had *really* strong cooling. The chip put out too much heat for that. So you ended up with something more like 3.75GHz over the long run.

The FX-8350’s base clock of 4GHz was a significant improvement over the 3.6GHz on the 8150, and while the Turbo Mode was only 4.2GHz (300MHz higher), it spends more *time* at top frequency.

The FX-8350 came in 15-20% faster than the FX-8150. It’s a much better part. It may not compare all that well against Intel’s hardware as far as power consumption, but AMD definitely moved the ball ahead with Piledriver.

john

Very compelling points. I guess we have to admit that, given all that, there are still things we don’t know, and that your best guesses are what you would perhaps call highly confident predictions based on known evidence (i.e. no one will argue with GF having only one process, 20nm LPM, as you say and as I’ve read elsewhere). It does create quite a problem for AMD, but they’re the underdogs, so perhaps they will try something wonky. Let me ask you this: given your considered, educated position on this subject and AMD’s stance, what should they do?

pelov lov

Joel, do you know what GloFo’s 20nm process is? FD-SOI, bulk or PD-SOI? If they’re claiming a wide range of scaling, I’m inclined to think it’s back to SOI after dabbling in 28nm bulk.

I’m also not sure that AMD is yanking the cat family, as you implied above. The perf-per-watt gains they’re claiming with Beema/Mullins on TSMC’s 28nm may have been favored over moving to a rather expensive and low capacity 20nm TSMC node. Moving to a new process isn’t worth it unless you can charge high prices on chips with relatively flexible TDP headroom; something that their GPU side knows all too well. The low power x86 cores don’t fall into either of those two categories.

I do agree with you that the roadmaps we’ve seen over the past 2 years from AMD point to a single x86 chip covering the entire gamut, with ARM SoCs being tweaked for enterprise and perhaps consumer, though the former is far more likely than the latter. It’s more expensive and time-consuming to build and maintain multiple x86 architectures than it is ARM cores.

Joel Hruska

We know GloFo has done some 20nm FD-SOI work for STMicro, but that agreement is strictly to build *for* STMicro.

And looking at that chart, it says ugly things about 20nm LPM, the process in question. 20nm LPM is only marginally faster than 28nm HPP, but it’s considerably more expensive. The implication is that 14nm XM is substantially faster (but priced the same) with 20nm FD-SOI delivering the same benefits (and deploying at the same time) as 14nm XM.

That points to a late-2015 / early-2016 timeframe for high volume on 14nm XM or 20nm FD-SOI.

This chart (I just found it) could explain why AMD is in no great hurry to move to 20nm. The benefits look even more marginal than I thought. Performance is *not* very much improved.

pelov lov

Thanks for the info. Really makes you wonder if AMD cancelling big cores was an internal decision that was thought to be favorable for the long term or if it’s a result of circumstances out of their control. I have a feeling it’s a bit of both ;)

If the jump to 20nm + FinFETs comes a year after 20nm for both TSMC and GloFo, and I haven’t read of any delays, we might see more chip companies skip the 20nm node altogether. That might explain the lack of cat family chips on 20nm on the roadmap for 2015. The advantages of FinFETs suit AMD’s small core x86 family more than they do the Bulldozer derivatives.

Joel Hruska

Two years, not one. The 20nm ramp is 2014. Morris Chang is on record saying he expects volumes of 16nm FinFET to be “very small” in 2015.

2016 for 16nm at TSMC, 2016 for 14nm XM in commercial shipping production at GF.

pelov lov

TSMC has been shipping 20nm chips for a couple of months now, no? Though it has been risk production, the volume ramp should come in the middle of the first half of next year, with volume restricted until the latter half of the year.

Are the 2016 timelines referring to non-risk production with semi-decent volume? Here’s the EE Times article

I’ve always gotten the impression that the 20nm node was a short-term stopgap toward moving to FinFETs: in essence, a two-step process that’s going to be handled in such a way as to decrease risk while providing at least some benefits for those inking WSAs.

And that CAPEX spending is absolutely monstrous. GloFo, Samsung, IBM and co. are glad that they’re sharing at least some of that cost. The bean counters at TSMC, and Intel especially, have got to be pulling their hair out =P

Joel Hruska

This depends entirely on how you read the tea leaves.

TSMC says volume production of 20nm starting Q1 2014 with volume production of 16nm FinFET starting one year after that. In a previous transcript, Chang said he expected volumes on 16nm FinFET to be “small” in 2015.

I wrote it up at the time. That’s a quote from him based on the Q1 2013 transcript.

So here’s how I reconcile that. The length of time between when a foundry says they’ve begun “volume production” and the actual time we start seeing shipping products varies, and it varies a great deal. On 40nm, which was famously bad, it took nearly a year. On 28nm we saw chips from AMD within two months. Chips from NV followed about six months later. But it took a year or more before a majority of mobile devices were shipping 28nm.

Also, keep in mind that tapeout can take time. 12 months is a typical minimum these days.

So. If TSMC begins volume ramp of 16nm by Q1/Q2 2015, I’d expect 16nm FinFET to hit shipping products no *earlier* than Q3. Based on the relative ramp times we’ve seen, GPUs would be the strongest candidate for that.

Broad 16nm FinFET shipments will probably take into 2016. Whether AMD or NV will be on the early adopter list, I do not know.

pelov lov

Somehow I doubt that we’ll be seeing any FinFET GPUs soon :P Though I do recall AMD stating Q3 2014 for their next GPUs, I’m not entirely certain on that timeframe. It would line up with TSMC’s statements.

The Kepler example is actually what I was thinking of when I said above that AMD might be skipping 20nm and waiting for 16nm/14nm-XM for their cat family follow-up. AMD was more lax with respect to TDP headroom and clock speeds on GCN 1.0 and was therefore able to salvage more dies per wafer and launch earlier. In contrast, nVidia was much more strict, and the GK104-based GPUs were delayed as a result until TSMC’s process was mature enough to churn out the chips nVidia needed. My point was that AMD may have decided to wait it out and not be a beta tester, thus the 2014-to-2015 roadmap lacks a cat family update. Their GPU side can afford to take the plunge with big chips and big TDPs, but that’s not a risk you take when making cheap and frugal small cores. These are SoCs now, after all.

The lag for mobile devices is understandable given the amount of IP involved, particularly third party IP. It’s an entire ecosystem that has to take the plunge at the same time, not just one or two chip manufacturers.

Joel Hruska

So, Rory said recently that AMD would tape out new designs on 20nm and 14nm “in the next couple quarters.” Again, 12 months to production from tapeout is pretty normal. And as I wrote recently, AMD’s published roadmap for the 2014 timeframe suggests that the HD 7000 / R9/R7 parts will be kept on at lower prices, with the implication being that an R9 280X could hit the $200 price point.

But we don’t really know what that meant, either. It could have applied to the PlayStation and Xbox One, after all.

Pangolin_user

What I find interesting, given your claims #5 and #7, is the fact that AMD claimed Beema and Mullins will improve performance per watt by 2x over Kabini on the same process node (the 15-watt Beema vs. the 25-watt Kabini A6-5200).
If AMD could do this with Excavator (65 watt) vs. Kaveri (95 watt), with the help of a full node shrink, I think we will see Dothan/Conroe 2.0 from AMD.

Joel Hruska

Beema and Mullins aren’t built on 20nm. They are 28nm chips at TSMC. Different process, different chip design.

Pangolin_user

Yeah, that’s why I wrote “on the same process node”.

Joel Hruska

D’oh! My mistake. Apologies on that.

I don’t really expect a 2x straight-line improvement on those chips. That’s just out of the ordinary for a same-process revision. The only real way for Kabini / Temash to cut power by half is if the transition to TSMC from GloFo (remember, those parts were originally supposed to be built there) meant they came in *way* over power to start with.

Pangolin_user

Actually it’s not 2x. If we compare AMD’s 25-watt Kabini chip with a 25-watt Beema, or the 15-watt Kabini with a 15-watt Beema, the number wouldn’t be that big. AMD is comparing the least efficient Kabini design (25 watt) with the most efficient Beema (15 watt). IIRC the 25-watt Kabini is only 25-30% faster than the 15-watt Kabini, so it’s only about 1.6x performance per watt.
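A minimal sketch of that comparison, treating the rough numbers above as assumptions:

```python
# Why AMD's "2x perf/W" claim shrinks under an apples-to-apples comparison.
kabini_25w_perf = 1.0             # normalize the 25W Kabini (A6-5200)
kabini_15w_perf = 1.0 / 1.275     # the 25W part is ~25-30% faster

# AMD's claim: the 15W Beema doubles the 25W Kabini's perf/W.
beema_15w_perf = 2 * (kabini_25w_perf / 25) * 15   # = 1.2

# Compare at matching 15W TDPs instead:
gain = (beema_15w_perf / 15) / (kabini_15w_perf / 15)
print(f"Beema vs. Kabini at 15W: ~{gain:.2f}x perf/W")  # ~1.5x, not 2x
```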

David Stanley

Sorry, but Fab 30 at GloFo is currently punching out 12nm, 10nm, and 20nm,

and AMD has stated it will use all of Fab 30 in upcoming products.

Joel Hruska

Fab 30 is not “punching out” 12nm and 10nm. I have corresponded with them directly regarding foundry plans.

20nm planar production will ramp in 2014. Broad commercial availability will be in the middle of 2014 and into 2015.

14nm-XM production will ramp late in 2014 to early 2015. Volume production is expected for late 2015 – early 2016.

Please keep in mind that “Available” in this context does not mean “Shipping commercially with a customer,” as evidenced by the fact that the first 20nm FPGA from Xilinx shipped in November from TSMC, while GloFo claims 20nm has been “available” for the whole of 2013.

GlobalFoundries will not ramp 10nm technology until sometime after the 2016 timeframe. I would not expect shipping chips before 2018 at the earliest.

Also, I don’t know where you got “all of upcoming products” from. Both consoles, which are AMD designs, are built at TSMC. So are Beema and Mullins, and AMD’s GPU business. If AMD moves to GloFo, it’ll be at 20nm, and since AMD hasn’t taped out its 20nm designs yet, that won’t happen before early 2015 at the earliest.

Joel Hruska

Huh. My response to this appears to have been eaten.

GloFo’s current roadmap is for 20nm to ramp into production in 2014. AMD, however, won’t have GloFo designs ready until 2015 based on a Q1 2014 tapeout.

Best-case for 14nm-XM shipments is late 2015 / early 2016. 10nm is farther back from that.

Dozerman

There was actually a leaked slide from AMD with Kaveri shown as 4/8 cores while Trinity was 4/4, as were Richland and Llano. It being bullshit aside, this could be foreshadowing of either SMT or a quad-module part.

Joel Hruska

Kaveri is not an 8-core part. It just isn’t. I’m sorry. Every official roadmap, every conversation, every discussion: there’s no plan for an eight-core or an eight-thread Kaveri.

Dozerman

My apologies; I actually just went back and looked. The chart only referenced Trinity, Richland, and Kaveri and has since been changed to 4/4, even though the author of the article actually replied to my comment about the 4/8 discrepancy and seemed to confirm that there was a form of SMT being implemented.

Joel Hruska

Bulldozer can decode four instructions per cycle. It feeds them to one of two integer cores, round-robin. BD/PD *cannot* dispatch two instructions to Core 0 and two to Core 1 in the same cycle.

Steamroller returns to a conventional configuration: each core gets its own decode unit. Where BD/PD could only supply four instructions per clock to *one* core, Steamroller dispatches eight instructions per clock across two cores. AMD has taken a step back towards conventional multi-core configurations.

This should really help untangle a major bottleneck. But Steamroller isn’t doing any fancy multi-threading. The integer units are still separate, the FPU is still shared.

Dozerman

That was my original understanding of what was happening. I guess I got excited over something I should have looked into more. Whatever. You live, you learn.

pelov lov

The threads refer to integer cores or 128-wide FP instructions within a single module. AMD has used the same language in describing their CMT/modular approach in the past too. It was just a misinterpretation by someone in need of a story ;P

Dozerman

I realize that now. What pisses me off the most, though, is that the author actually took the time to reply to me, just some commenter, and confirm what wasn’t true.

Dozerman

Every time AMD releases a new roadmap, their x86 business dies, at least according to every tech site on the ’net…

Joel Hruska

Not their x86 business. Just the high-end, big-core variants.

It’s hard to argue with that. AMD’s Opteron and desktop roadmaps show Piledriver continuing straight through 2014. The only updates shown on both official and unofficial roadmaps are the quad-core and below parts.

Kabini and Piledriver are actually equally efficient in certain tests. That’s not good for Piledriver.

Dozerman

“Kabini and Piledriver are actually equally efficient in certain tests. That’s not good for Piledriver.”

I’m not sure how the scores on this particular site work, but it seems that Piledriver is barely pushing more FLOPS than Kabini running at 1/3 the clock speed. That’s abysmally terrible. Even worse than that is multithreaded scaling, where it stayed ahead of Kabini by the same margin as in single-threaded benches even though it has twice the number of cores.

I never realized how bad it really is for them. That being the case, it would make a ton of sense for them to drop big cores and push what they have now. Honestly, if I’m interpreting what I am seeing correctly, a Kabini that was engineered for high clocks and an L3 would do better on the high end than Piledriver, should they choose to still pursue that market.

Joel Hruska

Yes. This isn’t always true, but PD’s efficiency is poor. If you normalize for clock speed, PD is typically slower than Thuban. AMD’s accomplishment was getting clock speed up and power down, but in terms of IPC, in non-AVX code, the old K10 is often faster, clock for clock.

Jesse

AMD has never been any good at low-power chips, where Intel trounces AMD six ways to SUNDAY!!!! Few tablets use AMD chips. Want to know why? It’s because AMD chips use WAY too much power. AMD hasn’t fixed that problem and I don’t think they ever will. So yeah, AMD is history.

Wakko Waru

You know, when Banias and Dothan hit the streets, it was “not good for Netburst.”

8 years ago I had a Mac for one job and a P4 machine for another job. I so badly wanted to replace the P4 with a Dothan chip but was not allowed to tamper with my work PC.
Then Apple announced the move to Intel CPUs because of “performance/watt”. Most people, thinking of the P4 architecture, Netburst, laughed off the idea and wondered why Apple didn’t go with the clear performance/watt leader, AMD. What few people realized was that Intel would base the next generation of high-end chips on its then-current low-end design.

I sincerely hope for the sake of competition that if “Kabini and Piledriver are actually equally efficient…” what that means is that AMD in 2016 will have the same performance and performance/watt renaissance as Intel in 2006.

Joel Hruska

That’s an interesting idea, and possibly a good one. They would have to make some significant changes to Kabini to turn it into a big core, but it’s far from impossible.

Dozerman

They already are making changes in their low-end procs that mirror the high end, such as the inclusion of AVX.

Dylan

I understand you wrote this a year ago, but Zen is coming in 2016, and it will supposedly have SMT like the Core i7s with fewer shared core resources. In fact, I’m assuming the L3 cache will be the only shared resource. I need to dive into reading about it more myself.

Boudou

AMD is in a unique position with their APUs. They are the only company that has powerful GPUs and can make x86 cores.

Nvidia makes great GPUs but can’t make x86 cores, and Intel makes great x86 cores but their GPUs aren’t on par with Nvidia/AMD. This is the main reason AMD won the contract to make the PS4 and XB1: x86 + GPUs.

Then there is the larger question of the actual market for ‘big core’ x86 chips. Desktop market share is rapidly shrinking, as the other article today went into, and it’s a market that is better left to Intel. APUs at least will have larger penetration in laptops and tablets, markets that still have growth potential.

Given the current economics of the PC market, razor-thin margins and everything made by ODMs, these systems-on-chips really are the only way to survive.

A big win for AMD would be if they could convince Apple to move over to their next-gen APUs for their Macbooks and Mac Minis.

As for AMD: Apple keeps making good strides with their ARM setups; a few more years…

john

Show me an Iris Pro that can beat an equally priced GPU + CPU combo from AMD and I’ll eat it. Guess being a fan doesn’t hurt, now does it?

Mountainjoy

You fail to understand what mobile means to chip selection and usage.

john

Right, because you buy an Iris Pro chip for the battery life… you dump $1-2K into an Iris Pro system for the power, and there you would be better served with a CPU + GPU… however, in the true mobility area AMD has nothing world-shattering, and Intel is irrelevant in ultramobility while being practically a monopoly in laptops and derivatives.

Dozerman

Kaveri will fix that big time.

Jesse

No it won’t. AMD’s Kaveri still PALES in comparison to Intel’s AWESOME Iris Pro.

Mariu

Especially the price. It’s the most awesome thing about it.

Jesse

Intel Iris Pro is UNBEATABLE!!! AMD can’t touch it not even with Kaveri!!! But Kaveri sucks. It missed performance and power targets all over the place. It’s a true testament to how much AMD sucks.


Mariu

AMD has already built faster APUs, like the one in the XB1 (not to mention the one in the PS4, which is 50% faster than the one in the Xbox).

It’s more than 2 times faster than the “magical” Iris Pro.

They can reduce the GPU frequency a little, cut the number of CPU cores in half and increase their frequency, and it will fit without a problem in a laptop.
There is no doubt Kaveri will beat Iris Pro, and without having any dedicated eDRAM.

Jesse

Wrong. Kaveri fails against the Iris Pro and it will get creamed by Intel Broadwell’s IGP, NO CONTEST!!!! Not going to happen with the fail that is AMD!!! No it won’t: the PS4 SoC USES 140 WATTS OF POWER. It’s way too power hungry to use IN ANY LAPTOP!!!! Face IT, AMD IS DOOMED whether you like IT OR NOT!! AMD is the first casualty of the new era of computing.

Mariu

140W is not too much for a laptop, genius. The GTX 780M alone consumes more than 100W. As I said, they can reduce the core count and GPU frequency and save 20-30W, no problem.

The only thing that makes Iris Pro not suck is the dedicated eDRAM. AMD can easily beat that if they also release a similar solution; it’s just not cost-effective. A dual-graphics configuration should be a little more expensive than an i5 + dedicated GPU laptop but at the same time destroy Iris in terms of performance.
Dual graphics is AMD’s biggest ace for their next generation, on desktops as well as on mobile.

Jesse

That’s not enough for a small laptop. That’s NOT GOING TO HAPPEN!!! AMD sucks too much. Dual graphics WILL DO NOTHING for AMD because AMD’s CPUs SUCK too much to use the dual-GPU configuration. The last time AMD did that it ended up IN PURE FAILURE!!!!

Mariu

So that is it? That is all you’ve got?? Pathetic.
AMD already released a slide where they talk about an improved Hybrid CrossFire technology. As I said, it’s their biggest ace. If it sucks, they can make it better. And they probably have.

Jesse

Nope, AMD is still SUCKING OUT LOUD!!!! AMD is doomed, face it, chump!!!! AMD can release all the slides they want; it CHANGES NOTHING!!!! Intel crushes AMD IN EVERYTHING now!!! Intel Iris Pro dominates in all areas when compared to AMD.

Dylan

You are an idiot. The R7 in the 7850K already beats the Iris Pro 5200 at gaming. And in HSA workloads, the APU trounces anything by Intel, including an i7 with its puny HD Graphics.

doctorpankake

Congratulations for pointing out that a $500 chip is beating a $200 chip.

Excellent job.

Pure genius. Keep it up!

Jesse

No it proves that INTEL IS KING and AMD WILL NEVER USURP Intel EVER!!! AMD is a chump when compared to the awesomeness that is INTEL!!!!

I don’t think anyone is taking Intel lightly; they are, after all, the 800-lb gorilla in this industry. AMD, as the smaller company that has sold off most of its foundries, needs to decide where to allocate resources. Putting their money into their APUs is a good bet.

Furthermore, it also comes down to price: whether AMD can undercut Intel. And Haswell’s performance isn’t that great a leap over AMD’s current APUs; as mentioned by others, Kaveri looks to best Haswell.

One big variable in this will be Mantle. As both MS and Sony are using AMD APUs, Mantle may find traction, and may be able to gain performance advantages as more developers use it. This performance advantage would be difficult for Intel to claw back if Mantle becomes widely used.

Finally, Apple. We don’t know if Apple will move to ARM for their Mac OS 11. That is certainly a big possibility, but the ARM A50s (successors to the A15s) have to match or exceed x86 rivals in productivity applications like FCP and the Adobe suite of programs.

x86 is still clearly a desirable option, as OS X apps retain interoperability, and the question of whether ARM chips can match x86 performance remains open. ARM’s push is unquestionable, but these SoCs make a very strong case to stick with x86.

Marcus Mendez

ARM only has to be good enough on performance: price/performance/watt.

Mariu

Iris Pro bested AMD’s GPUs, especially on the price side, man.
AMD could also put 128MB of eDRAM in an APU and easily beat the unimpressive and hard-to-find Iris Pro.

Marcus Mendez

Design partners? I have thought of that, but one wonders if the rumors of AAPL going all-in on ARM aren’t more likely.

Boudou

The issue is that the Mac’s main draw is its array of productivity apps, such as Adobe Creative Suite, video editing, 3D software, etc.

All that compatibility will be gone when they switch from x86 to ARM. Apple has already got the casual end of the market sewn up with the iPhone and iPad.

ARM and mobile PowerVR cores may become ‘good enough’ for casuals, especially when the Cortex A50s launch, but it’s really not going to offer the performance of x86 with a real GPU. And Apple is positioning the Mac for productivity.

There has to be a real reason for Apple to drop x86; ARM has to offer a real, tangible benefit to throw away a strong suite of productivity applications that it’s worked very hard to get on its system.

If AMD can offer an APU for a competitive price, it becomes the easy choice as they have to change nothing and make no sacrifices to get it to work.

Then again, Apple may be planning on changing the Mac altogether, dropping the traditional desktop to make a new type of work/productivity OS. And if more things move to the cloud, as Adobe seems to be tilting towards, then x86 becomes irrelevant.

Marcus Mendez

Good points. I am in the middle of this debate in terms of questioning why I still use a Mac. I’ve had a Mac Pro, several MacBook Pros, and I am intrigued by the new Mac Pro. But it does seem that there will come a point, say 5-10 years out, where, say at 10nm, Apple has more than enough compute to do it on ARM or PowerVR if they so choose. I think Apple sucks with web stuff; I hate Safari and I try to love it.

Jesse

AMD is nowhere NEAR fast enough. In fact, ARM is faster than AMD these days. That’s how much Suckdozer, Faildriver, and Shitroller SUCK!!! The ARM Cortex-A50 will smoke EVERYTHING AMD has to offer, and then AMD will be thoroughly defeated and thus their destruction will finally be AT HAND!!!!

Joel Hruska

The best-case scenario for Kaveri is that a Kaveri quad-core will no longer be defeated by an Intel dual-core. If you have a dual-core Sandy Bridge, you might get what you’re looking for, especially if you want to buy a simple budget system with a decent iGPU.

john

Another bullshit claim… Piledriver cores are at ~70% of Ivy performance clock-for-clock, meaning with the 20% perf boost in IPC over Piledriver it should be head-to-head with a Haswell chip, albeit at a higher frequency, but the price will more than make up for that as usual. You keep saying dual Sandy beats quad Piledriver; I’d appreciate some benchmarks, not single-threaded performance. Ty.

Dozerman

Those frequencies don’t matter if the TDP is the same. That being said, you also have to remember that AMD’s per-gigahertz performance is about 70% BEHIND current-gen Intel.

john

From the benchmarks, even the ones provided here, pure single-threaded performance of one core is about 70% of Intel’s, not 30%… if it were 30% of Intel’s IPC, AMD would already be totally irrelevant in ANY market segment. Now, multi-threaded performance doesn’t scale well in the benchmarks posted, but I’m fairly sure the tests are rather old (before Microsoft made all the improvements to account for the odd architecture of AMD’s cores, and most probably with default 1333 memory). This means that with the optimizations done by MS, and with the added decoder and all the rest of the tweaks, clock for clock Kaveri should be around Ivy Bridge level, while boosting the frequencies will bring it in line with Haswell (provided AMD’s slides & ES benches hold true).

Joel Hruska

John,

You aren’t clock normalizing. I’ll use Cinebench as an example. The FX-8350 scores a 1.11 in CB11.5’s single-thread test.

Take that score and divide it by the clock speed:

1.11/4.2GHz = 0.264 CBs per GHz.

Now do the same for Ivy Bridge.

1.66 / 3.9GHz = 0.425 CBs per GHz for Ivy Bridge.

Now, some math: 0.425/0.264 = 1.61x.

Conclusion: Ivy Bridge is 1.6x as efficient as Piledriver, clock-for-clock, in Cinebench 11.5. That rises further if we compare Haswell.

Alternatively, we can say that Piledriver is about 60% as efficient as Ivy Bridge.
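A minimal sketch that reproduces the normalization above:

```python
# Clock-for-clock comparison using the Cinebench 11.5 single-thread
# scores cited in this thread.
scores = {
    "FX-8350 (Piledriver)": (1.11, 4.2),  # (CB11.5 score, clock in GHz)
    "Core i7 (Ivy Bridge)": (1.66, 3.9),
}

per_ghz = {name: score / ghz for name, (score, ghz) in scores.items()}
for name, value in per_ghz.items():
    print(f"{name}: {value:.3f} points per GHz")

ratio = per_ghz["Core i7 (Ivy Bridge)"] / per_ghz["FX-8350 (Piledriver)"]
print(f"Ivy Bridge per-clock advantage: ~{ratio:.2f}x")  # ~1.61x
```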

john

Actually, I compared Richland single-core to i3 single-core in Cinebench, but you are correct; the frequency comparison was off by about 10%, so Richland’s IPC is 60% of Ivy’s, basically the same number you reached with your normalization on FX vs. i7.

So we can both agree that Piledriver offers 60% of Ivy Bridge’s performance… If the engineering sample tests/slides/rumors/leaks hold true, then we would need to be 15-20% above Haswell’s frequency with a Kaveri to beat it… which is much better than Richland can provide atm. The ultimate test would be whether a Kaveri @ 4GHz can beat a Haswell @ 3.5-3.6GHz; if it can, I consider it a major step forward in the right direction.

What is a major concern of mine, and I haven’t heard a peep about it, is the memory controller… if that one is as shitty in Kaveri as in Richland… well, let’s just say that would be a big let-down…

Joel Hruska

There’s nothing wrong with the memory controller in Richland as far as I know. The chip’s caches just aren’t particularly fast.

john

Wrong? Well, maybe not… but if you go against a three-way setup like Intel’s, you might as well make some adjustments & improvements, so your decision to stay with two-way seems legit…

APUs would really benefit from improved memory performance… don’t get me wrong, I’m OK with running 2.4GHz memory, but I sense APUs would deplete even higher bandwidths than I can now offer with this RAM.

Joel Hruska

The Core i3-3220 wins 13 tests and ties 8. Even adjusting for the fact that both of the chips I’d prefer to test against are somewhat faster, the end results are obvious. Substitute the Core i3-3225 and the A10-6800K for each other, and the end result is an AMD quad-core that performs very similarly to a fast Intel dual-core with Hyper-Threading enabled.

Finally: AMD has already admitted, candidly, that it will not win a head-to-head comparison with Haswell. Sorry.

john

Yes they admited they are not going for haswell that doesn’t mean they are not going for ivy. I can’t find the methodology behind the tests but apparent even in this is exactly what i said look at cinebench singlethread you will see the 70% of intel core i was talking about. Without knowing the os’ and hw configs for both it is hard to tell however il looks like the apu had a 1.3 memory and game tests were done using a dedicated gpu.

The problem with apus is that they are memory starved by a large margin, i have seen huge improvements when going from 1.3 default to 2.1 and 2.4 ghz. Cinebench in pure singlethread shows c4c 70% perf vs i3 that’s exactly as expected. The aggregate performance shows there were some serious inefficiensies when testing, i expect newer drivers and w8 should scale better. Now, kaveri improves 20% on core performance(not ipc according to their slides!) while being @10% lower frequencies this is explained by the added decoder and a few other tweaks… basically any pd or bd is starving the cores + there was a major inefficiency in the silicon of bd/pd. The new design team delayed the chip in order to fix those.

Now i do have both a 6800k and an i5 ivy… we ran some benchmarks with the softwares we use, content creation video creation compilers and sound creation – we were not particularly exact but both systems behaved similarly… albeit the 6800 was much cheaper and ultimatelly the i5 needed an discrete gpu to cope with multiple monitors smoothly. I compensate the perf/core in pd with good memory and eventually smal overclock.

Tajisi

So, to summarize…
1. If you highly optimize the OS and run certain software in very specific situations, AMD is competitive.
2. You went from saying the chips were going to be competitive with Haswell to saying AMD is targeting Ivy.
3. 20% better performance at a 10% lower clock speed would generally indicate IPC improvement even if not directly stated.
4. If you implement more memory bandwidth (PS4 / XB1 style) performance will increase, otherwise on consumer level systems it will be heavily choked.
5. If you overclock the chip past specifications and use higher end memory you can begin to approach the performance you’d have with a lower end Intel chip at stock.
—
That being said, why bother? The value equation for AMD goes down once you overclock an i5 or an i7. I’d need an FX chip running at 5.5-6.0GHz on phase-change cooling (or better) just to deal with my Haswell at stock. Focusing more on the mobile market and APU segment will do AMD more good, since it can consolidate its resources and try to improve where its branding is already strongest.
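A minimal sketch of that value equation, reusing the ~1.61x per-clock ratio from earlier in the thread; the Haswell-over-Ivy gain and the stock clock are assumptions:

```python
# The FX clock needed to match a stock Haswell, given a per-clock deficit.
ivy_vs_pd = 1.61        # Ivy Bridge per-clock advantage over Piledriver
haswell_vs_ivy = 1.10   # assumed Haswell per-clock gain over Ivy
haswell_ghz = 3.5       # e.g., a stock Core i7-4770K base clock

required_fx_ghz = haswell_ghz * ivy_vs_pd * haswell_vs_ivy
print(f"FX would need ~{required_fx_ghz:.1f} GHz")  # ~6.2 GHz, same ballpark
```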

john

reading comprehension problems much?

– About the OS: I wouldn’t call it highly optimized; Windows 8.1 is a run-of-the-mill, general-public OS. What’s your point here?

– I said THEY are probably targeting IB; I would just increase the frequencies to compensate up to Haswell, probably 10% (which is not that much either).

– I did not say that core performance improvements don’t imply IPC improvements in this case. It was a reference to another article claiming that it’s actually 10-20% improved IPC, but because of the 10% lower frequencies the advantage evaporates… which just isn’t what AMD’s slides tell. But I guess you can read it either way…

– Yes, APUs are incredibly memory starved. From what I’ve seen up until now, the better the memory I threw at it, the better the performance (without a core frequency increase).

The point is that Kaveri + a 10% frequency increase will match Haswell and outright ridicule its GPU with 2133/2400 memory, a combo which will still be way cheaper than an i5 + memory or a discrete GPU to compensate.

I get your value computation, but with Kaveri you shouldn’t need 5.5-6GHz, but instead 4-4.5GHz to beat Haswell @ 3.8, which is next to nothing. Compared to BD/PD the improvement should put AMD back on the map; most probably below Intel, but not by that much. I suspect there will be a 5-10% difference in Intel’s favor.

Marcus Mendez

AMD can’t beat Nvidia or Intel head-on. The only question is whether there are markets they know about that are being underserved by those two, right now or in the immediate future. It’s a great unknown… what’s in the press doesn’t provide too many clues. The R9 290 is interesting, and they’re closer on graphics. Mantle is a nice play, but once again, Nvidia and Intel turned it down.

Mariu

I don’t know what you are mumbling about, but the i3-3220 and the i3-3225 are identical on the CPU side.
The i3 also is not a typical dual-core, as it has HT enabled. Why not compare the i3 with an FX-6300, as they are the same price, my friend?
AMD APUs also offer way better iGP performance and close-enough CPU performance.
AMD’s typical quad-cores like the FX-4300 or 750K are fairly cheaper than the i3 and offer the possibility of overclocking.

Joel Hruska

Because the previous post was referring to Sandy Bridge. I compared the 2600K against the FX-8350 and the 5800K against the i3-3220 because that’s the information in the AT database.

Mariu

The i3-3220 is Ivy Bridge, genius.

The i3-2120, for example, is Sandy Bridge.

Phobos

I count 12 wins for the FX and 8 for the i7 among the ones that matter, if we remove the other garbage benchmarks.

Joel Hruska

Considering I counted every test, we can’t possibly arrive at different numbers unless one of us mistook the bars.

I’ve just recounted. The Core i7 wins 20, loses six, ties one.

You cannot possibly have more wins for the FX than six. That’s six, counting every test. And in any event, the only benchmark I’d throw out of that suite is Cinebench R10, for age purposes.


Marcus Mendez

Kaveri wins in special cases (HSA/Mantle), loses others. IMHO Kaveri is not good enough, but if I were on a budget, it might be. I think that is the value proposition. AMD has unique IP, but it’s not class-leading, and they can’t command top prices, it seems.

Angel Ham

So… Are any of your contacts telling you whether AMD is going to release something similar to the APUs found in the PS4/XB1 for the PC market in the near future?

Joel Hruska

Not that I’ve heard. You’re not alone in being curious. ;)

Jesse

It’s official then: AMD IS DOOMED!!!!!

john

It’s official: you’re a worthless piece of internet junk… grow up, damn it. I’m tired of seeing you fill the comment section with nothing but AMD doomsday predictions…

Jesse

Oh really NOW? Because the way I see it, AMD IS SCREWED, and thus they should be, because they haven’t made A SINGLE good product IN YEARS!!! INTEL IS KING and they have lowered power usage with every generation and will soon use about as much as ARM, while AMD will NEVER get there. AMD should have tried to compete in smartphones and tablets and NOW IT’S TOO LATE AND THEY ARE F**KED!!!!

Jesse

It seems that x86 is also doomed at this point. With Intel as the only x86 chip maker x86 has no future!!!

Mike Chambers

Haha, x86 is going nowhere for a long time. Quit saying stupid things.

Jesse

Nope, you are wrong. x86 is fading away into obscurity with the advent of Linux and Android.

Joe Joejoe

Linux is x86. Android is just an ARM variant of Linux. ARM is something that will never take over simply because many complex codes cannot be executed on it at all. Even if you were to add the capability to execute such code, you’d have to bump up the power usage significantly, which would essentially negate all the purposes of ARM…and it wouldn’t even be an ARM anymore, it’d be an x86.

Mike Chambers

Yeah, x86 is so obscure right now. The ARM architecture is just inherently less efficient per clock cycle. It can’t match an x86 clock-for-clock. It will always be huge in the embedded industry, but would have a tough time replacing x86 in desktops.

Mike Chambers

BTW, Android IS Linux.

James Tolson

This does not make sense at all. AMD has a huge business in performance Opteron cores in the server and large computing market; these processors are the pinnacle of the latest technology, speed, and efficiency. They have to be, to stay head-to-head with the competition… From these, performance desktop processor versions are made; from those, midstream APUs are made; and from them, low-end embedded processors are made. Baking silicon is an art that has yields; this method produces a chain of products that processor manufacturers need in order to fit those chips into the right product. To say AMD is giving up on performance desktop processors is absolutely ridiculous…

They may, however, choose to alter their product line and focus on just making APUs, Opterons, and low-end parts… but this is not really ditching performance processors, because power users, if they wished, could just use Opteron-based boards for their rigs.

Dozerman

“Pinnacle of the latest technology, speed, and efficiency”? The only reason Opterons are being used in supercomputers anymore is because they’re cheap and just fast enough to feed data to the GPUs, the real stars of the show.

massau

The HPC server market is too small. They will focus on microservers for now. I suspect they will do this with custom ARM A57 processors (supporting HSA).

me987654

I hope the 8-core units don’t go away. I’ve had fantastic performance from my 8-core Vishera-based ESXi server… and it was much cheaper to build than anything roughly equivalent from Intel.

Joel Hruska

They’re not going away, near as I can tell. They’re just not being updated.

Wakko Waru

As long as AMD makes money on its flagship CPUs, they won’t can them. Intel sells very few “E” or “K” series CPUs, but they probably make 30x the profit on one of those compared to an i3 when you factor in development costs.
I currently have an Intel CPU in both of my machines, but I would love for AMD to come up with something better.

Marcus Mendez

As consumers, we usually end up starved for choice when governments don’t rein in monopolies with suitable corrective penalties. The future for us, I’m afraid, is ARM’s wimpy cores being built up to challenge the big-core x86 dinosaurs. They’re not there yet, so we have to wait. Well, they’re there if you do most of your computing on your cellphone/tablet; at least on performance per watt, I think. The A7 is awesome; the only thing that kills me are the iPad Air/iPhone 5S random reboots (buggy software, it seems).

john

ARM’s perf/watt advantage is diminishing every year as Intel actively tries to penetrate the market. Remember, a Snapdragon 800 uses about 4W when benchmarking; that means a TDP of probably about 6W just to be on the safe side (as Intel & AMD set it)… and both Intel and AMD have chips fitting the bill… So the problem is not that!

The problem is scaling & power management, not full power draw (read: battery life). For instance, your phone will rarely ever use two cores, let alone four, and the one active core will usually sit at about a tenth to half of maximum frequency when doing “phone stuff”. Remember, a phone like the S4, with its ginormous battery, depletes the battery in half an hour when gaming (I know screen power draw is an important factor, but you get about 3 hours of reading versus half an hour of full-throttle gaming), and the phone gets so hot you can barely hold it. Also, ARM manages core frequency variance of almost a factor of ten. Having said that, AMD’s approach to the problem is the only sensible one: chop the fatty x86 down to bare bones and reshuffle the execution over DSPs and GCN cores; this will possibly come with Excavator: dynamic context switching. Now, if they can handle core variation and core shutdown like ARM does, x86 should soon be extremely competitive with ARM. I don’t get Intel’s approach to the problem… shrink the node / chop entire instruction sets out of the chip (which in most cases means compiling everything with those instructions disabled, or using a special compiler). AMD is reshuffling the compute units… Intel is forcing the market to recompile or, alternatively, use very inefficient execution paths… To me, AMD’s approach makes far more sense, even though I can understand Intel’s predicament in not having a GPGPU ready to offload said execution.

Marcus Mendez

interesting. thx

Sam al

I wonder how tech review sites will review the Kaveri APU with HSA. How will they show the difference between HSA disabled and enabled? Is there software that is already optimized to use HSA tech?

Joel Hruska

Virtually none. In theory, AMD may seed a few applications to reviewers.

Sam al

That’s what I thought. But in order to show the performance of this APU at its maximum… they will need to use some HSA-enabled software. I wonder if Cinebench, SYSmark, etc. are working on implementing this in future versions.

Joel Hruska

No. The latest version of Cinebench just came out last month.

You have to remember, HSA is going to take time. A workload that normally runs on the GPU isn’t automatically faster because HSA. A CPU-centric workload isn’t going to be automatically faster because HSA. Some tasks will never lend themselves well to cross-scheduling, ever.

At APU13, AMD didn’t have a killer app or feature to show off. Most of the use-cases focused on things like Java. Gaming was barely a mention (all the gaming talk was for Mantle). I saw a few small application demos that were impressive, but that was it.

Sam al

True. I guess time will tell how real the HSA performance breakthrough is.

Joel Hruska

I saw one demonstration of HSA running at APU13. It was a demonstration of using a computer webcam to detect edges and shapes around the various objects on the screen.

I got the guys showing it to set it up on a Trinity laptop and the Kaveri desktop in an apples-to-apples comparison.

Running on Trinity, the demo could push what looked like 8-12 FPS, tops. Running on Kaveri, I’d say the rate of capture was maybe 25-30 FPS. Not perfectly smooth, but much improved above Trinity. According to the team, this was the result of using shared memory directly between the two components. The program wasn’t fully optimized yet, because they’d only had the chip for a few weeks, so that was the one optimization they were able to provide.

That’s the only tech demo I saw on Kaveri that gave me any kind of feel for what the future of the tech might look like.

Sam al

cool. Thanks for sharing :)

Jesse

It’s official: AMD FAILS AT LIFE!!!!

Jesse

SUCK IT AMD SUCK IT DOWN HARD!!!!

rishi dev

My FX-6300 at 4.5GHz, made on a 32nm node, has trouble keeping up with an X6 1090T (45nm node) at 4GHz and 2800MHz NB frequency, but only in gaming. The FX-6300 cannot handle CrossFire and SLI vs. an X6 1090T.

Gone are the days of compression of components on a die. As I see it… semiconductor companies like AMD and Intel can scale performance radically anywhere between 45nm and 20nm for the next 12 years at least, but this won’t happen: Moore’s law has to keep up, and sales are more important than efficient coding.

At anything less than 20nm, clock speeds suffer: too much heat and current leakage… the days of silicon are practically over…

Building a chip on a smaller node is like banking on the success of plastic surgery done on a nose. Nothing more than a fad.
Kind of like “I’ll have sex with that girl because she has a better nose”.

ARM has already proven this. What Intel couldn’t do with a Pentium 3 or even a Core 2 Duo: you can run a game like Dead Trigger that would otherwise dictate high-performance hardware. With the Nvidia Shield you can pump it to a 40" LCD without quality loss or stutter.

Kaveri will have 12 cores that can all compute simultaneously;
that in itself spells the end of 8-core, big-power-draw chips.

We won’t need them, as the GPU will offload a lot of work from the CPU.
They may make a 6-core APU that would effectively have 18 cores of computability;
they should call it X-FX if they make it.
The days are numbered for the big blocks with high power draw; they have become redundant.

Joel Hruska

No, it won’t.

There are no non-APU Kaveri chips and no plans for a six-core Kaveri APU. I don’t know who started this rumor, but you can look at AMD’s public roadmaps yourself. They’re linked in this story.

AMD has no plans to bring out a Steamroller-based Opteron with 8+ cores.

David Stanley

The CPU is not going anywhere: 5% improvements on Intel’s side
and 15% or better on AMD’s side.

But HSA and Mantle are where the big gains will come;
they will enable low power (70 watts or less) and 100% gains in computability.

Joel Hruska

Yeah…100W TDP on the 7850K says not so much.

NarooN

And then there’s the 45W A8-7600, which beats 100W Richland in most cases. Kaveri is a perf/watt-focused part that excels in low-TDP situations, not so much at high TDP.

danwat1234

AMD’s mobile APUs, even the newest Kaveri ones, are limited to 35W TDP, whereas Intel goes all the way up to 57W! No wonder AMD’s mobile x86 performance sucks. They need to ramp up the TDP. Some people don’t mind a lot of heat when they want full power out of their laptops.

ZAllenMillington

I seriously do not see ARM taking off in the personal desktop and laptop space. Many sites still rely on Adobe Flash. That is going to be a serious hurdle to overcome unless Adobe produces an ARM-compatible version of Flash.
