Intel’s erratic Core M performance leaves an opening for AMD

When Intel announced its 14nm Core M processor, it declared that this would be the chip that eliminated consumer perceptions of an x86 “tax” once and for all. Broadwell, it was said, would bring big-core x86 performance down to the same fanless, thin-and-light form factors that Android tablets used, while simultaneously offering performance no Android tablet could match. It was puzzling, then, to observe that some of the first Core M-equipped laptops, including Lenovo’s Yoga 3 Pro, didn’t review well, dinged for performance ranging from pokey to downright sluggish.

A new report from AnandTech delves into why this is, and comes away with some sobering conclusions. Ever since Intel built Turbo Mode into its processors, enthusiasts have known that “Turbo” speeds were best-case estimates, not guarantees. If you think about it, the entire concept of Turbo Mode was a brilliant marketing move. Instead of absolutely guaranteeing that a chip will reach a certain speed at a given temperature or power consumption level, simply establish that frequency range as a “maybe” and push the issue off on OEMs or enthusiasts to deal with. It helped a great deal that Intel set its initial clocks quite conservatively. Everyone got used to Turbo Mode effectively functioning as the top-end frequency, with the understanding that frequency stair-stepped down somewhat as the number of active threads increased.
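The stair-stepping behavior described above can be sketched as a simple per-core-count lookup. The bin values below are invented for illustration; they are not Intel's actual turbo tables, which vary by SKU.

```python
# Toy model of per-active-core turbo binning (hypothetical numbers).
# Real turbo tables are SKU-specific and not publicly guaranteed.
TURBO_BINS_MHZ = {1: 2900, 2: 2700, 3: 2500, 4: 2300}

def effective_clock(active_cores: int, base_mhz: int = 1200) -> int:
    """Return the clock the chip *may* reach for this many active cores.

    Only the base clock is guaranteed; turbo bins are best-case, and an
    unknown core count simply falls back to base.
    """
    return max(base_mhz, TURBO_BINS_MHZ.get(active_cores, base_mhz))

for n in range(1, 5):
    print(f"{n} active core(s): up to {effective_clock(n)} MHz")
```

The key point the model captures: the single number printed on the box (the one-core bin) is the ceiling, not the typical operating frequency.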

Despite these qualifying factors, users have generally been able to expect that a CPU in a Dell laptop will perform identically to that same CPU in an HP laptop. These assumptions aren’t trivial — they’re actually critical to reviewing hardware and to buying it.

The Core M offered OEMs more flexibility in building laptops than ever before, including the ability to detect the skin temperature of the SoC and adjust performance accordingly. But those tradeoffs have created distinctly different performance profiles for devices that should be nearly identical to one another. In many tests, the Intel Core M 5Y10 — a chip with an 800MHz base frequency and a 2GHz top clock — is faster than a Core M 5Y71 with a base frequency of 1.2GHz and a max turbo frequency of 2.9GHz. In several cases, the gaps in both CPU and GPU workloads are quite significant — and favor the slower processor.
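A toy model shows why a nominally slower part can win on sustained workloads: if an OEM's skin-temperature limit cuts turbo residency short, the chip with the higher turbo clock can still deliver less total work. The turbo-residency budgets below are invented for illustration, not measured values from either laptop.

```python
def sustained_work(turbo_mhz: int, base_mhz: int,
                   turbo_seconds: float, total_seconds: float = 60) -> float:
    """Total clock-seconds delivered over a run: turbo until the
    chassis thermal limit bites, base clock for the remainder."""
    turbo_time = min(turbo_seconds, total_seconds)
    return turbo_mhz * turbo_time + base_mhz * (total_seconds - turbo_time)

# Hypothetical chassis budgets: a 5Y10 (800MHz base / 2GHz turbo) in a
# chassis that sustains turbo for 50s vs. a 5Y71 (1.2GHz / 2.9GHz)
# whose skin-temperature cap allows only 5s of turbo.
work_5y10 = sustained_work(2000, 800, turbo_seconds=50)
work_5y71 = sustained_work(2900, 1200, turbo_seconds=5)
print(work_5y10 > work_5y71)  # the "slower" chip delivers more work here
```

Under these assumed budgets the 5Y10 comes out ahead, mirroring the counterintuitive benchmark results: chassis design, not the spec sheet, decides sustained performance.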

While this issue is firmly in the hands of OEMs and doesn’t reflect a problem with Core M as such, it definitely complicates the CPU buying process. The gap between two different laptops configured with a Core M 5Y71 reached as high as 12%, while the gap between the 5Y10 and the 5Y71 was as high as 36% in DOTA 2. The first figure is larger than we’d like; the second is ridiculous.

None of this means that the Core M is a bad processor. But it’s clear that its operation and suitability for any given task are far more temperamental than has historically been the case. Even a 12% difference between two OEMs is too high for our taste; if you can’t trust that the CPU you buy is the same as the chip you’d get from a different manufacturer, you can’t trust much about the system.

Is this an opportunity for AMD’s Carrizo?

Officially, AMD’s Carrizo and Intel’s Core M shouldn’t end up fighting over the same space; the Core M is intended for systems that draw up to 6W of power, and Carrizo’s lowest known power envelope is a 12W TDP. That doesn’t mean, however, that AMD can’t wring some marketing and PR mileage out of the Core M’s OEM-dependent performance.

When AMD talked about Carrizo at ISSCC, it didn’t just emphasize new features like skin-temperature monitoring; it also discussed how each chip would use Adaptive Voltage and Frequency Scaling (AVFS), as opposed to Dynamic Voltage and Frequency Scaling (DVFS). AVFS allows for much finer-grained power management across the entire die; it requires incorporating more control and logic circuitry, but it can deliver better power savings and higher frequency headroom as a result.
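The difference can be sketched with a toy model. Under die-wide DVFS, one voltage rail must satisfy the worst-case domain; per-domain AVFS lets each block sit at its own minimum stable voltage. The domain names and voltages below are made up for illustration, and dynamic power is approximated as proportional to V².

```python
# Toy comparison of die-wide DVFS vs per-domain AVFS (invented numbers).
# Each domain has a minimum stable voltage at the target frequency.
domain_vmin = {"core0": 0.90, "core1": 0.95, "gpu": 0.85, "uncore": 0.80}

def dvfs_power() -> float:
    """One shared rail: every domain runs at the worst-case (highest) Vmin."""
    v = max(domain_vmin.values())
    return sum(v ** 2 for _ in domain_vmin)  # dynamic power ~ V^2 per domain

def avfs_power() -> float:
    """Per-domain adaptation: each domain sits at its own Vmin."""
    return sum(v ** 2 for v in domain_vmin.values())

print(f"DVFS: {dvfs_power():.3f}  AVFS: {avfs_power():.3f} (arbitrary units)")
```

The extra sensing and control logic AVFS requires is the price of closing that gap; the model only shows why the savings exist, not their real magnitude.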

If AVFS offers OEMs more consistent performance and better characteristics (albeit in a higher overall power envelope), AMD may have a marketing opportunity to work with — assuming, of course, that it can ship Carrizo in the near future and that the chip is competitive in lower power bands to start with. While that’s a tall order, it’s not quite as tall as it might seem — AMD’s Kaveri competed more effectively against Intel at lower power than in higher-power desktop form factors.

Leaving AMD out of the picture, having seen both the Core M and the new Core i5-based Broadwells, I’d have to take a newer Core i5, hands down. Core M may allow for an unprecedented level of thinness, but the loss of ports, performance, and battery life doesn’t outweigh the achievement of stuffing an x86 core into a form factor this small — at least, not for me. Feel differently? Sound off below.


And to think that they sell them for $281… I hope AMD can get into this very profitable segment.

Matthew Travous

Meet Micro-APUs: http://www.amd.com/en-us/products/processors/notebook-tablet/tablet-apu
I know they could easily be beaten by the Core M with the right setup, but the A4 can, and the A10 could, certainly beat the Atoms and Celeron-Atoms that currently rule the low-TDP marketplace for Intel (very few computers, two models I think, use the A10, and they’re both hard to get).

This is a tablet; Intel spends more than a billion dollars per quarter subsidizing tablets (and only tablets).

I wonder how much Intel subsidizes them for using a Core M.

Busybee

If those subsidies are gone, then prices may increase and go back to the $500 range of a few years ago, and that was just for Intel Atom-powered tablets. For example, the brand-new Microsoft Surface 3 tablet with the new Intel Atom x7 is around that $500 range (not subsidized?). All we can hope is for Intel to bring the prices of its chips down; then maybe we can see cheap tablet pricing maintained.

Marc GP

Or we can have competition, which wouldn’t leave us depending on Intel’s kindness.

Tablet pricing will stay low as long as there are ARM processors.

Busybee

Hard to say, especially with Microsoft also chipping in part of the subsidies to the tablet ODMs (with the Windows 10 launch coming soon). Also, the majority of ARM chips inside tablets are still using the old architecture (not ARMv8). Only some are using the newer architecture, and often with Cortex-A53 cores only.

Marc GP

Yes, it’s very odd. Are there high-end Android tablets anymore? The last tablet I would have considered buying was the Shield Tablet, and that’s pretty much a niche product; it will never be mainstream.

Yes, I’m also surprised by the price of the new Surface 3. When you add the keyboard cover and the docking station, it gets ridiculous.

Busybee

Yes, ridiculous indeed, since at $499 you can already get an Intel Core M-powered tablet. Microsoft wants to be the next Apple…

Walkop

They’re not aiming at you. They’re aiming at artists, creatives, and students that would get an iPad but are totally stifled by its limitations. The Surface is a premium product, it’s no cheap Dell Venue.

No chance of this happening until 14/16nm. Even then, AMD will be a “cheap alternative” that gets crippled by the other cheap parts in the computer. The only way AMD can break this cycle is to make a product so much better than Intel’s that even Intel’s contra-revenue strategy can’t save it.

Laptops don’t need that much processing power for a regular person. It’s basically features, design, and battery life that sell these days. I still have not seen a decent AMD laptop with a large enough battery, a nice backlit keyboard, an SSD, and a decent high-res screen. There are some decent ones, like the HP EliteBook 700 series. But it seems that HP promotes the EliteBook 800 series (the Intel versions) 95% of the time. So even when there’s a decent AMD laptop you can buy, it’s like trying to find a hidden back door.

Mike kizaberg

Maybe Samsung can save them by acquiring them outright.

Brian Kram

You must really enjoy spending $300 on a quad core.

Mike kizaberg

lol, are you trolling or just trolling hardcore?

Brian Kram

So let me make sure that I understand your plan here. Your idea for keeping AMD alive and in the x86 business is for Samsung to buy them, have AMD’s non-transferable x86 license magically transfer to Samsung, and then have Samsung, a company whose business model is based almost entirely on mobile and consumer grade products, spend billions of dollars developing enthusiast and server grade processors.

Ben Mitchell

The x86 license would remain intact, but they would have to renegotiate for it first.

Sweetie

You just contradicted yourself. The license would be voided/broken. Samsung would have to convince Intel to reestablish it.

Alex Piontek

That can be easy. Do another amd64 fight.

Ben Mitchell

Intel is required to try to reestablish it, or it loses some of the patents it gained from the deal and gets fined for having a monopoly.

Sweetie

Intel: Thank you, buyer of AMD, for expressing interest in renewing the x86 license. At this time, our market analysts have determined that the value of the license is 3x what AMD paid for it.

Buyer: That’s not good-faith negotiation!

Intel: In the spirit of cooperation we have reduced the price to 2.5x.

Government: Intel has made concessions, which shows that it is negotiating in good faith.

I know AMD’s IPC is lower, but it’s not half (or even less) of Intel’s. Considering how the benchmark results seem to be all over the place, I’m guessing it has a lot to do with Intel’s processor hitting Turbo Mode during the benchmarks, since Turbo more than doubles its frequency. That would explain why it’s so inconsistent.

I was also wondering if the new HDL (high-density library) implementation has hit some hard limits (especially in thermals versus the internal clock speeds of each part of the CPU core). That’s the main difference from the design of the previous Mullins cores…

Of course — but that doesn’t explain why the A4-5000 hammers it if both are 15W chips. As far as I know, Carrizo-L was *not* reimplemented on HDL. I can check that, though.

So, a few things about power consumption in chips.

1) That 2W figure is SDP (Scenario Design Power), not TDP, and the two are not the same thing.

2) Notwithstanding #1, you can’t compare TDP between AMD and Intel anyway. Intel defines TDP as “the average amount of power a chip will consume in a representative workload.” AMD has historically defined TDP as “maximum SoC power consumption.”

Because the two companies use very different definitions, you can’t compare the two figures.

Despite this, I am certain that yes, the 14nm Atom x7 will draw less power than the 28nm Kabini. AMD never designed the Kabini / Mullins / Carrizo-L family to compete with Atom at the bottom of that chip’s power capabilities — just to offer competitive perf/watt and better raw performance.

The base clock on the 8700P is 18% higher, but single-threaded perf is 34% faster. Multi-threaded perf is 44.5% better.

We don’t know TDP on the 8700P, but it has the same 4 CPU cores + 6 graphics cores config as the A10-7300 (I couldn’t find the A10-7400 listed). It trades shots with the highest-end APU from Kaveri’s stack, the 35W FX-7600P.
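As a sanity check on those figures, the per-clock (IPC) improvement implied by an 18% base-clock gain and a 34% single-threaded gain works out like this:

```python
clock_gain = 1.18     # 8700P base clock vs. the compared part, +18%
st_perf_gain = 1.34   # single-threaded performance, +34%

# Per-clock improvement implied by the two figures: performance scales
# as (clock ratio) * (IPC ratio), so divide one by the other.
ipc_gain = st_perf_gain / clock_gain - 1
print(f"Implied per-clock gain: {ipc_gain:.1%}")
```

Roughly a 13-14% per-clock improvement, i.e. the uplift is not coming from frequency alone.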

BUT… it doesn’t measure GPU performance! The A10 most likely murders the Core M in graphics.

Joao Ribeiro

AMD just needs to go Bulldozer-style on this chance and fight fire with fire. By this I mean: sell the chips with a 2% or so profit margin and take over the segment in a bold, unprecedented move taken from Intel’s own playbook. They need to do this in a way that assures they keep this market in the future, taking the chance to rebuild a name worthy of praise, driving people to them now and in the foreseeable future.

Dan Rizzatz

AMD doesn’t need to cut their margins.

Companies go under when they start sacrificing market plays for margin shrinkage.

Lowering costs isn’t a long-term solution; it’s a band-aid on a bullet wound.

I never look at Turbo frequency, only base frequency. The advertised Turbo frequency on a desktop CPU, say a quad-core, is only reached when just two cores are in use.
OEMs should be allowed either all the “power” there is (ARM) or none at all (desktop CPUs, server CPUs).

All in all, when shopping for a CPU, buy according to base frequency.

Joel Hruska

Except in this case, the base frequency on the 5Y10 is 800MHz vs. 1.2GHz for the 5Y71 — and the 5Y10 is faster in a number of tests.

Your evaluation method wouldn’t work, either. I’m not saying it’s *wrong*, but it still misses the truth of this scenario.

TxRx100

OEMs need to learn that I will buy a computer based on the CPU’s base clock; I will not wait for some review site to show me performance. They should give the max frequency under normal conditions on their product pages, but they never do.

Joel Hruska

If you buy without waiting to evaluate performance, you’re going to get burned. It shouldn’t be that way — but it is.

TxRx100

If OEMs would give me the Turbo frequency (the one they are actually going to use, not the maximum the CPU can reach) and not only the base frequency, I could evaluate performance on my own.

warcaster

The x86 “tax,” or rather the Intel tax, is that this thing costs $280 and works almost as “well” (or poorly) as a $30 ARM chip.

As for AMD, they really need to use Samsung’s 14nm THIS YEAR if they hope to be close to competitive in performance per watt. But I’m sure either way they’ll at least be competitive in price/performance, going by Intel’s prices.

Joel Hruska

I’m not aware of any 14nm plans this year. Carrizo is definitely 28nm; AMD Zen won’t debut until 2016, and Samsung’s 14nm is a low-power process that isn’t meant for GPUs, as far as I know.

AMD might do a 16nm chip at TSMC, but that process isn’t ready yet. Think end-of-year 2015, if then.

Project Skybridge supposedly introduces 20nm and HSA dual platform support between AMD and ARM, but AMD has been very, very quiet about it since the unveil a year ago. The Analyst Day next month will tell us more.

Busybee

For your information, there were already some leaked benchmarks for AMD’s Carrizo and Carrizo-L at the Geekbench site. Examples below…

Look, you Intel paid shill. Your post just proves my point even harder. Their only test equipment was three Mac Minis, ALL USING INTEL CPUs. They didn’t even run a cross-platform test on an AMD anything.

You know what AMD pays me? Nothing. I understand that their product is engineered to be superior to the garbage you’re trying to peddle here about Carrizo losing to an Atom equivalent. GTFO

Busybee

First you couldn’t blame the compiler he used (because he’s not using ICC), then you couldn’t blame the developer (because he left Intel to be independent), and now you’re ranting that his equipment doesn’t include any AMD processors? Then you called me a paid shill? It simply proves you must be stupid or something. Read the archived blogs; for example, he even tested it on a Sony PlayStation 3 (http://www.primatelabs.com/blog/2007/05/playstation-3-performance-may-2007/) and an Xbox 360 (http://www.primatelabs.com/blog/2007/08/xbox-360-performance-august-2007/), and those were IBM POWER-based processors. The cross-platform benchmark also runs on other operating systems like Android, which includes smartphones with ARM chips. That’s why there are lots of ARM benchmark results in his database as well. And he doesn’t even own, nor has he had access to, most of the devices in that database.

I think this is a bit misleading. There is a lot more to the overall performance of a computer than just the CPU. You have to take into account the memory, hard drive, and chipsets that connect everything to the CPU (and don’t forget bloatware). Ever open up a Lenovo computer? You will see the bare minimum needed to hit the marketing stats on the outside of the box. Their motherboards are the cheapest Chinese labor can provide…

Joel Hruska

CodeJunkie,

There is a great, great deal more to performance than simple processor speed, but if you check the Anandtech results, you’ll see that they account for this with a variety of tests that can isolate specific characteristics. In this case, the Lenovo system is much slower because Lenovo chose a maximum temperature of just 65C for the SoC core, vs. 85-90C for other solutions.

That’s a perfectly valid choice and Intel’s design documents are meant to enable this kind of flexibility, but it complicates the consumer’s research. The chip in the Lenovo isn’t equivalent to the chip from another vendor.

Joao Ribeiro

Yes, a computer’s performance is not just the CPU, I completely agree, but more and more it becomes just that, as more and more components move inside the CPU, now in most cases called an SoC.
In a way I like this, for what it enables portability-wise, like powerful mobile phones and the like. But I also have a deep dislike for this approach, as I love to have a system as modular as possible, like in the days of old, when you’d pick even the HDD controller for your self-assembled machine. This was true up to the last 486 machines with VESA Local Bus slots. (I once built a 486DX@50MHz with 16MB RAM using a Promise VESA Local Bus HDD controller with 8MB of RAM that had a 286@16MHz as its “brains.” It was like having a second computer just to speed up your hard drives, and it was really, really fast. That machine was a Novell NetWare 3.12 server for a commercial network and hosted the invoicing and accounting software databases; the darn thing ran fast thanks to that VLB wonder of an HDD controller.)
Nowadays, you can no longer cherry-pick your hardware with such granularity. You’ll always end up with unwanted or unneeded stuff on your motherboard if you want to use special controllers like the one I mentioned, which is wasteful: it puts resources, drivers, and such on a system never to be used, doing nothing other than wasting CPU cycles and bandwidth…


i9

Agreed. Not sold on Core M yet.

Alex Alexandrewitsch

That’s Intel bullshit! I HATE this company! I got an Intel processor, and I don’t know where the performance is. Fanless performance is so low: 10 seconds of it, and after that it’s dead.

Matt Menezes

I still think this is on the OEMs more than Intel. Intel is offering the highest-end x86 processor that these new form factors and thermal envelopes can accommodate. However, the design of the device matters a lot. If you want the chip to run at a higher clock, cool it properly (or use metal instead of plastic, like the AnandTech article pointed out). The chips are capable of higher clocks; it’s just that the OEMs are trying to put them in crazy-thin form factors with no thermal headroom, where heat becomes a huge issue and the chip throttles fast.

Javier Martinez

I agree with you. I find the article terribly misleading for blaming the processor for something that’s in the OEMs’ hands. The processor offers a lot of design flexibility to OEMs (to increase differentiation), but at the end of the day it’s their call what platform they put it in.

Randall Dorman

It’s Netflix… if you can’t watch Netflix on your computer, it’s time to throw it in the trash.

Randall Dorman

You’re failing to understand… my Windows 98 PC from nearly 20 years ago could run Netflix. If her computer is that sluggish, it’s because she’s looking at porn and getting viruses… check the cookies and history, lol. The i3 is a good processor.

Steve Smith

I had an HP Split x2 with a Core i3-4010Y, and it was dreadful, to say the least. HD YouTube videos would grind it to a halt, office work was slow, and if you had more than 4 tabs open in Chrome, it would start burning up. Also, your comment about her/his child and what she watches on her laptop is just weird!

Walkop

Chrome + video = BAD. DO NOT USE. No CPU maker currently supports Chrome’s video codec (VP9) in hardware, so it cannot be hardware accelerated, and it wastes massive amounts of resources. h264ify is a Chrome extension that switches the streaming format to H.264, IIRC…

The Core 2 I handed down to my parents gives my mother no issues with Netflix.

Guest

The Micro T is too slow to compete with this chip, but I could see AMD making a challenge. At higher resolutions, the 25W 5350 APU beats those scores in DOTA 2.

This also means it’s likely the 15W Puma-based APUs are faster as well. Still a bit too power hungry, but add some more optimisations and maybe a die shrink?

Matt Menezes

It only uses 5x too much power… ;-)

Guest

True for the 5350, but moving Puma over to GF drops power consumption by 30%, so a 15W Puma becomes 10.5W. Shrinking to 14nm should cut another 35%, giving 6.825W. That’s still more power, but it’s acceptable for a lot more devices than what AMD currently has.

Of course, that’s only a rough estimate based on marketing material from GF.

The top-end Puma GPU (A8-6410) is clocked 200MHz higher than the 5350’s, so the GPU frequency could be reduced to cut power consumption even more.
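The rough scaling in the comment above works out as follows (both percentage figures are the commenter's reading of GlobalFoundries marketing material, not measured numbers):

```python
tdp = 15.0                           # starting 15W Puma part
after_gf = tdp * (1 - 0.30)          # claimed 30% drop moving to GF's process
after_14nm = after_gf * (1 - 0.35)   # assumed further 35% from a 14nm shrink

print(f"{after_gf:.3f} W after GF, {after_14nm:.3f} W after 14nm")
```

Note the two reductions compound multiplicatively (0.70 × 0.65 ≈ 0.455 of the original), rather than adding up to a 65% cut.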

http://juanrga.com juanrga

The x86 tax cannot be eliminated with an x86 processor, but Intel and ‘friends’ have been selling hype since 2006 (http://www.pcmag.com/article2/0,2817,2397964,00.asp) about how Intel is close to killing ARM. It’s like the recurring claims that AMD is finally being killed by Intel; it never happens.

Guest

I really hope the market could agree on one instruction set, preferably something closer to x86, for consumer-grade CPUs. If you need a chip for a supercomputer or your motherboard, sure, use whatever you want, but this ARM vs. x86 BS only hurts people who can’t afford an expensive tablet, desktop, laptop, and phone all at once. Give me the ONE device to rule them all.

carol argo

Synchronization is going to be a huge deal. Some AMD chips have a lot of aces up their sleeves; testing will be needed, but some chips are beasts.


Use of this site is governed by our Terms of Use and Privacy Policy. Copyright 1996-2016 Ziff Davis, LLC. PCMag Digital Group. All Rights Reserved. ExtremeTech is a registered trademark of Ziff Davis, LLC. Reproduction in whole or in part in any form or medium without express written permission of Ziff Davis, LLC is prohibited.