Mobile is a work in progress for Intel, but at least it's finally "really there," says new mobile chief

Intel Corp. (INTC) has big plans for x86 smartphones. Its capable first showing, Medfield, demonstrated that Intel could make a decent smartphone system-on-a-chip, even if besting the cream of the ARM Holdings plc (LON:ARM) licensee crop was a work in progress.

I. x86 Windows Phones? Perhaps.

Intel's initial efforts have focused on Android. But the company says it's also leaving the door open to another licensed third-party operating system -- Microsoft Corp.'s (MSFT) Windows Phone.

Hermann Eul, president of Intel's Mobile Communications Group, spoke to the IDG News Service at Computex 2012 about the possibility. He comments, "We would be [interested] when we see [the Windows Phone] market has a good chance to return our money that we have invested into this. Our roadmap has devices that can support Windows also on phones. So we can do that. The hooks for doing that [are] there."

Intel is also a backer of Tizen, alongside Samsung Electronics Co., Ltd. (KSC:005930), but Samsung has yet to launch devices running the experimental OS. Of the nebulous project Intel would only say, "The current trend of statistics is pretty clear, Android is gaining the largest share of the market so that is where the money is. We support Tizen as well, we haven't announced any product on this, but being in the Tizen alliance it's clear we are also engaged there."

II. From Competitor to Champion: How Intel Plans to Step up Its Mobile Game

Currently, Intel is selling only a single smartphone chip SKU -- the single-core Z2460 (a second chip, the single-core 1 GHz Z2000, has also been offered, but Intel has found no buyers for it yet).

But later this year a new chip -- the Z2580 -- will launch. It will move to a dual-core design and add on-die HSPA+ and LTE. Then in 2013 Intel's best chance at dethroning ARM, the 22 nm die shrink Merrifield, will land. Intel hopes to follow up with an improved version of the Z2000 for lighter-weight budget smartphones and feature phones. And then in 2014 it plans both an architecture refresh and a die shrink to 14 nm.

For now, despite lacking volume, Mr. Eul says he considers the project a success in that it's generating buzz and dispelling misconceptions about Intel's mobile chances. He states, "We see substantial interest in our platforms in particular after customers really see the devices in the market and see Intel is really there. With that all the badmouthing on power consumption, and cannot do it, and so on is put to rest."

Intel's efforts in 2012 have impressed, but the company has a tough road ahead if it wants to go from simply "really [being] there" to being a chip that manufacturers will pick over rival designs from Qualcomm, Inc. (QCOM) or NVIDIA Corp. (NVDA) for their smartphones.

IMO, this is the future of mobile computing, especially as phone processors get to the point where they can run full desktop OSes at proper speeds, and as storage technologies (specifically the NAND chips used in SSDs) become relatively cheap while offering the speed and space savings to give phones a good amount of fast storage.

These two things will make it so you have everything on your phone and that phone will then have connectors to be plugged into a tablet, laptop, or desktop.

Hell, with hot swap technologies, you could even leave your blazing fast desktop processors in a dummy desktop, then plug the phone in to instantly have more power. Even if you'd have to shut it off and on, the instant on electronics market is getting better too...

For everyone cheering how this represents the impending triumph of Intel over ARM, note the fact that there is no new µArch here, and won't be until 2014.

This is Intel's weak point in trying to push the full x86 package into mobile --- the chips are so damn complicated that it takes forever for a new one to be designed and validated. In the time it has taken Intel to get from the current Atom to the 2014 µArch, ARM has gone through A8, A9, A15, and the upcoming µArch for the ARM-64 instruction set. No one doubts that Intel has the best process engineers in the world. But they are laboring under a ridiculous disadvantage trying to maintain that full x86 burden.

In two years Intel went from non-existent in the market to a competitive product. Where do you think they will be two years from now?

Also, the idea that x86 is a huge burden dates from the 90's RISC vs CISC debate. It is not relevant, it was never a real issue, and in embedded devices with low power utilization a complex instruction set is actually preferable. Essentially it permits a CPU to do more with less physical hardware. This is really noticeable in Medfield, where a single core is competing with dual-core ARM designs despite using similar levels of power. Technologies like HyperThreading maximize the utilization of the core, and make almost no sense on a RISC design like ARM.
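The utilization argument for Hyper-Threading can be sketched with a toy model. Everything below is illustrative, not a model of any real Atom or ARM pipeline: the instruction mix, the one-instruction-per-cycle in-order core, and the made-up 10-cycle miss latency are all assumptions, chosen only to show how a second hardware thread can fill the cycles the first thread loses to memory stalls.

```python
# Toy model (illustrative, not a real simulator): an in-order core where
# each instruction either issues in 1 cycle or, on a cache miss, blocks
# its thread for MEM_LATENCY cycles. With 2-way SMT, the core can issue
# from the other thread while one thread is stalled.

MEM_LATENCY = 10  # assumed cycles lost to a cache miss

def run(threads):
    """Return total cycles to drain all threads' instruction lists.
    Each instruction is True (cache hit) or False (cache miss)."""
    stall_until = [0] * len(threads)  # cycle at which thread i may issue again
    pcs = [0] * len(threads)          # next instruction index per thread
    cycle = 0
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        for i, t in enumerate(threads):
            if pcs[i] < len(t) and stall_until[i] <= cycle:
                hit = t[pcs[i]]
                pcs[i] += 1
                if not hit:
                    stall_until[i] = cycle + MEM_LATENCY
                break  # in-order core: at most one instruction per cycle
        cycle += 1
    return cycle

# One thread's workload: every 5th instruction misses the cache.
workload = [i % 5 != 0 for i in range(100)]

alone = run([list(workload)])                    # one thread, whole core
smt   = run([list(workload), list(workload)])    # two threads, 2-way SMT

print("one thread alone:      ", alone, "cycles")
print("two threads, 2-way SMT:", smt, "cycles")
print("two threads serially:  ", 2 * alone, "cycles")
```

With these made-up numbers, two threads sharing one SMT core finish in roughly the time one thread takes alone, because each thread's miss stalls are filled with the other thread's work -- which is the effect the comment is describing, without claiming anything about real Medfield figures.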

(a) ALL Atom processors are based on the Bonnell µArch, introduced in 2008. I don't know where your "two years from now" comes from. Do you know what a µArch is? Nehalem took seven years from conception to shipping --- that's how long it takes to design and verify these CPUs. The only reason we can see an annual tick/tock pattern is that Intel has multiple design teams active on their desktop CPUs. But they clearly feel the cost for doing that in the Atom space is too high.

(b) Hyperthreading has nothing to do with RISC. It is something that makes sense once your core hits a certain base level of features --- things like using register renaming and having an OoO engine. A9 is not yet at that level --- it's analogous to Pentium. But Pentium was followed by P6, a µArch for which hyperthreading would have made sense. Hyperthreading will certainly come to server ARM; for embedded it may not make sense. The tradeoff is more area (for more cores rather than hyperthreading) but more fine-grained power control (you can fully shut down a core if you don't need it to run). On the other hand, hyperthreading allows you to amortize some work across two threads' worth of computation. I (and pretty much anyone outside ARM) am not in a position to know how these factors balance for optimal low power.

An example of a RISC architecture that has had hyperthreaded implementations for years is POWER.

(c) I specifically DID NOT SAY that the issue was "huge burden of x86" in terms of area or even power. I said the problem was complexity of design and validation. I have given you numbers to prove that assertion.

quote: Also, the idea that x86 is a huge burden dates from the 90's RISC vs CISC debate. It is not relevant, it was never a real issue, and in embedded devices with low power utilization a complex instruction set is actually preferable.

+1 People seem to forget that the Pentium MMX was fully x86 compliant yet had a transistor budget of around 4.5 million. Sure, it lacked functionality such as SSE, virtualization extensions, etc.

But it shows that x86 itself isn't that much of a burden, especially when you consider that today's chips count their transistors in the billions.

As fabrication processes get smaller, so does the percentage cost of retaining x86.
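That shrinking-percentage argument can be put into rough numbers. The sketch below treats x86 decode/compatibility logic as a fixed transistor cost; the 2-million-transistor overhead figure and the budget numbers are loose assumptions for illustration, not measured die-analysis data:

```python
# Back-of-envelope illustration (all numbers are assumptions, not
# measured figures): if the x86 decode/compatibility logic costs a
# roughly fixed number of transistors, its share of the die shrinks
# as total transistor budgets grow with each process node.

X86_OVERHEAD = 2_000_000  # assumed fixed cost of x86 decode/microcode

budgets = {
    "Pentium MMX class (~1997)": 4_500_000,
    "early Atom (~2008)":        47_000_000,
    "modern SoC (~2012)":        1_000_000_000,
}

for name, total in budgets.items():
    share = 100.0 * X86_OVERHEAD / total
    print(f"{name}: x86 overhead is {share:.2f}% of the transistor budget")
```

Under these assumed numbers the overhead drops from a double-digit percentage of a 90's-era budget to a fraction of a percent of a billion-transistor SoC -- the same point the comment makes qualitatively.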