

That Z2580/XMM7160 combo would have been more exciting if not for two things:

1) Qualcomm is going to be shipping an LTE-A baseband in Q4 2012. Network support won't arrive until 2013, so it is more of a "nice to have", but still...

2) Like you said in the article, that GPU will probably be behind the curve by Q1 2013. CPU performance will probably still compare favorably to Krait/Tegra 4/etc., though. Should be interesting times.

(...sigh) Don't expect them to make the same mistake as IBM did back in the day. There will likely be a lot of pressure from OEMs to make sure that everyday average builders are locked out. I mean, think about it: we don't even really get to build our own laptops... the chances of getting to build our own phones are going to be fairly slim.

I can see how Java and the Dalvik cache can also run on x86 (since Java is portable by nature), but wouldn't that require a LOT of optimization before an OEM can release a phone? What about updates? It would only extend the delay problems with updates and bug fixes. There are other technologies that are (currently) ARM-dependent, such as Renderscript, and now we're even talking about native Android app development... I can only see this making Android more fragmented, since devs will now also need to optimize their applications/apps for x86...

On the other hand, a viable market would be an x86 version of Windows Phone 8 (hint: PowerVR SGX 544 with DirectX compatibility). I hear that Metro-style apps (also WinRT apps) should transcend CPU architecture. It's still unclear whether the same WinRT app build would work on both architectures without modification or recompiling, but if it does, then it'll be perfect, and it's exactly what .NET should have been in the first place.

A "smaller" copy of Windows 8 for tablets (without the desktop view, running only Metro-style apps) might also benefit from the new Atoms. Ivy Bridge (and beyond) for high-end tablets, and these Atoms for lower-end budget tablets... sounds cool.

Huh? The Renderscript compiler compiles to byte code, similar to .NET or Java. The runtime decides what to run on the GPU, what to run on the CPU, etc. Very similar to CUDA. Once there is a Renderscript runtime and compiler for a platform, apps using Renderscript simply need a recompile. The Renderscript compiler takes care of optimizations at the byte code level.
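That "compile once to portable byte code, let the on-device runtime pick the backend" model can be sketched generically. All names below are invented for illustration; this is not the actual Renderscript API:

```python
# Hypothetical sketch of the compile-once / dispatch-per-device model
# described above. Names are invented for illustration only; this is
# not the Renderscript (or CUDA) API.

def compile_to_bytecode(source: str) -> str:
    """Stand-in for the ahead-of-time compiler: the output is
    architecture-neutral, so the app ships a single artifact."""
    return f"bytecode({source})"

class Runtime:
    """Stand-in for the on-device runtime, which decides where to run."""
    def __init__(self, backends):
        self.backends = backends  # e.g. ["gpu", "cpu"], in preference order

    def execute(self, bytecode: str) -> str:
        target = self.backends[0]  # pick the best backend this device offers
        return f"ran {bytecode} on {target}"

kernel = compile_to_bytecode("saturate(pixel)")
print(Runtime(["gpu", "cpu"]).execute(kernel))  # ARM device with a GPU backend
print(Runtime(["cpu"]).execute(kernel))         # x86 device, CPU backend only
```

The point is that the same `kernel` artifact runs on both devices; only the runtime differs per platform, which is why apps need at most a recompile.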

It's much harder, in general, to develop an app for both Android and Windows than it is to develop an Android app for different hardware architectures. That's one of the benefits of Android's Linux roots.

Of course there have to be device drivers for any new platform's hardware, regardless of OS, but Intel has already done that. Since nearly all apps are written in high-level languages and let the compilers handle optimization, I really don't think it is a big deal.

Intel doesn't give out much information until the chip is actually on the market, but the first single-core version of the Medfield platform reportedly has a 2.6W TDP at idle and a maximum power consumption of 3.6W when playing 720p Flash video. More details of the dual-core version are still pending.

The SGX544 is limited to DX9, but the PowerVR line has versions that support DX10.1, and the newest PowerVR Series 6 supports DX11.1 and OpenGL 4.2...

Are those numbers right? They seem quite high to me compared to what Anand presented to us earlier on.

I am getting slightly worried, because Intel has the resources and power, as well as know-how in literally everything in the semiconductor industry. And look at how they've managed to catch up in such a short space of time.

22nm will bring them on par with power usage.

Then there is price, which will sort itself out once Intel manages to lead its SoC competitors by one node.

It looks like they won't have 22nm parts until 2014. They might be able to catch up in energy efficiency by then, but not while also keeping pace with the performance increases that ARM chips will get over the same period.

And their catching up is by no means fast. They've been trying to launch an Atom-based phone for 3-4 years now, and it still seems they have more catching up to do in the power consumption department.

Also, if a single core has a ~3W TDP, then their dual-core chip will probably have an even higher TDP. They couldn't have cut the power consumption in half while also doubling the performance in just one year. That's basically a 4x difference in performance per watt. Intel chips usually improve only 30-40% in either power consumption or performance each year. They can't advance faster than that.
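The arithmetic behind that 4x figure is simple to check. The numbers below are illustrative stand-ins, not measured values:

```python
# Perf-per-watt arithmetic behind the "4x" claim above.
# Illustrative numbers only, not measured figures.
power_old = 3.0   # W: roughly the single-core TDP mentioned above
perf_old = 1.0    # normalized performance baseline

power_new = power_old / 2   # hypothetical: power cut in half
perf_new = perf_old * 2     # hypothetical: performance doubled

old_ratio = perf_old / power_old   # perf per watt, before
new_ratio = perf_new / power_new   # perf per watt, after

# Halving power while doubling performance multiplies perf/W by 4.
print(new_ratio / old_ratio)  # 4.0
```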

"It looks like they won't have a 22nm until 2014." That's just wishful thinking.

Intel's x86 PC chips could never match ARM at 45nm. They knew they'd need at least 32nm, and they just used 45nm to get everything ready.

The delay until June for 22nm volume production will have no effect on 22nm's entry into smartphones. It seems that lower-volume 22nm server chips will start shipping in April, and a delay in volume fab starts won't hold up converting the 32nm Medfield design to 22nm and getting it ready for prime time. We'll see 22nm Medfields shipping in 2013, and probably the first 14nm samples in 2014.

At what point do the chips get so small and power-efficient that they pass ARM in performance, power & cost? Probably 22nm.

I'm sure they will once it hits the market. I hope they compare it with whatever is the most powerful chip at that time, and that they also do a comprehensive test, including a video of how both react at the same time.

A single SunSpider test will not be nearly enough, and even that should be done with the same browser, so the browsers' own JS engines don't skew the results too much.
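Beyond holding the browser constant, a fair comparison also needs repeated runs rather than a single score, since one-off outliers can dominate. A minimal sketch of such a harness (the workload here is just a CPU-bound stand-in, not an actual JS benchmark):

```python
import time
import statistics

def time_workload(fn, runs=5):
    """Time a workload several times and report the median,
    which is less sensitive to one-off outliers than a single run."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Illustrative CPU-bound stand-in for a benchmark kernel.
busy = lambda: sum(i * i for i in range(100_000))
print(f"median: {time_workload(busy):.4f}s")
```

The same principle applies whatever the benchmark: fix every variable except the chip under test, then report an aggregate over multiple runs.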

A netbook with the extra battery-life boost/capabilities of one of these chips seems potentially fun. Are there technical reasons it won't happen, or won't be as awesome as it first sounds if it does?

The biggest red flag for me is the GPU. Intel has never had their GPUs run right, and I'm not crazy enough to think that this problem will magically disappear overnight. Yes, they're taking the GPU from another company, but I am still skeptical. If a company that big, with that much money, can't get their stuff to work in their most heavily invested market, I am not going to assume they will get it right on the first run in a market they have no experience in.