Posted by samzenpus on Thursday November 17, 2011 @08:10AM from the sometime-down-the-road dept.

MrSeb writes "'Last week, Intel announced that it had added x86 optimizations to Android 4.0, Ice Cream Sandwich, but the text of the announcement and included quotes were vague and a bit contradictory given the open nature of Android development. After discussing the topic with Intel we've compiled a laundry list of the company's work in Gingerbread and ICS thus far, and offered a few of our own thoughts on what to expect in 2012 as far as x86-powered smartphones and tablets are concerned.' The main points: Intel isn't just a chip maker (it has oodles of software experience); Android's Native Development Kit now includes support for x86 and MMX/SSE instruction sets and can be used to compile dual x86/ARM 'fat' binaries; and development tools like VTune and the Intel Graphics Performance Analyzer are on their way to Android."
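Since the summary mentions dual x86/ARM "fat" binaries, here is a minimal sketch of what arch-aware NDK code looks like. It assumes the NDK's usual per-ABI build (the same C file compiled once for each ABI listed in Application.mk, e.g. APP_ABI := armeabi x86); the Java_com_example_Native_getArch function name is a hypothetical example, not anything from Intel's announcement:

    /* A minimal sketch of arch-aware NDK code. The same file is compiled
     * once per ABI listed in Application.mk, and compiler-defined macros
     * select the right path at build time. The function name assumes a
     * hypothetical com.example.Native Java class. */
    #include <jni.h>

    JNIEXPORT jstring JNICALL
    Java_com_example_Native_getArch(JNIEnv *env, jclass clazz)
    {
    #if defined(__i386__)
        return (*env)->NewStringUTF(env, "x86");     /* built for the x86 ABI */
    #elif defined(__arm__)
        return (*env)->NewStringUTF(env, "ARM");     /* built for armeabi */
    #else
        return (*env)->NewStringUTF(env, "unknown");
    #endif
    }

At install time the package manager simply picks the lib/<abi>/ directory that matches the device, which is how one APK can cover both architectures.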

Since most x86 architecture and related hardware is getting smaller and most smartphones are getting bigger, they are bound to meet somewhere. Hmm, I guess it will be called a tablet or an i(ntel)Pad. Ehm, ehm.

No, it was just Ballmer admitting that Microsoft is a WINDOWS company. That is the one product they have; everything else is built around WINDOWS, for WINDOWS. Android and iOS must scare the crap out of them, because we're fast approaching the era where Windows only runs on Corporate Computers and everyone else is running Android or iOS.

No doubt Windows 7 is very close to the Vista codebase, but what people were saying way back when (I was there) was that Vista in its present state was the future, and you might as well get off XP and onto Vista because "resistance is futile". That never happened. So, no, resistance was not futile. In this case resistance worked: MS got on the ball and delivered something that many people actually like.

But resistance WAS futile. 7 was a repackaged Vista that retained most, if not all, of its flaws. It just offered a slightly different presentation and was sold under a different name.

UAC halting your entire system for pretty much everything? Still there. Programs breaking due to admin rights requirements on a machine where I specifically fucking want to have admin rights while running them? Still there. Inability to roll back to the classic menu? Still there. Incompatibility issues with older software? Still

No, even in Linux land, they still call what you're doing "anon trolling". Because the only way to truly make UAC workable for someone who doesn't have his head up his ass (read: doesn't need added security) is to disable it. Even at the minimal settings offered by Windows, it still breaks a lot of software by forcing it to run as a limited user by default, and it still forces you to run essentially all older software as admin, wasting your time on top of it by making you create separate shortcuts for each indivi

Given the choice, everyone who actually has to code for those CPUs (e.g. compiler makers) without a doubt prefers ARM over x86, simply because of how shit x86 is. It's the Windows ME of machine code. It started out as DOS and kept the cruft all the way to today, while piling more and more, bigger and bigger stuff on top, ending up with an upside-down pyramid held in balance by a billion wood sticks. And I know that even Intel itself couldn't stand it anymore. That's why they implemented that microcode solution with a RISC processor on the inside. If only they would give us direct access to that core, but leave the microcode in there for 1-2 processor generations for legacy reasons. Then nobody would willingly keep doing x86, and before those 2 generations were over, it would be locked away and forgotten.

I, for one, plan a 256-core ARM CPU as my next desktop system. (Yes, ARM cores are slower per clock cycle. But they are *a lot* more efficient and *a lot* cheaper too. [No, Atom does not count, unless you add that northbridge that's so big and gets so hot that, looking at the mainboard, 10/10 people think it's the actual CPU. Which is closer to the truth than Intel ever wants to admit.])

You forget too easily that many people depend on this legacy code to run software worth thousands or even millions of dollars. Just because the desktop in your mom's basement no longer needs it doesn't mean the rest of mankind doesn't either.

Maybe yes, maybe no. As an example, it is difficult to get my "M.A.X." (an old DOS game) working on an emulator; most of them crash or throw some spurious error. Bigger, more complex, critical software running on an emulator? I do not like the idea.

Games stress emulators in a way that business applications do not because games tend to use dirty I/O tricks to increase smoothness of animation. Business applications tend to use better-known I/O methods.

You're kind of missing the point, though. If ARM really is that much better than x86 (I don't really know, as I don't program at that level), then with the amount of momentum it is gaining in mobile devices, ARM overtaking x86 is inevitable. I don't know a lot about x86_64-whatever vs. ARM, but I do know that my Xoom outperforms my netbook, and it does it while generating no heat and with 3 times the battery life from a smaller battery. Look out, Intel.

The problem is not who is better... The problem is trying to port your big, very expensive "legacy" x86 code because some "genius" decided to simply drop legacy support in the hardware. Especially when you have a tight deadline to meet and the system cannot stop

Precisely. The reason AMD was able to succeed with its architecture over Intel was that Intel's 64-bit architecture at the time required all software to be specially compiled to run on Merced, whereas AMD64 was backwards compatible and let people buy the chip and update to 64-bit when needed, or just run some applications in 32-bit mode.

In this case I have no idea why anybody other than Intel would think this would be a good idea as the reason why Intel was using an ARM based XScale proc

Very interesting idea. We are going to have to add in more cores from now on to get more performance, might as well start specializing them for certain tasks. Your idea about x86 hardware emulation is especially interesting.

Sorry to double reply, but now that I think about it, what you are describing sounds a hell of a lot like a mainframe on a chip. IBM mainframes have Multi-chip Modules [wikipedia.org] that are a lot like what you are describing.

What I would like to see is a CPU architecture that can have asymmetric cores:

Similar to your design, the Tegra 3 ARM SoC does that. It has a quad-core A9 running at 1.5GHz or more, but it also has a "slow" companion core running at 600MHz or so. When things are idling, the slow core takes over and does the job while the hefty quad cores are powered off, saving tons of power.

Marvell I think also has a similar idea for their SoCs. And ARM's A15 design is supposed to incorporate that as well.

The only problem with splitting everything out like that is that many apps always have that one thread that can't be atomized any further and will saturate its core. I mean, a loop can only run so fast no matter how many cores you have, so any one program is always going to have an absolute performance bottleneck. This isn't to say there isn't merit to multi-core (of course, the more the merrier), but it isn't an absolute panacea.
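That bottleneck is Amdahl's law in action. A quick back-of-the-envelope sketch in C (the 90% parallel fraction is just an illustrative assumption) shows how quickly the returns diminish once any serial fraction remains:

    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
     * parallelizable fraction of the work and n the core count. */
    #include <stdio.h>

    int main(void)
    {
        const double p = 0.90;  /* assume 90% of the work parallelizes */
        for (int n = 1; n <= 256; n *= 2) {
            double speedup = 1.0 / ((1.0 - p) + p / n);
            printf("%3d cores: %5.2fx\n", n, speedup);
        }
        return 0;   /* with p = 0.9 the ceiling is 10x, however many cores */
    }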

You are exactly right. This is why there should be high-speed, high-power cores for the tasks that cannot be broken down into bits to distribute. This way, low-energy cores take care of most tasks, while a task that cannot be distributed gets handed to the bigger, more power-hungry cores.

The more different types of cores available, the more flexible the architecture would be, and the better the energy savings (in theory).

Well, you've described more or less an SNES (a handful of specialized chips working together). The problem is that the software would have to be aware of this way of working, or the CPU would have to be able to figure out for itself what the user is trying to do.

Very true. I wonder if this can be done at the hypervisor level. If done right, the hypervisor can present the OS a dynamic set of CPUs, depending on what processes are using what resources behind the scenes, as well as rebind a task to a different core (say the task was using a lot of FPU, then swapped to needing mainly integer manipulation).

It would work on the OS level too, but would take a revised scheduler to take advantage of it.
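On Linux, at least, the low-level primitive a revised scheduler would need already exists: sched_setaffinity. Here is a minimal userspace sketch of pinning the current process to a single core (core 0 is an arbitrary choice for illustration):

    /* Pinning the current process to core 0 on Linux; the same primitive
     * a scheduler or hypervisor would use to steer work between fast and
     * slow cores. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                                    /* allow core 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* pid 0 = self */
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to core 0\n");
        return 0;
    }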

I would not be so quick to say that. While I am no x86 fanboy, there are a number of things that are "nice" about the model from the point of view of most software developers. The instruction set is basically a compression system (much like thumb2 is for ARM). The very simplistic (to reason about) memory model (which is rather complex to implement in hardware) makes multi-processor significantly easier for most people. Most people who think they know how to write good multi-processor, multi-threaded cod

I thought x86 was a power hog compared to ARM. That seems like a serious consideration for mobile devices to me. I'll be interested to see where this goes. In the meantime, x86 chips are going to have to get a lot cheaper to compete with ARM's prices.

It is. The difference between an x86 and ARM core is around an order of magnitude at the moment for the same performance. But the difference between an x86 core and the display is another order of magnitude, so for devices that you mainly use with the screen on there isn't much difference between x86 and ARM in terms of overall power consumption. The difference in battery life between an ARM core at 200mW and an Intel core at 2W is very small when the display is using 10-20W. There are a few display technologies that are supposed to be hitting the market Real Soon Now that ought to make the difference between x86 and ARM a lot more apparent.

I'd mod up your post, but I want to reply instead. Are you suggesting that the display uses 50-100 times the power of an ARM chip (and therefore 5-10 times an x86)? If that is true, that is very interesting. I did not realize the display was such an outlier in the power consumption department...

It definitely is, but you also have to consider that it is usually OFF in the case of a phone.
An x86 Android tablet would make sense, since you could just turn it completely off when not in use, but an x86 phone would have a standby time shorter than your average summer blockbuster.

On my Samsung Galaxy S2 it's between 40 and 50%, so maybe the Super AMOLED display is really a power saver despite the 4.3" diagonal. And I use little WiFi and little 3G, only when I explicitly need the net, so the display's share could be even lower in a typical always-on scenario.

Is the display really that much of a hog on a cell phone? Those numbers sound like laptop numbers, but I thought we were talking cell phones.

My phone has a battery that holds around 1300mAh at 3.7V. That means I can draw 4.8W for 1 hour. If my phone's display really sucked down even 10W, then I wouldn't be able to have the display on for more than about 28 minutes total, which doesn't match my experience at all. I regularly browse the web from my phone for a half hour at a time without making much of a dent in the battery.

A quick scan through this paper [usenix.org] suggests backlight power for the phone they analyzed tops out at 414mW, and the LCD display power ranges from 33.1mW to 74.2mW. If you drop the brightness back just a few notches, the total display power is around a quarter Watt or so, which sounds far more reasonable.
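For anyone who wants to check the arithmetic, here's the whole back-of-the-envelope in a few lines of C, using only the figures quoted in the two posts above (the posters' estimates, not measurements of any particular phone):

    /* Back-of-the-envelope battery math from the figures quoted above:
     * a 1300mAh battery at 3.7V vs. a ~0.5W display at reduced brightness
     * and the 10W laptop-class figure from further up the thread. */
    #include <stdio.h>

    int main(void)
    {
        const double capacity_wh = 1.3 * 3.7;  /* 1300mAh * 3.7V = ~4.8Wh */
        const double modest_w = 0.5;           /* backlight + LCD, brightness down */
        const double absurd_w = 10.0;          /* the laptop-class display figure */

        printf("capacity : %.2f Wh\n", capacity_wh);
        printf("at %.1fW : %.1f hours\n", modest_w, capacity_wh / modest_w);
        printf("at %.1fW : %.0f minutes\n", absurd_w, capacity_wh / absurd_w * 60.0);
        return 0;  /* ~9.6 hours vs. ~29 minutes */
    }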

I don't think Intel is standing still on power consumption. Their desktop CPUs are hogs, sure, but they can bring a lot of engineers to bear optimizing Atom-derived products. (We might get an early read from Knight's Corner, actually, although I expect it to still be on the "hot" side. I'm waiting to hear more about it.) Also, ARM's latest high-end offerings (including the recently announced A15) aren't exactly as power-frugal as some of their past devices. In the next couple years, I think the scatter plot of power vs. performance for ARM and x86 variants will show a definite overlap in the mix, with some x86s pulling less power than some ARMs.

I have my phone screen on for about 2-3 hours per day due to bus rides. According to the Android battery tracking thing, my display uses up around 60-70% of my battery for the day, and this is on a Nexus S with the AMOLED screen that is supposed to use less battery than an LCD due to not having to light up the black pixels.

What are you doing on it during that time? The processor, baseband, and RF circuits also suck up a fair bit of juice. That PDF I linked above shows GSM consuming around 600mW during GPRS use and WiFi consuming around 700mW when in use on the phone they analyzed. I'd expect other phones to be similar. 3G is supposedly much worse at draining batteries. Dunno about CDMA/LTE, but I would imagine they'd also be in the half-watt to 1 watt range, to venture a first-order guess.

But you're missing the key thing here: just because you turned on WiFi or GPRS doesn't mean they're sucking down a constant 600 or 700mW apiece. Chances are they're very aggressive about power savings; if you're sending data (with either technology) the power use ramps up for a short time, but receiving data I'm sure uses a lot less power.

Given the example of loading something like google.com - the phone would have to send some data to request the page, but overall that amount of d

Well, Android 2.4 actually has a battery meter that tells you exactly what is using up the battery. Most of the time I'm just reading various articles offline, but that doesn't really matter, because (unless I'm mistaken) the battery monitor on Android separates out the battery usage by WiFi, cell radios, etc.

The 60-70% I'm talking about is specifically from the "display" section in the battery monitor, which I assume only includes power drawn directly by the screen.

Despite what many other commenters will say, no, it isn't a power hog compared to ARM. Or at least it doesn't have to be. Intel/AMD/VIA don't yet offer processors that draw as little power as ARM (although some are pretty power/performance efficient depending on your workload), but they will within the next year for smartphones and tablets. On modern manufacturing processes the "x86 tax" becomes almost non-existent.

Vendetta [wikipedia.org] is from 1991. It's like pointing out that Mega Man X3 runs in a Super NES emulator: interesting, and probably fun for a while, but not what grandparent had in mind. As for Quake and Doom, can you recommend things other than first-person shooters that commonly get ported to Linux, especially well-praised E or E10+ rated game series?

And never mind that Wine actually works really well

Only on x86 phones. Most existing smartphones are ARM; let me know when Atom phones start to come out. And even if you stick to games from the Pentium 4 era, knowing tha

There's a list here [penguspy.com] of a couple dozen free (as in beer and as in speech) games for Linux, many of which are really good.

This list [penguspy.com] is just the "very best" games, regardless of whether they are free or paid, open or closed source.

One of my favorite things about that site is the ability to filter by open/closed source, free/paid, and whether or not the game has been awarded a "Pengu's Choice". There are some really solid games out there, and many of the best ones run on Linux.

These are from the list of "Platinum" support, which states as its description "Applications which install and run flawlessly on an out-of-the-box Wine installation". You can go here [winehq.org] for a list of 1,568 items listed as supported under wine with a rating of "Platinum", in the category "Games".

While it is always nice to hear about companies contributing to open source, I don't see there being a big demand for x86 Android. Who would use it? It's not low-power enough for most tablets/phones. And while the ability to run existing x86 apps is nice, they are mostly tied to Windows, which is also not likely to see much traction in the mobile space. So what is the point?

What I would like to see is Intel creating a SoC and softcore suite. Intel has some big advantages that they could use to seriously compete:
1) Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor.
2) They can start from scratch. Unlike ARM, there is no need for legacy support or backward compatibility.
3) They have in house designers for everything from graphics, wired, wireless, etc. chips. I don't see why they cannot design from this a whole suite of modules that work on their SoC platform.
4) They have (to my knowledge) the best chip fab plants in the world by a sizable margin. Die shrinks offer a great way to reduce power consumption.
5) They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium.
6) They have shown that they already know how to support Android.
7) They have the cash and business partners to make it work.

I'm not saying they are guaranteed to make big bucks. Fighting an entrenched ARM with wide industry support will be hugely difficult. But if any company can do it, it's Intel. Of course, this means they would have to get over the Itanic debacle and stop trying to shove x86 down the throat of every problem.

Intel has loads of experience in getting the creaking x86 architecture to work in the modern world. ARM, however, is much newer and has far fewer layers of cruft. Intel has not shown the ability to throw all that away and start from scratch (which is what we really need).

They did that, what, 18 months ago now? Total number of people who licensed it: zero. Why? Because x86 absolutely sucks for low power.

Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor

Ah yes, all those massive commercial success stories that Intel has had when it tried to produce a non-x86 chip, like the iAPX, the i860, the Itanium. The closest they came was XScale, and they sold the team responsible for that to Marvell.

They can start from scratch. Unlike ARM, there is no need for legacy support or backward compatibility.

Intel has two advantages over their competition: superior process technology and x86 compatibility. Your plan is that they should give up one of those?

They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium

Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.

They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium

Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.

Having worked on a C/C++ compiler for the PS3 & PSP, I concur! Compiler writing is non-trivial. Sure, you can get 80% of the way there, b

I should also add that Intel has a history of designing chips that are a complete bitch to write compilers for and for having hardware and software teams that never talk to each other. The most famous example was the iAPX, which was designed for object oriented programming but without talking to any compiler writers, so it ended up requiring a 200-instruction sequence to do one of the most common operations in an object-oriented language. The Itanium is legendary for being impossible to target. The hardw

It would run all three of them at the same time: Android for answering calls and managing the small screen, Linux for the large screen you attach over HDMI, and Windows... Mmm, in my case it would be for testing sites with IE, but not much else. For many people it would be for gaming.

suggests that 64-bit x86 code is actually even denser than ARM Thumb code in most cases (which in turn is denser than "normal" ARM code).

High code density means more cache hits, which means better performance and lower power consumption.

x86_64 has the same number of integer registers as ARM: 16.
Every x86_64 CPU supports SSE, which means floating-point operations can be (and are) handled by the 16 SSE registers instead of the old x87 FPU stack.

Fact is that the 64-bit specification for x86 fixed a large number of problems that the 32-bit specification had, making x86_64 a really good architecture without any significant flaws.
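The SSE point is easy to verify: compile a trivial float function for x86_64 and the compiler emits SSE scalar instructions on XMM registers rather than x87 stack operations. A sketch, assuming gcc or clang on a 64-bit target:

    /* Compile with: gcc -O2 -S scale.c   (on an x86_64 machine)
     * The generated assembly multiplies with SSE scalar math, e.g.
     * "mulss %xmm1, %xmm0", because the x86_64 ABI passes and returns
     * floats in XMM registers; the old x87 stack never appears. */
    float scale(float x, float factor)
    {
        return x * factor;
    }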

x86 is CISC when we know RISC is better. Intel/AMD do some tricks to make the core more RISC, but why not just cut out the middle man? Why bother with converting it at all?

Pull up a pillow and have a seat around ol' Grandpa Short Circuit. This may come as a shock to you.

Some programs still being sold and run on desktop computers today were compiled over ten years ago. Some programs still sold and run in x86 embedded environments were compiled twenty to thirty years ago. That's why x86 is still around.

x86 is still around for the same reason Windows is still around. It still runs binaries that are really, really old. In some cases (many, I expect), the source code for these binaries no longer exists, or the toolchain for building it is bitrotted. That's why x86 is still around.

Imagine some sci-fi horror film where everyone's forgotten how to maintain the vast infrastructure of their civilization, they just don't poke it because they don't want it to break. That's why x86 is still around.

Meanwhile, every year there are more long-lived applications built for the existing platform, with very little hope for being updated for newer platforms and processors; their binaries are likely to be running for another five or ten years.

Amusingly, open-source software has a clear advantage over closed source software in this arena. Several distributions are actively keeping software packages portable across CPU archs, and even portable across OS kernels. (Debian and Gentoo both support BSD foundations as well as Linux)

I know some businesses which are still dependent on Windows 3.1 programs written in 1993-1994. When machine upgrade time came around, I ended up just P2V-ing their old boxes, sharing the application's document folders with the host OS, and to the end user, the creaky old application functions the same as anything else on Windows 7. To boot, if the creaky application gets corrupted, it only takes either a reloading of a snapshot, or grabbing an archive of the VM disk file to get back in business. (I also

Apple's reasons for switching had more to do with x86 as a better-invested hardware platform. They wanted all the same hardware capabilities as the burgeoning PC/gamer market, and I guarantee you it wasn't going to be cheap or easy to get the likes of NVidia or ATI to prepare Apple variants of their PC hardware. (You wouldn't just take the latest NVidia card and drop it into an Apple; the video card has a BIOS in x86 machine code, because the PC expects it. Apple hardware was necessarily different, if only

It's not as if "x86" means much from an architectural standpoint. It is a choice in instruction set and is a good choice for new products given your (5) above -- what's got better payoff, making a new instruction set or reusing an existing one that is supported exceedingly well? Intel's 386 and AMD's 64-bit conventions are common ground for many wildly different CPU architectures.

Actually yes, "x86" does mean a lot even from an architectural standpoint. For example it means you have to carry along all the instructions and their related mechanisms concerning 8086 Real Mode, and 80286 Extended Real Mode, plus all the horribly clumsy register types. That means you'll be wasting die space just to support stuff that isn't even used anymore, not to mention the time wasted on actual hardware design. With a completely new processor design you can just scrap all that, add much more flexible registers plus more of them, and get a more efficient CPU as a result. Every little bit of space saved is meaningful on a processor aimed for mobile devices, and it does help on desktops, too, if not as much.

Actually yes, "x86" does mean a lot even from an architectural standpoint. For example it means you have to carry along all the instructions and their related mechanisms concerning 8086 Real Mode, and 80286 Extended Real Mode, plus all the horribly clumsy register types.

OK, so your decoder has to be able to handle the micro-ops, and you've got to have the hardware on the chip somewhere to perform the operations. But you don't actually have to have ANY of the same hardware (aside from where there's really only one practical way to do things) because you're going to decompose the x86 instructions into micro-ops anyway.

What's more interesting to me is the Android@Home announcements (from Google IO 2011) that Google is implementing its own networking stack (instead of Zigbee) on 802.15.4 [wikipedia.org]. 802.15.4 is a very low power low-level radio network, with cheap embedded microcontrollers that are often ARM. There's probably not enough power in the node's ARM to run Android, but some nodes could have extra power and extra ARM cores that do run Android.

Android's Java means in addition to network RPC, code can be straightforwardly programmed to safely migrate around the network for distributed local execution near the data, whether that's network metadata, sensor data, or just the power of massively parallel distribution. I wonder whether JavaSpaces or something like it (probably a very lite version) will find a fit in making cheap distributed networks represented in computational tuplespace. Distributed around one's home, office/classroom or car, or among one's clothing (daily worn watch/jacket/shoes/belt/keyring), or eventually merging among those personal spaces as they're either near or just related (linked by the Internet).

Intel's x86 architecture still has too much power consumption (and the legacy HW baggage that consumes it) to be a design win for this distributed architecture. By the time x86 is suitably low power, Android will probably have defined the space of these smart spaces, and the smart things in them.

FWIW, there are still few details on A@H, though supposedly there is a reference implementation (a network backbone embedded in LED bulbs). Anyone seen any specs, like whether it's really a SNAP/6LoWPAN hybrid, or which specific alternative Google is now pushing? Where to get the devkits (HW and SW)?

Not a very popular opinion on /., but Android being Java-based will make life very easy for Intel to crack the mobile market. Most of the apps (sans native ones) will just work. It would have been almost impossible otherwise without some serious virtualization.

I bet everybody thinks about the Android Market and all the cool stuff there. Well, don't, unless your Android device runs ARM.

I recently got my hands on an Android MIPS phone. Extremely frustrating experience: two of every three downloads from the Market simply refuse to install, because they have some tiny snippet or library compiled to ARM native code. Unless Intel invests heavily in getting app developers to recompile their work for Android/x86, it will be barely usable outside of the base system.

Sadly, this is true. I have the same experience when I try to install many Market apps in my Android virtual machine. Some work; many don't. To all current and potential Android devs: do it in Java if at all possible.
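And if native code truly can't be avoided, the least a developer can do is keep a portable C fallback next to any hand-tuned path, so a recompile for x86 or MIPS still yields a working library. A sketch (the NEON guard is illustrative, not from any particular app):

    /* Keeping native code portable: the hand-tuned path only compiles
     * where the architecture supports it; every other ABI (x86, MIPS...)
     * falls back to plain C instead of refusing to load. */
    void scale_buffer(float *buf, int n, float factor)
    {
    #if defined(__ARM_NEON__)
        /* A NEON-intrinsics fast path would live here and return early;
         * omitted to keep the sketch short. */
    #endif
        for (int i = 0; i < n; ++i)   /* portable fallback, correct everywhere */
            buf[i] *= factor;
    }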

Java is a dead end if you need performance, especially on a resource-limited device like a smartphone. The way out is to use native code, just as people used assembler in the critical parts of old DOS games.

Ha, one of those people that thinks there's a clean, perfect RISC architecture inside Intel CPU cores.

First off, everything is microcoded. POWER is microcoded. SPARC64 is microcoded. Itanium isn't, but it's an oddball in that regard. Microcode just lets you hide implementation details and potentially simplify internal design.

Internal microcode isn't necessarily fun to play with. Look up the articles on RealWorldTech on the guts of Transmeta's CPUs if you're interested, and that used a significantly

Mind you, I have a fairly recent quad core Intel proc in my Windows 7 workstation, and it runs software only available on Windows (which is why I have it) pretty well.

But, rightly or wrongly, I associate Intel with big hot power hungry hardware, that you *must* have if you have apps that need Windows, and ARM with low power battery sipping appliances. Android seems made for the latter, and out of place on the former. I can understand why Intel wants to get a piece of the Android pie -- they are protecting

Even if they develop their own graphics chip for tablet use, it'll a) probably be enough for what you'd do on a tablet (seriously: on a desktop PC, for anything except gaming, Intel's stuff is good enough), and b) it depends on how well the software's done, anyway (case in point: on many recent Linux distros, and again, unless you're gaming, Intel's chipsets provide a better overall experience than much more capable nVidia or ATI hardware).

Intel's stuff is generally good, but it's expensive and I don't personally think we need to allow a foothold for the same sort of anti-competitive behavior that Intel is known for in the desktop/laptop processor market.

Have you used Intel graphics lately (stuff they're shipping in 2011)? It's like having a discrete mobile GPU from 2004.

But this article is not news of any kind. Intel has had these plans out in public for years and years, and the Android NDK has support for multiple targets. If they actually started shipping, _that_ would be news.

Intel's past use of PowerVR chips was at a time when smartphone screens were still pretty low-res, and the expectations of graphical performance on a smartphone were very different from what was expected on a notebook. Cedar Trail (their upcoming Atom product) is using a Series 5 chip (the 545) rather than a Series 5XT chip (like the PowerVR SGX543MP2 in the iPad 2 and iPhone, or the SGX543MP4 in the PlayStation Vita). The 545 is certainly an improvement over their previous single-core chips, but I doubt it w

Intel still owns perpetual rights to the ARM architecture; they choose not to exercise them. The ARM lines were sold to Marvell a few years ago. For better or worse, it's "IA or the highway" at Chipzilla these days.