Couldn't find a similar thread, so consider this an open topic. If anybody else has questions, feel free. I've got a couple of open-ended questions about ARM processors that I'm confused about (this all stems from AMD's recent announcement). I'm a self-taught PC hardware enthusiast; I've been reading sites like TR and Anandtech almost daily for at least 3 years, so I feel like I have a strong understanding of high-level hardware functionality and the differences between various products. However, I have no formal schooling in the inner workings of processor architecture, coding, and such (besides what I pick up by reading TR and Anandtech), so my level of understanding of these things is moderate at best (still better than your average Joe Schmoe, but I'm no CS/IT grad, let's put it that way). Please word your responses so I can understand them.

1) ARM is some kind of open-source processor architecture right? It seems like everybody and nobody owns ARM. From my understanding, the ARM company just engineers the cores/architecture and licenses that out to other companies to fab. They kind of act like the processor R&D team at AMD or Intel for example. You can "buy" their cores to use on your SoC, or you can "buy" their architecture as a building block to engineer your own optimized cores.

2) Why/how did ARM gain such a strong foothold? Was it just because they were the first to make it to the lowest power usage arena? I have a hard time believing that AMD or Intel didn't see the need for this coming a long time ago when smartphones first started emerging and didn't think of something similar.

3) What's preventing others (AMD/Intel) from beating them? Is it just a big black box that nobody can reverse engineer? I have a hard time believing that since you can license their actual architecture like Apple did with the A6.

No, it's a technology that other companies can license. ARM itself does not produce CPUs (see here). It's not really "open"; you just plonk down cash for a license and produce your own spin on their specs.

DPete27 wrote:2) Why/how did ARM gain such a strong foothold?

Extremely high performance-per-watt, low power consumption overall, and a really low number of transistors needed, which vastly improves production yields. Intel and AMD did see this coming, but there are two problems:

1) ARM is not directly compatible with the x86 instruction set, and both Intel's and AMD's main CPU business relied on x86, which meant that they had (or thought they had) to provide a really low-power x86-compatible CPU - vide Atom - which is something that doesn't happen overnight without some major overhauling, due to fundamental differences...

2) ... and they thought they had time to develop their own solutions in-house before ARM took a piece of the market. *snicker*

DPete27 wrote:3) What's preventing others (AMD/Intel) from beating them? Is it just a big black box that nobody can reverse engineer? I have a hard time believing that since you can license their actual architecture like Apple did with the A6.

Nothing, really. The problem is, now that Intel and AMD both missed the boat, ARM has an enormous lead and cut deals left and right. So both Intel and AMD are now fighting both an installed base and market momentum.

There is a fixed amount of intelligence on the planet, and the population keeps growing :(

DPete27 wrote:1) ARM is some kind of open-source processor architecture right? It seems like everybody and nobody owns ARM. From my understanding, the ARM company just engineers the cores/architecture and licenses that out to other companies to fab. They kind of act like the processor R&D team at AMD or Intel for example. You can "buy" their cores to use on your SoC, or you can "buy" their architecture as a building block to engineer your own optimized cores.

Except for your first bit about it being open source and nobody owning it, that's essentially correct. The ARM people own the IP and license it out, either as complete core designs or as a license to use the ARM ISA.

DPete27 wrote:2) Why/how did ARM gain such a strong foothold? Was it just because they were the first to make it to the lowest power usage arena? I have a hard time believing that AMD or Intel didn't see the need for this coming a long time ago when smartphones first started emerging and didn't think of something similar.

To a large extent, I think they were just in the right place at the right time. They had the right combination of performance and power usage, at exactly the time when the mobile device market was exploding.

Other lower-power processors were (and are) out there, like the PIC microcontroller line. But these did not have the compute horsepower to handle a smartphone. PICs get used a lot in appliance and automotive applications.

The MIPS processor line could've been what ARM is today, but back in the '90s they got acquired by former workstation/server vendor (and inventor of OpenGL) SGI, and subsequently spun back off again a few years later when SGI made the ill-fated decision to bet the farm on Itanium. IMO this little detour derailed any chances MIPS may have had of dominating the embedded market. They're still around (used in some consumer electronics devices like Blu-Ray players, set top boxes, and the PSP), but haven't managed to achieve the dominance that ARM has.

DPete27 wrote:3) What's preventing others (AMD/Intel) from beating them? Is it just a big black box that nobody can reverse engineer? I have a hard time believing that since you can license their actual architecture like Apple did with the A6.

Beating who? ARM Holdings? They're not a semiconductor company, they are an IP licensing operation. They don't build actual chips, so they don't compete directly with Intel or AMD.

Coming up with a different (but equivalent from a performance per watt standpoint) RISC CPU design that doesn't use any ARM IP is certainly doable (especially for someone with deep pockets like Intel), but there's also a huge existing ecosystem for ARM development. Compilers, OSes, and APIs (Linux, Android, etc.) all exist today. If you rolled a new design from scratch you'd have to port or re-invent all of the support infrastructure too.

Doing it with x86 (to leverage the existing x86 ecosystem) is difficult, because x86 is a complicated ISA with a lot of excess baggage that isn't needed for mobile devices. Atom was Intel's attempt at this, but it was still too power hungry for the sort of applications ARM targets, and too wimpy for low-end laptops and netbooks.

ChronoReverse wrote:With that said, Medfield has caught up in terms of available chips. Intel really is a crazy force when they're determined to do something.

Probably too little, too late though. ARM-based designs are already entrenched in the mobile space; and with even Microsoft embracing ARM now, x86 compatibility just doesn't make a particularly compelling case for Medfield as a smartphone/tablet platform. I suppose Intel may attempt to bribe smartphone vendors to use it just to gain a foothold, sort of like they did back in the day to keep AMD out of the big PC OEMs; but they don't have nearly the kind of leverage in the smartphone market that they did with PCs.

just brew it! wrote:Beating who? ARM Holdings? They're not a semiconductor company, they are an IP licensing operation. They don't build actual chips, so they don't compete directly with Intel or AMD.

So it seems like ARM has no competitors stealing away from their business; they're just encroaching more and more on the x86 space. So, if Intel and AMD are "kings of the x86 arena" and ARM is in the x64 (?) arena, can they both coexist in the future or is x86 doomed? Seems like it's a race for x86 processors to reach low enough power envelopes, and for ARM to get high enough performance to break out of the smartphone and tablet market. What happens when/if they intersect?

just brew it! wrote:Probably too little, too late though. ARM-based designs are already entrenched in the mobile space; and with even Microsoft embracing ARM now, x86 compatibility just doesn't make a particularly compelling case for Medfield as a smartphone/tablet platform. I suppose Intel may attempt to bribe smartphone vendors to use it just to gain a foothold, sort of like they did back in the day to keep AMD out of the big PC OEMs; but they don't have nearly the kind of leverage in the smartphone market that they did with PCs.

Well, one of the new Motorola Razrs is using Medfield already. Android apps with native ARM code can run on Medfield (slower, of course), plus most Android apps are Dalvik, so those run on Medfield without any trouble. ARM's hold on this arena is actually really tenuous, especially since the A15 is more for servers and ARM seems to be pushing the big.LITTLE approach to use them in low-power mobiles like phones.

We'll see of course, but the entrenchment of ARM is more tenuous than it appears. I do think ARM still has the advantage, but Intel is so close that a single misstep could easily shift things around. The fact that Android can so easily be shifted wholesale to x86, and that Android is the majority of the market, doesn't help ARM either.

I really do hope Intel pulls a Larrabee though since they have their fingers in way too many pies as it is.

DPete27 wrote:So it seems like ARM has no competitors stealing away from their business, they're just encroaching more and more on the x86 space. So, if Intel and AMD are "kings of the x86 arena" and ARM is in the x64 (?) arena, can they both coexist in the future or is x86 doomed?

I tend to think that going forward it will matter less and less what architecture a processor is based on.

With so many apps moving "into the cloud", on the client side first and foremost you need a decent web browser; once you've got that, you're halfway there. Provide a decent open cross-platform API (e.g. Android) and you're the rest of the way there.

On the server side, the rise of cross-platform server OSes that allow you to migrate binary code with a simple recompile, and interpreted languages that are completely platform-agnostic (PHP, Python, Ruby...) have made architecture lock-in less of an issue as well.

And yes, all this is probably not good for x86 over the long run, since Intel and AMD will likely end up ceding part of the desktop and server market to other architectures; even if they gain in the mobile space, they will no longer have a segment where they completely dominate.

Heck, MIPS may even make a comeback. Android is supported on the MIPS architecture, and the latest line of PIC microcontrollers ditches the proprietary Harvard-style PIC ISA for a MIPS-based core, so we could even see some convergence from below.

DPete27 wrote:Seems like it's a race for x86 processors to reach low enough power envelopes, and for ARM to get high enough performance to break out of the smartphone and tablet market. What happens when/if they intersect?

We are starting to see signs of that intersection now, so I guess we will learn the answer over the next few years!

ChronoReverse wrote:Well one of the new Motorola Razr's is using Medfield already....I really do hope Intel pulls a Larrabee though since they have their fingers in way too many pies as it is.

Given that it is already used in shipping products, it is too late for them to "pull a Larrabee".

But as I noted previously, it may take some incentives from Intel to convince hardware makers who are using ARM to jump ship for x86 in a big way.

32 and 64 bit are just ways of making a processor, to oversimplify it a bit. x86, for example, has had 16-, 32-, and 64-bit versions (I believe there's an 8-bit version as well, but don't quote me on that). Almost every x86 CPU being made is 64-bit (I think some Atom models are 32-bit).

Basically, the higher bit count means a few things.

1. Access to more memory. The wider the addresses the CPU can generate, the more memory can be accessed.

2. The processor can work on more data at a time, essentially. The integers can be bigger if the programmer wishes.

I'm oversimplifying this stuff a fair bit, but this is a crash course on it.

DPete27 wrote:Ok, more questions. Can anybody explain to me what the differences are between 32-bit, 64-bit, and x86 processor architecture, in layman's terms?

32-bit and 64-bit refer to the memory addressing space available to the CPU. 32-bit CPUs can only address (interact with) 4GiB of RAM. 64-bit CPUs can address 16EiB.

As for x86, it's simply the "language" the transistors on the CPU were designed to understand. Each Instruction Set Architecture (ISA) has to be implemented in metal on silicon wafers; each ISA means a different way of hooking all the transistors together, and x86 is just one of many ways of doing that. That said, an operating system must be written to speak the ISA of the underlying CPU.

The width of the architecture refers to the size (number of bits) of most of the internal registers in the CPU cores. Think of a register as a memory location that is even faster than L1 cache (access time is typically just one clock cycle), and even more scarce (typical CPU ISAs have a few dozen registers at most). The width of the registers in turn determines the largest data item the CPU cores can process in a single operation, and the maximum amount of memory that can be directly addressed.

(The above aren't hard-and-fast rules, more like guidelines; vector extensions like SSEx typically can process data in larger chunks than the native width of the architecture, but that's a special case.)

x86 comes in 32-bit and 64-bit flavors (and a 16-bit flavor way back in the day). The 16- and 32-bit versions were designed by Intel (and copied by AMD); AMD came up with the 64-bit version of x86 on their own. (Intel had bet the farm on the Itanium ISA going forward, and had to play catch-up when AMD leapfrogged them in x86 processor design and Itanium didn't catch on.)

x86 is what is known as a CISC (Complex Instruction Set Computing) design, where a single machine instruction can do more than one internal operation, like (say) "fetch the value from memory location X and add it to register Y". In RISC (Reduced Instruction Set Computing), this would take two machine instructions and an additional register, e.g. "fetch the value from memory location X into register Z; add register Z to register Y".
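As an illustration of that "fetch and add" example, the same one-line C statement maps to one CISC instruction but a load-plus-add pair on a RISC machine. The mnemonics in the comments are hand-written sketches (Intel-syntax x86-64 and AArch64-style), not actual compiler output:

```c
/* "Fetch the value from memory location X and add it to Y." */
long add_from_memory(const long *x, long y) {
    /* CISC (x86-64, sketch): one instruction reads memory and adds:
           add  rsi, [rdi]
       RISC (ARM-style, sketch): a separate load, then a
       register-to-register add, using an extra register:
           ldr  x2, [x0]
           add  x1, x1, x2                                        */
    return y + *x;
}
```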

Ok, I should have been a little more specific. So where do the new ARM A50 64-bit processors stand compared to the x86 offerings from Intel and AMD? Still can't use ARM 64-bit in regular Windows (only Windows RT). Why?

DPete27 wrote:Ok, I should have been a little more specific. So where do the new ARM A50 64-bit processors stand compared to the x86 offerings from Intel and AMD? Still can't use ARM 64-bit in regular Windows (only Windows RT). Why?

Because all "regular Windows" application software is built for x86. The ISAs are incompatible. Windows RT has its own set of applications, distinct from the applications available for desktop Windows.

Even if you could buy an ARM desktop, you would only be able to run Windows RT (and Windows RT apps) on it.

Flying Fox wrote:Are you honestly curious or are we helping you with some paper/assignment?

No, I'm curious. Just trying to get a grip on where this whole ARM vs x86 collision is going to take us in the future. Like I said in my OP, I consider myself very knowledgeable in computer hardware, but having no formal education (I'm a structural engineer), I am pretty much clueless when we get as deep as ISA differences.

The other thing to note is that CISC instruction sets turn out to have the advantage of being compact, which makes feeding the CPU instructions faster.

Internally all modern CPUs, even x86 ones, are very RISC-like. The x86 CPUs just decode the CISC instructions into micro-ops that the CPU actually uses. This gives the advantage of compact code AND RISC-like performance.

As for ARM, we'll see when their 64-bit, A15-like server chips are available whether they'll retain a large power-efficiency edge. I suspect the gap will be much closer once they have to scale up performance, if the current A15 chips are anything to go by.