At that time, Intel got all over my case, citing the above-referenced article as deeply negative toward Intel’s goals and position. But the Santa Clara-based integrated device manufacturer (IDM) missed the subtlety of my argument.

I wasn’t saying Intel couldn’t move to ARM more quickly. After all, it is a tremendously capable silicon fabricator, possesses a huge war chest of capital, and is staffed by some of the most talented people in the industry. I was saying that it wouldn’t, that it is philosophically committed to its own architecture (x86 or IA) and simply won’t bet against it.

In fact, Intel pointed out to me in the wake of the article’s publication that it possesses an ARM architectural license, the highest level of relationship with the British intellectual property (IP) licensing firm, which Intel retained when it sold its communications product line to Marvell in 2006.

When asked how it intends to approach low-power servers, the company points to its Atom chips, low-power versions of standard x86 processors. When asked a year ago about how it would tie them all together for dense-server deployment, it pointed to SeaMicro, a company that makes a kind of internal communications infrastructure for servers called “fabric.” But in February of this year, AMD, Intel’s main rival in the x86 market, bought SeaMicro, effectively taking Intel’s best fabric option off the table.

Now, AMD has announced that it will make ARM-based dense servers for cloud applications using the latest SeaMicro fabric, called Freedom Fabric (which, for some reason, reminds me of Freedom Fries, that horrible jingoistic locution of the mid-Bush era).

These servers will hit the market in 2014 and will have a feature that removes one of Intel’s main arguments as to why ARM servers are contemptible and worthy only of derision: they will have a 64-bit architecture.

Until now, ARM, whose chips are used mainly in small mobile devices like smartphones, has offered only a 32-bit architecture. The main shortcoming of a 32-bit architecture is that, with only 2^32 addresses to work with, it can reach about 4.3 billion memory locations. That may sound like a lot, but stated as 4GB, one sees that many memory configurations, even in some of today’s notebooks, are larger than that; Linux servers with 64GB memories are not at all uncommon now. With 64 bits, a processor can address eighteen quintillion, four hundred forty-six quadrillion, seven hundred forty-four trillion, seventy-three billion, seven hundred nine million, five hundred fifty-one thousand, six hundred sixteen memory locations, which leaves plenty of room for growth.
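For readers who want to check the arithmetic, this short sketch computes both address-space limits and shows why a 64GB server is already out of reach for a 32-bit machine:

```python
# Address-space arithmetic behind the 32-bit vs. 64-bit argument.

ADDR_32 = 2 ** 32  # maximum addresses with 32-bit pointers
ADDR_64 = 2 ** 64  # maximum addresses with 64-bit pointers

print(f"32-bit: {ADDR_32:,} locations ({ADDR_32 / 2**30:.1f} GB)")
print(f"64-bit: {ADDR_64:,} locations ({ADDR_64 / 2**60:.1f} EB)")

# A 64GB Linux server already exceeds the 32-bit limit sixteen times over.
print(f"64GB server vs. 32-bit limit: {64 * 2**30 // ADDR_32}x")
```

The 32-bit figure works out to 4,294,967,296 — the "4.3 billion" (4GB) ceiling above — while the 64-bit figure is 18,446,744,073,709,551,616, about 16 exabytes.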

AMD had been hinting broadly that it would deepen its relationship with ARM for a while. Earlier this year, at a conference in Seattle, AMD sat with ARM on a panel (hosted by the ever-professorial Nathan Brookwood), and announced that it would incorporate an ARM A5 core to run security functions on future Accelerated Processing Units (APUs), AMD’s mixed architecture products.

AMD’s dense servers are still more than a year away, but in that time cloud computing will continue to grow and become an ever more vital part of how people use data. Another of Intel’s objections to ARM architecture is that it’s not “powerful”; that is, a single ARM core can’t do as much work per unit time as an x86 core. And this is true. Particularly with complex calculations that require a lot of intermediate results to be stored (cached) nearby, x86 wins by a mile.

But one key aspect of the cloud computing and Big Data analysis that underlies much of what companies like Google, Facebook, Twitter, and Amazon do is its sparse nature. As people with mobile phones and Apple iPads pile photos and tweets onto these platforms, vast data stores are formed that need to be picked over by programs. The picking itself can be quite simple: find the data item, read it, and pass it somewhere. But the amount of searching can be enormous.

And this type of task is exactly what dense ARM servers will be good at. Bazillions of tiny cores can all go out at once to the far reaches of the storage pool, looking for the same thing. When one finds it, it can signal the others to call off the search party. The power of this architecture, like that of an ant colony, is not in the individual ants but in the colony itself, which, collectively, is much smarter than any individual.

It’s cheaper, faster, and more efficient to send a whole lot of little cores out at once than to have a few big ones attempt the same task. Thus, once they have 64-bit architecture, dense ARM servers are just the ticket for a fair chunk of tomorrow’s cloud computing.

Now, with SeaMicro’s fabric, ARM’s low-power processors, and its own heterogeneous system-on-a-chip (SoC) architecture (not to mention process technology from its brother from another mother, Globalfoundries, which has been making ARM processors for other customers for a while now), AMD is well positioned to take a leadership role in this burgeoning market.

Disclosure: Endpoint has a consulting relationship with Advanced Micro Devices.

I founded Endpoint Technologies Associates, Inc., an independent technology market intelligence company, in 2005. Previously, I was vice president of Client Computing at IDC, covering client PCs (desktop and mobile computers). Before that, I ran my own research and analys...