Why Intel’s neutral stance on Atom in servers is a mistake

Intel is at it again with this "you can put Atom in a server, but we don't see …

With all the talk of leaner, simpler cores in the datacenter, and the rumblings about ARM chips going into cloud servers, you'd think that Intel would be happy to position Atom as a server part. But you'd be wrong. PC World is reporting some remarks by an Intel VP, where he takes a neutral stance on the idea of Atom in the datacenter.

"We are not opposed to an Atom based server, but we just don't see broad adoption of the Atom as a server chip," Intel's Kirk Skaugen told the publication.

These remarks track almost exactly with Intel CTO Justin Rattner's comments at the most recent Intel R&D day. In response to a question about the Atom-based, 512-core SeaMicro server, Rattner acknowledged that there's a role for simpler cores in the datacenter. But he stopped short of encouraging Atom's uptake there. Instead, he pointed to the Single Chip Cloud Computer project as the best example of Intel's current thinking on the best way to meet the cloud datacenter's demand for a sea of tiny cores.

There are two problems with the SCCC, however, the first of which is that it's not yet a real product, and it's not clear when it will turn into one. The other problem with the SCCC is that its individual cores are probably too wimpy. To bolster support for the idea that Xeon still beats Atom in the datacenter, Skaugen cited a paper by Google Fellow Urs Holzle, entitled "Brawny cores still beat wimpy cores, most of the time" [PDF]. Holzle's paper makes the point that, while Google generally prefers throughput-oriented architectures to peak-performance-oriented architectures, it is possible for the cores to be too weak.

Specifically, Holzle writes that when a so-called "wimpy core" lags a "brawny core" by a factor of two in raw performance, it stops making sense for Google's data centers. That's just too much of a performance delta, and at that point a bigger, more powerful core is better, even if it costs a lot more in wattage.
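Holzle's threshold can be illustrated with a toy Amdahl-style latency model (a sketch with made-up numbers, not figures from the paper): the serial portion of a request runs on a single core, so halving per-core speed roughly doubles that portion's latency no matter how many extra cores you add.

```python
def request_latency(serial_work, parallel_work, core_speed, cores):
    """Amdahl-style toy model: the serial part of a request runs on one
    core; the parallel part fans out across all available cores."""
    return serial_work / core_speed + parallel_work / (core_speed * cores)

# Same aggregate compute budget (core_speed * cores = 8 in both cases),
# but the wimpy cores are individually 2x slower.
brawny = request_latency(100, 400, core_speed=2, cores=4)  # -> 100.0
wimpy  = request_latency(100, 400, core_speed=1, cores=8)  # -> 150.0
```

Even with identical total throughput, the wimpy cluster's per-request latency is 50 percent worse here, which is exactly the kind of delta that pushes Google back toward brawnier cores.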

The individual cores in Intel's SCCC demo are very weak—about half the transistor count of an Atom. So even if Intel brought the SCCC to market tomorrow, Atom might still be a better bet simply because its absolute performance is closer to that of a normal (i.e., out-of-order), low-power x86 core.

While Intel putters about with its SCCC science project, ARM is quite seriously attacking the datacenter with its next-generation Cortex A15 part, codenamed Eagle.

Eagle boasts a number of features that are directly aimed at the same server space that Intel is refusing to market Atom at, despite the latter's suitability for such workloads. And in recent days, ARM has also said that it's looking into adding simultaneous multithreading to future chips—a memory latency-hiding feature that makes a lot less sense for mobiles than it does for cloud servers.

Intel should consider shifting its position on Atom in servers from "neutral" to "strong buy," even if it does take a bit of the wind out of Xeon's sails. If it continues to push Xeon while encouraging everyone who wants a simpler server core to hold out for the SCCC, it may leave a large opening that ARM (and possibly AMD) can waltz right through.

Wasn't there an Ars article previously about some company putting 100+ Atoms into a server box? They were targeting low-wattage, high-core-count needs, and seemed to have a target audience. I guess Intel doesn't feel the volume/demand is high enough for them to tackle that themselves?

Intel is too busy protecting their Xeon product to innovate with Atom for servers. They think they can steer the market however they please and tell buyers what they want instead of letting them decide on their own. This is something that annoys me about Intel and some other tech companies. They're so busy thinking about how they can squeeze every penny out of what they've got that they completely miss the next thing coming. When you have a near-monopoly on the market like Intel does, that approach works, unfortunately.

Sometimes I think Intel doesn't know what to do about Atom. I think the market accepted it way better than Intel expected, and that is because it is good enough most of the time and has exceptional power management. This really shot down the Celeron and its total lack of power management. When it comes down to it, Intel seems to hate the value segment, and they want to dictate demand. Intel is always trying to protect the product line, even if it comes at the expense of what the market wants. That model stung them in the past when Prescott launched, and it took them years to bring us the much better Core design that we loved in notebooks.

Atom is a good solution to some problems, but Intel doesn't want to admit that.

Wasn't there an Ars article previously about some company putting 100+ Atoms into a server box? They were targeting low-wattage, high-core-count needs, and seemed to have a target audience. I guess Intel doesn't feel the volume/demand is high enough for them to tackle that themselves?

512 in this bad boy. You would think, as well as it's been received in the market, that ECC-compatible boards would have surfaced by now, though.

With you on that sprockkets. I don't need any more power than my current Core2Duo. But if I can buy a smaller die that utilises power more effectively and so I can purchase a silent machine yet still retain the same power as I do now then I will do.

I definitely think Google cares more about throughput-oriented architectures than peak-performance architectures, as the article said. However, that's probably only because it doesn't matter if it takes another few hundred milliseconds, or maybe even a thousand, to process something every once in a while; their business, in my opinion, doesn't seem to rely on split-second transactions everywhere.

I understand some people will disagree, thinking that they want their search engine to be fast or people would leave for something else. However, I don't think people notice a few milliseconds here or there every once in a while, because, overall, Google has a really high throughput. However, in a lot of other industries (i.e. automated equity trading, trade order routing, game servers, automated flight navigation, etc.), you want the throughput performance and especially peak performance. In these kinds of situations, you don't want to be caught making a decision too late, because either you'll lose money, customers, or lives.

One big thing I think Intel needs to worry about in the peak-performance space are the use of GPGPUs to get the same workload accomplished faster. Here, I think Intel really needs to push forward with their own GPGPU or to work with someone like NVIDIA.

So, I think there is plenty of room for everyone. Here at home, except for my development workstation and gaming machine, I don't care about peak performance. I just care about quiet operation (Guru Plugs / Sheeva Plugs are great here) and general performance.

In the end, I think Intel will either go where the market is, or they will lose out. I honestly don't think Intel would be that dense if the market moves towards Atoms, but who knows? I don't think they have too much to worry about though, as peak-performance datacenter operations seem to be the majority of the cases out there.

Atom is a great platform for lightweight nodal servers, but with the current push towards virtualization it is not quite yet the cure-all for power consumption. For now I'll take dual hexa-core Xeons, which are cheap to license, offer fat caches, and have QPI, over multiple Atom-based servers. With a decent storage backend I can push a modest 12-24 VMs on that.

Interesting, since a good chunk of Windows Home Server boxes use the Atom as a server part.

I'm having trouble figuring out who is replying to whom, but I assume you're replying to me. (Is there a way to know for sure who is replying to whom? A threaded view, perhaps?)

My intent wasn't to say that Atom wasn't being used or that no one cares about it. I was trying to say I don't think Intel is dense enough not to recognize a moving marketplace, if it's indeed moving, and build better components to meet demand. At home, I think the Atom and ARM chips make perfect candidates for servers, because at home, no one cares if the server takes a few extra milliseconds to start some task like pushing data across a pipe. I think in certain datacenter workloads it makes sense as well.

Another hypothesis is that Intel gets it - they see the use of Atom in the server room, but they've run the profit margin numbers and realized that selling their $800 server chips turns more of a profit, so the official company line is that Atom belongs in netbooks. I wouldn't be surprised if they have R&D working on an Atom server-class product, so that if the time comes when either IT departments start demanding it or third parties like SeaMicro and AMD start to push the idea, Intel will already be set to jump back in.

In addition to the Atom being a low-margin part and Intel trying to protect their high-margin parts, companies like Google are interested in adopting the small, lightweight, distributed, and importantly cheap mini-server for simple scalability and the ease of quickly and cheaply replacing one or more nodes. You simply can't do that with monolithic designs.

If Intel started marketing and/or encouraging Atom in servers at this point, wouldn't they just be competing with themselves while helping ARM/AMD carve out a "wimpy core" market?

The smart business move would be to keep raking in the Xeon money, let ARM/AMD spend the time, money, and effort carving out a "wimpy core" market (which Intel will make more difficult by crapping on the idea), then swoop in with Atom (or whatever product they can mark up the most) and reap the rewards of their competitor's labor.

As others have mentioned, Intel is not promoting the Atom for servers because it doesn't really make any economic sense for them. They would be competing against themselves, at much lower margins. Hell, everyone seems to be forgetting that they aren't really promoting the Atom for laptops either. Even at the height of the netbook craze, Intel's response was like, "Why would you want to do that? Here, have some of these nice CULV processors we've made instead." That's why they're still enforcing those screen-size limitations, etc.

Intel started the Atom to get into markets that they didn't have access to already, like the doomed MID/UMPC form factor, and now smartphones. Everything else has been quite the accident, and I don't think they've made up their minds whether they should be thanking the likes of Asus and SeaMicro or cursing them.

I don't think it's wrong for Intel to take a neutral stance on Atom in servers.

Reason? 1. As a normal practice, the server platform is where the latest CPU technology is pushed out first. Because of this, if Intel encourages using Atom in servers, it will slow the server platform's adoption of new technology in the future.

I think those are good business points, and it's clearly how Intel has operated in the past. But things are changing worldwide, and I think Intel is risking more than it should to squeeze profits. Remember, the best business deals benefit both sides. As Intel takes advantage of its size to artificially enhance profits (screen-size limits on netbooks, no Atom in servers, etc.), its partners will make progress with other suppliers. They're losing the mobile market, and not just cell phones but the larger tablets that will eventually replace desktops. By not going full speed on these initiatives, they're risking a lot for short-term quarterly results. Same as MS bleeding the PC makers dry.

Can someone enlighten me as to which server workloads people are actually clamoring to run on an Atom based server? I ask because it seems to me this post assumes the existence of such workloads.

HPC? No.
File server? No.
Email server? No.
Database server? Hell no!
Virtualization node? No.
Static web cache? Maybe, but custom hardware exists for this and is likely better than an Atom solution.
memcached? Maybe (I think even the mild hashing in a memcache instance might be too much for Atom).

So seriously, what is the workload for such a beast? (Hint: "cloud" is not a workload.)

Intel's not just neutral about Atom in servers, they're neutral about it in just about everything besides netbooks and (some) embedded apps, and even there, the big push is the AMT enablement in the Core i series. The reason is simple - the Atom is a substantially lower-margin part.

Plan A - Intel does what it's currently doing, de-emphasizing Atom as a server option.
Plan B - Intel instantly promotes Atom as the new server option.

Plan C - Intel revamps Atom and its chipset (possibly into an SoC, even) to support ECC and make it easy to link multiple chips together (be it a new bus architecture or modest changes to an existing one) in a design meant for server-style tasks, and markets it as, oh, say, "Atom MP" or an "Atom Server Architecture" platform, keeping the price reasonably inexpensive to make it a good solution for blade servers, servers with multiple processors on expansion cards (like the one mentioned in the article), and possibly a 2-4 CPU home server variant (Windows Home Server 2.0, aka "Vail," will likely struggle a little under a single Atom).

In the meantime, I think it would be a good idea for Intel to make some subtle hints about Plan C (if they went with it) so that their existing customers would await this solution if they have needs, rather than considering the jump to ARM. But hey, what do I know? I'm just a lowly sysadmin. =)

The smart business move would be to keep raking in the Xeon money, let ARM/AMD spend the time, money, and effort carving out a "wimpy core" market (which Intel will make more difficult by crapping on the idea), then swoop in with Atom (or whatever product they can mark up the most) and reap the rewards of their competitor's labor.

That makes more sense than anything else. Does anyone think that if the market does emerge, Intel won't be able to use their marketing, manufacturing, and partner strengths to enter the market relatively easily? I don't think it'll fare any worse than recovering the server market share the P4 lost to Opteron. Any software that runs on an ARM Eagle or AMD Bobcat will run on whatever Atom derivative they create.

Plus, I don't know if I'm in the minority here, but I'm not convinced this physicalization thing is any more than a passing fad that popped up solely from the margins on chips currently offered to the bottom of the market.

As others have mentioned, Intel is not promoting the Atom for servers because it doesn't really make any economic sense for them. They would be competing against themselves, at much lower margins. Hell, everyone seems to be forgetting that they aren't really promoting the Atom for laptops either. Even at the height of the netbook craze, Intel's response was like, "Why would you want to do that? Here, have some of these nice CULV processors we've made instead." That's why they're still enforcing those screen-size limitations, etc.

Intel started the Atom to get into markets that they didn't have access to already, like the doomed MID/UMPC form factor, and now smartphones. Everything else has been quite the accident, and I don't think they've made up their minds whether they should be thanking the likes of Asus and SeaMicro or cursing them.

I know that's how it started, but I think that now, if they *don't* push Atom or a derivative into the "low-end many-core" server space, they'll risk ceding that ground to AMD or ARM. Selling one (or maybe lots) of your own products (even if at a much lower GP than your Big Strong Server Chip) is better than that money going to one of your competitors.

The smart business move would be to keep raking in the Xeon money, let ARM/AMD spend the time, money, and effort carving out a "wimpy core" market (which Intel will make more difficult by crapping on the idea), then swoop in with Atom (or whatever product they can mark up the most) and reap the rewards of their competitor's labor.

The problem with that is if they let a market get established based on ARM hardware, this would erode the industry's dependence on x86 server software (since logically in order for a viable hardware market to exist, viable ARM-based software must also exist). So that would devalue the ability to run x86 code, for which Intel charges a premium. Seems very risky, unless they're super confident that they have something in the works that can compete with ARM on straight up performance/watt, or whatever other metrics are important in this hypothetical market.

Interesting, since a good chunk of Windows Home Server boxes use the Atom as a server part.

I'm having trouble figuring out who is replying to whom, but I assume you're replying to me. (Is there a way to know for sure who is replying to whom? A threaded view, perhaps?)

My intent wasn't to say that Atom wasn't being used or that no one cares about it. I was trying to say I don't think Intel is dense enough not to recognize a moving marketplace, if it's indeed moving, and build better components to meet demand.

Intel doesn't have a problem with Atom in Windows Home Servers because that's one of its approved niches: netbooks and mini-ITX desktops. If the home server market takes off, they might change their stance. Like netbooks, Atom-powered WHS systems are a bit underpowered, but it's seen as a worthwhile trade-off.

Actually, if they weren't so much more expensive than Atom CPUs, a CULV processor would be pretty good for a home server.

The smart business move would be to keep raking in the Xeon money, let ARM/AMD spend the time, money, and effort carving out a "wimpy core" market (which Intel will make more difficult by crapping on the idea), then swoop in with Atom (or whatever product they can mark up the most) and reap the rewards of their competitor's labor.

The problem with that is if they let a market get established based on ARM hardware, this would erode the industry's dependence on x86 server software (since logically in order for a viable hardware market to exist, viable ARM-based software must also exist). So that would devalue the ability to run x86 code, for which Intel charges a premium. Seems very risky, unless they're super confident that they have something in the works that can compete with ARM on straight up performance/watt, or whatever other metrics are important in this hypothetical market.

But the many tiny core market isn't about heavy x86 applications. The kinds of thing I imagine run on these things is the kind of stuff that lives in a Debian repository, which is why ARM stands a chance.

Can someone enlighten me as to which server workloads people are actually clamoring to run on an Atom based server? I ask because it seems to me this post assumes the existence of such workloads.

HPC? No.
File server? No.
Email server? No.
Database server? Hell no!
Virtualization node? No.
Static web cache? Maybe, but custom hardware exists for this and is likely better than an Atom solution.
memcached? Maybe (I think even the mild hashing in a memcache instance might be too much for Atom).

So seriously, what is the workload for such a beast? (Hint: "cloud" is not a workload.)

If Google's the example, I think you need to pluralize: Atom in a web server is different from Atoms in web servers. Do the legwork for some form of load balancing up front and you have a solution that is easily scalable: throw more [cheap] Atom servers at it. If the workload is big enough to require multiple servers anyway, it's at least plausible that dozens of Atoms might outperform a couple of Xeons in some situations.
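The load-balancing legwork mentioned above can start as simply as a round-robin rotation over the pool of cheap boxes. A minimal sketch (the hostnames are made up for illustration):

```python
import itertools

# Hypothetical pool of cheap Atom web servers sitting behind one balancer.
backends = ["atom-01:8080", "atom-02:8080", "atom-03:8080"]
_rotation = itertools.cycle(backends)

def pick_backend():
    """Hand each incoming request to the next box in the pool.
    Scaling out is just appending another cheap server to the list."""
    return next(_rotation)
```

Real deployments would layer health checks and session affinity on top, but the core idea of "throw more cheap servers at it" really is this small.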

Can someone enlighten me as to which server workloads people are actually clamoring to run on an Atom based server? I ask because it seems to me this post assumes the existence of such workloads.

HPC? No.
File server? No.
Email server? No.
Database server? Hell no!
Virtualization node? No.
Static web cache? Maybe, but custom hardware exists for this and is likely better than an Atom solution.
memcached? Maybe (I think even the mild hashing in a memcache instance might be too much for Atom).

So seriously, what is the workload for such a beast? (Hint: "cloud" is not a workload.)

File server? Why not. An Atom or even one of the new ARMs has enough power to run a NAS with no problem. SuperMicro offers an Atom ITX board with six SATA ports, two NICs, and an expansion slot. Add another SATA card and you can use 10 drives with OpenNAS or FreeNAS to create a pretty powerful NAS.
Email server? Depends on how many users.
Database server? If you want to run a small-office accounting server, sure. You may not want to use a VM for that. I would, but I know some people are odd about that.
What about a VoIP PBX?
Those are all systems that an Atom can handle.

Can someone enlighten me as to which server workloads people are actually clamoring to run on an Atom based server? I ask because it seems to me this post assumes the existence of such workloads.

HPC? No.
File server? No.
Email server? No.
Database server? Hell no!
Virtualization node? No.
Static web cache? Maybe, but custom hardware exists for this and is likely better than an Atom solution.
memcached? Maybe (I think even the mild hashing in a memcache instance might be too much for Atom).

So seriously, what is the workload for such a beast? (Hint: "cloud" is not a workload.)

If Google's the example, I think you need to pluralize: Atom in a web server is different from Atoms in web servers. Do the legwork for some form of load balancing up front and you have a solution that is easily scalable: throw more [cheap] Atom servers at it. If the workload is big enough to require multiple servers anyway, it's at least plausible that dozens of Atoms might outperform a couple of Xeons in some situations.

But then you have to manage all those physical machines! One of the best things about a virtual environment is that you can set up centralized storage and a few servers (and you can set them up for HA or load balancing). All you have to do is centrally manage processing resources (servers with no local storage) and a storage resource (iSCSI SAN?). I don't see why anyone would want to take a step backwards and handle many separate servers unless there was a workload that a few strong servers running virtual machines couldn't handle.

Can someone enlighten me as to which server workloads people are actually clamoring to run on an Atom based server? I ask because it seems to me this post assumes the existence of such workloads.

HPC? No.
File server? No.
Email server? No.
Database server? Hell no!
Virtualization node? No.
Static web cache? Maybe, but custom hardware exists for this and is likely better than an Atom solution.
memcached? Maybe (I think even the mild hashing in a memcache instance might be too much for Atom).

So seriously, what is the workload for such a beast? (Hint: "cloud" is not a workload.)

If Google's the example, I think you need to pluralize: Atom in a web server is different from Atoms in web servers. Do the legwork for some form of load balancing up front and you have a solution that is easily scalable: throw more [cheap] Atom servers at it. If the workload is big enough to require multiple servers anyway, it's at least plausible that dozens of Atoms might outperform a couple of Xeons in some situations.

But then you have to manage all those physical machines! One of the best things about a virtual environment is that you can set up centralized storage and a few servers (and you can set them up for HA or load balancing). All you have to do is centrally manage processing resources (servers with no local storage) and a storage resource (iSCSI SAN?). I don't see why anyone would want to take a step backwards and handle many separate servers unless there was a workload that a few strong servers running virtual machines couldn't handle.

Oh Lord, a virtualisation zealot...

Virtualisation is all well and good in its place, but it is not a panacea.

When you're dealing with REAL scale, virtualising onto a small number of high performance machines starts to run into real cost issues when you want to deliver resilience. It also makes scaling out with demand a financially unpleasant process, requiring large periodic investments in assets that will not immediately be utilised (and not fully utilised means 'not making a return on investment.')

The small+cheap+disposable model also tends to have very real operational cost benefits compared to the "big'n'shiny'n'fast" model. Your big expensive boxes also come with big expensive maintenance and support contracts so you can get the engineer out to fix it in an hour when there's a problem, because you can't afford to deliver any better than n+1 resilience. Whereas your dozen cheap'n'cheerful boxes that do the same job probably don't have any maintenance contracts on them at all because, well, if one breaks who gives a damn? You only lost a few percent of peak capacity and you can just buy another one and throw it in.
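The capacity math behind that dozen-cheap-boxes argument is easy to sketch (all numbers here are illustrative, not from any real deployment):

```python
def surviving_capacity(n_servers, per_server_capacity, failures=1):
    """Fraction of peak capacity remaining after some boxes die."""
    total = n_servers * per_server_capacity
    return (n_servers - failures) * per_server_capacity / total

# Both clusters have the same 1,200 units of peak capacity.
big_iron = surviving_capacity(2, 600)    # lose one of two big boxes: 50% of peak gone
cheap    = surviving_capacity(12, 100)   # lose one of twelve cheap boxes: ~8% gone
```

A single failure costs the two-box setup half its capacity but the twelve-box setup barely anything; that asymmetry is why the big boxes need the expensive support contracts and the cheap ones don't.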

There are plenty of good places where small+cheap+'disposable' makes perfect sense technically and economically. There are also places where big-bucks iron (and possibly virtualisation) makes sense. Out here in the real world where we're dealing with these sorts of problems, you'll find both solutions in different parts of the same architecture.

Several people have said: it's obvious, because Atom is a low-margin part.

What I wanted to add: in my experience, what MOST smart consumers WANT TO USE in terms of computer components is exactly that: the low-margin parts, because that generally means the best bang for the buck, assuming the low-margin part isn't too far behind the state of the art. Almost no one wants to run a Thunderbird/P3 these days, even if it were free: for the average consumer it would be too slow, and for the data center the performance-per-watt is too low. (Counter-example: my wife still has a P3 on her desk at school, and it happily runs XP for her email checking and word processing. But I'm dying to upgrade it. :-)

Over the years, it's been a constant see-saw between Intel/Amd/Nvidia/ATI/etc releasing a lower-margin part, people flocking to it, and then the same company trying to steer the consumer back to a much higher margin part with (somewhat) better performance.