The Intel Atom S1200 chips are for microservers as well as storage and networking systems that need energy efficiency and enterprise features that—Intel says—you just can't get in ARM chips. Intel called the S1200 "the world's first 6-watt server-class processor," and said microservers using the chip will be able to fit 1,000 nodes into a single rack.

ARM may dominate smartphones and tablets, but Intel hopes to lead the way in bringing smartphone CPUs to data centers. (As we've previously reported, Intel is making Atom-based SoCs for PCs, phones, and tablets as well.)

"Right now there are no ARM-based enterprise-class servers," Intel VP Diane Bryant said at a press conference for today's announcement. In addition to the hardware-assisted virtualization and ECC features already mentioned, Bryant noted that the Intel S1200 chips are 64-bit and support the x86 software prevalent in today's data centers.

The prospects of ARM servers from AMD and the likes of Calxeda are intriguing, but ARM isn't a major player in the data center yet. With Atom S1200, Intel hopes to pre-empt ARM's entry into the server market.

To prove the Atom chips' usefulness, Intel trotted out partners HP and Microsoft to talk about servers that will use the S1200 SoC and Windows Server's support for the new product line.

Describing the importance of 64-bit, Microsoft's Windows Server lead architect Jeffrey Snover said, "the benefits of a large, flat address space are just critical for a server operating system, so much so that Microsoft stopped supporting 32-bit chips [in Windows Server] a couple of releases ago. We're very excited that we'll now have a very low-energy part that could run the demands of Server."

Intel also scored support from Facebook. Facebook isn't using servers based on the chips yet, but had one of its top executives on stage with Intel to tout the new architecture's potential.

Facebook's involvement is interesting given that the social network previously joined AMD in touting the launch of ARM-based chips for servers. AMD's server processors using 64-bit ARM chips won't arrive until 2014, however, so Facebook may simply not want to wait.

There are three Intel Atom S1200 processors with frequencies of 1.6GHz to 2.0GHz, and power usage from 6.1 watts to 8.5 watts. Each SoC has two physical cores that can run four threads thanks to Intel's Hyper-Threading, and up to 8GB of DDR3 memory.

Intel's recommended price is $54 per chip in quantities of 1,000 units.

Smartphone CPUs in Facebook data centers

Today, Facebook VP of hardware design and supply chain Frank Frankovsky said, "Xeon-class processors have helped us scale Facebook very effectively so far." But not every workload needs what Frankovsky called "brawny cores."

Now that "smartphone-class CPUs" for servers are 64-bit and include ECC, they're ready for certain parts of the Facebook infrastructure, he said.

"We've applied what I'll call brawny cores unilaterally across our environment," Frankovsky said. "What's interesting about these smartphone-class CPUs is we can right-size them to the needs of maybe the photo storage tier, for example. Maybe that's a great place to start, where we don't need a brawny core, what we need is maybe a smartphone-class CPU that also includes 64-bit and also includes ECC."

(UPDATE: Another Facebook executive told GigaOm that Facebook is not actually using these latest Intel chips in its data centers. Frankovsky's remarks at the Intel event do show that Facebook is interested in the architecture, so perhaps the company will adopt future versions of the Atom SoC.)

Frankovsky noted that you might need two or three times as many Atom (or "wimpy") cores to do the same work Xeon-class processors handle, but that you ultimately come out ahead when measuring "how much useful work you can get done per watt and per dollar."

Those advantages only exist for certain types of workloads—the key is different CPUs for different applications. According to HP, compute-intensive applications still require Xeons, but the Atoms will be appropriate for "light scale-out applications" such as static Web serving and in-memory caching. HP said its calculations show Atom processors delivering twice the performance per watt of Xeon for light scale-out apps, but only half the performance per watt of Xeon for compute-intensive ones.
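HP's figures can be turned into a rough perf-per-watt comparison. A minimal sketch with hypothetical numbers: only the 2x and 0.5x ratios reflect HP's claim, while the absolute throughput values and power figures (beyond the S1260's 8.5W TDP from the article) are invented for illustration:

```python
# Rough performance-per-watt comparison based on HP's stated ratios.
# Absolute throughput numbers and the 85 W Xeon figure are hypothetical;
# only the ratios (Atom ~2x Xeon perf/watt for light scale-out,
# ~0.5x for compute-heavy) reflect HP's claim.

def perf_per_watt(throughput, watts):
    """Useful work per watt: the metric Frankovsky describes."""
    return throughput / watts

atom_watts, xeon_watts = 8.5, 85.0   # S1260-class part vs. assumed Xeon

# Light scale-out workload (e.g. static web serving): the Xeon does far
# more raw work, but the Atom wins once you divide by power draw.
xeon_light, atom_light = 1000.0, 200.0

# Compute-intensive workload: the Atom's simple cores fall behind
# even after normalizing for power.
xeon_heavy, atom_heavy = 1000.0, 50.0

light_ratio = perf_per_watt(atom_light, atom_watts) / perf_per_watt(xeon_light, xeon_watts)
heavy_ratio = perf_per_watt(atom_heavy, atom_watts) / perf_per_watt(xeon_heavy, xeon_watts)

print(f"light scale-out: Atom is {light_ratio:.2f}x Xeon perf/watt")  # 2.00x
print(f"compute-heavy:  Atom is {heavy_ratio:.2f}x Xeon perf/watt")   # 0.50x
```

The point of the arithmetic: a tenfold power advantage lets "wimpy" cores win on light workloads even when each core does much less work, but it cannot rescue them on compute-bound ones.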

HP is hedging its bets, going both with Atom and striking a partnership with Calxeda on ARM-based servers. In addition to HP, Intel said systems based on the new Atoms are coming from Accusys, CETC, Dell, Huawei, Inspur, Microsan, Qsan, Quanta, Supermicro, and Wiwynn.

And of course, Intel is working on the next generation of Atom, code-named Avoton. "Available in 2013, Avoton will further extend Intel's SoC capabilities and use the company's leading 3-D Tri-gate 22 nm transistors, delivering world-class power consumption and performance levels," Intel said.

Intel noted that next year's Xeons based on the Haswell architecture will also have impressive energy efficiency, but of course they will still require more power than the Atom SoCs.

58 Reader Comments

I never understood why AMD never thought to do this with the Bobcat cores. They already outclassed Atom in performance; they simply needed ECC support and a die shrink. Now that AMD owns SeaMicro, you'd think that would have been a faster path than starting fresh with an ARM core.

Not surprising, especially considering how flat the x86 market is. The problem for Intel is that the ARM space isn't like the x86 space; Intel will not find it as easy to pull an "AMD," and the margins, even for servers, will be slimmer.

I'd say for any sensible data centre admin, sticking to x86 is a no-brainer: it is almost low-power enough already, with a clear path to getting there soon. All software already runs on it, including Windows servers etc. With ARM it might work in some cases, but at this stage it is still not a mature platform. You'd have to have a really huge data centre to even think about whether lower chip and energy costs can outweigh the potential problems...

I also welcome the new Intel SoC, hopefully coming to my home media server soon as well.

"What's interesting about these smartphone-class CPUs is we can right-size them to the needs of maybe the photo storage tier, for example. Maybe that's a great place to start, where we don't need a brawny core, what we need is maybe a smartphone-class CPU that also includes 64-bit and also includes ECC."

Well, considering the future vision is heterogeneous multi-core CPUs, and servers are currently basically giant homogeneous multi-core systems (excluding things like HPC servers, which are already heterogeneous), the addition of mixed Atom and regular Xeon processors is just a different way of getting to the same end point (rather than, in the future, having lots of the same CPU but each CPU having mixed core types).

"What's interesting about these smartphone-class CPUs is we can right-size them to the needs of maybe the photo storage tier, for example. Maybe that's a great place to start, where we don't need a brawny core, what we need is maybe a smartphone-class CPU that also includes 64-bit and also includes ECC."

Xeons are still important for database and app servers, but wow, the price point on these little things is kind of shocking for what they offer. Intel may have just undercut a third of its own market!

The S1200 name is odd because Intel has a Mini-ITX server board that is also 'S1200'-ish in name. Makes me wonder if they will be socket-compatible with E3 Xeons.

* Looking for more info on these... It seems these are the socketless CPUs Intel was talking about, and I suspect most will find use as blades rather than 1U racks (but expect some custom cards for use in 4U cases). Kind of bummed by this, but it's not like I NEEDED to replace any nettop servers right now; I just liked the idea of replacing larger VMs with physical, low-powered servers.

"What's interesting about these smartphone-class CPUs is we can right-size them to the needs of maybe the photo storage tier, for example. Maybe that's a great place to start, where we don't need a brawny core, what we need is maybe a smartphone-class CPU that also includes 64-bit and also includes ECC."

I just bought a few dozen Intel shares at $20.70 each (with all the spare cash we can muster). It's not much, but with the market (and this stock) so cheap at the moment, it won't take much good news to turn these modest holdings into a modest profit! It's possible (if you give credence to Otellini's remarks) that Intel's current strategic position might be somewhat underestimated by the market. Opinions vary on this question, but this Ars Technica article shows that Intel is making the right moves to head off the uglier questions about their strategy — Intel may be better prepared for the coming market than the market has anticipated.

EDIT @down-voters: You're probably wondering why I'm discussing investment on Ars comments — why is this relevant? I mentioned previously that Ars Technica has a good track-record in helping me anticipate the market for high-technology shares (which are typically quite volatile, so that understanding this market can potentially yield decent profits over a comparatively short space of time). I read about a fellow who started with a paperclip, kept swapping things, and was eventually able to own a house. If my wife is patient enough with my new hobby, I'll try doing something similar with the stock market and document what I've done, and how I did it (I started with a ridiculously low-risk small amount of capital as an initial cost-basis — it's been an interesting experiment so far, gaining about 40% profit in two months, mainly based on RIM shares), and which news articles I based my opinions on (Ars Technica features quite heavily so far). If my wife calls a halt to my experiment, perhaps someone else could do a similar project? Or, does anyone want to compete with me? If so, get in touch via my blog and we can compare notes.


Yeah, I've just done the same, although I was talking with someone about doing it before I saw the recent news (though it was known that they were going to start pushing Atom servers). There's also Xeon Phi to go up against GPUs for HPC use.

I've always wondered why Intel doesn't bring their Atom SoCs to point-of-sale terminals, where ARM is still (and unfortunately) dominant. We could benefit from high-performance x86 compilers and libraries at normal prices, compared to rip-off ARM-based SDKs starting at $4,000, which with some POS vendors don't even have proper debugging capabilities. Not to mention that being able to run and debug the code on a PC would be even better.


While I like the idea in theory, that also opens up the PoS terminals to any arbitrary x86 code, depending on operating system. And we've all seen what happens with Windows running PoS terminals...

The MIPS R3000 pulled 4W, and powered many server-class systems from a variety of vendors (beyond just MIPS and SGI).

Even the Intel 386 saw usage in servers, and at one point (1991-ish) I ran an enterprise-wide database server on a Sequent Symmetry with 30 CPUs. According to the Intel 386DX datasheet, with a Vcc of 5V, Icc (input supply current) is 390mA max, meaning 1.95W peak.

Hell, the Symmetries even had an SMP version which ran original Intel Pentiums! I don't recall Intel disclaiming that Sequent shouldn't be doing so because the CPUs weren't server-class.

I do acknowledge the minor apples-vs.-oranges comparisons here (i.e. certain architectural functions like the FPU were external back then and are now internal), but I still think Intel needs a boot up the ass about ridiculous and historically inaccurate hyperbole. If you use a phrase like "the world's first," you should do five minutes of fact-checking first.


Intel used to own the high-end PoS terminal market, primarily with the Celeron series of processors. It lost that market by not innovating well in that space, not providing power-efficient solutions, and thus allowing the ARM vendors a way in. And if you really want to take a long-term view, the 8008 CPU - from which the entire x86 ISA evolved - was basically designed for PoS terminals (as well as computer terminals).

Furthermore, it's not like you have just one software development environment targeting ARM: shop around. The elegant ARM architecture is MUCH easier to produce code for than the god-awful x86 ISA, so I question your statement about compiler code quality.

This seems peculiarly low for a 64-bit server product with virtualization capability, even a low-end one.

Maybe it doesn't. Think about the board footprint of the DRAM, and the fact that they're anticipating 1,000 CPUs in 40U of rack space (i.e. 25 CPUs per 1U of vertical space). That's going to require some interesting PCB layout, not to mention cooling. They also might be assuming massive sharing of memory between execution threads in their target apps.

Of course, the other interesting thought is that Intel hasn't actually specified the memory architecture in any of the published reports I can see. If they were pursuing a ccNUMA strategy, which could make sense for a 1,000-node-per-rack superserver, then 8GB per node is absolutely valid. I doubt it, though, and there is no mention of QuickPath support relative to the Atom S1200.

To me, this just shows the chronic lack of architectural innovation which besets Intel right now (it's "NetBurst" and "blue crystals" time again). Sequent and SGI had commercial NUMA architectures, scaling to thousands of processors, in the mid-90s. Fifteen years later, Intel is still talking about NUMA as a future technology.
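The rack-density arithmetic earlier in this comment can be sketched quickly. A back-of-envelope Python model: the SoC wattage range and the 1,000-node claim come from the article, while the 40U figure and the per-node overhead for DRAM, NIC, and VRM losses are assumptions for illustration:

```python
# Back-of-envelope rack math for Intel's "1,000 nodes per rack" claim.
# SoC power figures are from the article; the usable rack units and the
# per-node board overhead are assumed values.

NODES = 1000
RACK_U = 40                                # assumed usable rack units
soc_watts_min, soc_watts_max = 6.1, 8.5    # S1200 family range, per article
overhead_watts = 10.0                      # assumed: DRAM, NIC, VRM losses

nodes_per_u = NODES / RACK_U
rack_kw_min = NODES * (soc_watts_min + overhead_watts) / 1000
rack_kw_max = NODES * (soc_watts_max + overhead_watts) / 1000

print(f"{nodes_per_u:.0f} nodes per 1U")                      # 25 nodes per 1U
print(f"rack power: {rack_kw_min:.1f}-{rack_kw_max:.1f} kW")  # 16.1-18.5 kW
```

Even with modest per-node overhead, a fully populated rack lands in the mid-teens of kilowatts, which is why the commenter flags cooling and PCB layout as the hard problems rather than the CPUs themselves.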


All very valid points, but I don't think it's a stretch to get to a higher memory requirement for some alternate deployment scenarios. It's the virtualization aspect that seems to warrant higher memory capability to me.

I suspect it's probably more about not eating into the Xeon market too much than any technical consideration.

I'll take your word for it, 'cause my knowledge of the markets comes from Kai Ryssdal and the occasional news item on Anandtech/Tom's/Ars. But from the perspective of someone in, you know, technology, Intel seems basically unstoppable at this point.

I don't want that. I want competition. I want a viable AMD. I want ARM to be competitive with x86 CPUs on more than a low-end performance-per-watt basis. But Intel just seems like it's wiping the floor right now, with everyone...which is going to be bad, in the end, for technology, but which is great for Intel, right now.

"Each SoC has two physical cores that can run four threads thanks to Intel's Hyper-Threading,"

Please don't just mindlessly repeat Intel's PR. On pretty much every Intel chip that has ever shipped, from the P4 to Ivy Bridge, hyper-threading is worth about 0.25 of a CPU, so the throughput of a two-core device with hyper-threading is about 2.5 cores, which is a far cry from 4.

Now maybe Atom is the one chip where Intel has finally fixed whatever the bottleneck is that makes hyper-threading such a crappy performer, but I'd like confirmation of this before I just assume it.

(And if anyone could point to serious research on exactly WHY HT remains so underwhelming, even on Ivy Bridge, I'd be much obliged. When I look at Ivy Bridge's µarch, I see enough infrastructure there that for non-FP, non-SSE code you should be able to get close to a 2x speedup on most code.)
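The commenter's ~0.25-of-a-core figure can be made concrete with a toy throughput model. The HT factor below is the commenter's rough estimate, not a measured value, and the function is purely illustrative:

```python
# Toy model of effective throughput for an SMT chip: each physical core
# counts as 1.0, and each extra hardware thread adds only a fraction of
# a core, because sibling threads share execution resources.

def effective_cores(physical, threads_per_core=2, ht_factor=0.25):
    """Effective core count; ht_factor is the commenter's ~0.25 estimate."""
    extra_threads = physical * (threads_per_core - 1)
    return physical + extra_threads * ht_factor

# Atom S1200: 2 cores / 4 threads. Intel's "four threads" wording
# invites a 4x reading; the commenter's model gives ~2.5x one core.
print(effective_cores(2))                  # -> 2.5
print(effective_cores(2, ht_factor=1.0))   # -> 4.0 (the marketing reading)
```

The gap between 2.5 and 4.0 is exactly the commenter's complaint: thread count is not throughput when threads contend for the same execution units.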


The risk for Intel is that margins and volume collapse. If the market shifts to being overwhelmingly cost- and power-sensitive, then Intel will find itself disrupted, and in serious trouble like HP. This happens when the market comes to believe that processors are good enough and further improvement in CPU computational throughput is not worth paying for. Most people would not notice much difference between Penryn and Ivy Bridge, and that is a problem.

Seriously, is there anything but pride stopping Intel from buying an ARM license, making their own ARM processors on a superior process, killing the competition, and then, finally, carrying x86 to the grave?

So, is ARM really, really, the more desirable architecture? Can it be the top performer also?

BTW, when will we see the perfect server processor SoC: one with integrated DRAM?


There's no reason to. Intel has already shown that they can play in whatever CPU space they want to. The only reason they've been dragging their feet with Atom is that they don't want to sell $20 SoCs if they don't have to.


Play? Ok, maybe. Succeed? That remains to be seen.

Hard for me to see this as a move from strength on Intel's part. It took them until now to even merit consideration in the phone and tablet space, they aren't exactly racking up a lot of volume there, and to fit the required price, power, and performance envelope they are practically giving the chips away. And now, to try to keep ARM from gaining a foothold in the datacenter, they are pushing $50 chips? The fab advantage they need to stay competitive must be getting squeezed. Meanwhile, some big ARM licensees are bidding aggressively against each other for capacity, which must be giving the merchant foundries the confidence and wherewithal to invest more aggressively in their own process improvements. Doesn't seem good for Intel at all.


That is so cute. You are comparing a 32-bit, 33MHz RISC chip from 1988 to a modern 64-bit, ECC-capable, virtualization-ready 2GHz CPU able to address 8GB of memory and perform FP work without going out on the bus.

The definition of a server chip used today didn't even exist back then. All the server-class feature sets were university pipe dreams, not working silicon. Modern smartphones should be able to emulate the MIPS and do it in fewer watts than the original.


Intel owns an ARM license. I cannot believe people continually ignore that fact. Doesn't XScale sound familiar? Yes, they sold the product line, but they have a license. Some of the I/O processor variants can still be purchased, last I checked. It was probably an architectural license, considering what was done with the XScale processors, which we all recall were a DEC part before they became Intel's.

Why put the dominant workstation and small-to-mid-level server CPU in the grave when no ARM chip can replace it? Get those low-watt monsters into Top500.org and then we can talk. Intel has 76% of the Nov/2012 list, with Opterons next at 12%. No ARM-based solution available to date. :-/

Smartphones are a more likely place for DRAM integrated into the SoC. Chip count and board area are more important there than in servers, including wimpy-core servers, where RAM size is very important.