It's not like whatever implementation of Atom they go with will be the same as the PC version, even if that's drastically updated and addresses every conceivable issue. The platform itself will have to be radically different, and it would be just the same for Bobcat.

I believe he was referring to the fact that the Bobcat core has a high-performance (for a server) GPU in it. That would end up being wasted energy in almost any server. The only place GPUs are used heavily in servers is in HPC applications, and Bobcat is far too slow for that.

Actually, Bobcat itself does not contain a GPU, just as the Bonnell core doesn't. The Brazos platform chips, Ontario and Zacate, are what contain the GPU. Bobcat is just the low-power x86-64 core used in them. But 16-32 of them on either a single C32 or G34 socket would make for a very dense, low-power server.

Yeah, it's not like Bobcat is based on Opteron, which put up a good showing until Nehalem. Before Intel's IMC and QPI, Intel's x86-turned-server products were engineering band-aids. FB-DIMMs? Really?

ARM is coming to Windows... not surprised to see this. "Microsoft announced on 5 January 2011 that the next major version of the Windows NT family will include support for ARM processors. Microsoft demonstrated a preliminary version of Windows (version 6.2.7867) running on an ARM-based computer at the 2011 Consumer Electronics Show."

At least proportionally. Two much slower and simpler cores at a maximum of 1.6 GHz in an 18 W envelope seems outright stupid in comparison. Sure, it's got a GPU included in this power envelope... but this thing shouldn't consume power when it's not doing anything (as the SNB GPU does), otherwise it's a really bad choice for the task at hand.

Atom in its current in-order design is useless; many benchmarks have already shown that Brazos is a much better chip for this. Unless, of course, Intel totally redesigns the Atom.

That was my initial thought, but there are other server uses for which I think microservers would work well; one example I've seen is within manufacturing equipment. In my experience there seems to be increasing use of PC equipment (rather than dedicated PLC equipment), and servers are chosen for uptime/reliability. However, these servers are frequently overpowered, so a system using a low-power processor in a server chassis could be appealing.

That makes sense, but going with ultra dense micro servers seems overkill for equipment that doesn't require much computing power to begin with. Of course I would be a fool to doubt Intel's ability to size up a market, but I'm having a hard time picturing where an ultra dense microserver makes a better choice than what's already out on the market.

I doubt that will have any impact on "every-second-idiot", as managing many micro-servers will be done using a micro-server management tool, with which you'll be able to make as many big mistakes as with a virtualization tool (or almost).

Dense microservers are useless. In the sense that, if you find it funny to put 2 cores on a micro-die, it is still far more interesting to put 36 cores on a normal-sized die, with all the benefits and cost cutting that implies (shared memory controller, etc.).

Also virtualization has brought a lot of manageability and I don't see that becoming useless because there is another power efficiency point of interest in "micro-servers".

Besides, any "spike" load on those Atoms and you'll see how useless they can be.

Really, I can see 36 low-power cores on a die, but 2 low-power cores on a die in a datacenter? What for? If you're in a datacenter you need 36+ of them anyway, and you'll definitely prefer the manageability increase and lower power requirement of a single 36-core die over 18 times the even lower power requirement of those dual-core mini-CPUs.

Also, one must realize that splitting one 36-core box (assuming somebody sells a 36 low-power-core die, which would NOT have more computing power than the current multi-core AMD CPUs and would surely not use more power) into 18 two-core boxes means you have 18 power supplies to plug in, 18 damn network cables adding to the spaghetti dish of the day, etc.

Not even talking about RAM yet, because if you're going to put something equal to 1/18th of a real server CPU in its own box, it would only be right to give it about 1 GB of RAM...

Management costs up, maintenance costs up, complexity up, price up... yes, you get hardware-level isolation and potentially extremely cheap fully redundant solutions (like two 4-socket Atom boards in a 1U chassis with integrated load balancing, woo...) but that still does not make any sense to anyone who uses a quarter rack or more of a datacenter.

Obviously some people will buy these, because Intel said so, but I think it's clear Atom-like CPUs/APUs are just good for low-power stuff, like a home file server, and the stuff mfeller2 mentions just down here. And for the poor who need full h/w redundancy, or those who don't like virtualization's manageability (and the every-second-idiot effect).

I completely agree. Microservers are crap. If you have smaller cores that use less power, like an Atom, you can stick lots of them together. There is no need for a deep pipeline if you have 100 cores. 36 cores would be great... but we're going to have to wait a little bit. 16-core Atoms are coming, though.

Forget about Intel for a moment and think about where these CPUs come from. ARM chips typically run our smartphones' software, and as such battery savings has been the holy grail when designing them. Sure, a conscious push toward a dedicated server market could erode some of this, but on the other hand it isn't absolutely a must. A low power footprint as an isolated parameter has an ideal value of 0 in any use case, but it has different importance across the many different tasks.

Computer centers become more and more burdened by power costs, so lowering power usage gets more and more important, and as such this kind of server chip and ultra-mobility chips become a match. One might believe you would be heavily constrained by raw physics and get the simple proportionality you describe, but as it stands today ARM chips have an efficiency advantage while not being too weak to be usable for a lot of tasks.

Had the bigger iron just been bigger, your analysis would be perfectly valid. As it is now, you seem to forget the proportional differences in power usage.

Now get back to Intel and the Atom and think about why this is the best candidate. Well, it is their low-power chip, but it also shares some or a lot of the ARM battery-saving mantra. That can make it do better than any simple 2 * 16 = 1 * 32 comparison will show. Xeon has not been made with an overall battery-savings agenda covering the whole architecture project from day one until production!

L, while I think your points are valid, it's important to note that micro servers are a different form factor to blades. They're not just dropping a low-power CPU into a blade, but rather putting it into a half-width / half-height box, so that you can fit 4x into the same rack. Plus they plan to share power supplies... So while I think you've raised valid points, it really boils down to whether or not they address the issues you raise. If they don't address the issues, then the technology is indeed worthless; if however they do succeed in producing something that is competitive in some applications, in terms of computing power vs. power consumption/physical space, then they will succeed in those markets. My gut feel is that 5 years from now, microservers will indeed have established themselves as a legit technology that is the best choice for some applications, while utilizing virtualization on more powerful boxes will be the best choice in other situations.

There are MANY, MANY use cases where a dedicated physical machine is either required or good to have. This is where the real market is. Give me physical hosting on an Atom over Virtuozzo "hosting" any day.

1. I run an Atom "server" at home. File serving, proxy server, light web server duty, at a whopping 22 W typical. Smaller, quieter, and more power-thrifty than the re-dedicated desktop.
2. I have been interested in AMD's solution because (I believe) it supports VT in a slightly higher power envelope. The current Atom does not have hardware support for VMs. I sometimes use VMs as sandboxes to experiment in.

My use is not mainstream, and not the focus of Intel or this article, but adding ECC to the mix has an appeal to my little niche.

I run an Athlon II X2 250 in my server. Power consumption is about 29W idling with an extra Intel NIC and 4GB DDR3, as well as one 3.5" 5400rpm hard drive spinning at all times and 2 more spun down but still connected. It supports VT, and if I had the right motherboard (which I don't) it can also use ECC memory.

I think that they are being shortsighted. Instead of limiting the server, why not make a hybrid server that uses both low-power and medium-power CPUs? The small cores take over during light loads while the heavyweights wait power-gated in some low C-state, and once load peaks, the server kicks in with full force. The idea sounds like a logistics nightmare, but if anyone has the resources to solve it, it's Intel...

I believe high-end VM farm software allows you to shuffle VMs between machines on demand; is there really a need to put both high- and low-end CPUs in the same enclosure instead of swapping the VM between two separate servers in the same rack?

I guess the problem for Intel is that it needs to cover this potential market whether it is realised or not. Intel's big problem is that this has the potential to significantly reduce margins in their server business. Some claim such ARM servers will produce 5x to 10x performance per watt, and 15x to 20x better price performance. ( http://www.eetimes.com/electronics-news/4213963/Ca... ).

The other advantage the ARM ecosystem has is that its cores can be tailored to specific requirements: for example, floating-point silicon area could be reduced in favour of more cores and better integer performance, for applications like web servers where floating-point performance is not needed.

"each node will consist of an ARM Cortex A9 quad-core CPU, DRAM and fabric interconnect, and will consume around 5W"

So, for 480 cores you would need 120 quad-core nodes, which will consume 120 * 5 W = 600 W. This is in line with any multi-CPU server.

Take 4 quad-core Xeons (130 W each), plus motherboard and memory, and you are in the 600 W arena in 2U of space again.
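The back-of-the-envelope numbers above can be checked with a quick calculation. This is only a sketch using the figures quoted in the comments (5 W per quad-core ARM node, 130 W TDP per quad-core Xeon socket), not measured data:

```python
# Rough power-budget comparison using the wattages quoted in the thread.
arm_node_watts = 5          # quad-core Cortex A9 node: CPU + DRAM + fabric
arm_cores_per_node = 4
target_cores = 480

arm_nodes = target_cores // arm_cores_per_node      # nodes needed
arm_total_watts = arm_nodes * arm_node_watts        # total ARM power

xeon_tdp_watts = 130        # per quad-core Xeon socket
xeon_sockets = 4
xeon_cpu_watts = xeon_sockets * xeon_tdp_watts      # CPUs only, before mobo/RAM

print(f"ARM: {arm_nodes} nodes, {arm_total_watts} W total")
print(f"Xeon: {xeon_sockets} sockets, {xeon_cpu_watts} W for CPUs alone")
```

This reproduces the commenter's point: 120 nodes at 600 W for the ARM cluster, versus 520 W for the Xeon CPUs alone, landing in the same ballpark once motherboard and memory are added.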

There is simply a physical limit to doing calculations using electricity. It does not matter what kind of CPU you use; you will need similar amounts of power to do similar amounts of calculations.

This is more likely to combat what AMD is planning, which will (hopefully) be the release of a relatively low cost Llano with power gating on CPU core and GPU, with low idle power and ECC RAM support, which is what is needed in a home NAS or server. With all the video stuff on the APU and power gated, the total system power draw has potential to be very low.

A system with ECC RAM is the perfect complement to NAS with ZFS RAID, ZFS ensuring that your data is as it should be, and the ECC RAM ensuring that any data that should be written is the correct data. And also ensuring that the system is running as it was coded to, without mysterious crashes.

An Atom-like server would have its uses. Twice in the last year I've had to spec a machine that needed tiny computation ability but 60GB of ECC RAM. That machine simply does not exist. Using a dual quad-core Xeon for a queue manager is painful, but there is currently no other way. The RAM cannot be spared for adding VMs, so the cores just sit there. A single Atom/Bobcat/Nano on a 64-bit ECC memory bus would be perfect.

One BD module, dual-core, is approx 32 mm² sans L3; that's less than half a Zacate... Is the nearest competitor for AMD's BD then Atom or SB? Or ARM? Soon Intel will realize they have a good SB CPU but no profitable market for it.

What moron would ever consider putting an underpowered POS like Atom in a server? Really, they aren't worth the juice they burn, in computing power, relative to any other computational device, even a single CUDA core (stream processor), as you like it. It would be just dumb to waste space in or on a blade with this wimpy POS; it would create more heat than the feeble arithmetic it can do is worth removing. I had these embedded in my gel electrophoresis visualizer for taking photos of proteins as they separated out due to varying voltages, and I hated it; somewhat like the guys in "Office Space" hated the photocopier. I hate Atom. Love the Xeons and Core series, though.