ARM server startup tries jumpstarting datacenter software ecosystem

ARM server startup Calxeda has announced an initiative that represents an …

The ARM onslaught on the datacenter proceeds apace, as ARM server vendor Calxeda (formerly Smooth Stone) announces that it's teaming up with Canonical and nine other software vendors to form a "Trailblazer Initiative" aimed at creating a full-blown ARM server ecosystem.

Canonical's role in the effort arises from the fact that Calxeda has selected Ubuntu as the official OS for its 120-node, 2U server box. Each of the Calxeda server nodes contains a single quad-core ARM chip, a bit of memory, and some interconnect hardware that, all told, consumes about 5 watts. Calxeda can cram 120 of these nodes (480 cores' worth) into a single 2U rackmount server chassis, which makes for an incredibly dense cluster of cloud compute resources.

Calxeda's competition on the x86 side of the fence isn't just Xeon. Last year, a startup called SeaMicro also launched a similarly dense cloud server based on Intel's Atom processor. The SeaMicro box packs 512 cores worth of Atom into a 10U space. This is significantly less density than Calxeda provides, but the individual Atom cores outperform the ARM cores, so the comparison isn't quite apples-to-apples.
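As a rough yardstick, here's a back-of-the-envelope density comparison in Python using only the figures above; it's a sketch of rack density alone and says nothing about per-core performance, memory, or I/O:

    # Cores per rack unit, using the numbers quoted in the article
    calxeda_cores = 120 * 4                  # 120 quad-core ARM nodes in a 2U chassis
    calxeda_density = calxeda_cores / 2      # 240 cores per rack unit

    seamicro_cores = 512                     # 512 Atom cores in a 10U box
    seamicro_density = seamicro_cores / 10   # ~51 cores per rack unit

    print(calxeda_density, seamicro_density)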

Both Intel and ARM are moving aggressively to position their respective low-power processors as datacenter alternatives. ARM's Cortex-A15 core, codenamed Eagle, is aimed squarely at the datacenter; Intel, for its part, has been adding datacenter-friendly features to Atom (e.g., support for ECC memory) and plans to let Atom and Xeon duke it out for rack space.

At the most recent Intel investor day, one of the Intel execs made reference to the fact that, for the longest time, Intel protected Itanium from Xeon cannibalization by not adding some features to the latter. The exec then stated that Intel won't protect Xeon from Atom in this manner; Xeon will rise or fall vs. Atom based purely on customer demand.

Anyone know the power dissipation of a full c-class chassis? Including 2U of battery packs, an HP c-class chassis eats up 12U, so you can rack three of them in the same space you can rack 21 of these Calxeda devices. Just want to get a basis for comparison - I know at our old AT&T colo, we weren't allowed to put more than two c-class chassis in an individual rack, and we weren't allowed to put two racks with two chassis each next to each other; the cooling was insufficient.
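As a very rough starting point, the article's ~5 W-per-node figure implies the following for one 2U chassis and for 21 of them in a 42U rack. This is a sketch only: it counts the nodes alone and ignores power supply and fan overhead, switching, and storage, and an HP c-Class chassis's actual draw depends entirely on its blade configuration:

    # Back-of-the-envelope rack power from the ~5 W/node figure in the article
    watts_per_node = 5
    nodes_per_chassis = 120
    chassis_per_rack = 42 // 2                            # 2U chassis in a 42U rack

    chassis_watts = watts_per_node * nodes_per_chassis    # 600 W per chassis
    rack_watts = chassis_watts * chassis_per_rack         # 12,600 W per rack, nodes only
    print(chassis_watts, rack_watts)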

"The exec then stated that Intel won't protect Xeon from Atom in this manner; Xeon will rise or fall vs. Atom based purely on customer demand."

That's funny. I'll bet the guys working on Xeon feel differently.

This is kind of off-topic, but does the Windows 8 ARM news, which was thought to be consumer focused, perhaps indicate a change in direction for their data centers? I know Intel and the PC makers' earnings have been up in the last couple years specifically due to MS's cloud building. If they switch to ARM for servers, that would affect Intel quite a bit. The Wintel relationship seems quite frosty right now, and I wonder if it's less about consumer than enterprise after all.

Admittedly, I haven't looked too hard for information, but I don't see much about the actual hardware. From the density, it sounds like it would have to be an SoC with the networking and mass storage controllers integrated in some fashion. But it might be interesting if the RAM and additional controllers live on a bus that allows them to be removed, replaced, and possibly upgraded after purchase. I don't think there is a lot of demand for this in the market they are aiming for; the system may be inexpensive enough to replace failed nodes, or the entire system when it has reached obsolescence.

It's a bit odd to see PCs and servers heading to an embedded type scenario where hardware configurability is irrelevant. Either the system has everything you need, or you buy a new one.

IME, in the medium business/small enterprise space, that's been the case for ~5 years. I've only upgraded the hardware in two servers* over the last five years, and one of those reluctantly at a vendor's insistence that more RAM would solve a problem (spoiler: it didn't). With a 3-year hardware lifecycle and a decent understanding of business growth, it's more cost-effective to overprovision a little bit on the front end.

Even better, I work with database servers, so load scales with data volume, and scale-out is much harder than scale-up. In the application server space, scale-out is (comparatively) easy, so capacity is added (again, IME) exclusively by adding or replacing machines, never by upgrading them.

* Note I'm only talking about dedicated physical servers. VM hosts are a completely different beast.

Larry Ellison's reply to that would be "Oracle services handle EVERYTHING! Just add more boxes, more hardware, and let the black-box services magically handle EVERYTHING, from virtual partitions to resource management to ... EVERYTHING." Meanwhile, the IT guy and DBA are left hoping something doesn't blow up.

Is the choice of Ubuntu server because of its EC2 offering? In my VERY limited experience at home with my ONE Ubuntu server box (MSI Wind PC, Intel Atom 230), it wasn't/isn't very easy to set up things like NFS, LDAP, and MySQL (partially because I learned about AppArmor WAY too late in the game).

I'd like to know what experienced server admins (like Control Group seems to be) have to say about the OS choice. I would've thought that by default an RHEL distro would've been used.

"..Calxeda has selected Ubuntu as the official OS for its 120-node, 2U server box."

Wow, cooling and power concerns aside, it's fascinating to think that we achieve this type of density today. I still remember when being able to squeeze multiple cores into a single 2U server was a big deal, and it wasn't that long ago.

"...does the Windows 8 ARM news, which was thought to be consumer focused, perhaps indicate a change in direction for their data centers? ... If they switch to ARM for servers, that would affect Intel quite a bit."

People still use Windows on servers?

I think they have about a third+ of the market, but take home around half of all of the revenue... can't really find data on servers that's as good as what's available for desktops...

"I'd like to know what experienced server admins (like Control Group seems to be) have to say about the OS choice."

Unfortunately, I know exactly enough about server administration to be dangerous. I'm a DBA - and a SQL Server DBA, meaning Windows only - so my server admin experience is limited only to what's relevant to managing SQL Server machines.

"I would've thought that by default an RHEL distro would've been used. Can someone shed some light on this?"

Red Hat doesn't provide an ARM version of RHEL out of the box, and while Fedora certainly has ARM builds available, Ubuntu seems to be a lot further along on the ARM front in both the portable and server space (see https://wiki.ubuntu.com/ARM).

The other thing is that Red Hat has a lot more to lose than Canonical in this type of endeavor, because Red Hat is the big player in the server/enterprise space, with Attachmate/Novell taking a distant second and Canonical taking third behind Novell. (Be aware that I am only referring to overall customer base and not to numbers of installs. There are a number of very large companies that use Ubuntu or derivatives for cookie-cutter systems that they don't want or need vendor support for, and SUSE is certainly more common in Europe than in the US, although I do see it in the US from time to time.)

I think it really comes down to cost, availability and support from the OS packager/maintainer.

Hopefully it's more than a single bit, but does anyone have an idea of how much memory each node has?

I don't think ARM has gotten 64-bit addressing done. In that case, 4GB would be the max, but like the PC, ARM uses MMIO that will take away some of that. So I'd guess 2-3GB per node. But take that with a grain of salt; I'm no expert. I'm just guessing based on the little bit I know from a fairly short time with ARM embedded systems.
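To put rough numbers on the 32-bit case: the 4GB ceiling applies to the whole address map, and whatever window is reserved for MMIO comes out of it. A minimal sketch follows, with the size of the carve-out purely an assumed example rather than anything Calxeda has disclosed:

    # Usable RAM per node under 32-bit addressing, minus an assumed MMIO window
    address_space = 4 * 1024**3          # 4 GiB ceiling of a 32-bit address space
    mmio_window = 512 * 1024**2          # hypothetical 512 MiB reserved for devices

    usable_bytes = address_space - mmio_window
    print(usable_bytes / 1024**2)        # 3584 MiB left for RAM in this example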