Structure 2010: Hardware for a Power Hungry Cloud

Did you happen to notice a green undercurrent at GigaOM’s Structure event this year? While topics like big data, exascale computing and, of course, the cloud and its many opportunities were the order of the day (two days technically), the conversation would occasionally turn to data center energy efficiency and how companies, particularly large ones, are struggling to squeeze more computing out of the power they have on hand.

Fortunately, Stacey Higginbotham’s panel on Wednesday, called “What Comes After the Blade?”, touched on two themes that should help put those companies in the right mindset.

No Longer Performance for Performance’s Sake

Stacey opened her session with a question: Why are VCs backing startups that are gunning for x86 and taking on Intel and AMD in the server market?

The answer, in part, is a game-changer called the cloud, says Tilera’s co-founder and CTO, Anant Agarwal. He states (starting at around 2:55 in the video below), “What the cloud has done is taken the equation from performance to performance per watt, performance per cubic inch or performance per dollar.”

All three metrics impact data center efficiency, and improving on any one (or preferably all) is a step in the right direction. A good performance-per-cubic-inch rating, for example, could indicate a dense, efficient system that makes the most of its energy intake by shouldering big workloads within a small footprint that’s also easy to cool. As I noted last week during SeaMicro’s launch, energy and cooling costs are nearing parity with initial server spend, meaning that if things don’t change, companies should prepare to pay as much to power and cool their servers as they paid to buy them.
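Agarwal’s reframing is easy to make concrete with a little arithmetic. The sketch below compares two notional servers on the three metrics he names; all figures are hypothetical, chosen only to illustrate how a low-power design can win on some measures and tie on others, and are not vendor numbers.

```python
# Hypothetical sketch of Agarwal's three efficiency metrics.
# All figures below are illustrative, not real vendor data.

def efficiency_metrics(perf, watts, cubic_inches, dollars):
    """Return performance per watt, per cubic inch, and per dollar."""
    return {
        "perf_per_watt": perf / watts,
        "perf_per_cubic_inch": perf / cubic_inches,
        "perf_per_dollar": perf / dollars,
    }

# A traditional server vs. a denser low-power box, normalized to the
# same raw performance and price (made-up numbers).
traditional = efficiency_metrics(perf=100, watts=400, cubic_inches=1200, dollars=3000)
low_power = efficiency_metrics(perf=100, watts=100, cubic_inches=600, dollars=3000)

for metric in traditional:
    improvement = low_power[metric] / traditional[metric]
    print(f"{metric}: {improvement:.1f}x")
```

Under these assumed numbers the low-power box is 4x better per watt and 2x better per cubic inch while merely matching on dollars, which is exactly the kind of trade a power-constrained buyer cares about.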

A disruptive shift in micro-architectures needs to take place, and judging by recent server architectures, it appears we’re in the early stages of one: a shift away from x86, that is. It may come from technologies like Agarwal’s massively multi-core, low-power TILE chips, but the field is still wide open. The lesson here is that cloud companies are increasingly looking beyond run-of-the-mill server architectures, and innovators that deliver on their preferred measures of performance, which are tilting toward energy efficiency, will find willing customers at Intel’s expense.

But does this mean that Intel and AMD should throw in the towel? Not necessarily…

Opportunities in a Power-Limited Data Center

SeaMicro is and isn’t battling x86. Sure, its approach eschews powerful, high-margin x86 server processors, but the Intel Atom chip on which the SeaMicro SM10000 server is based is also firmly rooted in x86. And that works not only to SeaMicro’s advantage, but also to Intel’s.

As SeaMicro’s CTO Gary Lauterbach explains (about 23:15 in the video above), many large data centers are “power-limited.” His company’s reliance on low-power chips to deliver four times the compute capacity of a traditional server architecture within the same power envelope benefits both SeaMicro and its chip supplier, Intel. How? SeaMicro packs its server with 512 Intel Atom processors, which translates into 512 Atom processors sold for Intel. And while they might not be as high-margin as Xeon server chips, Lauterbach reminds us that “Intel does fine on Atom margins.” If and when SeaMicro’s servers start finding homes in more and more data centers, it’s likely those margins will stay attractive.
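The “power-limited” point rewards a quick back-of-the-envelope check: when the facility’s power budget is fixed, total capacity is governed by how much work you get per watt, not by how fast any single server runs. The sketch below uses hypothetical figures (they are not SeaMicro’s) to show how a 4x per-watt advantage becomes 4x the data center’s total capacity.

```python
# Illustrative arithmetic for a power-limited data center.
# All figures are hypothetical, not SeaMicro's or Intel's.

POWER_BUDGET_WATTS = 1_000_000  # fixed facility power budget (1 MW)

def total_capacity(work_per_server, watts_per_server):
    """Units of work deliverable under the fixed power budget."""
    servers = POWER_BUDGET_WATTS // watts_per_server
    return servers * work_per_server

# Same per-server performance, but one design draws a quarter the power.
traditional_capacity = total_capacity(work_per_server=100, watts_per_server=400)
low_power_capacity = total_capacity(work_per_server=100, watts_per_server=100)

print(low_power_capacity / traditional_capacity)  # → 4.0 under these assumptions
```

The design choice this illustrates: in a power-limited facility, halving a server’s wattage buys the same capacity as doubling its speed, which is why low-power, high-density boxes appeal to big data center operators.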

This might also represent an opportunity for AMD if it gets serious about commercializing an Atom fighter. Non-x86 outfit ARM could also benefit from the low-power multiprocessor trend. We’re unlikely to see a server offering from the company until the “back end of ’11,” says Ian Ferguson, director of enterprise and embedded solutions for ARM. But in the interim, the company may want to support startups like Smooth Stone that use a multiprocessor strategy similar to SeaMicro’s but with ARM chips instead. With any luck, it will reap a bounty of licensing income — with no “fabs” to speak of, ARM licenses its chip designs — while it helps web companies and cloud providers wring more compute capacity out of each and every watt.