The New Face of Servers

Last week we heard three announcements that together could have a big influence on what servers will look like in the years ahead. First, Intel announced updates to nearly its whole line of server processors. Then HP announced it is shipping the first servers in its Project Moonshot family of microservers, which should allow for multiple different kinds of small servers. Finally, IBM unveiled its Flash Ahead initiative, a plan to ramp up the use of flash storage in servers, led by a new family of all-flash storage systems. Individually, each was interesting, but taken together they suggest that the server world is dramatically transforming.

One thing to note is that unlike PC sales, server sales remain fairly strong. Gartner says server revenue is expected to grow 3.5 percent year over year in 2013, with shipments rising by 4.9 percent. But within that, there are some big changes. More and more, the largest Web-scale companies are designing their own servers, often having the big Taiwanese ODMs (original design manufacturers) build custom servers just for them.

Intel's New Processors

Of last week's announcements, Intel's new server line was in some respects the most conventional. The company showed off new chips across a huge range of servers, from Atom chips aimed at microservers to the Xeon E7, aimed at huge four- to eight-socket machines.

The company announced that specific versions of its Atom S1200 processor, known as Centerton, are now available and were part of the HP Moonshot launch. This is a 32nm dual-core processor family that comes in a range of speeds from 1.6GHz to 2.0GHz, with a thermal design power (TDP) of 6.1 to 8.5 watts; it can address up to 8GB of RAM, more than many competing microserver chip designs.

This will be followed later in the year by a 22nm chip called Avoton, built on a new microarchitecture known as Silvermont. Intel says this will offer a 50 percent performance improvement and will include an integrated Ethernet controller. The company also announced Briarwood, a 32nm version aimed at the storage market, and Rangeley, an upcoming 22nm version aimed at network and communications infrastructure.

For small traditional servers, Intel talked about the next version of its Xeon E3 family, a 22nm part based on the Haswell architecture that is expected to show up in Core desktop and laptop parts in the next couple of months. Like the existing E3 based on the Sandy Bridge architecture, the new E3-1200 v3 is mostly a desktop chip repurposed for small, single-socket servers. It will be available later this year in dual- and quad-core versions. Intel says the lowest TDP will be 13 watts, down from previous versions.

The processor I see most in servers aimed at the traditional corporate market is the Xeon E5, designed for single- and dual-socket servers; it is the workhorse of the line and accounts for most of the server chips the company sells. The current version is based on a design known as Sandy Bridge-EP and goes up to eight cores. Intel said last week that a 22nm version, known as Ivy Bridge-EP, will be available in the third quarter with up to 12 cores.

Finally, for the high end, Intel announced a new version of its Xeon E7 with up to 10 cores, aimed at four- and eight-socket servers. This processor, known as Ivy Bridge-EX, is due in the fourth quarter and will enable up to 12TB of RAM in an eight-processor configuration.

What takes this to another level is Intel's announcement of a plan for a new rack-scale architecture, with separate subsystem-level modules for CPU, memory, storage, and networking, tied together by its own photonics and server fabric. The idea, as is typical with such designs, is a denser but more flexible server. We've seen lots of individual makers announce their own rack systems, and recently a more open approach (called Open Rack), so it will be interesting to see whether Intel can make headway with its own design.

HP's Moonshot

HP's announcement last week of the availability of the first entries in its Project Moonshot server family was somewhat anticlimactic, as we already knew these products would use the Intel Atom S1200 ("Centerton") chips. Still, the concept is certainly compelling.

The first offering, known as the Moonshot 1500 server enclosure, is a 4.3U device that can fit 45 Atom-based server cartridges. HP is already running Moonshot servers on its website and says running the whole site on such servers takes only the energy of 12 60-watt light bulbs. Overall, the company said Moonshot servers should use up to 89 percent less energy and 80 percent less space, and cost 77 percent less, than traditional servers.
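To put those power claims in perspective, here's a quick back-of-the-envelope calculation. The 12-bulb figure is HP's; attributing the whole load to a single fully loaded enclosure is my own simplifying assumption.

```python
# Rough check on HP's Moonshot energy claim.
# The 12-bulb figure is HP's; the per-cartridge split is my own estimate.

BULB_WATTS = 60              # a standard incandescent bulb
BULBS = 12                   # HP: its website runs on the energy of 12 bulbs
CARTRIDGES_PER_CHASSIS = 45  # server cartridges in one Moonshot 1500

site_watts = BULB_WATTS * BULBS                      # 720 W total
per_cartridge = site_watts / CARTRIDGES_PER_CHASSIS  # assume one full chassis

print(f"Site power budget: {site_watts} W")
print(f"Per cartridge: {per_cartridge:.1f} W")
# ~16 W per cartridge -- plausible for a 6.1-8.5 W Centerton SoC plus
# memory, storage, and a share of the chassis fans and networking.
```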

HP will offer future server cartridges based on different architectures, including other Intel processors, chips from AMD, and, perhaps most interestingly, ARM-based parts from vendors including AppliedMicro, Calxeda, and Texas Instruments.

Alongside the announcement, AppliedMicro said its X-Gene will be the first 64-bit ARM SoC, featuring eight high-performance cores running at up to 2.4GHz. Calxeda said its server cartridges will feature four ECX-1000 processors running at 1.4GHz, each with 4GB of addressable DRAM.

We've seen some ARM-based servers recently, but it may well take a big vendor such as HP to make them much more mainstream. ARM servers today often have more limited memory capacity than Intel servers (since most use 32-bit chips, topping out at 4GB), but 64-bit versions of ARM processors are coming with much larger addressable memory. ARM's backers talk about providing server performance with much lower power requirements, although Intel and AMD are working to reduce x86 power use as well.
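That 4GB ceiling is simply pointer arithmetic: a 32-bit address can't reach past 2^32 bytes, as this trivial sketch shows.

```python
# Why 32-bit ARM chips top out at 4GB: a 32-bit pointer can address
# at most 2**32 bytes. Plain arithmetic, not vendor data.

print(f"32-bit address space: {2**32 / 2**30:.0f} GiB")  # 4 GiB
print(f"64-bit address space: {2**64 / 2**60:.0f} EiB")  # 16 EiB
# Real 64-bit parts implement fewer physical address lines, but the
# ceiling moves far beyond anything a microserver workload needs.
```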

So far, such microservers seem to be most widely anticipated for applications such as running websites, which tend to be more I/O-intensive than processor-bound. If the economics work out for larger applications, though, they could be a real game changer.

IBM Goes All Flash

Finally, last Thursday I attended an event where IBM declared that flash memory is at a "tipping point," making all-flash systems economical and practical for a variety of applications. The company announced it will spend $1 billion on research and development for flash-based solutions, and said it is establishing a dozen "centers of competency" to run proof-of-concept scenarios that show off the performance of flash.

But the most tangible product was a new line of flash memory storage arrays based on technology the company acquired from Texas Memory Systems. These are 1U units that fit into a server rack, each capable of holding 12 two-terabyte modules. That means each unit can store up to 20TB of flash memory at RAID 5 or 24TB at RAID 0. A single rack can hold up to a petabyte of flash storage. That's a lot.
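The arithmetic here is ordinary RAID math, nothing IBM-specific. Note that simple N-1 parity would give 22TB, so IBM's quoted 20TB usable figure presumably reserves some capacity for spares or overprovisioning.

```python
# Capacity math for one 1U FlashSystem unit: 12 modules of 2TB each.
# Textbook RAID arithmetic; the gap between N-1 parity and IBM's quoted
# 20TB usable presumably goes to spare or overprovisioned space.

MODULES = 12
MODULE_TB = 2

raid0 = MODULES * MODULE_TB        # no redundancy: 24 TB
raid5 = (MODULES - 1) * MODULE_TB  # one module's worth of parity: 22 TB

print(f"RAID 0: {raid0} TB; RAID 5 (N-1): {raid5} TB; IBM usable: 20 TB")

# A standard 42U rack filled with these 1U units:
print(f"Full rack at RAID 0: {42 * raid0} TB")  # 1,008 TB -- about a petabyte
```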

Specific models include the FlashSystem 820 and 810 based on "eMLC" flash and the FlashSystem 720 and 710 based on higher-priced SLC flash. (IBM says enterprise MLC flash is good for 30,000 read-write cycles, while SLC is good for 100,000 such cycles. The actual NAND flash memory comes from Toshiba.)
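Those cycle ratings sound small until you multiply them by the capacity. Here's a rough lifetime estimate; the 10TB-per-day write rate is a workload I invented for illustration, and the math ignores write amplification, which cuts the real figures substantially.

```python
# Rough endurance estimate from IBM's cycle figures. The capacity is the
# 20TB usable figure above; the daily write rate is a made-up workload.

CAPACITY_TB = 20
DAILY_WRITES_TB = 10  # hypothetical: half the array rewritten every day

for name, cycles in [("eMLC", 30_000), ("SLC", 100_000)]:
    lifetime_tb = CAPACITY_TB * cycles  # total write budget
    years = lifetime_tb / DAILY_WRITES_TB / 365
    print(f"{name}: ~{lifetime_tb:,} TB of writes, ~{years:,.0f} years at 10TB/day")

# Write amplification and wear-leveling overhead shrink these numbers a
# lot in practice, but the 3x endurance gap between eMLC and SLC holds.
```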

Steve Mills, IBM's senior vice president and group executive for software and systems, noted that over the past 10 years, CPU performance has improved eight to 10 times, DRAM performance seven to nine times, network speed 100 times, and bus speed 20 times, but disk speed is only 1.2 times better. With flash, he said, you could get latency down to 100 microseconds, and thus much more consistent performance.
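A simple way to see why that latency figure matters: for a serialized, I/O-bound workload, throughput is capped at one operation per latency period. The disk latency below is my own assumption for a fast spinning drive, not a number from the IBM event.

```python
# Latency bounds throughput for serial I/O: with one outstanding request
# at a time, you get at most 1/latency operations per second.

FLASH_LATENCY_S = 100e-6  # 100 microseconds, the figure Mills cited
DISK_LATENCY_S = 5e-3     # ~5 ms for a fast disk (my assumption)

flash_ops = 1 / FLASH_LATENCY_S  # 10,000 serial ops/sec
disk_ops = 1 / DISK_LATENCY_S    # 200 serial ops/sec

print(f"Flash: {flash_ops:,.0f} ops/sec; disk: {disk_ops:,.0f} ops/sec "
      f"({flash_ops / disk_ops:.0f}x gap)")
# Parallel requests narrow the throughput gap, but a database that must
# finish one read before issuing the next lives and dies by latency.
```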

Just as important, he said the overall cost of a large system with flash could be up to 30 percent less than one with standard storage, due to lower environmental and power costs, higher storage utilization, a need for fewer servers, and thus lower maintenance and software license fees.

He noted that while cheap disks within an enterprise storage system might cost only $2 per gigabyte, high-performance disks could cost $6 per gigabyte. For the highest-end, performance-sensitive applications, hard drives could effectively cost $30 to $50 per gigabyte, because applications use only the outer edges of the drives to reduce the travel time of the drive head (a practice known as short-stroking). In contrast, the street price of IBM's new FlashSystems would be about $10 per GB, which makes them cheaper per usable gigabyte. (Obviously, the price of enterprise storage is much higher than that of raw memory or consumer-grade disks.)
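The short-stroking math is easy to reproduce; the utilization fractions below are illustrative assumptions of mine, not IBM's figures.

```python
# Effective cost per *usable* gigabyte when short-stroking: the whole
# drive is paid for, but only a fraction of it is actually used.
# Utilization fractions are my illustrative assumptions.

def effective_cost(dollars_per_gb, fraction_used):
    return dollars_per_gb / fraction_used

print(effective_cost(6.0, 1.00))   # $6/GB fast disk, fully used
print(effective_cost(6.0, 0.15))   # same disk short-stroked to 15%: $40/GB
print(effective_cost(10.0, 1.00))  # IBM's quoted flash street price

# Short-stroked disk lands squarely in the $30-$50/GB range Mills cited,
# which is how $10/GB flash ends up the cheaper option per usable GB.
```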

A demo compared a system with four of the FlashSystem 820 units running on a Power 780 server with 128 cores and DB2 against a similar configuration backed either by 18 racks holding 5,000 hard drives or by eight racks of more conventional storage, including 2,500 hard drives and 128 SSDs. IBM claimed the flash system used 37 times less power and cost 11 times less. The flash system delivered more than 43,000 transactions per second and over 1.3 million IOPS, and IBM claimed a full rack of the units could provide up to 22 million IOPS.
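A quick sanity check on those numbers, using IBM's claims as inputs; the per-unit and whole-rack arithmetic is mine.

```python
# Sanity-checking the demo figures. Inputs are IBM's claims; the
# per-unit and full-rack extrapolation is my own arithmetic.

DEMO_UNITS = 4
DEMO_IOPS = 1_300_000
RACK_SLOTS = 42  # assuming a standard 42U rack of 1U units

per_unit = DEMO_IOPS / DEMO_UNITS  # ~325,000 IOPS per FlashSystem 820
naive_rack = per_unit * RACK_SLOTS # ~13.7 million IOPS

print(f"Per unit: ~{per_unit:,.0f} IOPS; naive rack: ~{naive_rack:,.0f} IOPS")
# IBM's 22 million IOPS rack claim is higher than this extrapolation,
# presumably reflecting faster models in the line or a less contended
# workload than the DB2 demo.
```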

A variety of customers talked about using early versions of the system, including representatives from Sprint, Kroger, Thomson Reuters, and Vion Corporation (which sells systems to government agencies). Not surprisingly, they talked about improving response time while reducing space and power consumption.

In general, they agreed that there is still a big place for traditional storage, but that all-flash arrays make sense in more places than generally perceived.

The Changing Server Market

Taken together, these three announcements (and other similar plans we've heard of over the past few months) point to how the server market may change over the next few years. These in turn will lead to all sorts of new questions for companies that want to deploy servers.

There are many new rack and fabric announcements: AMD has its Freedom Fabric, which came with its SeaMicro acquisition; Intel made last week's rack-scale announcement; and the Open Compute organization has its Open Rack standard. Individual server vendors have their own proprietary solutions, including HP, both with the Moonshot servers and with its long-standing rack solutions, which compete with offerings from IBM, Dell, and Cisco. All of this should bring more competition to rack-level design.

We've already seen new kinds of server processors—not just high-end chips, but now more mainstream processors and even low-power ones aimed at microservers. The mainstream market may not be quite as dominated by x86 as it has been, as new ARM-based server chips come to market. Companies will have to determine which type of processor will prove to be most appropriate for specific applications.

Flash storage has been gaining ground, though in the data center it has mostly appeared either as a server-side add-in board or as a fast tier in a multi-tiered storage array. Now all-flash solutions are getting more competitive. Meanwhile, with server processors able to handle more RAM, we're likely to see more completely in-memory solutions.

Until recently, most companies buying a server had a fairly limited number of choices: rack-mounted or stand-alone; dual- or quad-socket; Cisco, Dell, HP, IBM, or some smaller vendor; and which Intel processor fit the bill. Now there are many more options, and the result will change how data center servers are designed and purchased.

Michael J. Miller's Forward Thinking Blog: forwardthinking.pcmag.com
Michael J. Miller is chief information officer at Ziff Brothers Investments, a private investment firm. From 1991 to 2005, Miller was editor-in-chief of PC Magazine, responsible for the editorial direction, quality and presentation of the world's largest computer publication.