Dell launches HPC building block, but don't worry: it's not new

Last week Dell launched a new series of PowerEdge C servers under the title “Dell Brings Specialized Cloud Computing Infrastructure To The Mainstream,” an announcement that is not the usual sort of thing I cover here. But there is an HPC angle to this story, and one that may signal a real shift in our market from a company whose HPC track record has been hit-and-miss at best.

Dell clearly knows how to build and market computers; it is one of the largest PC manufacturers in the world, and it enjoys a strong presence in the enterprise. Over the past several years the company has had some significant wins with large systems in HPC, and its CEO Michael Dell even keynoted SC08 in Austin, TX. But the company’s wins to date seem to have been based mostly on highly specialized marquee custom builds, or on jamming its enterprise-class systems into an HPC envelope and shipping them out to low-end customers. There is evidence that Dell is maturing its approach to HPC, however. Over the past two years the company has been on a hiring spree, bringing in some of the top systems engineers and architects in the business from other companies in the HPC space. Even better, with the launch of the C6100, it looks like Dell is actually listening to the people it hired as it builds a product portfolio specifically for us.

What’s in the PowerEdge C6100

The PowerEdge C6100 is one of three systems launched last week for high-density, low-energy-consumption deployments. In today’s IT-speak, these features get you permission to use the word “cloud” in your headlines. You can read the full release at Dell’s web site, and from it you’ll gather that the PowerEdge C1100 and C2100 really are aimed mostly at a scale-out market. But the C6100 is a four-node, cluster-optimized server that Dell has built for HPC with direct input from its new HPC-savvy employees.

The C6100 is a dense solution, with four motherboards packed vertically in a 2U, horizontally-stacked chassis. Each board can handle two Westmere (Xeon 5600) sockets or two Nehalem-EPs, along with 12 DIMM slots, each of which can run at up to 1333MHz. Each board also has two built-in GigE ports, a PCIe x16 slot for IB (or for a GPU if you’d like; the unit has been Nvidia S1070 HIC certified), and a PCIe x8 slot suitable for a 10 GbE NIC or a Mellanox ConnectX-2 QDR IB dual-port daughter card. Each board in the chassis is individually serviceable, so you can power one down and replace it without having to power down the other boards. You can configure a chassis with either 12 x 3.5” drives (3 per system) or 24 x 2.5” drives, and the drive bays are located across the front for easy access.
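To put that density in perspective, here is a minimal back-of-the-envelope sketch based on the figures above (four dual-socket boards per 2U chassis). The core count per socket and the rack height are my illustrative assumptions, not Dell specs:

```python
# Rough density math for a C6100-style chassis, per the figures in this
# article: 4 boards per 2U chassis, two Xeon 5600 sockets per board.
CHASSIS_U = 2           # chassis height in rack units (from the article)
BOARDS_PER_CHASSIS = 4  # four motherboards per chassis (from the article)
SOCKETS_PER_BOARD = 2   # dual-socket boards (from the article)
CORES_PER_SOCKET = 6    # assumption: six-core Westmere (Xeon 5600) parts
RACK_U = 42             # assumption: a standard 42U rack

chassis_per_rack = RACK_U // CHASSIS_U
cores_per_chassis = BOARDS_PER_CHASSIS * SOCKETS_PER_BOARD * CORES_PER_SOCKET
cores_per_rack = chassis_per_rack * cores_per_chassis

print(f"{cores_per_chassis} cores per {CHASSIS_U}U chassis")
print(f"{cores_per_rack} cores in a {RACK_U}U rack")
```

Under those assumptions you get 48 cores per 2U of rack space, which is the kind of number that makes a mid-market buyer’s colo bill look a lot friendlier.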

The chassis power supplies are hot-swappable at 1100W today, with some hints that a larger version may be coming soon. This matters because if you fully load your chassis with drives, memory, and fast processors and then lose an 1100W power supply, today you’d have to draw down power on one of the boards. Moving to a larger power supply will introduce full redundancy; with a less-than-maxed-out system (say, mid-bin processors or less memory) you are fully redundant on power today. This launch decision reflects an important HPC-oriented design point: the C6100 is not designed to handle everything that anyone might put in it. It’s designed to be energy conscious at the HPC price/performance sweet spot.
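The redundancy trade-off above comes down to simple arithmetic: with two 1100W hot-swap supplies, you survive losing one only if the remaining supply can carry all four boards alone. A minimal sketch, where the per-board wattages are my illustrative assumptions rather than measured Dell figures:

```python
# Sketch of the N+1 redundancy question for a C6100-style chassis with
# two 1100W supplies. Per-board draws below are assumed, not Dell specs.
PSU_WATTS = 1100  # per-supply rating from the article

def survives_psu_loss(per_board_watts, boards=4):
    """True if a single remaining 1100W supply can carry the whole chassis."""
    return boards * per_board_watts <= PSU_WATTS

# A maxed-out board (fast bins, full memory, drives) vs. a mid-bin config.
print(survives_psu_loss(330))  # 4 * 330 = 1320W > 1100W: not redundant
print(survives_psu_loss(250))  # 4 * 250 = 1000W <= 1100W: fully redundant
```

Which is exactly the design point Dell is making: at the price/performance sweet spot the chassis is redundant today, and only the maxed-out corner cases need the rumored larger supply.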

The next 500

With the C6100, Dell appears to be focusing not on the top of the list, although I’m sure they’d be happy to build you a big one if you’d like. One of the engineers I talked with at length about the system is motivated by a strong desire to reach the “next 500” in the HPC space, and in so doing open up the HPC mid market. The C6100 looks to be a good technical platform for achieving that goal. But technical specs are only part of the equation when you are aiming at the mid market. There you are selling to customers who don’t know HPC, and may not even have large enterprise servers. Dell can help allay customer installation and design concerns by bringing to bear all of its enterprise experience and worldwide team. These customers also have operational concerns, and Dell’s advantage with the C6100 is that it is not shipping a new product.

It’s new, but don’t worry

What I mean by that is that the technology Dell is selling in this new PowerEdge C6100 server has actually been sold for the past several years through Dell’s Data Center Solutions group (as a side note, the Dell team claims that if DCS were broken out into its own business, it would still be one of the largest computer manufacturers in the country). This is the group that will build you anything you want, as long as you buy several thousand of them at one time. Working with these customers, Dell developed the progenitor of the C6100 and has sold “over 60,000” units, according to Dell’s Tim Carroll. Dell argues that this field experience has given it ample time to work out the kinks and refine the design for large-scale customers, leading to a “launch” of a product it already knows a great deal about from real field experience.

For those of you eyeing Dell’s mainstay HPC offerings, don’t worry. The blades and R410 rackmount server are still around, and still very much part of Dell’s HPC plans. But the C6100 definitely gives Dell’s current customers some interesting new options, and gives system architects another reason to look at Dell.

