Powered by Intel’s Xeon E5-2600 v3 processor, Penguin Computing’s Tundra OpenHPC platform delivers density, performance and serviceability for the most demanding customers. Built to be compatible with Open Compute Open Rack specifications, the Tundra OpenHPC platform provides customers with a powerful and compact HPC server designed to reduce infrastructure costs when moving to the next generation of technology. The server shelf measures 1 OU (48 mm) high by 176 mm wide.

Full Transcript:

insideHPC: I’ve heard a lot about Tundra. What’s the deal with this box?

Phil Pokorny: Tundra’s a really exciting thing for Penguin Computing. We’re bringing the ideas that Facebook has contributed to their Open Compute Project, and we’re adapting that for the kinds of densities and manageability that the HPC customers demand. Facebook had a very specific kind of data center idea that they want to design out for, and we recognized that that really wasn’t the same kind of idea of what our HPC customers wanted. But there are a lot of really good ideas there, and we want to be sure to capture those and extend them into the HPC space, and that’s what we’re calling Tundra.

insideHPC: Tundra isn’t just about form factor, is it? I realize it’s not an 18-inch rack, and it’s different in a lot of ways. What does it have to offer HPC?

Phil Pokorny: It offers density. We can do up to 96 nodes in a 42U cabinet with 10 gigabit Ethernet, or 81 nodes with InfiniBand. So, we’re talking about InfiniBand interconnects that Facebook doesn’t do. We’re talking about disaggregated power, being able to do A/B feeds with battery backup, being able to take 277-volt power, or 480 – all kinds of efficiency improvements, density improvements, and serviceability improvements that we’re making to these form factors. They’re enabled by the changes that Facebook introduced in the Open Rack design, and we’re leveraging them into the HPC space.
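A quick back-of-the-envelope check of those density figures. This sketch assumes three nodes per 1U tray (the three-wide sleds described later in the interview); the exact split of rack space between compute, power shelves, and switches is an illustrative assumption, not a published Penguin specification.

```python
# Rack-density sanity check for the node counts quoted above.
NODES_PER_TRAY = 3   # three-wide 1U sleds (assumption from later in the interview)
RACK_UNITS = 42      # standard 42U cabinet

def trays_needed(nodes):
    """Ceiling division: how many 1U trays hold this many nodes."""
    return -(-nodes // NODES_PER_TRAY)

ethernet_nodes = 96  # 10 GbE configuration
ib_nodes = 81        # InfiniBand configuration

print(trays_needed(ethernet_nodes))  # 32 rack units of compute
print(trays_needed(ib_nodes))        # 27 rack units of compute
```

At 96 nodes, 32 of the 42 rack units go to compute trays, leaving the remainder for power shelves, switches, and cabling.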

insideHPC: These design elements – help me out – they’re designed for hyperscale, right? That’s what they call the big server farms that Facebook runs just to deliver its service?

Phil Pokorny: Yes.

insideHPC: Certainly we don’t have HPC centers of that vastness, so help me out here. Where’s the goodness?

Phil Pokorny: Let’s go over here and take a look. Maybe a little closer.

insideHPC: Okay. Help me out, what are we looking at here?

Phil Pokorny: For Tundra, we’ve partnered with Emerson, who has made a high-power shelf. Whereas Facebook is only doing about 10 kilowatts a rack, we want to be able to do 30 kilowatts a rack. Emerson allows that: this shelf is configured for 15 kilowatts, but it can do up to 24 kilowatts using multiple three-kilowatt modules. Then we use that to drive the nodes. Facebook’s density was a 2U tray, three-wide, so they got three nodes in 2U, or about a node and a half per U. What we’ve done is design a node that’s only 1U high, so we can get twice the number of nodes in a rack with a dual-socket Haswell server with an x16 port for the highest-performance interconnect like InfiniBand, and make this the core of your compute infrastructure in a high-performance computing environment.
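The power figures above also check out with simple arithmetic. The module size (3 kW) and the shelf totals (15 kW configured, 24 kW maximum) come from the interview; the per-node estimate assumes a fully populated 96-node rack and is illustrative only.

```python
# Rough power arithmetic for the Emerson shelf described above.
MODULE_KW = 3.0       # one power module
configured_kw = 15.0  # as configured in this demo shelf
max_shelf_kw = 24.0   # shelf at full build-out

print(int(configured_kw / MODULE_KW))  # 5 modules installed
print(int(max_shelf_kw / MODULE_KW))   # 8 modules at full build-out

# Spreading the 30 kW rack target across 96 nodes (illustrative estimate):
watts_per_node = 30_000 // 96
print(watts_per_node)                  # 312 W per node
```

That roughly 312 W per node leaves headroom for a dual-socket Haswell board plus an InfiniBand adapter, which is consistent with the 30 kW-per-rack target.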

insideHPC: Can you do AMD as well with this kind of sled?

Phil Pokorny: Yeah. So, for AMD, we’re talking about using their ARM chips in sleds like this one, which is basically storage. We can use their ARM CPU in this configuration to drive up to eight drives – four three-and-a-half-inch and four two-and-a-half-inch – to deliver object-style storage like Hadoop file systems, Swift, and Ceph, those kinds of things, in a similar three-wide, 1U-high tray. This gives us 12 three-and-a-half-inch drives per rack unit, or, with the addition of the two-and-a-half-inch drives, 24 drives in one rack unit.
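The drive-density numbers follow directly from the tray layout described above: three sleds per rack unit, each driving four 3.5-inch and four 2.5-inch drives. A minimal check:

```python
# Drive-density arithmetic for the ARM storage sled configuration.
SLEDS_PER_U = 3  # three-wide, 1U-high tray
DRIVES_35 = 4    # 3.5-inch drives per sled
DRIVES_25 = 4    # 2.5-inch drives per sled

per_u_35 = SLEDS_PER_U * DRIVES_35
per_u_total = SLEDS_PER_U * (DRIVES_35 + DRIVES_25)

print(per_u_35)     # 12 three-and-a-half-inch drives per rack unit
print(per_u_total)  # 24 drives per rack unit in total
```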

insideHPC: Nice. The sled form factor, can you do GPUs and things like this as well?

Phil Pokorny: Actually, yes, we can. Using this same motherboard, which has a single-socket Intel E3-1200 processor, we can replace this hard drive complex with a riser card and put a full-sized GPU in here. That could be an NVIDIA GRID card, where you could do offloaded remote visualization. Or it could be a compute GPU like a Tesla card or an AMD FirePro 9100 card.

insideHPC: Okay, so it does accommodate GPUs. What about the Intel Xeon Phi?

Phil Pokorny: In this same form factor, alongside the GPUs, Intel Xeon Phi cards would fit as well. And in the future, Intel will have bootable Knights Landing chips, and we’ll have a solution for that too.
