We already introduced Supermicro's Twin 2U server (6027TR-D71FRF) in our Xeon E5 review. It is basically a two-node 2U server that offers the density of 1U servers without the disadvantages. Instead of four redundant PSUs you only need two, and instead of noisy, energy-hogging, failure-prone 40mm fans you get slower-turning 80mm fans.

The two servers are held in place using screwless clips.

There was one big disadvantage: there were only four DIMM slots per CPU, which limits each node to 128GB of RAM (8 x 16GB). That is a bit on the low side for 16 cores and 32 threads and makes this server less suitable for virtualization loads.

Of course, this server was never meant to be a virtualization server, as it is equipped with 56Gb/s FDR InfiniBand interconnect technology, great for compute-intensive cluster applications such as HPC workloads. Nevertheless, we were intrigued. Supermicro has recently released a new Twin, the 6027TR-D70RF+, which has 16 DIMM slots per node.

Most 2U servers are limited to 24 memory slots and as a result 384GB of RAM (24 x 16GB). With two nodes in a 2U server and 16 slots per node, you can cram up to 512GB of RDIMMs in one server. The Supermicro Twin node (6027TR-D70RF+) looks like an attractive alternative to the more common 1U and 2U servers:

Many more PCIe expansion slots (three, two of them full height) than a 1U, almost as good as a traditional 2U

Lower energy consumption, as two (up to 95% efficient) PSUs power two nodes

Much better and more efficient 80mm cooling fans than a 1U

Density of a 1U

33% more DIMM slots than a 2U

That all sounds great for any cluster solution including a virtualization cluster, but there is more. If you use LRDIMMs, you can double your capacity. LRDIMMs at 1333MHz are available as quad-rank 32GB DIMMs. But before we can introduce you to these DIMMs, we want to take a step back and look at all the RAM options that a typical server buyer has.
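The capacity arithmetic above can be sketched as a quick back-of-the-envelope calculation. The slot counts and DIMM sizes below are the ones quoted in this article; this is an illustrative sketch, not vendor configuration data:

```python
# Back-of-the-envelope chassis memory capacity, using the slot counts
# and DIMM sizes quoted in the article (illustrative only).

def chassis_capacity_gb(nodes: int, slots_per_node: int, dimm_gb: int) -> int:
    """Total RAM in one chassis: nodes * slots per node * DIMM size."""
    return nodes * slots_per_node * dimm_gb

# Typical single-node 2U server: 24 slots of 16GB RDIMMs
typical_2u = chassis_capacity_gb(nodes=1, slots_per_node=24, dimm_gb=16)

# Supermicro Twin 6027TR-D70RF+: two nodes, 16 slots each
twin_rdimm = chassis_capacity_gb(nodes=2, slots_per_node=16, dimm_gb=16)

# Same chassis with 32GB quad-rank LRDIMMs: capacity doubles
twin_lrdimm = chassis_capacity_gb(nodes=2, slots_per_node=16, dimm_gb=32)

print(typical_2u)    # 384
print(twin_rdimm)    # 512
print(twin_lrdimm)   # 1024
```

With 32GB LRDIMMs, the Twin chassis reaches 1TB across its two nodes, which is where the "double your capacity" claim comes from.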


26 Comments

"Most 2U servers are limited to 24 memory slots and as a result 384GB of RAM. With two nodes in a 2U server and 16 slots per node, you can cram up to 512GB of RDIMMs in one server."

It's not one server. It's actually 2 servers. Just because they're in a 2U x 1/2-width form factor doesn't mean they're just one system. There are 2 systems there. Sure, you can pack 512GB into 2U with 2 servers, but there are better ways.

1. Dell makes a PowerEdge R620, where you can pack 384GB into 1U; two of those give you the same number of systems in the same space, with 50% more memory.

2. Dell also has their new R720, which is 2U and has a capacity of 768GB in a 2U form factor. Again, 50% more memory capacity in the same 2U. However, that's two processor sockets short of the Twin.

3. Now, there's the new R820. 4 sockets, 1.5TB of memory, 7 slots, in 2U of space. It's a beast. I have one of these on the way from Dell for my test lab.

Working as an admin in a test lab, dealing with all brands of servers, my experience with various brands gives me a rather unique insight. I have had very few problems with Dell servers, despite nearly 30% of our lab being Dell. We've had 7 drives die (all Toshiba) and one faceplate LCD go out. Our HP boxes, at less than 10% of our lab, have had more failures. The IBMs, while also less than 10%, have had absolutely no hardware failures. Our Supermicros comprise about 25% of the lab, yet contribute >80% of the hardware problems, from motherboards that just quit recognizing memory to backplanes that quit recognizing drives. I'm not too happy with them.

Sure, you can load each of those Rxxx Dell servers with boatloads of memory, but you fail to mention that it comes with a significant performance penalty. The moment you put a third DIMM on a memory channel, your memory speed drops from 1600 (if you started with 1600MHz memory to begin with) to 1066 or, worse, 800. On a virtualization host, that makes a big difference.

A few corrections: the 192GB figure for HCDIMMs is incorrect; it should also be 384GB.

There is no data available that confirms a 20% higher power consumption for HCDIMMs over LRDIMMs. There is a suspicious lack of benchmarks available for LRDIMMs. It is possible that figure arises from a comparison of 1.5V HCDIMMs vs. 1.35V LRDIMMs (as were available at IBM/HP).

It is incorrect that LRDIMMs are somehow standard and HCDIMMs are non-standard.

In fact HCDIMMs are 100% compatible with DDR3 RDIMM JEDEC standard.

It is the LRDIMMs which are a new standard and are NOT compatible with DDR3 RDIMMs - you cannot use them together.

The 1600MHz HCDIMM mention is interesting; it would be good to hear more on that.