Typically, desktop and mobile systems use Unbuffered DIMMs (UDIMMs). The memory controller inside your CPU addresses each memory chip of your UDIMM individually and in parallel. However, each memory chip places a certain amount of capacitance on the memory channel and thus weakens the high frequency signals through that channel. As a result, the channel can only take a limited number of memory chips.

This is hardly an issue in the desktop world. Most people will be perfectly happy with 16GB (4x4GB) and run them at 1600 to 2133MHz while overvolting the DDR3 to 1.65V. It's only if you want to use 8GB DIMMs (at 1.5V) that you start to see the limitations: most boards will only allow you to install two of them, one per channel. Install four of them in a dual channel board and you will probably be limited to 1333MHz. But currently very few people will see any benefit from using a slow 32GB instead of 16GB of fast DDR3 (and you'd need Windows 7 Professional or Ultimate to use more than 16GB).

In the server world, vendors tend to be a lot more conservative. Running DIMMs at an out-of-spec 1.65V will shorten their life and drive energy consumption a lot higher. Higher power consumption for 2-3% more performance is simply insane in a rack full of power-hogging servers.

Memory validation is a very costly process, another good reason why server vendors like to play it safe. You can use UDIMMs (with ECC most of the time, unlike desktop DIMMs) in servers, but they are limited to lower capacities and clockspeeds. For example, Dell's best UDIMM is a 1333MHz 4GB DIMM, and you can only place two of them per channel (2 DPC = 2 DIMMs Per Channel). That means that a single Xeon E5 cannot address more than 32GB of RAM when using UDIMMs. In the current HP servers (Generation 8), you can get 8GB UDIMMs, which doubles the UDIMM capacity to 64GB per CPU.

In short, UDIMMs are the cheapest server DIMMs, but you sacrifice a lot of memory capacity and a bit of performance.

RDIMMs (Registered DIMMs) are a much better option for your server in most cases. The best RDIMMs today are 16GB running at 1600MHz (800MHz clock DDR). With RDIMMs, you can get up to three times more capacity: 4 channels x 3 DPC x 16GB = 192GB per CPU. The disadvantage is that at 3 DPC the clockspeed throttles back to 1066MHz.

If you want top speed, you have to limit yourself to 2 DPC (and 4 ranks). With 2 DPC, the RDIMMs will run at 1600MHz. Each CPU can then address up to 128GB (4 channels x 2 DPC x 16GB), which is still twice as much as with UDIMMs, while running at a 20% higher speed.

RDIMMs add a register, which buffers the address and command signals. The integrated memory controller in the CPU sees the register instead of addressing the memory chips directly. As a result, the number of ranks per channel is typically higher: the current Xeon E5 systems support up to eight ranks of RDIMMs per channel. That is four dual-ranked DIMMs per channel (though you only have three DIMM slots per channel) or two quad-ranked DIMMs per channel. If you combine quad ranks with the largest memory chips, you get the largest DIMM capacities. For example, a quad-rank DIMM built from 4Gbit chips is a 32GB DIMM (16 chips of 4Gbit per rank x 4 ranks). So in that case we can get up to 256GB: 4 channels x 2 DPC x 32GB. Not all servers support quad ranks though.
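As a sanity check on the rank arithmetic, here is a small Python sketch. The chip width is an illustrative assumption (x4 chips, so 16 data chips per 64-bit rank), and the extra ECC chips are deliberately left out of the count:

```python
# Sketch of the DIMM-capacity arithmetic from the paragraph above.
# Assumes x4 chips and counts only the 64 data bits per rank (no ECC chips).

def dimm_capacity_gb(chip_density_gbit: int, chip_width: int, ranks: int) -> int:
    """Capacity of one DIMM: (64 / chip_width) data chips per rank."""
    chips_per_rank = 64 // chip_width
    return chip_density_gbit * chips_per_rank * ranks // 8  # Gbit -> GB

def per_cpu_capacity_gb(channels: int, dpc: int, dimm_gb: int) -> int:
    """Total addressable memory per CPU socket."""
    return channels * dpc * dimm_gb

# Quad-rank DIMM with 4Gbit x4 chips: 4 Gbit * 16 chips * 4 ranks = 32GB
quad_rank = dimm_capacity_gb(4, 4, 4)
print(quad_rank)                             # 32

# Xeon E5 (4 channels) at 2 DPC with these DIMMs:
print(per_cpu_capacity_gb(4, 2, quad_rank))  # 256
```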

LRDIMMs can do even better. Load Reduced DIMMs replace the register with an Isolation Memory Buffer (iMB™ by Inphi) component. The iMB buffers the command, address, and data signals, isolating all electrical loading (including the data signals) of the memory chips on the (LR)DIMM from the host memory controller. Again, the host controller sees only the iMB and not the individual memory chips. As a result, you can fill all DIMM slots with quad-ranked DIMMs. In practice this means you get 50% to 100% more memory capacity.
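The capacity gains across DIMM types all come down to the same channels x DPC x DIMM-size arithmetic. This sketch tallies the per-CPU totals for the configurations discussed above (4-channel Xeon E5; the 32GB rows assume quad-rank DIMMs, and the LRDIMM row assumes 3 DPC is usable):

```python
# Per-CPU memory capacity for the configurations discussed in the article.
# Assumes a 4-channel Xeon E5; (DPC, DIMM size in GB) per configuration.

CHANNELS = 4

configs = {
    "UDIMM 8GB @ 2 DPC":            (2, 8),
    "RDIMM 16GB @ 2 DPC":           (2, 16),
    "RDIMM 16GB @ 3 DPC":           (3, 16),
    "Quad-rank RDIMM 32GB @ 2 DPC": (2, 32),
    "LRDIMM 32GB @ 3 DPC":          (3, 32),
}

totals = {name: CHANNELS * dpc * gb for name, (dpc, gb) in configs.items()}
for name, total in totals.items():
    print(f"{name}: {total} GB per CPU")
```

The LRDIMM row works out to 384GB: +100% over 16GB RDIMMs at 3 DPC (192GB) and +50% over quad-rank 32GB RDIMMs at 2 DPC (256GB), which is where the "50% to 100% more capacity" figure comes from.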

I have whittled down the use case for HCDIMMs/LRDIMMs and RDIMMs as follows:

The HCDIMM use case is at:
- 16GB at 3 DPC
- 32GB (outperforms both RDIMMs and LRDIMMs)

LRDIMMs are not viable at:
- 16GB (RDIMMs are better)
- 32GB (HCDIMMs are better)

RDIMMs are not viable at:
- 32GB (because they are 4-rank, trumped by LRDIMMs/HCDIMMs)

There is a reason the Netlist HCDIMMs were only released on the virtualization servers from IBM/HP: at the 16GB level, the only niche available for LRDIMMs/HCDIMMs vs. RDIMMs is the 3 DPC space. This will expand considerably to mainstream levels at 32GB as soon as 32GB HCDIMMs are released (they are currently in qualification with IBM/HP and have not been announced yet, though they may be expected shortly).

I had created an infographic covering the memory choices - search the net for the article entitled:

Infographic - memory buying guide for Romley 2-socket servers

HCDIMMs are not available at SuperMicro (as they are for IBM/HP), so I was surprised you even covered HCDIMMs (since the article is, after all, about the SuperMicro line of servers).

BTW, Johan, I work for HP and asked some of the guys in ISS Technical Marketing why we don't send you our servers for eval like you get from SuperMicro and sometimes Dell.

They felt that you guys didn't do a lot of server reviews, and that your readership wasn't generally the kind of folks that buy HP servers.

So I am curious if you could spin up a poll or something in the future to prove them wrong. If there is enough support, I'm sure we can get you some gear to play with.

I sometimes giggle when I see the stuff people on here get excited about in these reviews though. "Can you see the BIOS through IPMI?" That's the kind of thing Compaq offered back with the RILOE II, and it has been integrated into the motherboard since iLO 1, which is at least 4 or 5 years old. iLO 4 on the Gen8 line takes that a step further: we now hook the display system BEFORE POST starts, so instead of an invalid memory config getting you a series of beeps, you get a full-blown screen, either on local VGA or on the Remote Console, that straight up tells you you have a memory mismatch and why. I have seen this demo'd with NO DIMMs even installed in the server and you still get video and obvious status messages.