As big-data business analytics, real-time social-media data, and mobile applications continue their growth trajectory, the need for higher memory speed and capacity has never been greater. Over the last few years, companies across the memory ecosystem have worked closely together to advance the system memory roadmap for enterprise applications. This article highlights the advances the industry has made with the latest memory technology, DDR4, and more specifically with DDR4 LRDIMM.

DDR4 LRDIMM (load-reduced dual in-line memory module) technology uses a distributed data-buffer approach to preserve memory bandwidth when scaling to higher capacities and speeds on upcoming DDR4 enterprise server systems. This is in contrast to the unbuffered data path of the DDR4 RDIMM (registered memory module), which buffers only command and address signals. LRDIMM, in general, has continued to evolve and improve its value to system users.

As Figure 1 shows, first-generation DDR3 enterprise systems such as the E5-2600 delivered sub-optimal LRDIMM speed at all capacities, for reasons described below. The E5-2600 v2 made significant progress in improving LRDIMM value to end users and reversed the speed inversion that existed on the E5-2600. DDR4 LRDIMM is expected to take memory-subsystem performance to a new level: it appeals not only to the highest-capacity configurations, but also to a much wider range of applications that demand both the highest bandwidth and the highest capacity.

Figure 1. LRDIMM vs. RDIMM speed improvement.

The ecosystem has collectively made huge strides in ensuring that increases in LRDIMM speed translate into a corresponding scaling of LRDIMM memory bandwidth in gigabytes per second (GB/s). Speed is analogous to which track star can sprint fastest over a short distance; memory bandwidth is analogous to who actually crosses the finish line first.
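To make the speed-versus-bandwidth distinction concrete, a back-of-the-envelope calculation helps. The sketch below assumes a standard 64-bit DDR data bus per channel; the function name and values are illustrative, and real sustained bandwidth is lower than this theoretical peak, depending on loading, buffer design, and access patterns.

```python
# Peak theoretical bandwidth of one DDR channel.
# Assumption: standard 64-bit (8-byte) data bus; sustained bandwidth
# in practice falls short of this ceiling, which is exactly the gap
# the LRDIMM buffer approach aims to close at high capacities.

def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64):
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

# A DDR3-1333 channel tops out near 10.7 GB/s, while DDR4-2400
# raises the ceiling to 19.2 GB/s -- but only if the module can
# actually sustain that transfer rate when fully populated.
print(peak_bandwidth_gbs(1333))
print(peak_bandwidth_gbs(2400))
```

The speed grade (MT/s) sets the ceiling; whether a loaded system reaches it is the bandwidth question the rest of this article addresses.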

The following summarizes the DDR4 improvements that various stakeholders in the ecosystem have made to increase usable bandwidth in GB/s: