Intel Optane DC Persistent Memory modules can be configured at up to 3TB per CPU socket (in addition to the DRAM in the system). That means fewer I/O trips and lower latency for accelerated performance. In addition, the new media offers a lower-cost alternative to DRAM.

Use Cases

Address space on Intel Optane DC memory modules can be partitioned as volatile main memory, as persistent memory, or as a combination of both. Further, the persistent memory address space can be accessed by applications using direct load/store, or by using standard storage APIs like file open/close/read/write.

Memory Mode

With Memory Mode, applications get a high-capacity main memory solution at substantially lower cost and power, with performance that can approach DRAM performance, depending on the workload. No modifications are required to the application—the operating system sees the persistent memory module capacity as system main memory. For example, on a common two-socket system, Memory Mode can provide 6TB of main memory, something very difficult and expensive to achieve with DRAM (if it is possible at all). In Memory Mode, the DRAM installed in the system acts as a cache to deliver DRAM-like performance for this high-capacity main memory.

Although the media is persistent, Memory Mode makes the capacity appear volatile to application software. Data stored on the memory modules is protected with 256-bit AES-XTS encryption. When Memory Mode is selected, the data is cryptographically erased by the controller on the module between power cycles, mimicking the volatile nature of DRAM.

A compelling use case for Memory Mode is running more virtual machines (VMs) than a traditional server system allows—you don’t have to starve one process of memory to spin up a new VM. Cloud service providers and enterprise IT shops benefit by meeting Service Level Agreements (SLAs) at lower cost. In traditional virtualized systems, memory is oversubscribed, which forces repeated reads and writes to storage to meet SLAs. The large persistent memory capacity eliminates that need, improving performance and lowering system cost to support more or bigger VMs.

Persistent Memory

Just as some or all of the capacity of the Intel Optane DC memory modules can be provisioned as Memory Mode, some or all of the capacity can be provisioned as persistent memory. This is known as App Direct Mode, which gives software a byte-addressable way to talk to the persistent memory capacity. There are many ways for applications to use App Direct Mode without any modifications. For example, the operating system may use App Direct while providing standard storage interfaces to the application. Similarly, some middleware libraries may use App Direct, again providing existing interfaces to applications so they don’t need to change. But ISVs may choose to modify their applications for persistent memory, using App Direct Mode directly to get the best value from the persistent memory modules. OS vendors such as Microsoft, Red Hat, Canonical, SUSE, and VMware have provided the support required to give software direct access to persistent memory.

With App Direct Mode, an in-memory database (IMDB) restart time can be significantly reduced because applications no longer have to reload data from storage to main memory. In one of our lab tests, the time to reboot a particular large IMDB was 35 minutes. Assuming a typical system is rebooted every couple of weeks to install security patches and updates, this led to an expected availability of 99.8%. By adding persistent memory, data structures such as indexes were made persistent, even though they live in memory. The reboot time was only 17 seconds for the same large database because the time to rebuild the indexes was eliminated. This resulted in an expected 99.999% service availability. In addition, our testing shows that using large-capacity persistent memory with an in-memory database enables multi-TB capacity for large data sets without having to go from a two-socket server to a more expensive four-socket server.

For Automated Trading Systems (ATSs), the database can put its transaction logs in persistent memory, so in the event of an outage, the database can be rebuilt from the log. In addition, a transaction is considered “complete” when it is written to a persistent medium. With persistent memory, the transaction is complete as soon as it is written—even though it later gets transferred to a “warm” or “cold” storage device.

Storage Over App Direct Mode

As described above, the persistent memory address space can be accessed by applications by using direct load/store accesses in App Direct Mode. In addition, the same persistent memory address space can be accessed by using standard file APIs in Storage over App Direct Mode.

This allows existing storage-based applications to access the App Direct region of Intel Optane DC memory modules without any modifications to the existing applications or to file systems that expect block storage devices. Storage over App Direct Mode provides high-performance block storage, without the latency of moving data to and from the I/O bus. This mode does require NVDIMM drivers, which have been part of the Linux kernel since version 4.2 and are included in Windows Server 2016.

Ecosystem Partnerships Drive Solutions

Intel partners with multiple industry groups and industry leaders to provide an ecosystem and updated specifications for using persistent memory.

Programming Model

The software interface for using Intel Optane DC Persistent Memory was designed in collaboration with dozens of companies to create a unified programming model for persistent memory. The Storage Network Industry Association (SNIA) formed a technical workgroup which has published a specification of the model. This software interface is independent of any specific persistent memory technology and can be used with Intel Optane DC Persistent Memory or any other persistent memory technology.

The model exposes three main capabilities:

The management path allows system administrators to configure persistent memory products and check their health.

The storage path supports the traditional storage APIs where existing applications and file systems need no change; they simply see the persistent memory as very fast storage.

The memory-mapped path exposes persistent memory through a persistent memory-aware file system so that applications have direct load/store access to the persistent memory. This direct access does not use the page cache like traditional file systems and has been named DAX by the operating system vendors.

When an independent software vendor (ISV) decides to fully leverage what persistent memory can do, converting the application to memory-map persistent memory and to place data structures in it can be a significant change. Keeping track of persistent memory allocations and making changes to data structures as transactions (to keep them consistent in the face of power failure) is complex programming that hasn’t been required for volatile memory and is done differently for block-based storage.

The Persistent Memory Development Kit (PMDK) provides libraries meant to make persistent memory programming easier. Software developers only pull in the features they need, keeping their programs lean and fast on persistent memory.

These libraries are fully validated and performance-tuned by Intel. They are open source and product-neutral, working well on a variety of persistent memory products. The PMDK contains a collection of open source libraries which build on the SNIA programming model. The PMDK is fully documented and includes code samples, tutorials, and blogs. Language support for the libraries exists in C and C++, with support for Java, Python, and other languages in progress.

Author

Andy Rudoff is a senior principal engineer at Intel, focusing on non-volatile memory programming. He is a contributor to the SNIA NVM Programming Technical Work Group and author of the Persistent Memory Development Kit hosted at pmem.io. His more than 30 years of industry experience include design and development work in operating systems, file systems, networking, and fault management at companies large and small, including Sun Microsystems and VMware. Andy has taught various operating systems classes over the years and is a co-author of the popular UNIX Network Programming textbook.
