NVM Express

NVM Express, NVMe, or Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives (SSDs) attached through the PCI Express (PCIe) bus. "NVM" is an initialism for non-volatile memory, the storage medium used in SSDs. As a logical device interface, NVM Express has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs and mirroring the parallelism of contemporary CPUs, platforms and applications. By allowing the parallelism offered by SSDs to be fully utilized by the host's hardware and software, NVM Express brings various performance improvements.

NVM Express SSDs exist both in the form of standard-sized PCI Express expansion cards[1] and as 2.5-inch drives that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639).[2][3] SATA Express storage devices and the M.2 specification for internally mounted computer expansion cards also support NVM Express as the logical device interface.[4][5]

Historically, most SSDs used buses such as SATA, SAS or Fibre Channel for interfacing with the rest of a computer system. Since SSDs became available in mass markets, SATA has become the most typical way of connecting SSDs in personal computers; however, SATA was designed primarily for interfacing with mechanical hard disk drives (HDDs), and has become increasingly inadequate as SSDs have improved.[6] For example, unlike hard disk drives, some SSDs are limited by the maximum throughput of SATA.

High-end SSDs had been made using the PCI Express bus before NVMe, but they used non-standard, vendor-specific interfaces. By standardizing the SSD interface, operating systems need only one driver to work with all SSDs adhering to the specification, and each SSD manufacturer does not have to spend additional resources designing its own interface driver. This is similar to how USB mass storage devices follow the USB mass-storage device class specification and work with all computers, with no per-device drivers needed.[7]

The first details of a new standard for accessing non-volatile memory emerged at the Intel Developer Forum 2007, when NVMHCI was shown as the host-side protocol of a proposed architectural design that had ONFI on the memory (flash) chips side.[9] An NVMHCI working group led by Intel was formed that year. The NVMHCI 1.0 specification was completed in April 2008 and released on Intel's web site.[10][11][12]

Technical work on NVMe began in the second half of 2009.[13] The NVMe specifications were developed by the NVM Express Workgroup, which consists of more than 90 companies; Amber Huffman of Intel was the working group's chair. Version 1.0 of the specification was released on 1 March 2011,[14] and in June 2011 a Promoter Group led by seven companies was formed. Version 1.1 of the specification was released on 11 October 2012;[15] major features added in version 1.1 are multi-path I/O (with namespace sharing) and arbitrary-length scatter-gather I/O. Because of its feature focus, NVMe 1.1 was initially called "Enterprise NVMHCI".[16] It is expected that future revisions will significantly enhance namespace management.[13] An update for the base NVMe specification, called version 1.0e, was released in January 2013.[17]

The first commercially available NVMe chipsets were released by Integrated Device Technology (89HF16P04AG3 and 89HF32P08AG3) in August 2012.[18][19] The first NVMe drive, Samsung's XS1715 enterprise drive, was announced in July 2013; according to Samsung, this drive supported read speeds of 3 GB/s, six times faster than their previous enterprise offerings.[20] The LSI SandForce SF3700 controller family, released in November 2013, also supports NVMe.[21] Sample engineering boards with the PCI Express 2.0 ×4 model of this controller reached sequential read/write speeds of 1,800 MB/s and random read/write performance of 150K/80K IOPS.[22] A Kingston HyperX "prosumer" product using this controller was showcased at the Consumer Electronics Show 2014 and promised similar performance.[23][24] In June 2014, Intel announced their first NVM Express products, the Intel SSD data center family that interfaces with the host through the PCI Express bus, which includes the DC P3700 series, the DC P3600 series, and the DC P3500 series.[25] As of November 2014[update], NVMe drives are commercially available.

In March 2014, the group incorporated to become NVM Express, Inc., which as of November 2014[update] consists of more than 65 companies from across the industry. NVM Express was formed as an industry association to define a new storage interface protocol, NVM Express, enabling the full performance potential of storage technology based on non-volatile memory. NVM Express specifications are owned and maintained by NVM Express, Inc., which also promotes industry awareness of NVM Express as an industry-wide standard. NVM Express, Inc. is directed by a thirteen-member board of directors selected by the promoter group, which includes Avago Technologies, Cisco, Dell, EMC, HGST, Intel, Micron, NetApp, Oracle, PMC, Samsung, SanDisk and Seagate.[citation needed]

The Advanced Host Controller Interface (AHCI) has the benefit of wide software compatibility, but the downside is that it does not deliver optimal performance when used with SSDs connected via the PCI Express bus. As a logical interface, AHCI was developed at a time when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media. As a result, AHCI introduces certain inefficiencies when used with SSD devices, which behave much more like DRAM than like spinning media.[4]

The NVMe device interface has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the basic advantages of NVMe over AHCI relate to its ability to exploit parallelism in host hardware and software, manifested by the differences in command queue depths, efficiency of interrupt processing, the number of uncacheable register accesses, etc., resulting in various performance improvements.[4][26]:p. 17–18
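The gap in queuing capacity can be illustrated with a simple back-of-the-envelope comparison. The following sketch is purely illustrative (it is not part of any official tooling) and uses the per-queue and queue-count maxima defined by the AHCI and NVMe specifications; real controllers advertise far fewer NVMe queues, and drivers typically allocate roughly one I/O queue pair per CPU core.

```python
# Illustrative comparison of the theoretical queueing headroom of AHCI and NVMe.
# The constants below are the maxima allowed by the respective specifications;
# actual hardware usually supports far fewer NVMe queues.

AHCI_QUEUES = 1                 # AHCI exposes a single command queue per port
AHCI_QUEUE_DEPTH = 32           # with at most 32 outstanding (NCQ) commands

NVME_MAX_IO_QUEUES = 65_535     # NVMe allows up to 65,535 I/O queue pairs
NVME_MAX_QUEUE_DEPTH = 65_536   # each holding up to 65,536 commands

print("AHCI max outstanding commands:", AHCI_QUEUES * AHCI_QUEUE_DEPTH)
print("NVMe max outstanding commands:", NVME_MAX_IO_QUEUES * NVME_MAX_QUEUE_DEPTH)
```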

The position of NVMe data paths and multiple internal queues within various layers of the Linux kernel's storage stack.[27]

Windows

The "NVMe Windows Working Group" is an initiative from the OpenFabrics Alliance to maintain software for Microsoft Windows to use PCI Express solid state devices. The baseline Windows driver contributed to the open-source initiative was developed by several promoter companies in the NVMe workgroup, specifically IDT, Intel, and LSI.[28]

Linux

Intel published an NVM Express driver for Linux.[31][32][33] It was merged into the Linux kernel mainline on 19 March 2012, with the release of version 3.3 of the Linux kernel.[34]
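As a rough illustration (not taken from the driver's documentation), the sketch below shows how the Linux driver surfaces NVMe hardware to user space: controllers appear as character devices such as /dev/nvme0 and their namespaces as block devices such as /dev/nvme0n1, and a few identification attributes are published under sysfs. The controller name nvme0 is a hypothetical example; adjust it to match the system.

```python
# Minimal sketch: read identification attributes that the Linux NVMe driver
# publishes under /sys/class/nvme/<controller>/. Assumes a controller named
# "nvme0" is present; the attribute set can vary between kernel versions.
from pathlib import Path

ctrl = Path("/sys/class/nvme/nvme0")

for attr in ("model", "serial", "firmware_rev"):
    node = ctrl / attr
    if node.is_file():
        print(f"{attr}: {node.read_text().strip()}")
    else:
        print(f"{attr}: not exposed by this kernel")
```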

A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. It leverages the performance offered by SSDs and NVM Express by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU software queues and hardware submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization.[35][36][37]
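On kernels that route NVMe through blk-mq, this split can be observed from user space: each hardware submission queue of a block device is listed under /sys/block/<device>/mq/, together with the CPUs mapped to it. The sketch below assumes a blk-mq-capable kernel and uses the hypothetical namespace name nvme0n1; the exact sysfs layout may differ between kernel versions.

```python
# Minimal sketch: print the CPU-to-hardware-queue mapping that blk-mq exposes
# in sysfs for an NVMe namespace. Assumes a block device named "nvme0n1".
from pathlib import Path

mq_dir = Path("/sys/block/nvme0n1/mq")

if not mq_dir.is_dir():
    print("No blk-mq sysfs directory found; device missing or kernel too old?")
else:
    queues = sorted((p for p in mq_dir.iterdir() if p.name.isdigit()),
                    key=lambda p: int(p.name))
    for hw_queue in queues:
        cpu_file = hw_queue / "cpu_list"
        if cpu_file.is_file():
            print(f"hardware queue {hw_queue.name}: CPUs {cpu_file.read_text().strip()}")
```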

As of version 4.0 of the Linux kernel, released on 12 April 2015, the VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), the loop device driver, the unsorted block images (UBI) driver (which implements an erase block management layer for flash memory devices) and the RBD driver (which exports Ceph RADOS objects as block devices) have been modified to use this new interface; other drivers will be ported in following releases.[38][39][40]

FreeBSD

The Intel NVM Express driver was imported to FreeBSD's head and stable/9 branches.[41][42]