Introduction

Flipping through our library of enterprise storage product evaluations makes it clear that the cutting edge of storage performance passes through our busy lab nearly every day, and after a while it is easy to become a bit jaded with the latest "fastest-ever" product. Occasionally, however, something comes along that is truly exciting, as was the case with the One Stop Systems Flash Storage Array (FSA) 200 and the 64 PCIe SSDs that accompanied it.

The FSA 200 breaks the mold of a traditional storage array by housing up to 32 PCIe SSDs in a rugged chassis. The chassis does not provide the data services, such as deduplication, compression, or snapshots, found in many traditional arrays. Instead, the FSA 200 adheres to the JBOF (Just a Bunch of Flash) design philosophy: the chassis simply presents flash resources to servers, which provide the additional data services and manage the underlying flash.

JBOF implementations are gaining popularity, as evidenced by SanDisk's InfiniFlash, Samsung's reference all-flash architectures and Facebook's OCP-compliant Lightning NVMe JBOF. Facebook's Lightning is the lone competitor that employs a PCIe-connected fabric, but the FSA 200 comes as a fully qualified and warrantied system, while the Lightning is an open-source Open Compute Project (OCP) community design. The Lightning design also prioritizes the utmost density over the utmost performance, and it supports only M.2, 2.5" and 3.5" SSDs that are power-limited to 14W per slot.

In contrast, the FSA 200 allows users to employ AIC (Add-In Card) form factor PCIe SSDs of their choice with up to 75W per slot, which provides a blend of high performance and maximum storage density.

The FSA 200 is a 3U chassis that supports up to 32 half-length full-height PCIe SSDs, and it also comes in two smaller variants, the 8-slot 2U FSA 50 and the 4-slot 1U FSA 25. One Stop Systems designed the FSA series to support a wide range of workloads, such as high-density data center applications, HPC (high-performance computing), medical, research, finance and ruggedized military applications. The company deploys the chassis in military applications and also offers a 4U Flash Storage Array Test System and an FSA SAN product, which pairs an additional 2U expansion server with the 3U FSA 200.

Testing an all-flash appliance that can support up to 32 PCIe SSDs requires serious SSD firepower, and to that end, One Stop Systems sourced two separate SSD arrays for our tests.

SanDisk sent along 32 of its 3.2TB Fusion ioMemory SX300 PCIe SSDs. We compared online retailer and Dell/Lenovo pricing and arrived at an average price of $16,500 per SX300, which values the 102.4TB array at roughly $528,000. It is noteworthy that SanDisk's newer SX350 model is a re-spin of the SX300 that leverages SanDisk's own NAND (in a bid to lower its price). The SX350 features the same performance and endurance specifications as the SX300 we employed in our testing, but carries a slightly lower price of $15,000 per 3.2TB SSD.

Intel contributed 30 of its NVMe-powered 400GB DC P3700 SSDs for the project, and we added two 1.6TB models from our lab to complete the 32-SSD array. The 400GB Intel DC P3700 weighs in at $1,000 apiece, so a 32-drive, 12.8TB array of them is more reasonably priced at roughly $32,000. The lower capacity point of the Intel SSDs weighs heavily in the price equation, but it is fair to say that the Intel SSDs are, in general, more cost competitive than the Fusion ioMemory products. A 1.6TB Intel DC P3700 retails for $4,000, or roughly $2.50 per GB, compared to $4.68 per GB for the SanDisk SX350.
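For readers who want to check our math or plug in their own street prices, the cost figures above reduce to simple arithmetic. The sketch below uses the article's prices and capacities; the helper function names are ours, not part of any vendor tooling.

```python
# Sanity-check the array pricing quoted in the article.
# All prices and capacities come from the text; helper names are illustrative.

def array_cost(unit_price_usd: int, drive_count: int) -> int:
    """Total cost of an array built from identical SSDs."""
    return unit_price_usd * drive_count

def price_per_gb(unit_price_usd: int, capacity_gb: int) -> float:
    """Street price divided by raw capacity."""
    return unit_price_usd / capacity_gb

# SanDisk Fusion ioMemory SX300: 32 drives x 3.2TB at ~$16,500 each
sx300_total = array_cost(16_500, 32)       # 528000 -> "roughly $528,000"
sx300_capacity_tb = 3.2 * 32               # 102.4TB raw

# Intel DC P3700: a hypothetical all-400GB, 32-drive array at ~$1,000 each
p3700_total = array_cost(1_000, 32)        # 32000 -> "roughly $32,000"

# Per-GB comparison of the larger-capacity models
p3700_per_gb = price_per_gb(4_000, 1_600)  # 2.5   -> "$2.50 per GB"
sx350_per_gb = price_per_gb(15_000, 3_200) # 4.6875 -> "~$4.68 per GB"

print(sx300_total, p3700_total, p3700_per_gb, sx350_per_gb)
```

Swapping in current retail prices for `unit_price_usd` gives an updated comparison as SSD pricing shifts.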

The knee-jerk reaction to receiving such an exciting package is to rack the chassis, load it down with flash, and then chase the highest bandwidth and IOPS measurements possible. Let's take a closer look at the SSDs and the Flash Storage Array 200, and get the fans spinning.