SNIA

Dr. J. Metz talked with us about NVMe at Storage Field Day 16 in Boston. NVMe is rapidly becoming the latest hype in the storage infrastructure market; a few years ago, everything was cloud. Vendors now go out of their way to mention that their array contains NVMe storage, or is at the very least ready for it. So should you care? And if so, why?

SNIA’s mission is to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement, and security of information. They do that in a number of ways: standards development and adoption, for one, but also through interoperability testing (a.k.a. plugfests). They also aim to accelerate and promote technology: solving current problems with new technologies. NVMe-oF fits this mission well: it’s a relatively new technology, and it can solve some of the queuing problems we’re seeing in storage today. Let’s dive in!
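To get a feel for the scale of that queuing difference, here’s a quick back-of-the-envelope comparison, using the maximum queue counts and depths from the AHCI (SATA) and NVMe specifications:

```python
# Back-of-the-envelope: maximum outstanding commands per device.

# AHCI (SATA): a single command queue, up to 32 commands deep.
ahci_queues, ahci_depth = 1, 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep.
nvme_queues, nvme_depth = 65_535, 65_536

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI: {ahci_outstanding} outstanding commands")
print(f"NVMe: {nvme_outstanding:,} outstanding commands")
```

No real-world drive or driver exposes those maximums, of course, but the spec-level gap (32 versus roughly 4.3 billion outstanding commands) shows why a protocol designed for spinning disks becomes the bottleneck in front of flash.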


Consistency and predictability matter. You expect Google to answer your search query within a second. If it takes two seconds, that is slow but acceptable. Much longer, and you will probably hit refresh because ‘it’s broken, and maybe that will fix it’.

Many scenarios could substitute for the one above: starting a Netflix movie, refreshing your Facebook timeline, or powering on an Azure VM. Or in your business: retrieving an MRI scan or patient data, compiling a 3D model, or listing all POs from last month.

Ensuring your service can meet this demand for predictability and consistency requires a multifaceted approach, in both hardware and procedures. You can have a modern hypervisor environment with fast hardware, but if you allow a substantially lower-spec system into the cluster, performance will not be consistent. What happens when a virtual machine moves to the lower-spec system and suddenly takes longer to finish a query?

Similarly, in storage, tiering across different disk types helps lower TCO. However, what happens when data trickles down to the slowest tier? Achieving that lower TCO comes at the cost of latency predictability.
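A minimal sketch makes the tradeoff concrete. The latency figures below are made up for illustration: most reads hit flash, but a small fraction trickles down to an NL-SAS tier. The median barely notices; the tail does.

```python
import random

random.seed(42)

# Hypothetical per-read service times (ms) for two tiers.
FLASH_MS, NLSAS_MS = 0.5, 12.0
SLOW_FRACTION = 0.05  # assume 5% of reads land on the slowest tier

# Simulate 100,000 reads: each is either a flash hit or an NL-SAS miss.
latencies = [
    NLSAS_MS if random.random() < SLOW_FRACTION else FLASH_MS
    for _ in range(100_000)
]

latencies.sort()
p50 = latencies[len(latencies) // 2]       # median
p99 = latencies[int(len(latencies) * 0.99)]  # 99th percentile

print(f"median: {p50} ms, p99: {p99} ms")
```

With only 5% of reads on the slow tier, the median stays at flash speed while the 99th percentile jumps to the full NL-SAS latency: exactly the kind of inconsistency users experience as "sometimes it’s just slow".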

These challenges are not new, and if they impact the user experience too much, you can usually work around them. For example, ensure your data is moved to a faster tier in time. If you have budget to spare, maybe forgo the slowest and cheapest NL-SAS tier and stick to SAS and SSD. But what if the source of the latency inconsistency is internal to a component, like a drive?
