Best practices for datacenter storage design

I enjoy good design. In fact, one of my favorite magazines is Monocle, which is frequently a fascinating mix of world news, business affairs, arts and design, shopping and dining, and culture. Another magazine I often enjoy is Atomic Ranch, which focuses on mid-century design of homes and their interiors. One thing I've learned about designers in general, however, is that they frequently seem to value form above function. That kind of approach to design is definitely a bad idea when it comes to designing something like the storage architecture for a datacenter. In fact, it's almost as bad as when you design a sofa or chair that's uncomfortable for you to sit in! OK, so Frank Lloyd Wright was a genius, but gimme a break!

Policy-based SDS

To better understand what might be involved in making good decisions when designing the storage architecture for a datacenter, I talked with a couple of well-known experts in the storage technology field to see what they had to say about the subject. After all, it's important to understand industry best practices in any technological area before you begin committing time, energy, and money toward an IT project -- even if you're just at the planning stage of things.

Peter van den Bosch was a senior consultant at PQR, a systems integrator in the Utrecht area of the Netherlands, when he contributed a guest editorial on datacenter storage design for our popular WServerNews weekly newsletter just over two years ago. At the time, Peter focused his editorial on the choices one might need to make when designing a datacenter that will use VMware virtualization and HP storage intelligence technologies. So when I reached out again to Peter for his current thoughts on the subject, it came as no surprise that he now works for VMware as one of their technical account managers, where his duties include managing relationships with some of their largest enterprise customers.

Customers should choose an SDS solution and define their storage usage using policies.

From Peter's own experience and perspective working for many years as a systems engineer, technical consultant, and unit manager, the No. 1 best practice in storage design for datacenters is "independency from hardware," which Peter says "is important to ensure future SDS [software-defined storage] development." It's only natural that Peter would recommend VMware SDS as the best approach for achieving such a goal. "VMware vSAN and Virtual Volumes are solutions that are hardware independent and bring stunning performance," Peter says. Looking ahead, he predicts that "future developments to manage storage platform independence based on policies will include SAN storage, cloud storage, and local storage." Policy-based management is important because it allows you to define the storage requirements for virtual machines in a way that simplifies storage configurations and ensures availability and performance. "Customers should choose an SDS solution and define their storage usage using policies," says Peter, who also maintains a must-follow WordPress blog where he shares his stories and adventures in the virtual industry.
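To make the idea of policy-based storage management more concrete, here is a minimal sketch in Python. It is not the vSphere or vSAN API; all the class and field names (StoragePolicy, Datastore, failures_to_tolerate, and so on) are hypothetical, chosen only to illustrate the pattern Peter describes: a VM declares its requirements as a policy, and the platform matches that policy against whatever hardware-independent storage happens to be available.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Requirements a VM's storage must satisfy (hypothetical fields)."""
    name: str
    failures_to_tolerate: int  # data copies needed to survive host failures
    min_iops: int              # performance floor
    encrypted: bool

@dataclass
class Datastore:
    """Capabilities advertised by a backing store (hypothetical fields)."""
    name: str
    max_failures_tolerated: int
    iops: int
    supports_encryption: bool

def compliant_datastores(policy, datastores):
    """Return the datastores whose capabilities satisfy the policy.

    The VM never names a specific array or LUN -- it names a policy,
    and placement is derived from that, which is the point of SDS.
    """
    return [
        ds for ds in datastores
        if ds.max_failures_tolerated >= policy.failures_to_tolerate
        and ds.iops >= policy.min_iops
        and (ds.supports_encryption or not policy.encrypted)
    ]

gold = StoragePolicy("gold", failures_to_tolerate=1,
                     min_iops=5000, encrypted=True)
stores = [
    Datastore("vsan-01", max_failures_tolerated=2, iops=20000,
              supports_encryption=True),
    Datastore("nfs-archive", max_failures_tolerated=0, iops=800,
              supports_encryption=False),
]
print([ds.name for ds in compliant_datastores(gold, stores)])  # ['vsan-01']
```

If the underlying hardware changes, only the datastore capability list changes; the "gold" policy and the VMs bound to it stay the same, which is the hardware independence Peter is advocating.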

Start with the user

The other storage and virtualization expert I managed to talk with for this article was Didier Van Hoye, who works as a Domain Expert ICT for the Government Information Agency in Belgium. Didier is a well-known expert whose blog Working Hard in IT is one of the top IT pro blogs in the industry. Didier is also regularly featured on the Hyper-V Amigos Showcast, which he and co-host Carsten Rachfahl started so they could talk about how they got stranded in IT.

Didier's recommendations about datacenter storage design start by stepping back to get a broader perspective than just one vendor's technologies. He begins by saying that "in the past five to seven years, the progress in storage technology has shifted from cruising along to full-speed ahead. The progress being made in speed and capacity (SSD, NVMe, NVDIMM) and in lossless networks is now pushing the limits of other components in the storage solution stack. The days where you just had to outperform the storage bottleneck to have the best possible performance are long gone. On top of that, cloud and serverless computing are offering new ways of using and consuming storage. All this has to be integrated in the most consumer- and operations-friendly solutions. This has to be done both in a technically sound and cost-effective manner."

There is no one-size-fits-all solution, and no single technology is best suited for all needs.

I asked Didier what some of the main challenges are today in designing storage solutions for datacenters. "Today we are seeing a bigger challenge to data governance than we have in the past decades," he said. "Data is moving to different storage solutions both on premises and in the cloud. Centralized SAN, hybrid cloud storage, converged, hyperconverged, OneDrive, Dropbox, blob storage and file shares in the cloud, storage in IaaS VMs, and so on. This means that the job of making sure all data is protected in compliance with your organization's needs and requirements isn't getting any easier! There are fewer single forms of centralized storage, so by nature this responsibility is being decentralized and distributed as well."

Does he recommend any particular solution as being best over any others? "As with any good solution, it has to be based on the needs, budget, and capabilities of the organization," he says. "There is no one-size-fits-all solution, and no single technology is best suited for all needs. So whatever you decide to do, start off by creating a strategy. Map out the needs of your users, and then look at how those needs are evolving and what this might mean for the technology you'll need for your solution. Don't forget to consider the technical capabilities and the needs in regard to operations and support. A plan to execute all this is part of the strategy, since ignoring this is nothing less than compromising the results. The users and the needs of users vary between public cloud providers, Fortune 500 companies, and SMEs. They differ not only in size, budgets, and economies of scale, but also in their capabilities and in the nature of their businesses. And there are also the security and legal obligations that must be taken into account -- company size alone is a bad parameter when deciding on what storage solutions are needed."

What does Didier see happening in the future? "New and fast-evolving solutions make for a diverse landscape where capabilities, needs, and obligations evolve fast. To stay on top of all this you'll have to learn how to deal with fast change in a result-driven and cost-effective way." What's the best way, I asked him, to stay on top of such a rapidly changing industry? "Keep things as simple and small as possible, stay on budget, and move fast. The usefulness of yearlong, large, and high-budget projects is diminishing."

The world is indeed changing, and datacenter storage architecture and design best practices have to continue to change and evolve with it.

Mitch Tulloch

Mitch Tulloch is Senior Editor of both WServerNews and FitITproNews and is a widely recognized expert on Windows Server and cloud technologies. He has written more than a thousand articles and has authored or been series editor for over 50 books for Microsoft Press and other publishers. Mitch has also been a twelve-time recipient of the Microsoft Most Valuable Professional (MVP) award in the technical category of Cloud and Datacenter Management. He currently runs an IT content development business in Winnipeg, Canada.
