Without software-defined storage you will size it wrong, guaranteed

I've done my fair share of capacity planning and sizing exercises over the years, enough to realise there are a lot of estimates, assumptions, caveats and just plain guesswork involved. There are some great tools available that give you pretty detailed metrics on what your infrastructure looks like today, but there aren't any tools out there that can predict the future. Even if you keep regular historical figures, your predictive trending is only a best guess. The best tools on the market are only as good as the data you feed into them, and often the data fed in is only half the story.

So why is it that when buying an expensive storage appliance we are always asked to predict the future? "Tell me exactly what your business is going to be doing in 3-5 years and I'll give you the correct storage appliance to support it!" I've done strategic planning for companies over a 3 to 5 year period, and that's difficult enough for high-level concepts, let alone the technical detail of exact figures. There are dozens of reasons for this, but here are a few of the key ones:

- The industry moves so quickly that it's difficult to predict 18 months from now, let alone 3 years or more.
- IT is usually the last to know of any key business decisions that would lead to increased data centre requirements.
- M&A activity can rarely be prepared for, as it is often a closely guarded secret among the shareholders.
- Growth isn't hockey-stick across all data sets: many data sets will plateau, while others will grow continually at a steady pace.
- Sometimes we just get it wrong, or the information we use is flawed. If I'm just 10% out on my year-on-year predictions and I'm sizing for 5 years, compounding leaves me over 50% out at the end.

Finally, even if you get the 3-5 year prediction spot on, chances are (unless you have no growth at all) that you'll have an over-sized platform in years 1-2 that's wasting power, cooling, space and your CAPEX budget.
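The compounding claim is easy to verify with a couple of lines of Python: a 10% year-on-year error, carried through a 5-year sizing exercise, multiplies up to roughly 61%, comfortably over 50%.

```python
# A 10% year-on-year estimation error compounds over a 5-year sizing exercise.
error_per_year = 0.10
years = 5
compounded = (1 + error_per_year) ** years - 1
print(f"compounded error after {years} years: {compounded:.0%}")  # 61%
```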

Over the years I generally see 2 outcomes:

1. The platform has been oversized and there's free capacity.

2. The platform sits at 99% utilisation for a year or more because there's no budget left to expand it.

In scenario (1) you've spent too much money on an appliance that can do a lot more than you're asking of it. In scenario (2) you've outgrown your existing appliance and need to pay a considerable amount to upgrade it, and/or run a data migration exercise to get to a bigger system.

Hedvig software-defined storage changes the conversation

I'm talking to my customers about sizing for today, or at best the next 6-12 months. This is much easier to predict because we can base it on today's metrics, and even if our guess for the next year is wrong, it can't be wildly wrong. If I'm 10% out on the first year then that's only 10% out: not terrible, and not 50%. We can do this with Hedvig because if you run out of capacity, all you need to do is buy an additional small building block and add it to the cluster. One more server. This isn't an entire appliance, a head upgrade or a data migration; just plug a new server in and add the new storage to the cluster. This makes mistakes or unpredictable growth easy to cater for. It's exactly the type of IT purchasing that leads businesses into cloud services.

An example of how software-defined storage right-sizes your environment

Below is an example showing a business starting out with 500TB and predicting 20% growth each year for 5 years. Actual storage consumption over the years includes two big unpredicted growth spurts caused by business changes. A traditional storage array was sized for the full 5 years of growth from day 1, but during these unpredicted growth events the traditional array has to be expanded, potentially beyond the capacity of the original storage controllers, forcing a full controller upgrade. Notice that the traditional storage array always maintains excess capacity except when demand actually outgrows it; this is because it is trying to predict the future. When the Hedvig Distributed Storage Platform is used instead, the two lines of actual vs. Hedvig capacity are so close they are difficult to distinguish. In this example the Hedvig Distributed Storage Platform is configured to provide 50TB of usable storage in each node.
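To make the shape of this example concrete, here is a minimal Python sketch (the yearly consumption figures are invented for illustration, including the two growth spurts; this is not Hedvig tooling). It compares a traditional array sized up front for the full 5-year prediction against a scale-out cluster that starts at today's 500TB and adds a 50TB node only when actual demand requires it.

```python
# Illustrative sketch: day-1 sizing vs. incremental scale-out.
# All consumption figures below are invented for illustration.

START_TB = 500           # capacity needed today
PREDICTED_GROWTH = 0.20  # predicted yearly growth
YEARS = 5
NODE_TB = 50             # usable storage per scale-out node

# Traditional approach: provision the full 5-year prediction on day 1.
traditional_tb = START_TB * (1 + PREDICTED_GROWTH) ** YEARS
print(f"traditional day-1 capacity: {traditional_tb:.0f}TB")  # 1244TB

# Hypothetical actual demand at the end of each year, with two
# unpredicted growth spurts (years 2 and 4).
actual_tb = [520, 710, 760, 1010, 1080]

# Scale-out approach: start with just enough nodes for today,
# then add one 50TB node at a time whenever demand catches up.
nodes = START_TB // NODE_TB
capacity = nodes * NODE_TB
for year, demand in enumerate(actual_tb, start=1):
    while capacity < demand:
        nodes += 1
        capacity += NODE_TB
    print(f"year {year}: demand {demand}TB, "
          f"{nodes} nodes = {capacity}TB (headroom {capacity - demand}TB)")
```

In this sketch the scale-out cluster never carries more than one node's worth of spare capacity, while the day-1 sizing ties up roughly 700TB of unused headroom in year 1, and would still need expanding if actual growth outran the prediction.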

We're finding this changes the conversation with our customers. IT is a complicated beast at the best of times; don't over-complicate it by trying to predict the future. I bet if you look at your infrastructure from 5 years ago it's unrecognisable from today, and I bet the same will be true in 5 years' time. Society has moved on from being led by soothsayers and clairvoyants, and now it's time we led the IT industry away from the Mystic Meg big-vendor clairvoyants and towards an IT industry based on science and facts. Size your next storage system for what you need today, and make sure it has practically unlimited headroom for the unknown sizing you might need tomorrow, without a cost penalty or pre-provisioning. Change the way you budget and purchase IT by scaling your platform for this year's needs and budget. Worry about next year when it needs to be worried about: next year!

If you're interested in learning more, check out my video channel. I provide a series of technical whiteboard videos, including how to get sizing right... or at least as right as you can.

Chris Kranz

Chris is a VMware Certified Design Expert (VCDX) and senior systems engineer. Chris has in-depth experience in cloud, virtualization, storage and data center technologies gained from his work across numerous practices including web development, systems administration, and consulting. His advisory expertise helps customers better adopt and adapt to the technologies that best fit their business requirements.