An IT industry insider's perspective on information, technology and customer challenges.

February 19, 2014

Storage Disruption: Choose Your Poison

An awkward moment: I’m with a customer group, and someone asks me an innocent question: “so, what’s happening in the storage world these days?”

Gulp! OK, do they really want to know?

I give them a choice between the short answer and the long answer.

The short answer? A lot is changing, maybe too much.

Ready for the longer answer?

Once you understand it, you may be sorry you asked :)

Why You Should Care

Enterprises invest an awful lot in storage technology: a rough cut is about $30B annually in hardware alone. I think that’s a low number — we have to consider software, services and the all-important investment in skills, people and processes to make it all work usefully.

That money invested in storage technology is intended to last a long time: a minimum of 3 years, with longer periods not unusual. So the technology decisions people are making today will be in use through 2017, or longer. The considerable investment in the processes and skills around the technology will last even longer.

I'm empathetic: given the rate of change that’s now in play, making “safe” bets with that sort of time horizon is becoming increasingly difficult. That's an uncomfortable position for most enterprises that have to commit a big pile of money to a particular technology.

I can’t tell you what the world will look like for you in 2017, but I *can* share with you what I think is very likely to change during that time.

One of the things that Pat Gelsinger has always been adamant about — both at EMC and now at VMware — is “always be on the right side of the technology curve”. By that, he means to make sure you’re investing in the new way of doing things vs. perpetuating legacy approaches.

I think that advice makes obvious sense for any technology vendor. It also makes good sense for many customers who want to invest wisely in the future. People are spending an awful lot of money on storage these days.

And when it comes to enterprise IT, the decisions you make today will be around for a very long time.

A Disclaimer Of Sorts

I work at VMware and focus on storage and availability topics. The list of “what’s changing” I'm presenting here is clearly from that perspective. Indeed, I’ve lifted this whole discussion from various internal strategy documents I’ve written.

While the arguments here tend to reinforce the VMware viewpoint, I think they’re also useful viewpoints for any enterprise spending decent money on storage tech.

#1 — A New Foundational Storage — Flash

Flash storage has now been around for many years, and most everyone understands that it can be a cost-effective way to get more performance: better $/IOPS, if you prefer.

What may not be as obvious is that flash is migrating its way closer to the CPU.

The first flash implementations were array-based, and then flash on the PCI-e bus became popular, and now it’s starting to find its way onto motherboards.

Why? Flash is faster when it doesn’t have to traverse a network — or an external bus, for that matter. The $/IOPS equation improves as a result: more performance for less money.
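To make the $/IOPS point concrete, here's a minimal sketch of the calculation. All prices and IOPS figures below are hypothetical, chosen only to illustrate why moving flash closer to the CPU can improve cost efficiency:

```python
# Illustrative $/IOPS comparison. Every price and IOPS number here is
# made up for illustration; real figures vary widely by vendor and year.

def dollars_per_iops(cost_usd, iops):
    """Cost efficiency metric: lower is better."""
    return cost_usd / iops

# Hypothetical tiers, moving flash progressively closer to the CPU.
tiers = {
    "array-based flash": dollars_per_iops(50_000, 100_000),
    "PCIe flash card":   dollars_per_iops(10_000, 250_000),
    "server-side flash": dollars_per_iops(2_000, 100_000),
}

# Print tiers from least to most cost-efficient per IOPS.
for name, cost in sorted(tiers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:18s} ${cost:.3f} per IOPS")
```

The shape of the math, not the numbers, is the point: removing a network or external bus from the data path tends to buy more IOPS per dollar spent.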

Once flash finds its way into the server, it’s a resource — like compute and network — that wants to be managed and virtualized, ideally by the hypervisor.

#2 — A New Implementation Model — Storage As Software

Array vendors will joke that they’ve always been software vendors; it just comes in a really large box :)

All things being equal, many IT shops are expressing the desire for storage to be implemented as software on their choice of familiar server hardware. Maybe not everywhere, but they'd like to have that option.

Their motivations are multiple: simpler environments, perhaps less-expensive hardware, the ability to invest in functionality vs. proprietary hardware, and so on. Once again, the hypervisor seems a logical place to do this.

#3 — A New Technology Integration Model — Convergence

Convergence — the bringing together of disparate technologies — is one of those powerful forces in the technology business, and enterprise IT infrastructure is no exception.

We’ve already seen the popularity of hardware convergence: whether that be things like Vblocks or some of the newer all-in-one appliances.

But there’s more to be done.

If infrastructure functionality is to be delivered as software, shouldn’t we be aspiring to software convergence models? And, given that the hypervisor already abstracts compute — and more recently network and storage — the notion of hypervisor convergence has a certain technical and architectural appeal.

#4 — A New Alignment Model — Application-Centric

Today, we have a rather brute-force model for aligning storage with application requirements.

Perhaps the best way to describe it is “bottom-up”: storage is acquired, carved into pre-defined buckets, and applications consume from those buckets as needed.

A better approach would be to flip the model: as an application gets provisioned, its requirements are driven via policy down to the supporting storage infrastructure. Resources and service levels aren’t carved into buckets until they are actually consumed.

Better yet would be to have these services align precisely with application boundaries vs. being part of a larger, generic container such as a LUN pool or similar.

Finally, the control model should be based around what applications need — and not aligned around traditional infrastructure silos.

It’s not hard to conclude that the hypervisor — as the key interface between application and infrastructure — is in a perfect position to define the boundaries of an application, capture application policy, express it downwards to the infrastructure, and monitor compliance.
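The flipped, policy-driven model above can be sketched in a few lines of code. Everything here (class names, the policy fields, the capacity math) is a hypothetical illustration of the idea, not any vendor's actual API:

```python
# Sketch of "top-down", policy-driven provisioning: the application
# declares what it needs, and storage is allocated only at consumption
# time rather than being carved into buckets up front.
# All names and numbers here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    capacity_gb: int       # how much the application needs
    replicas: int          # availability requirement
    max_latency_ms: float  # performance requirement

class PolicyDrivenStorage:
    """A pool where each app's policy is pushed down to the
    infrastructure at deployment time, not pre-allocated."""

    def __init__(self, raw_capacity_gb):
        self.free_gb = raw_capacity_gb
        self.allocations = {}

    def provision(self, app_name, policy):
        # Capacity is consumed only now, per-application, per-policy.
        needed = policy.capacity_gb * policy.replicas
        if needed > self.free_gb:
            raise RuntimeError(f"insufficient capacity for {app_name}")
        self.free_gb -= needed
        self.allocations[app_name] = policy
        return needed

pool = PolicyDrivenStorage(raw_capacity_gb=10_000)
used = pool.provision("web-tier",
                      StoragePolicy(500, replicas=2, max_latency_ms=5.0))
print(used, pool.free_gb)  # 1000 GB consumed; 9000 GB remain
```

The design point is the direction of the flow: the application's policy drives the infrastructure, instead of the application being fitted into whatever buckets were carved out in advance.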

#5 — A New Consumption Model — Cloud

Yes, there it is again — that "cloud" word.

The advent of public clouds has made IT far easier to consume for many use cases. IT shops of decent scale have realized that there’s no real reason they can’t offer their users similar services: everyone has access to the same technologies and operational models.

When it comes to basic storage, doing it in-house often has a clear cost and control advantage. But when it comes to topics like content distribution, unpredictable application demand and improved availability, external clouds can have a distinct advantage.

For external clouds to be effective as part of an enterprise IT environment, they need to blend seamlessly with other assets. Ideally, everything would be managed consistently, and end users would be unaware of what was running where.

I believe that the hypervisor — as a broker of available services, sitting between applications and infrastructure — can potentially be very relevant in this context, even in a world of many public clouds.

#6 — A New Operational Model — IT as a Service

It’s pretty clear that the IT operational model is moving away from traditional functional silos (or, more euphemistically, cylinders of excellence) towards an integrated service delivery model.

IT becomes a builder/broker of services that the business wants to consume. IT gets reorganized around IT service production and consumption vs. the more familiar project orientation. Processes change, roles change — and tools change as a result.

Ideally, storage becomes just another software service that can be dynamically invoked and adjusted arbitrarily. Storage becomes part of the service deliverable being consumed, and less of a stand-alone discipline.

#7 — New Application Models — Web-Scale and Big Data

Look at a modern web-scale application, and you’re immediately struck by how it uses storage very differently than a traditional application. The data management layers are different, the access patterns are different, data services like availability and replication are handled farther up the stack, and so on.

Turning to big data and the associated Hadoop environments, we once again see a sharp contrast with how traditional applications access data and use storage.

New application requirements mean a new desire for newer approaches to how storage is built, deployed and managed.

#8 — A New Storage Buyer — The Cloud Architect

Historically, storage was the domain of dedicated storage teams: the server guys did their thing, the storage guys did their thing, and so on.

But in today’s virtualized and cloudy world, the lines are blurring: the newer infrastructure architects look at this whole stack very differently.

These people would prefer to think of storage as a software service, invoked as needed, and running on bog-standard industry hardware if at all possible. Everything should be orchestrate-able and programmable from higher-level frameworks. And you shouldn’t need a dedicated storage team just to provision a new application.

This is not an especially new thought. What is new is its recent groundswell. Once people start standing up and operating fully virtualized cloud-like environments, they start thinking about storage very differently.

Hard Decisions All Around

There’s a logical tendency in IT organizations to continue to do what’s worked in the past. In a complex, fast-changing world, it makes sense to stick with technologies and approaches you’re already familiar with.

And making an argument for change in a large organization can be, well, unpopular.

But from my perspective, things are moving very fast in our little storage world — faster than I’ve ever seen them move.

I'm sorry, I can’t tell you what the world will look like in 2017.

But I can tell you which side of the technology curves you want to be on :)

Comments

Chuck, always fascinating reading! I usually finish your blog with the same sentiments as when I walk out of the Spaceship Earth ride at Epcot. Wow! If the future looks like this, everything will be so simple and wonderful! But then again, I had a similar experience way earlier in life at the GM Futurama exhibit! I wanna live underwater and fly my car above the traffic. Trends and buzzwords come and go (FCoE, iSCSI) but it's always the same old routine at every company.

Man walks into the "Storage Guy" cube. "I need some storage." "How much?" "About 20TB." "Wow, all that right away?" "Well, not really, but very soon." "OK, how much performance do you need?" "A whole lot." "No, I mean what's your IOPS/GB?" "Huh?" "OK, what service time do you need?" "Really fast." "So you want Gold service?" "No, that's too expensive! I can only afford Bronze." "So you don't care about performance?" "No, of course I do!" "Let's start this over..."

I haven't yet seen any change in this conversation at any company, even those developing the "3rd Platform" apps. It's just like ordering my FiOS service: I want really fast and cheap, with no caps on how much I can use. Will storage ever be provisioned like this?

I've had over 4000 comments on this blog since the beginning, and I'm seriously considering awarding you the prize for the best comment EVAH!

I think all of us would like to live in the world you describe. All you want, fixed price, no caps, etc. I know I would.

A quick anecdote? Prior to Joe Tucci running EMC, the CEO was Mike Ruettgers. At the end of the 1990s, he was keynoting the idea of "data tone": plug in, and your data was just there. Very aspirational at the time -- and still aspirational!

But I think we're still a long way from that utopian ideal. Storage is like food: once consumed, nobody else can consume it -- unlike bandwidth, compute, etc. So the economic model is more like a supermarket than an airline or phone company. Since storage costs real money, people want choices vs. all-you-can-eat.

We can certainly simplify how choices are presented and provisioned. And, of course, there's always more value for your money. But I don't think we'll get to an all-you-can-eat model anytime soon.

Thanks Chuck!!! As a long-time EMCer I do recall the data tone as well as Mosaic 2000 :) I totally agree we will see the 3rd Platform world, just never sure how quickly the tsunami hits. And we try to mitigate the impact of large storage requests by doing thin provisioning, so we "share" capacity similar to bandwidth. And just like FiOS, if we were all downloading movies together it won't be very fast. I guess my desire is to somehow educate the person requesting the resource. Nobody can ever answer the question of performance or real capacity, so asking them to choose from a service menu seems to result in a blank stare. I hope we can enlighten the world moving forward.