Cynic

Could VMAX3 possibly be the last incarnation of the Symmetrix that ships?

As an Enterprise Array, it feels done; there is little left to do. Arguably this has been the case for some time, but the missing feature for VMAX had always been ease of use and simplicity. The little foibles such as the Rule of 17, Hypers, Metas and BCVs vs Clones all added to the mystique and complexity, and led many storage admins to believe that we were some kind of special priesthood.

The latest version of VMAX and the rebranding of Enginuity as HyperMax remove much of this, and it finally feels like a modern array…as easy to configure and run as anything from their competitors.

And with this ease of use, it feels like the VMAX is done as an Enterprise Array…there is little more to add. As a block array, it is feature-complete.

The new NAS functionality will need building upon but apart from this…it’s done.

So this leaves EMC with VNX and VMAX; two products that are very close in features and functionality; one that is cheap and one that is still expensive. VMAX’s only key differentiator is cost…the Stella Artois of the storage world.

I can’t help but feel that VNX should have a relatively short future, but perhaps EMC will continue to gouge the market with the eye-watering costs that VMAX still attracts. A few years ago, I thought the Clariion team might win out over the Symm team; now I tend to believe that eventually the Symm will win out.

But as it stands, VMAX3 is the best enterprise array that EMC have shipped but arguably it should be the last enterprise array that they ship. The next VMAX version should just be software running on either your hardware or perhaps a common commodity platform that EMC ship with the option of running the storage personality of choice. And at that point; it will become increasingly hard to justify the extra costs that the ‘Enterprise’ array attracts.

This model is radically different to the way they sell today…so moving them into a group with the BURA folks makes sense; those folks are used to selling software and understand that it is a different model…well, some of them do.

EMC continue to try to re-shape themselves and are desperately trying to change their image; I can see a lot of pain for them over the next few years especially as they move out of the Tucci era.

Could they fail?

Absolutely, but we live in a world where it is conceivable that any one of the big IT vendors could fail in the next five years. I don’t think I can remember a time when they all looked so vulnerable, but as their traditional products move to a state of ‘doneness’, they are all thrashing around looking for the next thing.

And hopefully they won’t get away with simply rebranding the old as new…but they will continue to try.

Yes, I’ve not been writing much recently; I am trying to work out whether I am suffering from announcement overload or just general boredom with the storage industry in general.

Hardly a day passes without an announcement from some vendor or another; every one is revolutionary and a massive step forward for the industry, or so they keep telling me. Innovation appears to be something that happens every day; we seem to be living in a golden age of invention.

Yet many conversations with peer end-users generally end up with us feeling rather confused about what innovation is actually happening.

We see an increasingly large number of vendors presenting to us an architecture that looks pretty much identical to the one that we know and ‘love’ from NetApp, at a price point that is not that dissimilar to what we are paying NetApp and their kin.

All-Flash-Arrays are pitched with monotonous regularity at the cost of disk based on dedupe and compression ratios that are oft best-case and seem to assume that you are running many thousands of VDI users.

The focus seems to be on VMware and virtualisation as a workload as opposed to the applications and the data. Please note that VMware is not a workload in the same way that Windows is not a workload.

Don’t get me wrong; there’s some good incremental stuff happening; I’ve seen a general improvement in code quality from some vendors after a really poor couple of years. There still needs to be work done in that area though.

But innovation; there’s not so much that we’re seeing from the traditional and new boys on the block.

Storage Marketing is one of the maddest and craziest parts of the technology industry; so many claims that don’t necessarily stand up to scrutiny, and pretty much all of them need to be caveated with the words

‘It Depends….’

And actually it is very important to understand that it really does depend; for example, when your flash vendor claims that they can supply flash at the price of spinning rust; they may well be making assumptions about deduplication or compression and your data.

If you are in a highly virtualised environment, you might well get a huge amount of deduplication from the operating systems…actually, even if you are not and you utilise SAN-boot, it’ll dedupe nicely. But what if you store your operating system on local disk?

What if you are already utilising compression in your database? What if your data is encrypted or pre-compressed media?
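The effect of those assumptions is easy to sketch. Here is a toy calculation (every price and ratio below is invented purely for illustration, not from any vendor) showing how quickly ‘flash at the price of disk’ collapses once the data-reduction ratio drops:

```python
# Illustrative only: how the effective price of flash depends on the
# dedupe/compression ratio the vendor assumes. All figures invented.
def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per TB of *logical* data, given a data-reduction ratio."""
    return raw_cost_per_tb / reduction_ratio

FLASH_RAW = 2000.0  # hypothetical cost per raw TB of flash
DISK_RAW = 400.0    # hypothetical cost per raw TB of spinning rust

workloads = [
    ("VDI farm (vendor best case)", 6.0),
    ("SAN-boot OS images", 4.0),
    ("pre-compressed database", 1.2),
    ("encrypted / compressed media", 1.0),
]

for label, ratio in workloads:
    cost = effective_cost_per_tb(FLASH_RAW, ratio)
    verdict = "beats disk" if cost < DISK_RAW else "dearer than disk"
    print(f"{label}: {cost:,.0f} per logical TB ({verdict})")
```

At the best-case 6:1 ratio the claim holds; at the 1:1 you get from encrypted or pre-compressed data, you are simply paying the raw flash price.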

Of course this is obvious but I still find myself explaining this at times to irate sales who seem to assume that their marketing is always true…and not ‘It Depends’.

The problem is that many of us don’t have time to carry out proper engineering tests, so I find it best to be as pessimistic as possible…I’d rather be pleasantly surprised than have a horrible shock. This means at times I am quite horrible to vendors, but it saves me being really nasty later.

There appears to be a missing announcement at EMC World; I think the world and their dog were expecting a VMAX announcement. I certainly was; we’ve not had a big VMAX announcement at EMC World for a couple of years.

So what gives?

Now, have no doubt….there is a new VMAX coming and EMC’s high-end array sales show that the market is expecting it and might well be holding off on new purchases and refreshes. Question is, is it late or is it something else entirely?

I’m tending towards the latter; I think EMC are trying their hardest to transition to a new culture and product-set; they can’t do this if they have a VMAX announcement as a distraction.

So I’m guessing we’ll see a special event later in the year…won’t that be great! Looking forward to it!!

Okay, a storage vendor posts another stupid guarantee; it’s like déjà vu all over again.

And EMC, if you are so confident about your claims…make the guarantee unlimited, not time-bound, so that it still holds when there are enough arrays in the field to provide a decent sample base of the strange corner cases that cause problems.

I don’t always agree with Trevor Pott, but this piece on ServerSAN, VSAN and storage acceleration is spot on. The question of VSAN running in the kernel and the advantages that brings to performance (and indeed, I’ve heard similar comments about reliability, support and the like over competing products) is one which has left me scratching my head and feeling very irritated.

If running VSAN in the kernel is so much better, and it almost feels that it should be, it raises another question: perhaps I would be better off running all my workloads on bare metal, or as close to it as I can.

Or perhaps VMware need to be allowing a lot more access to the kernel or a pluggable architecture that allows various infrastructure services to run at that level. There are a number of vendors that would welcome that move and it might actually hasten the adoption of VMware yet further or at least take out some of the more entrenched resistance around it.

I do hope more competition in the virtualisation space will bring more openness to the VMware hypervisor stack.

And it does seem that we are moving towards data-centres which host competing virtualisation technologies; so it would be good if, at a certain level, these became more infrastructure-agnostic. From a purely selfish point of view, it would be good to have the same technology to present storage space to VMware, Hyper-V, KVM and anything else.

I would like to easily share data between systems that run on different technologies and hypervisors; if I use VSAN, I can’t do this without putting in some other technology on top.

Perhaps VMware don’t really want me to have more than one hypervisor in my data-centre; the same way that EMC would prefer that all my storage was from them…but they have begun to learn to live with reality and perhaps they need to encourage VMware to live in the real world as well. I certainly have use-cases that utilise bare-metal for some specific tasks but that data does find its way into virtualised environments.

Speedy Storage

There are many products that promise to speed-up your centralised storage and they work very well, especially in simple use-cases. Trevor calls this Centralised Storage Acceleration (CSA); some are software products, some come with hardware devices and some are mixture of both.

They can have a significant impact on the performance of your workloads; databases especially can benefit from them (though most databases benefit more from decent DBAs and developers); they are a quick fix for many performance issues and remove that bottleneck which is spinning rust.

But as soon as you start to add complexity; clustering, availability and moving beyond a basic write-cache functionality…they stop being a quick-fix and become yet another system to go wrong and manage.

Fairly soon; that CSA becomes something a lot closer to a ServerSAN and you are sticking that in front of your expensive SAN infrastructure.

The one place that a CSA becomes interesting is as Cloud Storage Acceleration; a small amount of flash storage on server but with the bulk of data sitting in a cloud of some sort.

So what is going on?

It is unusual to have such a number of competing deployment models for infrastructure; in storage, we have an increasing number of deployment models.

Centralised Storage – the traditional NAS and SAN devices

Direct Attached Storage – Local disk with the application layer doing all the replication and other data management services

Distributed Storage – Server-SAN; think VSAN and competitors

And we can layer an acceleration infrastructure on top of those; this acceleration infrastructure could be local to the server or perhaps an appliance sitting in the ‘network’.

All of these have use-cases, and the answer may well be that to run a ‘large’ infrastructure, you need a mixture of them all.

Storage was supposed to get simple and we were supposed to focus on the data and providing data services. I think people forgot that just calling something a service didn’t make it simple and the problems go away.

‘*sigh* Another change to a licensing model and you can bet it’s not going to work out any cheaper for me’ was the first thought that flickered through my mind during a presentation about GPFS 4.1 at the GPFS UG meeting in London (if you are a GPFS user in the UK, you should attend this next time…probably the best UG meeting I’ve been at for a long time).

This started up another train of thought; in this new world of Software Defined Storage, how should the software be licensed? And how should the value be reflected?

Should we be moving to a capacity based model?

Should I get charged per terabyte of storage being ‘managed’?

Or perhaps per server that has this software defined storage presented to it?

Perhaps per socket? Per core?

But this might not work well if I’m running at hyperscale?

And if I fully embrace a programmatic provisioning model that dynamically changes the storage configuration…does any model make sense apart from some kind of flat-fee, all-you-can-eat model?
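To see why no single answer feels right, a back-of-envelope comparison helps; every rate and fee below is made up purely for illustration, not taken from any vendor’s price book:

```python
# Toy comparison of software-defined storage licensing models.
# All rates are invented for illustration only.
def per_tb(managed_tb: float, rate: float = 150.0) -> float:
    """Charge per terabyte of storage being 'managed'."""
    return managed_tb * rate

def per_socket(sockets: int, rate: float = 2500.0) -> float:
    """Charge per CPU socket running the storage software."""
    return sockets * rate

def flat_fee(fee: float = 250_000.0) -> float:
    """All-you-can-eat: one fee regardless of scale."""
    return fee

# Small shop, mid-size estate, hyperscale-ish deployment
for tb, sockets in [(100, 8), (1_000, 64), (10_000, 512)]:
    print(f"{tb:>6} TB / {sockets:>3} sockets: "
          f"per-TB {per_tb(tb):>11,.0f}  "
          f"per-socket {per_socket(sockets):>11,.0f}  "
          f"flat {flat_fee():>9,.0f}")
```

The point is that each model wins somewhere: capacity-based pricing is kindest to the small shop, the flat fee only makes sense at hyperscale, and a dynamically reconfiguring environment makes anything metered painful to even count.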

Chatting to a few people; it seems that no-one really has any idea what the licensing model should look like. Funnily enough; it is this sort of thing which could really de-rail ServerSAN and Software Defined Storage; it’s not going to be a technical challenge but if the licensing model gets too complex, hard to manage and generally too costly, it is going to fail.

Of course, inevitably someone is going to pop up and mention Open Source…and I will simply point out that Red Hat make quite a lot of money out of Open Source; you pay for support based on some kind of model. Cost of acquisition is just a part of IT infrastructure spend.

I think we’ve all been there, and at times the feature lists seem to defy the brightest salesbod to explain. Although, the sales audience does seem to be rather too interested in what the product they are expected to sell actually does and what value it might bring…

Howard has a piece titled ‘Separating Storage Startups From Upstarts’; it actually feels more like a piece on how to be a savvy technology buyer. As someone who on occasion buys a bit of technology, or at least influences buying decisions…here are some of my thoughts.

List price from any vendor is completely meaningless; most vendors only seem to have list prices to comply with various corporate governance regimes. And of course having a list price means that the procurement department can feel special when they’ve negotiated the price down to some stupidly low percentage of the original quote; in a world where 50%+ discounts are common, list is nonsense.

What is true is that a start-up’s list price will often be lower than the traditional vendor’s; it has to be, even to start a conversation.

In my experience, the biggest mistake an end-user can make is not keeping every bid competitive; dual-supplier arrangements are good but can often lead to complexity in an environment. It helps if you can split your infrastructure into domains: say, for example, you buy all your block storage from one vendor and your file storage from another…or perhaps you have a tiering strategy that allows you to do something similar.

But loyalty can bring rewards as well; partnership is thrown around but learning to work with your vendor is important. Knowing which buttons to press and learning how a vendor organisation works is often key to getting the most out of infrastructure procurement.

Howard’s assertion about a three-year life of an array in a data centre? This doesn’t ring true for me and many of my peers; four to five years seems to be the minimum life in general. If it were three years, we would generally be looking at an actual two-year useful life: six months to get on, two years running and six months to get off. Many organisations are struggling with four years, and as arrays get bigger, this is getting longer.

And the pain of going through a technology refresh every three years; well, we’d be living in a constant sea of moving data whilst trying to do new things as well. So my advice: plan for a five-year refresh cycle…
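The refresh-cycle arithmetic is worth making explicit. A small sketch, assuming six months each for migrating onto and off an array:

```python
# How much of an array's life is actually useful, once migration
# on and off is accounted for? Six-month figures per the text.
def useful_life_years(total_years: float,
                      on_months: float = 6.0,
                      off_months: float = 6.0) -> float:
    """Years of real service after deducting migration windows."""
    return total_years - (on_months + off_months) / 12

for total in (3, 4, 5):
    useful = useful_life_years(total)
    print(f"{total}-year cycle: {useful:.1f} useful years "
          f"({useful / total:.0%} of the array's life)")
```

On a three-year cycle only two-thirds of the array’s life is doing useful work; stretching to five years pushes that to four-fifths, which is a large part of why the longer cycle appeals.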

My advice to any technology buyer is to pay close attention to the ‘UpStarts’ but also pay attention to your existing relationships; know what you want and what you need. Make sure that any vendor or potential vendor can do what they say; understand what they can do when there are problems. Test their commitment and flexibility.

Look very carefully at any new offering from anyone; is it a product or a feature? If it is a feature; is it one that is going to change your world substantially? Violin arguably fell into the trap of being a feature; extreme performance…it’s something that few really need.

And when dealing with a new company; understand where their sales-culture has come from…if you had a bad experience with their previous employer, there’s a fair chance that you might have a similar experience again.