Monthly Archives: March 2010

The journey to the Unified Storage Platform

Posted on March 31, 2010

Who would have thought such a simple question – really more of a “seeking to understand” question, as my VP of Sales Mark Glasgow calls it – would kick off such a slew of e-mails, comments and tweets. The other day I asked the following question on Twitter: why would anyone choose to run something like VMware vSphere on NFS when they could just as easily run it on block storage?

Let’s take a step back. The majority of my storage life has been spent in block-based access protocols. For most of my career it was all about Fibre Channel connectivity. Have you heard the saying “If all you sell are hammers, then everything in the world is a nail”? That’s sort of where I was five or so years ago. FC was my hammer, and all connectivity issues could be resolved with FC!! Then a few years ago iSCSI started to inch its way into the discussion. Xiotech adopted it as another connectivity option, and I got to add another tool to my bag 🙂

Then I asked the above question as a means of self-reflection. Why would someone choose to bypass block-based connectivity in favor of file-based? It just didn’t seem logical to me. I even happened to have that particular tool in my bag, so it wasn’t as if I was trying to compete against it; I just wanted to see what the big deal was. Today, almost all storage vendors (Xiotech included) offer NFS connectivity. Some use gateways, like the EMC Celerra, the NetApp V-Series, and Xiotech; others support it natively, like Pillar Data and NetApp’s FAS product line.

At first blush I thought it had to be because IP is viewed as less expensive and less complicated than native Fibre Channel. But at this point the price argument against FC should be put to bed, thanks in large part to Cisco, Brocade and QLogic driving down the costs. Complexity is also something that should be, and could be, put to bed. Sit in front of a Cisco Ethernet switch and a Cisco MDS switch: the IOS is the same, so the perceived complexity around FC is really no longer an issue. Now, for the smallest of the SMB, maybe cost and perceived complexity are enough to choose NFS. I can see that.

Maybe it’s because these gateway devices offer something the vendor’s block-based architecture can’t support. That starts to make sense. Maybe some feature drives the decision: in some cases thin provisioning, better-integrated snapshots, single-instance storage/data deduplication, or even advanced async replication. Most storage arrays can do these on the surface, but with a gateway device maybe they can do them better, cheaper, faster, etc. For the SME/SMB I can see this as a reason.

Then again, according to some of the people who responded to my blog and Twitter, maybe it’s for performance reasons: some ability to cache front-end writes that makes applications/hypervisors/OSes just run quicker. Others suggested that gateway devices make it “stupid simple” to add more VMs, because you can essentially treat an NFS mount as a file folder and just keep dropping VMDKs (essentially files) into those folders for easier management. That makes sense as well. I can see that; I mean, if you look at a NAS device, it’s essentially a server running an OS with a file system that connects to DAS/JBOD/SBOD/a storage array on the back end, right? It could be viewed as a caching engine.

Then it dawned on me: it’s not really about one being better than the other, it’s about choices. That’s what “Unified Storage” is all about, the ability to add more tools to your bag to help solve your needs. If you look inside a datacenter today, most companies have internally tiered their applications and servers to some extent. Not everything runs on the same hardware and software. You pick the right solution for the right application. Unified Storage is the ability to choose the right storage connectivity for your various applications, hypervisors and operating systems. The line gets really blurry as gateway devices get more advanced and better integrated.

Either way, everyone seems to be moving more and more toward Unified Storage devices. It should be interesting to see what comes out of Storage Networking World in a few weeks!!

So, why would I choose to run, for instance, VMware vSphere on NFS when I could just as easily run it on block storage? Is it that the file system used in a particular company’s NAS solution offers something its block-based solution can’t (e.g., thin provisioning, native deduplication, replication)? Is it used as some sort of caching mechanism that gives better performance? Or is it more fundamental, simply a connectivity choice (IP vs. FC vs. an NFS mount)?

If you are a VMware admin, or a hypervisor admin generally, Xiotech’s “Virtual View” is the final piece of the very large server-virtualization puzzle you’ve been working on. In my role I talk to a lot of server-virtualization admins, and their biggest heartburn is adding capacity, or a LUN, to an existing server cluster. With Xiotech’s Virtual View it’s as easy as 1, 2, 3. Virtual View uses CorteX (a RESTful API) to communicate, in the case of VMware, with the Virtual Center appliance to provision storage to the various servers in the cluster. From a high level, here is how you would do it today.

I like to refer to the picture below as the “Rinse and Repeat” part of the process, particularly the part in the middle that describes going to each node of the server cluster to perform various admin tasks.

VMware Rinse and Repeat process
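To see why this gets old fast, here is a toy simulation of that per-node loop. This is not real VMware tooling; the host names and step descriptions are illustrative only, and the point is simply that the manual work grows with every node in the cluster.

```python
# Toy simulation of the manual "rinse and repeat" workflow described above.
# Host names and step names are illustrative, not real VMware commands.
def rinse_and_repeat(hosts, lun_id):
    """Return the admin steps needed to present one new LUN to an existing cluster."""
    steps = []
    for host in hosts:  # the painful part: every node needs a visit
        steps.append(f"{host}: rescan storage adapters")
        steps.append(f"{host}: verify LUN {lun_id} is visible")
    # The VMFS datastore itself only needs to be created once, from one host.
    steps.append(f"{hosts[0]}: create VMFS datastore on LUN {lun_id}")
    return steps

for step in rinse_and_repeat(["esx01", "esx02", "esx03"], 42):
    print(step)
```

Add a fourth or fifth host and the list just keeps growing, which is exactly the heartburn admins describe.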

With Virtual View, the steps look more like the following. Notice it’s “wizard” driven, with a lot of the steps handled for you, but it also gives you an incredible amount of “knob turning” if you want it.

Virtual View Wizard Steps
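To make the RESTful idea concrete, here is a rough sketch of what a single provisioning call through an API like CorteX might look like. The base URL, endpoint path and payload fields below are invented for illustration; the real CorteX API will differ, so treat this purely as a shape, not a reference.

```python
import json

# Hypothetical sketch of one provision-and-present call through a RESTful
# API such as CorteX. The URL and field names below are made up.
CORTEX_BASE = "https://ise.example.com/cortex"  # assumed endpoint, not real

def build_provision_request(volume_name, size_gb, cluster_hosts):
    """Build the URL and JSON body for a hypothetical volume-provision call."""
    url = f"{CORTEX_BASE}/volumes"
    body = {
        "name": volume_name,
        "sizeGB": size_gb,
        # One request presents the new LUN to every host in the cluster,
        # replacing the per-node rinse-and-repeat work.
        "presentTo": list(cluster_hosts),
    }
    return url, json.dumps(body)

url, payload = build_provision_request("vmfs_datastore_01", 500,
                                       ["esx01", "esx02", "esx03"])
print(url)
print(payload)
```

The design point is the one the wizard makes: the cluster membership travels with the request, so the storage side can do the fan-out instead of the admin.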

And for those who need to see it to believe it, below is a quick YouTube video demonstration.

If you run a VMware-specific cluster of three servers or more (for H.A. purposes, maybe), then you should be very interested in Virtual View!!!

I’ll be adding some Virtual View-specific blog posts over the next few weeks, so make sure you subscribe to my blog on the right-hand side of this window!!

If you have any questions, feel free to leave them in the comments section below.

By the way, if by chance 10,000 users is just not enough for you, don’t worry: add a second ISE and DOUBLE IT to 20,000. Need 30,000? Add a THIRD ISE. That’s 100,000 users in 10 ISE, or 30U of rack space. Sniff, sniff… I love it!!
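The scale-out math here is strictly linear, and worth spelling out. The post gives 10,000 users per ISE and 100,000 users in 10 ISE at 30U, which implies 3U per ISE; that 3U figure is inferred from those numbers rather than stated directly.

```python
# Scale-out arithmetic from the post: 10,000 Exchange users per ISE,
# and 100,000 users in 10 ISE taking 30U, implying 3U per ISE (inferred).
USERS_PER_ISE = 10_000
RACK_UNITS_PER_ISE = 3

def scale(n_ise):
    """Users supported and rack units consumed for n ISE, scaling linearly."""
    return n_ise * USERS_PER_ISE, n_ise * RACK_UNITS_PER_ISE

for n in (1, 2, 3, 10):
    users, ru = scale(n)
    print(f"{n} ISE -> {users:,} users in {ru}U")
```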

By the way – Check out what others are doing:

Pillar Data = 8,500 Exchange users with 24GB of cache!!! I should note that our ISE comes with 1GB. It’s not the size that counts, it’s HOW YOU USE IT!! 🙂