Blocks and Files

Three blog posts about VMware's VSAN had me thinking furiously. Where is VMware going with this, and where could or should it go?

The first blog was Storagebod on VSANity, in which he says VSAN, the ESXi hypervisor's aggregation of servers' direct-attached storage (DAS) into a virtual SAN, was nice up to a point. He asks: "Why limit [it] to only 35 disks per server?"

Overall he sums it up thus: "VSAN is certainly welcome in the market; it certainly validates the approaches being taken by a number of other companies… I just wish it were more flexible and open."

So VSAN, along with HP's P4000 and other virtual storage appliances (VSAs), and converged server/storage hardware and software products from Nutanix, Pivot3, Scale Computing and SimpliVity, is a valid approach that reproduces a subset of physical SAN array features using servers' DAS.

The world and its storage dog typically thinks of VSANs as suitable for applications needing SANs but not full-blown SAN arrays.

At least, that was the view until VMware stepped into the market with its "new high performance storage tier optimised for virtual environments", capable of scaling to 32 nodes, 4.5 petabytes and two million IOPS.

Some have taken the stance that VSANs are opposed to physical SANs - not that VMware and parent EMC take that view. In his latest blog, VMware evangelist and blogger Chuck Hollis says that VSANs and PSANs work together. Let the SAN, the networked array, with its mature data management functions - "snaps, remote replication, deduplication, encryption, tiering … density, efficiency, serviceability … compliance auditing" - be used for the heavy critical data lifting, while the VSAN is used for fast access and less critical data.

Chuck provides VDI as an example: “A good example is VDI. Users care most about their data store — their personal files — so that may go on a shared NAS device with a rich set of data services. The VDI images themselves, scratch spaces, swap, etc. — all go on VSAN.”

Oracle production databases go on the PSAN, while “test and development. Decision support and OLAP queries. Scratch and temp spaces. FGA used for flash-back queries. And more”, in Chuck's words, can go on the VSAN.

So VSAN is seen as a fast access storage tier, with the PSAN used as the critical data storage tier. Chuck says admins can use VASA functionality and policies to place data on either the VSAN or the PSAN. Several SAN arrays can automatically place data within their own internal tiers.
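The policy-driven placement Chuck describes can be sketched in a few lines. This is a hypothetical illustration of the idea, not VMware's VASA API; the class and field names are invented for the example.

```python
# Hypothetical sketch of policy-driven placement between a fast VSAN tier
# and a feature-rich PSAN tier. Names are illustrative, not VMware's API.
from dataclasses import dataclass

@dataclass
class Policy:
    critical: bool           # needs snaps, replication, compliance features
    latency_sensitive: bool  # wants the fast server-local tier

def place(policy: Policy) -> str:
    """Pick a storage tier the way an admin-defined policy might."""
    if policy.critical:
        return "PSAN"   # mature data services win for critical data
    if policy.latency_sensitive:
        return "VSAN"   # server-local disks and flash for fast access
    return "PSAN"       # default to the managed, networked tier

# An Oracle production database lands on the PSAN; VDI swap on the VSAN.
print(place(Policy(critical=True, latency_sensitive=True)))
print(place(Policy(critical=False, latency_sensitive=True)))
```

The point of the sketch: the "critical" flag trumps latency, which matches Chuck's split between heavy critical data lifting and fast, less critical data.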

The third blog post argues that the whole focus of VSAN was to move customers away from needing shared physical storage. In fact this has been one of the marketing messages since the technology was first conceived. VSAN removes the issues of dealing with those pesky storage teams and those expensive and complex storage arrays, and takes us to a new world of simplification and ease of use where all of our resources live happily in the server.

Even so, its author agrees wholeheartedly with Chuck that external SAN arrays have a mature data management feature set which is not available with VSAN.

Of course, many of these features are not available in VSAN 1.0. The initial release doesn't even support vSphere features such as Fault Tolerance, Storage DRS, SIOC or Distributed Power Management.

Storage array vendors don't support VVOLs - "VMware's attempt to encapsulate the virtual machine into a logical object on disk that can then have policies (performance, resiliency etc) applied to it."

What I’m seeing here is that VSAN, as an abstraction layer, may be mis-focused.

Let’s agree that virtual server admin staff would like to manage virtual machine storage in VMware terms, with Tintri possibly providing the best example of VMware-aware storage.

Yet there is something more fundamental going on, to my way of thinking.

If server DAS is going to be aggregated into a SAN and be a tier of storage that doesn't suffer from networked-array access latency, then well and good, but it isn't high capacity and it lacks data management features. In an ideal world, applications would talk to a storage abstraction layer, which would direct their I/O requests to the appropriate storage, be it aggregated server-attached (VSAN) or network-attached (PSAN or filer). Data placement between these two top-level tiers, and within them, would be managed automatically and dynamically.

There should be an abstraction layer which provides access to, and management of, a single storage space, virtualised from, for example, VSAN, PSAN and filers, with virtual server storage constructs used as one management style. That would mean supporting VMware, KVM and Hyper-V constructs, alongside the traditional block and file access constructs, plus the oncoming object storage ideas.
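The single-storage-space idea can be sketched as a thin layer that registers heterogeneous back ends and routes volume provisioning by tier. Everything here is invented for illustration - there is no such shipping product or API - but it shows the shape of the controller being argued for.

```python
# Hypothetical single-namespace abstraction over heterogeneous back ends
# (a VSAN tier and a PSAN tier). All names are invented for illustration.

class Backend:
    """One storage resource: aggregated server DAS, an array, a filer."""
    def __init__(self, name: str, capacity_gb: int):
        self.name, self.capacity_gb, self.used_gb = name, capacity_gb, 0

    def can_fit(self, size_gb: int) -> bool:
        return self.used_gb + size_gb <= self.capacity_gb

class StorageSpace:
    """One virtual pool; volumes are routed to a back end by a tier hint."""
    def __init__(self):
        self.backends = {}   # tier name -> Backend
        self.volumes = {}    # volume name -> Backend holding it

    def register(self, tier: str, backend: Backend) -> None:
        self.backends[tier] = backend

    def provision(self, name: str, size_gb: int, tier: str) -> str:
        be = self.backends[tier]
        if not be.can_fit(size_gb):
            raise RuntimeError(f"no room on {be.name}")
        be.used_gb += size_gb
        self.volumes[name] = be
        return be.name   # which physical resource actually holds the volume

space = StorageSpace()
space.register("fast", Backend("VSAN", 500))      # aggregated server DAS
space.register("managed", Backend("PSAN", 4000))  # networked array
print(space.provision("vdi-swap", 100, "fast"))
print(space.provision("oracle-prod", 1000, "managed"))
```

A real controller would also migrate volumes between tiers as policies or load change; the sketch only shows the single namespace over both.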

This, I think, is the new storage Holy Grail. Server-based and server-aggregated storage have to play nicely with networked storage. We need a new form of storage controller that covers both areas: VSAN and PSAN.

This is probably best provided by a storage array vendor that is server hypervisor-agnostic, but not necessarily so.

Think perhaps, of EMC’s ViPR implemented as a server-resident system application with a VSAN as one of its storage resources along with networked storage arrays. Or, alternatively, IBM’s SVC implemented as software and using VSAN as one of its storage resources. NetApp’s Clustered ONTAP could be literally wrenched away from the FAS arrays, run atop VSAN in servers and use the FAS array hardware as a networked storage resource.

Of course this could be the ravings of a pot-smoking hack (que? - Vulture Central's backroom gremlins), but it seems to me an urgent need that the VSAN virtual silo should be integrated with physical networked arrays, and the whole managed as an entity by a software abstraction layer. ®