Recently VMware announced version 5.0 of its vSphere virtualization solution with a theme of reducing complexity, enabling automation, and supporting scaling with confidence. As a key component for supporting cloud, virtual and dynamic infrastructure environments, vSphere V5.0 includes many storage-related enhancements and new features.

Memory and storage hierarchy

Before going further into the importance and relevance of the storage-related enhancements in vSphere V5.0, including Storage DRS, let's take a quick step back to look at the big picture. The storage hierarchy (figure 1) extends from memory inside servers out to external shared storage, including virtual and cloud accessible resources. Often, discussions separate physical machine (PM) server memory from data storage, as one is considered a server topic and the other a disk discussion. However, the two are very much interrelated, particularly with virtual machines (VMs), and thus benefit as well as impact each other. Servers need I/O networking to communicate with other servers, with users of information services, and with local, remote, or cloud storage resources.

In figure 1, an example of the storage or memory hierarchy is shown, ranging from fast processor core or L1 (level 1) and L2 (level 2) on-board processor memory to slow, low-cost, high-capacity removable storage. At the top of the pyramid is the fastest, lowest-latency, most expensive memory or storage, which is also the least able to be shared with other processors or servers without overhead. At the bottom of the pyramid is the lowest-cost storage, with the highest capacity, while being portable and sharable.

Figure 1: Memory and storage hierarchy

The importance of main or processor (server) memory and external storage is that virtual machines need memory to exist in when active and a place on disk to reside when not in memory. Keep in mind that a virtual machine is a computer whose components are emulated via data structures stored and accessed via memory. The more VMs there are, the more memory is required; and not just more memory, but faster memory as well. Why are there all these different types of storage? The answer, technology aside, comes down to economics: there is a price to pay for performance. For some applications this can be considered the cost of doing business, or even a business enabler, buying time when time is money, whereas using low-cost storage could have a corresponding impact on performance. In other words, it is important from both cost and performance perspectives to use the right tool for the task at hand to enable smarter, more intelligent, and effective information services delivery.
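The cost/performance tradeoff behind the hierarchy can be sketched with a few lines of Python. The latency and cost figures below are rough, hypothetical orders of magnitude chosen for illustration, not vendor specifications:

```python
# Illustrative sketch of the memory/storage hierarchy tradeoff.
# Latency and cost values are hypothetical orders of magnitude only.
tiers = [
    # (tier name, access latency in seconds, cost in $ per GB)
    ("L1/L2 cache",    1e-9, 1000.0),
    ("DRAM",           1e-7,   20.0),
    ("SSD/flash",      1e-4,    2.0),
    ("15K disk",       5e-3,    0.5),
    ("SATA disk",      1e-2,    0.1),
    ("Tape/removable", 10.0,    0.01),
]

# Moving down the pyramid, latency grows while cost per GB shrinks,
# which is why active VMs live in fast memory and idle data sits lower.
for name, latency, cost in tiers:
    print(f"{name:15s} latency={latency:9.1e}s cost=${cost}/GB")

# Confirm the hierarchy is monotonic: slower tiers are cheaper per GB.
latencies = [t[1] for t in tiers]
costs = [t[2] for t in tiers]
assert latencies == sorted(latencies)
assert costs == sorted(costs, reverse=True)
```

The monotonic relationship is the whole point of tiering: no single tier is best on both axes, so data placement becomes an optimization problem.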

Back to vSphere V5.0 and storage related enhancements

With a theme of supporting scaling with confidence and reducing complexity by leveraging automation, it should not be a surprise that the vSphere V5.0 enhancements address a mix of performance, availability, capacity and efficiency. Supporting scaling with confidence means managing larger vSphere installations without adding management complexity. This includes supporting rapid provisioning of VMs and their associated storage resources. vSphere V5 accomplishes that with a mix of new features and enhancements to previously announced capabilities.

Storage and management features of vSphere V5.0 include:

vSphere Storage Appliance (VSA) is a new feature that enables the internal dedicated direct attached storage (DAS) of physical machines to be presented as shared storage for smaller vSphere environments. Some vendors, such as EMC, have added support for their storage systems, such as the VNXe, to interface with VSA-enabled systems, providing transparent data migration when upgrading from dedicated DAS to shared storage.

VMware File System V5.0 (VMFS-5) builds on previous versions by supporting scale with larger devices (64TB volumes), a unified block size, and improved sub-block handling (for example, 8KB sub-blocks and 1KB small-file support).

The VASA storage awareness APIs facilitate viewing how storage systems are configured, including attributes of LUNs, volumes, data stores or file systems such as RAID or protection level, performance and space optimization characteristics (thick or thin provisioned), along with replication and other data protection capabilities. This capability enables vCenter management, along with other tools such as Storage Distributed Resource Scheduler (SDRS), to have timely insight into resource configuration and capabilities.

Storage I/O Control, an existing feature that provides workload balancing and fairness of access, was enhanced as part of vSphere V5.0 by extending support from iSCSI and Fibre Channel block storage to NAS NFS-based storage on a cluster-wide basis. Also enhanced was iSCSI support, along with new Fibre Channel over Ethernet (FCoE) software initiators complementing existing FCoE hardware adapter support.

One of the objectives of Storage DRS (SDRS) is to reduce the time and complexity of provisioning virtual servers, enabling more to be done by existing staff. Complementing SDRS is storage vMotion. With vSphere V5.0, storage vMotion has been enhanced to perform VM migration faster and with fewer resources, laying the groundwork to support SDRS. Storage vMotion is similar to traditional vMotion, which enables a currently active and executing VM to be moved from one PM to another without disrupting running applications. In the case of storage vMotion, an active VM can be moved from one data store to another while in use, supporting storage tiering, high availability (HA) and business continuance (BC) in addition to load balancing and other maintenance functions.

A new vCenter object supporting SDRS is the data store cluster. SDRS data store clusters combine different storage resources (block or NFS), including various tiers (figure 1), by abstracting them into a single unit of consumption, enabling rapid and smart placement. vCenter leverages the new storage awareness APIs to gain visibility into storage system capabilities, combined with SDRS scheduled analysis of resource usage, to make informed recommendations. In figure 2, a 12 TB data store cluster is shown that is made up of four 3 TB data stores (storage volumes, LUNs or NFS file systems). SDRS removes management complexity by providing initial placement and ongoing storage load balancing recommendations based on performance (latency) and space (capacity). During the provisioning process, storage is allocated from a data store in the SDRS cluster for VMs or virtual disks (VMDKs). Once provisioned, SDRS makes ongoing recommendations for manual intervention or acts on an automated basis.
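The initial placement decision can be sketched roughly as follows. This is a simplified illustration, not the vSphere API; the data store names, capacities and latency figures are hypothetical:

```python
# Hypothetical sketch of SDRS-style initial placement: among data
# stores that stay under the space and latency thresholds, pick the
# one with the most free capacity. All figures are illustrative.

datastores = [
    # name, capacity (GB), used (GB), observed average latency (ms)
    {"name": "ds1", "capacity": 3072, "used": 2900, "latency": 9.0},
    {"name": "ds2", "capacity": 3072, "used": 1200, "latency": 22.0},
    {"name": "ds3", "capacity": 3072, "used": 1500, "latency": 7.5},
    {"name": "ds4", "capacity": 3072, "used": 2000, "latency": 6.0},
]

SPACE_THRESHOLD = 0.80    # reject data stores that would exceed 80% full
LATENCY_THRESHOLD = 15.0  # reject data stores above 15 ms I/O latency

def place_vmdk(size_gb, stores):
    """Return the best data store for a new virtual disk, or None."""
    candidates = [
        ds for ds in stores
        if (ds["used"] + size_gb) / ds["capacity"] <= SPACE_THRESHOLD
        and ds["latency"] <= LATENCY_THRESHOLD
    ]
    if not candidates:
        return None
    # Prefer the candidate with the most free capacity remaining.
    return max(candidates, key=lambda ds: ds["capacity"] - ds["used"])

best = place_vmdk(100, datastores)  # ds1 too full, ds2 too slow
```

With these numbers, ds1 is excluded on space, ds2 on latency, and ds3 wins over ds4 on free capacity.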

Recommendations are made by SDRS on an ongoing basis using user-adjustable space and performance threshold settings. The recommendations are based on evaluations that by default occur every eight hours, though the interval can be adjusted to meet specific needs. Similarly, the space utilization and I/O latency performance thresholds can be tailored to meet specific requirements. VMs and VMDKs are moved between the different data stores in the SDRS cluster based on these policies and threshold recommendations.
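The periodic evaluation can be sketched in the same spirit: flag any data store that has drifted past its space or latency threshold so its VMDKs become candidates for a storage vMotion recommendation. The 80% and 15 ms figures below stand in for the adjustable thresholds, and the data store metrics are illustrative:

```python
# Hypothetical sketch of an SDRS-style periodic evaluation pass.
# Thresholds and data store metrics are illustrative placeholders.

SPACE_THRESHOLD = 0.80    # space utilization threshold (user adjustable)
LATENCY_THRESHOLD = 15.0  # I/O latency threshold in ms (user adjustable)

datastores = [
    {"name": "ds1", "capacity": 3072, "used": 2800, "latency": 9.0},
    {"name": "ds2", "capacity": 3072, "used": 1200, "latency": 22.0},
    {"name": "ds3", "capacity": 3072, "used": 1500, "latency": 7.5},
]

def evaluate(stores):
    """Return (data store name, reason) pairs that need rebalancing."""
    findings = []
    for ds in stores:
        util = ds["used"] / ds["capacity"]
        if util > SPACE_THRESHOLD:
            findings.append((ds["name"], f"space {util:.0%} over threshold"))
        if ds["latency"] > LATENCY_THRESHOLD:
            findings.append((ds["name"], "latency over threshold"))
    return findings

recommendations = evaluate(datastores)  # ds1 on space, ds2 on latency
```

In an automated configuration the findings would drive storage vMotion moves directly; in manual mode they would surface as recommendations for an administrator to approve.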

Another feature of SDRS is affinity rules, which enable control over where virtual disks are placed, including on the same or different data stores within a vCenter resource cluster. By default, a VM's virtual disks are kept together (VMDK affinity) on the same data store. Additional rules include VMDK anti-affinity, where the virtual disks of a VM with multiple virtual disks are placed on different data stores, and VM anti-affinity, which specifies that two VMs and their associated virtual disks are placed on different data stores for resiliency. Another option is maintenance mode, where SDRS automatically moves all VMs and virtual disks from selected data stores to the remaining data stores in the resource cluster. Maintenance mode can be used to facilitate hardware upgrades or re-tiering of storage devices to optimize performance or capacity.
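The three affinity rules can be thought of as simple constraints on a placement map. The rule checks below are an illustrative sketch, with hypothetical function names and data structures rather than anything from the vSphere API:

```python
# Illustrative sketch of SDRS affinity rules as placement constraints.
# placements maps a VMDK, keyed (vm_name, disk_index), to a data store.

def check_vmdk_affinity(placements, vm):
    """VMDK affinity: all of a VM's disks share one data store."""
    stores = {ds for (v, _), ds in placements.items() if v == vm}
    return len(stores) <= 1

def check_vmdk_anti_affinity(placements, vm):
    """VMDK anti-affinity: each of a VM's disks on a different store."""
    stores = [ds for (v, _), ds in placements.items() if v == vm]
    return len(stores) == len(set(stores))

def check_vm_anti_affinity(placements, vm_a, vm_b):
    """VM anti-affinity: two VMs' disks never share a data store."""
    stores_a = {ds for (v, _), ds in placements.items() if v == vm_a}
    stores_b = {ds for (v, _), ds in placements.items() if v == vm_b}
    return not (stores_a & stores_b)

placements = {
    ("vm1", 0): "ds1", ("vm1", 1): "ds1",  # vm1 disks kept together
    ("vm2", 0): "ds2", ("vm2", 1): "ds3",  # vm2 disks spread out
}

assert check_vmdk_affinity(placements, "vm1")       # default behavior
assert check_vmdk_anti_affinity(placements, "vm2")  # disks separated
assert check_vm_anti_affinity(placements, "vm1", "vm2")
```

An SDRS-style scheduler would apply checks like these before accepting any placement or migration recommendation.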

Profile driven storage and automation

Another enhancement in vSphere version 5.0 is profile-driven storage management, which matches the service level agreements (SLAs) and service level objectives (SLOs) of VMs with the appropriate storage resources. The benefit of profile-driven storage is reducing the challenge and complexity associated with VM and storage provisioning while meeting SLA and SLO requirements. Various tiers of storage resources (refer back to figure 1), associated with templates or profiles describing service requirements, are allocated to VMs. Profiles are used during provisioning, storage vMotion operations and cloning so that compliant data stores in the SDRS cluster are used; compliant data stores are those that meet the specific SLA and SLO needs of VMs. SDRS, combined with profile-driven storage, storage vMotion and the vSphere storage awareness APIs, helps to automate, reduce complexity and increase the scaling with confidence of virtual server environments.
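Profile-driven placement amounts to filtering data stores by the capabilities a profile requires. The sketch below assumes hypothetical capability names (the kind of attributes that VASA surfaces in practice) and illustrative profile definitions:

```python
# Hypothetical sketch of profile-driven storage: a profile lists
# required capabilities, and a compliant data store advertises a
# superset of them. Names and capabilities are illustrative only.

datastores = {
    "gold-ds":   {"replicated", "raid10", "thin", "ssd"},
    "silver-ds": {"replicated", "raid5", "thin"},
    "bronze-ds": {"raid5"},
}

profiles = {
    "gold":   {"replicated", "ssd"},  # e.g. low-latency, protected tier
    "bronze": {"raid5"},              # e.g. capacity tier
}

def compliant_datastores(profile, stores):
    """Return data stores advertising every capability in the profile."""
    required = profiles[profile]
    return sorted(name for name, caps in stores.items()
                  if required <= caps)  # subset test on capability sets

assert compliant_datastores("gold", datastores) == ["gold-ds"]
assert compliant_datastores("bronze", datastores) == ["bronze-ds", "silver-ds"]
```

Restricting provisioning, cloning and storage vMotion targets to the compliant set is what keeps a VM on storage that meets its SLA and SLO as it moves.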

Comments and recommendations

With vSphere version 5.0's new features and enhancements, VMware is enabling virtual server environments to scale while reducing the associated management complexity. Combining policy-driven storage management with awareness of the underlying tiered storage resources, vSphere SDRS can help make more effective use of important IT resources. For those who are comfortable with automation, vSphere and associated tools can leverage policy-based management and resource optimization. Another option is to use the recommendations of vCenter and associated tools to make more informed decisions about vSphere environments. The net result is that vSphere V5.0 enables scaling with confidence while reducing complexity, which translates into taking cost out of managing server and storage environments.


Greg Schulz is Founder and Sr. Analyst of independent IT advisory and consultancy firm Server and StorageIO (StorageIO). He has worked in IT at an electrical utility and at financial services and transportation firms, in roles ranging from business applications development to systems management and architecture planning. Greg has also worked for various vendors, in addition to an analyst firm, before forming StorageIO. Mr. Schulz is the author of several books (Cloud and Virtual Data Storage Networking – CRC Press, The Green and Virtual Data Center – CRC Press, Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures – Elsevier), is active in social media with his engaging approach, and is a top-ranked blogger. He has a degree in computer science and a master's degree in software engineering. Learn more at www.storageio.com