Data store clusters do for storage what VMware Distributed Resource Scheduler (DRS) clusters do for CPU and memory resources. Namely, they allow admins to take an array of data stores that potentially reside on different storage arrays and create a single data store cluster object.

When virtualisation admins create a new virtual machine (VM) on vSphere 5, they no longer have to worry about which data store it should reside on, just as they don’t worry about which ESX host a VM executes on in an HA/DRS cluster.

Users often create data store clusters based on class (gold, silver, bronze), on performance attributes (FC/SSD, FC/SAS, iSCSI/SAS, NFS/SATA) or on other attributes, such as support for storage vendor snapshots or replication.

It mirrors cloud computing where the underlying plumbing (storage, network and servers) is
abstracted to create a commodity-based model of data centre resources.

VMware vSphere 5 data store clusters are a radical departure from individual data stores, but
data store clusters align nicely to the trend of storage tiering -- where storage arrays are no
longer managed as stand-alone units but as an array of arrays, as it were.

Problems can occur when both the storage array (through its own automated tiering) and Storage DRS move data volumes around, so it makes more sense to let one or the other handle data movement, but not both. Another alternative is to use Storage DRS for initial placement only and leave its automated migrations switched off.

Here are some considerations to keep in mind when working with data store clusters:

• Data stores that make up a data store cluster can reside on different storage arrays.
• Data stores can use different storage protocols (FC, iSCSI, NFS), but users are advised not to mix protocols in a single data store cluster.
• It is possible to mix VMFS-3 and VMFS-5 data stores together, although it isn’t
recommended.
• Data stores in a data store cluster:
- should have the same performance characteristics, such as the same number of spindles, disk types and RAID levels;
- should share the same attributes. Consistency is the key here; for example, all the data stores in the cluster should be enabled for the same type and frequency of replication;
- are accessible only by ESXi 5 hosts. (A simple consistency check for these attributes is sketched below.)
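
The attribute checks above are easy to automate before a data store cluster is created. The following sketch is purely illustrative Python; the data store descriptions and field names (protocol, vmfs_version, tier) are assumptions made for the example, not properties of any VMware API.

    # Illustrative only: 'protocol', 'vmfs_version' and 'tier' are assumed
    # fields used for this example, not properties of a VMware API object.

    def check_cluster_candidates(datastores):
        """Warn about attribute mismatches in a prospective data store cluster."""
        reference = datastores[0]
        warnings = []
        for ds in datastores[1:]:
            for attr in ("protocol", "vmfs_version", "tier"):
                if ds[attr] != reference[attr]:
                    warnings.append(
                        f"{ds['name']}: {attr} is {ds[attr]}, "
                        f"but {reference['name']} uses {reference[attr]}"
                    )
        return warnings

    candidates = [
        {"name": "gold-01", "protocol": "FC", "vmfs_version": 5, "tier": "gold"},
        {"name": "gold-02", "protocol": "FC", "vmfs_version": 5, "tier": "gold"},
        {"name": "silver-01", "protocol": "iSCSI", "vmfs_version": 3, "tier": "silver"},
    ]

    for warning in check_cluster_candidates(candidates):
        print(warning)  # flags silver-01 for its protocol, VMFS version and tier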

vSphere 5 Storage DRS
Another vSphere 5 storage highlight is Storage
DRS, or SDRS. SDRS is complementary to data store clusters in that users must create data store
clusters in order to use SDRS.

The job of vSphere
5 Storage DRS is to place a VM’s virtual disks on the right data stores within the right
cluster -- just like its sister technology DRS puts the VM on the correct ESX hosts
within its cluster to balance CPU and memory resources.

SDRS can also move VMs from one data store to another within a data store cluster to improve
overall use.

Figure 1: Admins can
select the data stores they want to use within a particular data store cluster.

Figure 2: In this example, four data stores on four different
storage arrays were added to the data store cluster.

SDRS, like DRS, has affinity and anti-affinity rules to ensure that VMs or virtual disks with similar storage I/O requirements don’t compete for disk time.

By default, all the virtual disks that make up a VM with multiple Virtual Machine Disks (VMDKs)
are placed in the same data store, but with anti-affinity rules, admins can reorganise and
distribute them across many data stores for optimal performance.

They can also specify that two virtual disks must never reside on the same data store, which keeps them from contending for the same storage. Additionally, there is a “maintenance mode” feature that allows admins to fully empty a data store for maintenance purposes.
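
The effect of an anti-affinity rule can be shown with a small placement check. The sketch below is a toy Python model of the constraint, not the vSphere API; the VMDK and data store names are made up for the example.

    # Toy model of a VMDK anti-affinity rule: disks named together in a rule
    # must land on different data stores. This illustrates the constraint
    # SDRS enforces; it is not the vSphere API.

    from itertools import combinations

    def violates_anti_affinity(placement, rules):
        """placement maps VMDK name to data store; rules are sets of VMDK names."""
        for rule in rules:
            for a, b in combinations(rule, 2):
                if placement.get(a) == placement.get(b):
                    return True
        return False

    rules = [{"db-vm.vmdk", "db-vm_1.vmdk"}]  # keep the data and log disks apart

    good = {"db-vm.vmdk": "gold-01", "db-vm_1.vmdk": "gold-02"}
    bad = {"db-vm.vmdk": "gold-01", "db-vm_1.vmdk": "gold-01"}

    print(violates_anti_affinity(good, rules))  # False: disks are separated
    print(violates_anti_affinity(bad, rules))   # True: rule is broken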

SDRS is compatible with all the main VMware vSphere features (VMware snapshots, RDMs, NFS and VMFS), but it is only compatible with ESXi 5.

SDRS manifests itself when users create, clone or build a new VM from a template.

Figure 3: SDRS is
visible when building a new virtual machine from a template. There’s also an option to disable SDRS
for a particular VM.

SDRS bases its placement and migration decisions on these metrics (illustrated in the sketch after this list):

• SDRS uses a combination of free space and I/O latency to calculate the best data store within a data store cluster.
• It uses latency alone to decide whether a virtual machine’s files should be moved to improve performance.
• It checks space on a data store every five minutes using vCenter.
• SDRS checks latency only every eight hours, so don’t expect VMs to whiz about from one data store
to another within a cluster every millisecond.
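
The two triggers can be summarised in a few lines of Python. This is a simplified model, not VMware code; the 80% space-utilisation and 15 ms latency figures are the commonly cited SDRS defaults, but treat them as assumptions and check the thresholds configured on your own data store cluster.

    # Simplified model of the two SDRS triggers. The thresholds are assumed
    # defaults (80% space utilisation, 15 ms latency); the real values live in
    # the data store cluster's SDRS settings.

    SPACE_UTILISATION_THRESHOLD = 0.80  # space is checked roughly every 5 minutes
    IO_LATENCY_THRESHOLD_MS = 15        # latency is evaluated roughly every 8 hours

    def needs_space_rebalance(used_gb, capacity_gb):
        return used_gb / capacity_gb > SPACE_UTILISATION_THRESHOLD

    def needs_io_rebalance(observed_latency_ms):
        return observed_latency_ms > IO_LATENCY_THRESHOLD_MS

    # Example: a 2 TB data store with 1.7 TB used and 22 ms average latency
    print(needs_space_rebalance(1700, 2048))  # True: space trigger fires
    print(needs_io_rebalance(22))             # True: latency trigger fires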

Data store clusters are created in the “Data stores and data store clusters” view in vCenter and then assigned to an existing HA/DRS cluster. Users must ensure that all ESX hosts in the target cluster can see all the data stores in the cluster. If they don’t, problems can occur: say a data store cluster is built from ten data stores, but one ESX host can see only nine of them; features such as HA, DRS and vMotion would then run into trouble.
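
That visibility is straightforward to verify from vCenter’s inventory. The following sketch assumes the pyVmomi Python library, a reachable vCenter server and valid credentials (the host name and password are placeholders); it lists each data store cluster and the hosts that have each member data store mounted.

    # pyVmomi sketch: list every data store cluster (StoragePod) and report
    # which hosts have each member data store mounted. Connection details are
    # placeholders; certificate checking is disabled for brevity (lab use only).

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)

    for pod in view.view:
        print(f"Data store cluster: {pod.name}")
        for ds in pod.childEntity:
            hosts = sorted(mount.key.name for mount in ds.host)
            print(f"  {ds.name}: mounted by {', '.join(hosts)}")

    Disconnect(si)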

Finally, disk-intensive backup windows can really mess up SDRS. Users can schedule SDRS to
ignore backup windows so its calculations are based on true operational disk activity.

Figure 4: SDRS is fully automated by default, but admins can schedule it to ignore backup windows.

vSphere 5 Storage Appliance (VSA)
With the vSphere Storage Appliance (VSA), storage intelligence, which is usually based on some type of Linux/FreeBSD distribution, is moved out of hardware storage controllers and into a virtual machine.

The VMware vSphere 5 VSA can use the local server’s storage and share it across the network,
turning direct-attached storage (DAS) into an NFS
appliance.
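
From the host’s point of view, consuming the VSA’s storage is just a normal NFS mount. The sketch below is a pyVmomi fragment wrapped in a helper function; the export address, path and data store name are placeholder assumptions, and host is presumed to be a vim.HostSystem obtained elsewhere (for example with a container view, as in the earlier listing).

    # pyVmomi sketch: mount an NFS export, such as the one the VSA presents,
    # as a data store on one ESX host. All values below are placeholders.

    from pyVmomi import vim

    def mount_vsa_export(host, remote_ip, export_path, datastore_name):
        """Equivalent of the manual 'add NFS data store' step for one host."""
        spec = vim.host.NasVolume.Specification(
            remoteHost=remote_ip,      # IP advertised by the VSA front-end NIC
            remotePath=export_path,    # NFS export path on the appliance
            localPath=datastore_name,  # data store name as seen in vSphere
            accessMode="readWrite",
        )
        return host.configManager.datastoreSystem.CreateNasDatastore(spec)

    # Example (placeholder values):
    # mount_vsa_export(host, "192.168.1.50", "/exports/VSADs-0", "VSADs-0")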

VSA is deployed in two ways:
1. With two ESX hosts and a separate vCenter server. Here, the vCenter server must run a clustering and management service so it can act as witness to the two ESX hosts and determine whether an error has occurred.

2. With three or more ESX hosts, without needing the vCenter server to act as witness. In case of a physical failure of an ESX host, the data is protected by replicas created on the other ESX hosts in the cluster that are also running the VSA.

The idea of the VSA is to create a distributed array of shared storage without a single point of
failure and without using an expensive centralised SAN system. It is aimed at companies in the
SMB/SME market that don’t have the budget for a SAN.

But, in its first release, some industry insiders said VSA’s purchase price combined with the disk space it consumes for replicas made it a poor choice. That said, recent updates to the RAID types available have significantly improved VSA’s disk use ratios: the same system with the same level of protection can now deliver more usable capacity.
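
A rough capacity calculation shows why the RAID options matter so much. The figures in this sketch are illustrative assumptions, not official VMware sizing numbers: local RAID efficiency and the network replica (each exported volume is mirrored on another node) both reduce usable space.

    # Back-of-the-envelope VSA capacity model. The efficiency figures are
    # illustrative assumptions, not official sizing numbers.

    def usable_capacity_tb(raw_tb_per_node, nodes, local_raid_efficiency):
        raw_total = raw_tb_per_node * nodes
        after_local_raid = raw_total * local_raid_efficiency
        return after_local_raid / 2  # halve again for the network mirror

    # Example: three nodes with 8 TB of raw disk each
    print(usable_capacity_tb(8, 3, 0.50))   # RAID 1+0 locally: ~6 TB usable
    print(usable_capacity_tb(8, 3, 0.75))   # RAID 5 (assumed 4-disk sets): ~9 TB usable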

Here are some best practice tips for vSphere 5 VSA:

• The VSA has two virtual NICs -- one for
the front-end and one for the back-end ports. The front-end NIC advertises an IP address for
inbound connections and for ESX hosts to mount the NFS data stores. The back-end virtual NIC is
used for management and for the cluster network.
• The VSA’s default memory usage is 24 GB, with up to eight disks and one SCSI controller. VMware recommends a one-gigabit network interface as a minimum.
• VMware recommends that local direct-attached storage be configured with RAID, either RAID 1+0, RAID 5 or RAID 6.

VMware has made huge improvements to vSphere 5, especially from a storage
perspective. VMware vSphere 5’s storage capabilities along with these best practice
recommendations will help IT maximise the use of the product.

Mike Laverick is a former VMware instructor with 17 years of
experience in technologies such as Novell, Windows, Citrix and VMware. Since 2003, he has been
involved with the VMware community. Laverick is a VMware forum moderator and member of the London
VMware User Group. He is also the man behind the virtualisation website and blog RTFM Education, where he publishes free guides and utilities
for VMware customers. Laverick received the VMware vExpert award in 2009, 2010 and 2011.

Since joining TechTarget as a contributor, Laverick has also found the time to run a weekly
podcast called the Chinwag and the Vendorwag. He helped found the Irish and Scottish VMware user
groups and now speaks regularly at larger regional events organised by the global VMware user group
in North America, EMEA and APAC. Laverick has published books on VMware Virtual Infrastructure 3, vSphere 4, Site Recovery Manager and View.


