
Category Archives: VSAN

I recently asked the vSAN vExpert slack channel the following: “Is there a way to set a default storage policy for a specific vSAN cluster? Use case – shared vCenter server with 10 hybrid vSAN clusters and 1 “private” customer with dedicated cluster running AF vSAN. The Private customer wants to use RAID6 but their deployment method just now does not allow the selection of a storage policy. We can’t change the default policy as the other 10 hybrid clusters are using this (and also don’t have a way to select a policy during deployment).”

Slightly embarrassingly, I didn't know the answer, but Steve Kaplan (@stvkpln) told me how to do it!

If you browse to the vSAN datastore object, then Manage > General, you can set the default policy for that datastore. Simples!

In case you missed the VSAN 6.2 announcement recently, a PDF has also been released – What's New with VMware Virtual SAN 6.2. The paper details what has already been announced about VSAN 6.2. I have also covered the details in my post here – VMware VSAN 6.2 Announced – Nearline dedupe, Erasure Coding, QoS ++ .
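As a back-of-the-envelope illustration of why a customer would want RAID6 erasure coding (my own worked numbers, not from the paper): RAID6 in VSAN 6.2 is a 4+2 scheme, so it tolerates two failures while storing 6 units of raw capacity for every 4 units of data (1.5x), versus the 3 full copies (3x) a RAID1 mirror needs for the same two-failure tolerance.

```python
def raw_capacity_needed(usable_gb, scheme):
    # Raw-capacity multipliers for two vSAN schemes that both
    # tolerate two failures (FTT=2):
    #   RAID1 mirroring keeps 3 full copies of the data.
    #   RAID6 erasure coding is a 4+2 stripe: 6 units stored
    #   for every 4 units of data.
    multipliers = {"raid1_ftt2": 3.0, "raid6": 6 / 4}
    return usable_gb * multipliers[scheme]

# For 10 TB of usable VM data:
print(raw_capacity_needed(10_000, "raid1_ftt2"))  # 30000.0 GB raw
print(raw_capacity_needed(10_000, "raid6"))       # 15000.0 GB raw
```

So for the same resilience, RAID6 halves the raw capacity bill compared with a three-way mirror.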

According to the VSAN maximums, there is a 100 VM limit per host in a VSAN 5.5 cluster and 200 in a VSAN 6 cluster. This seems to be a soft limit, as I was recently able to deploy 999 VMs into a 4 node VSAN 5.5 cluster (with one host acting as a dedicated HA node, so not running any compute). I got to ~333 VMs per host before I reached the 3000 component limit (which is a hard limit) on each host. Below is a screen grab of vsan.check_limits from RVC:
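As a rough sanity check of those numbers (my assumptions, not taken from the RVC output): a simple VM at FTT=1 typically consumes about 9 components – three objects (VM home namespace, one VMDK, swap), each stored as two replica components plus a witness. Dividing the 3000-component per-host hard limit by that figure lands right on the ceiling I hit:

```python
# Assumed per-VM component count at FTT=1 (an approximation):
# three objects (VM home namespace, one VMDK, swap), each made
# of 2 replica components + 1 witness = 3 components.
OBJECTS_PER_VM = 3
COMPONENTS_PER_OBJECT = 3          # 2 replicas + 1 witness at FTT=1
COMPONENTS_PER_VM = OBJECTS_PER_VM * COMPONENTS_PER_OBJECT  # 9

HOST_COMPONENT_LIMIT = 3000        # hard per-host limit in VSAN 5.5

vms_per_host = HOST_COMPONENT_LIMIT // COMPONENTS_PER_VM
print(vms_per_host)  # 333, in line with the observed ~333 VMs per host
```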

After upgrading VSAN 5.5 to VSAN 6.0 I thought it would be a good idea to run the same set of tests that I ran previously (VSAN 5.5 Performance Testing) to see how much of a performance increase we could expect.

The test was run using the same IOAnalyser VMs and test configuration, on the same hardware. The only difference was the vCenter/ESXi/VSAN version.

When running through some VSAN operational readiness tests I stumbled across an issue when simulating host failures. When there are more VSAN components than physical disks and a host fails, the components will not be rebuilt on the remaining hosts.

Firstly here is some background information about the test cluster:

4 x Dell R730XD Servers

1 Disk Group per server with one 800GB SSD fronting six 4TB Magnetic Disks
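To illustrate the failure condition with a simplified model (this is my own sketch of the placement constraint, not vSAN's actual placement engine): if each component of a rebuilt replica must land on a distinct capacity disk in the surviving hosts, then an object with more components than surviving disks simply has nowhere to go.

```python
# Simplified placement model: each component of a rebuilt replica
# needs its own capacity disk on the surviving hosts. This is an
# illustration of the failure condition, not vSAN's real algorithm.
def rebuild_possible(components_per_replica, surviving_hosts, disks_per_host):
    available_disks = surviving_hosts * disks_per_host
    return components_per_replica <= available_disks

# The test cluster above: 4 hosts, 1 disk group each, 6 capacity
# disks per host. After one host fails, 3 x 6 = 18 disks remain.
print(rebuild_possible(12, 3, 6))   # True  - 12 components fit on 18 disks
print(rebuild_possible(24, 3, 6))   # False - 24 components cannot be placed
```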

I recently carried out some VSAN performance testing using 3 Dell R730xd servers:

Intel Xeon E5-2680

530GB RAM

2 x 10GbE NICs

ESXi 5.5, build 2068190

800GB SSD (12Gb/s SAS)

3 x 4TB (7200RPM SAS disks)

On each of these hosts I built an IOAnalyzer Appliance (https://labs.vmware.com/flings/io-analyzer) (1 with its disks placed on the same host as the VM and the other 2 with "remote" disks). Something similar to this:

When HA is turned on in the cluster, FDM agent (HA) traffic uses the VSAN network and not the Management Network. However, when a potential isolation is detected, HA will ping the default gateway (or a specified isolation address) using the Management Network.
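Because heartbeats flow over the VSAN network while isolation is verified by ping, a common recommendation (my addition – verify it suits your environment) is to point HA at an isolation address that is reachable on the VSAN network, using the HA advanced options:

```
das.usedefaultisolationaddress = false
das.isolationaddress0 = <an IP pingable on the VSAN network>
```

This way an isolation verdict reflects the network HA actually heartbeats over.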

When enabling VSAN, ensure vSphere HA is disabled. You cannot enable VSAN when HA is already configured. Either configure VSAN during the creation of the cluster or temporarily disable vSphere HA while configuring VSAN.

When there are only VSAN datastores available within a cluster, Datastore Heartbeating is disabled. HA will never use a VSAN datastore for heartbeating: as the VSAN network is already used for network heartbeating, using the same datastore for heartbeating would add nothing.

When changes are made to the VSAN network, vSphere HA must be re-configured.