Deploying vSAN with vCenter Server Appliance

vSAN deployments in brownfield environments are simple. New hosts are specified based on projected workloads (plus points for utilising vSAN Ready Nodes), purchased, racked, built, and absorbed into an existing vCenter workload domain, before vSAN is finally enabled and configured. But how would we deploy vSAN into a greenfield environment? An environment with no vCenter and no shared storage, only brand new ESXi hosts with valid (yet unconfigured) vSAN cache and capacity disks? As vSAN relies on vCenter for its operations, we seemingly have a chicken-and-egg scenario.

In this article, I detail the process of deploying (Stage 1) and configuring (Stage 2) a vCenter Server Appliance into a greenfield environment and, more specifically, onto a single-node vSAN cluster in hybrid mode (Note – this is in no way supported by VMware for anything other than deploying vCenter and vSAN into a greenfield environment). I then add additional hosts to the cluster and configure vSAN storage and networking via the brilliant Cluster Quickstart tool (Stage 3), before applying a vSAN VM Storage policy to the vCenter Server Appliance (Stage 4). Once complete, our vSAN cluster will be ready to host live workloads.

Before we jump into the vCenter deployment, let’s take a look at our environment and a few prerequisites.

Environment Prerequisites

My environment has access to both DNS and NTP services. DNS records (forward and reverse) have been created for the vCenter Server Appliance and all vSphere Hosts (ESXi). This is a requirement that cannot be circumvented.
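Since missing or mismatched DNS records are a common cause of failed deployments, a quick pre-flight check can save a re-run. The sketch below (plain Python, standard library only; the lab FQDNs are hypothetical placeholders) verifies that each host's forward record resolves and that the reverse (PTR) record maps back to the same name:

```python
import socket

def check_dns(fqdn: str) -> str:
    """Resolve an FQDN, then confirm the reverse (PTR) record maps back.

    Returns the resolved IP on success; raises socket.gaierror/herror
    on a failed lookup, or ValueError on a forward/reverse mismatch.
    """
    ip = socket.gethostbyname(fqdn)         # forward (A) lookup
    name, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    if name.lower().rstrip(".") != fqdn.lower().rstrip("."):
        raise ValueError(f"PTR mismatch: {fqdn} -> {ip} -> {name}")
    return ip

if __name__ == "__main__":
    # Hypothetical lab names - substitute your own VCSA and ESXi FQDNs.
    for host in ["vgl-vcsa-01.lab.local",
                 "vgl-esx-mgmt-01.lab.local",
                 "vgl-esx-mgmt-02.lab.local",
                 "vgl-esx-mgmt-03.lab.local"]:
        try:
            print(f"{host}: OK ({check_dns(host)})")
        except (socket.gaierror, socket.herror, ValueError) as exc:
            print(f"{host}: FAILED ({exc})")
```

Run it once against every host before starting the deployment; any FAILED line needs fixing in DNS first.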

vSAN Prerequisites & Pre-Reading

I would advise a little pre-reading to ensure your vSphere Hosts (ESXi) are compatible. Please visit the VMware Docs Hardware Requirements for vSAN article for the full list of hardware requirements. A number of the key points are detailed below:

Cluster Requirements

The vSAN cluster must contain a minimum of three ESXi hosts that contribute local storage (four or more are recommended).

Hosts residing in a vSAN cluster must not participate in other clusters.

Each ESXi host in the cluster must have a VMkernel network adapter dedicated to vSAN traffic, regardless of whether it contributes storage.

Storage Requirements

Cache Tier:

Minimum one SAS or SATA solid-state drive (SSD) or PCIe flash device.

Capacity Tier:

Hybrid Configurations – Minimum one SAS or NL-SAS magnetic disk.

All Flash Configurations – Minimum one SAS or SATA solid-state (SSD) or a PCIe flash device.

Storage Controller: A SAS or SATA HBA, or a RAID controller configured in non-RAID (pass-through) or RAID 0 mode.

Flash Boot Devices: When booting a vSAN 6.0 enabled ESXi host from a USB device or SD card, the boot device must be at least 4 GB. When booting a vSAN host from a SATADOM device, you must use a Single-Level Cell (SLC) device, and the boot device must be at least 16 GB.
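The boot device rules above are easy to encode as a quick sanity check. A minimal sketch (the function name and shape are my own, not a VMware tool):

```python
def validate_boot_device(kind: str, size_gb: float, slc: bool = False) -> bool:
    """Check a vSAN 6.0 host boot device against the minimums above.

    kind: 'usb', 'sd', or 'satadom'; slc applies to SATADOM devices only.
    """
    if kind in ("usb", "sd"):
        # USB/SD boot devices must be at least 4 GB.
        return size_gb >= 4
    if kind == "satadom":
        # SATADOM boot devices must be SLC and at least 16 GB.
        return slc and size_gb >= 16
    raise ValueError(f"unknown boot device kind: {kind}")
```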

ESXi/vSAN Node Disk Configuration

Each of my three vSAN nodes has been configured with the following disk layout:

1x 20 GB (ESXi)

1x 50 GB SSD (vSAN Cache)

2x 500 GB HDD (vSAN Capacity)
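As a back-of-the-envelope check, the usable capacity this layout yields can be estimated from the capacity tier alone (cache devices do not contribute to datastore capacity). Assuming the default vSAN storage policy (FTT=1, RAID-1 mirroring):

```python
# Raw and usable vSAN capacity for the three-node lab layout above.
HOSTS = 3
CAPACITY_DISKS_PER_HOST = 2
CAPACITY_DISK_GB = 500

raw_gb = HOSTS * CAPACITY_DISKS_PER_HOST * CAPACITY_DISK_GB

# With the default storage policy (FTT=1, RAID-1 mirroring) every object
# is written twice, so usable capacity is roughly half of raw - before
# slack-space and metadata overheads, which reduce it further.
usable_gb = raw_gb / 2

print(f"raw: {raw_gb} GB, usable (FTT=1 mirror, approx): {usable_gb:.0f} GB")
```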

Network Configuration

Conforming to the vSAN Network Requirements, we will use a dedicated network for vSAN traffic. Likewise, Management and vMotion traffic will also be segregated to their own network segments. As such, each vSAN node has been assigned six (6) NICs (two per service). vMotion and vSAN NICs will be configured in Stage 3 of this article via the Cluster Quickstart tool.
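To illustrate, a hypothetical addressing plan for the three traffic types might look like the following (the subnets are placeholders; substitute your own VLANs and ranges). The overlap check confirms the segments are genuinely segregated:

```python
import ipaddress

# Hypothetical addressing plan - one dedicated segment per traffic type.
SEGMENTS = {
    "management": ipaddress.ip_network("192.168.10.0/24"),
    "vmotion":    ipaddress.ip_network("192.168.20.0/24"),
    "vsan":       ipaddress.ip_network("192.168.30.0/24"),
}

# Sanity check: the three networks must not overlap, otherwise the
# traffic types are not truly segregated.
nets = list(SEGMENTS.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

for name, net in SEGMENTS.items():
    print(f"{name:>10}: {net} ({net.num_addresses - 2} usable hosts)")
```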

11. As discussed at the start of this article, we will be deploying vSAN at the same time as the vCenter Server Appliance. As such, select Install on a new vSAN cluster containing the target host.

12. We now have the ability to configure our new vSphere Datacentre and vSphere Cluster. Specify a name for each and click Next.

13. Configure the vSAN disks accordingly. For example, in the screenshot below you can see I have allocated the 50 GB flash drive to the Cache tier, and the two 500 GB HDD drives to the Capacity tier. For lab purposes, I have also opted to Enable Thin Disk Mode. When ready, click Next.
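For reference, a vSAN disk group is always one flash cache device plus one to seven capacity devices. The small sketch below encodes that rule and checks the lab layout used in this article (the helper is illustrative, not a VMware API):

```python
from dataclasses import dataclass

@dataclass
class Disk:
    size_gb: int
    flash: bool

def validate_disk_group(cache: Disk, capacity: list, hybrid: bool = True) -> None:
    """Sanity-check a single vSAN disk group: one flash cache device
    plus one to seven capacity devices of the appropriate media type."""
    assert cache.flash, "cache tier device must be flash"
    assert 1 <= len(capacity) <= 7, "1-7 capacity devices per disk group"
    if hybrid:
        assert all(not d.flash for d in capacity), "hybrid capacity tier must be magnetic"
    else:
        assert all(d.flash for d in capacity), "all-flash capacity tier must be flash"

# The lab layout from this article: one 50 GB SSD cache, two 500 GB HDDs.
validate_disk_group(Disk(50, flash=True),
                    [Disk(500, flash=False), Disk(500, flash=False)])
print("disk group OK")
```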

14. As mentioned earlier, make sure forward and reverse DNS records have been created for the new vCenter Server. Configure the Network Settings and click Next.

Before we do that, however, let’s take a quick look at the vSAN health.

1. Log in to the vSphere Client (https://<VCSA-FQDN>/ui) and browse to Hosts and Clusters. Note the creation of the vSphere Datacentre and vSphere Cluster (as configured during the vCenter Server Appliance deployment), and also the vSAN network warning. This is expected, as we have not yet configured a vSAN network.

2. Select the new vSphere Cluster, and browse to Monitor > vSAN > Health. Note the vSAN health warnings. These are expected as a) the single vSphere Host (ESXi) has not been allocated a vmknic, b) it also lacks a vSAN network, and c) the vSphere Cluster does not have the required number of hosts. Again, all to be expected, and all will be resolved in Stage 3.

Stage 3: Cluster Quickstart – Add Hosts & Configure vSAN Cluster

Now that we’ve deployed our vCenter Server Appliance onto a single-node vSAN cluster, we’ll add two additional hosts to the cluster (VGL-ESX-MGMT-02 and VGL-ESX-MGMT-03), create the required vSAN disk group(s), and configure the networking for both vMotion and vSAN traffic on all hosts. Thankfully, the Cluster Quickstart tool allows us to do all of this from one simple interface. A serious well done and thank you to the VMware teams who made this possible. This is one awesome tool!

5. When prompted, accept the thumbprints for the additional vSphere Hosts (ESXi) and click OK.

6. Review the Host Summary and click Next.

7. At the Review and Finish tab, click Finish.

8. Before proceeding, note the addition of our two new hosts. To ensure they do not participate in any live cluster services until their storage and networking have been configured, the hosts are automatically added to the cluster in Maintenance Mode. Under Configure Cluster, click Configure.

9. On the Distributed Switches tab, select the required number of Distributed Switches for your environment and configure accordingly. For my environment, I require two (one for vMotion and one for vSAN). These I label as VDS-vMotion and VDS-vSAN respectively.

10. Scroll down a little and assign the distributed switches to the appropriate traffic type, as well as assign a name for the distributed port group in each distributed switch. Lastly, assign the relevant physical adapter to the appropriate distributed switch.

11. On the vMotion Traffic tab we can configure the vMotion VMkernel interfaces for all hosts. This is pretty cool, and allows us to configure all hosts from within one window!

12. On the Storage Traffic tab we can configure the vSAN VMkernel interfaces for all hosts.

13. The Advanced options allow us to configure a number of aspects of both vSphere HA and vSphere DRS. In my environment I simply enable HA and DRS, define an NTP server to be used by all vSphere Hosts (ESXi), and leave all other options as default. When ready, click Next.

14. Next we will define which of the available storage devices will be utilised in our vSAN datastore. Note, all of the 50 GB flash SSDs have been grouped, as have all of the HDDs. These have then been assigned to the appropriate tier (Cache or Capacity).
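One thing worth sanity-checking at this point is the cache-to-capacity ratio. VMware's long-standing hybrid sizing guideline is flash cache of roughly 10% of anticipated consumed capacity; a quick per-host check against raw capacity for this lab layout:

```python
# Per-host cache-to-capacity ratio for the lab layout in this article.
cache_gb = 50
capacity_gb = 2 * 500

ratio = cache_gb / capacity_gb
print(f"cache is {ratio:.0%} of raw capacity per host")

# The ~10% guideline is measured against *consumed* (not raw) capacity,
# so a 5%-of-raw cache is acceptable for a lab that is never filled -
# size against expected consumption for production clusters.
```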

In Summary

As I noted at the very start of this process, enabling vSAN in a brownfield environment is easy. As it turns out, deploying a vCenter Server Appliance into a brand new greenfield environment is just as simple. There is no chicken-and-egg scenario, simply a combined vCenter and vSAN deployment. The Cluster Quickstart tool also helps to speed things up nicely; being able to configure vMotion and vSAN VMkernel adapters in one simple interface makes things a breeze.