When engineering and architecting virtual infrastructure for the enterprise datacenter, one of the critical requirements for any enterprise technology is high availability. As we have seen, Hyper-V is a tremendously capable and maturing virtualization product that many organizations run in production today. However, as with other virtualization platforms, making Hyper-V highly available means we cannot tolerate a single point of failure in our Hyper-V host architecture. We need to provision multiple hosts that can take over in the event of a host failure, and Hyper-V accomplishes this by building on Windows Server Failover Clustering.

The first thing we want to look at is the configuration of the hosts themselves. In this setup, we are using two Windows Server 2016 Server Core installations. Why Server Core? When designing a production Hyper-V cluster, the Server Core installation minimizes moving parts, shrinks the footprint, and improves our security posture by eliminating unnecessary components. The added difficulty of administering Server Core is offset by those efficiency and security benefits. Since Server Core has no local GUI, we will rely heavily on PowerShell for configuration.

Before thinking about the Hyper-V hosts as a cluster, we must configure them individually, before the clustering process begins. Proper planning for the environment is also essential to mitigate issues that can arise from overlooked details. Initial host configuration involves many of the same steps used to prepare any Windows server: naming, network configuration, patching, and so on. Network planning is crucial in a Windows cluster, as it enables proper cluster communication as well as communication with shared storage.
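The initial per-host steps above can be sketched in PowerShell. The host name, time zone, interface alias, DNS server, and domain name below are illustrative placeholders, not values from this lab:

```powershell
# Rename the host and reboot (HV-NODE1 is an example name)
Rename-Computer -NewName "HV-NODE1" -Restart

# Set the time zone so cluster nodes agree on time
Set-TimeZone -Id "Eastern Standard Time"

# Point the management adapter at DNS and join the domain
# ("Ethernet0", 10.1.10.5, and lab.local are example values)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 10.1.10.5
Add-Computer -DomainName "lab.local" -Restart
```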

We also want to be sure to update any prospective Hyper-V cluster nodes to the latest available patch level.

As a best practice, all potential cluster nodes should be configured identically except for computer names and IP addresses, so that patch levels, networks, and other settings are standardized across the hosts.
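One quick way to sanity-check that prospective nodes actually match is to compare their installed hotfixes remotely. A sketch assuming both hosts allow PowerShell remoting (HV-NODE1 and HV-NODE2 are example names):

```powershell
# Gather installed hotfixes from each prospective node
$n1 = Invoke-Command -ComputerName HV-NODE1 -ScriptBlock { Get-HotFix }
$n2 = Invoke-Command -ComputerName HV-NODE2 -ScriptBlock { Get-HotFix }

# Any output here indicates patch drift between the two hosts;
# no output means the hotfix lists are identical
Compare-Object $n1.HotFixID $n2.HotFixID
```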

Hyper-V Cluster: Network Planning

For the lab network in this scenario, four network adapters have been configured on each host. Neither lab host is using NIC teaming, for simplicity's sake; in production, however, you would want to team your adapters so that no single network becomes a single point of failure. For our lab setup we have:

Management and VM traffic

iSCSI

Private Cluster Traffic

Live Migration

With the above networks in mind, we want to assign IP addresses to each server in our desired subnet ranges. VLANs are also a consideration here, as you will most likely want to align your subnets with the VLANs provisioned on your switches; this should be thought through in advance.
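The per-network addressing might be laid out like the following. The interface aliases and subnet ranges are example values for illustration only, not the actual lab addressing:

```powershell
# Management and VM traffic (example: 10.1.10.0/24, routed)
New-NetIPAddress -InterfaceAlias "Mgmt" -IPAddress 10.1.10.11 -PrefixLength 24 -DefaultGateway 10.1.10.1

# iSCSI (example: 10.1.20.0/24; no gateway on a storage-only network)
New-NetIPAddress -InterfaceAlias "iSCSI" -IPAddress 10.1.20.11 -PrefixLength 24

# Private cluster traffic (example: 10.1.30.0/24)
New-NetIPAddress -InterfaceAlias "Cluster" -IPAddress 10.1.30.11 -PrefixLength 24

# Live Migration (example: 10.1.40.0/24)
New-NetIPAddress -InterfaceAlias "LiveMig" -IPAddress 10.1.40.11 -PrefixLength 24
```

The second host would repeat this with its own host addresses (for example, .12) in the same subnets.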

Hyper-V Cluster: iSCSI Storage Target Setup

For iSCSI storage in this lab environment, we are using FreeNAS to create iSCSI targets and present them to our Hyper-V cluster. Of course, setting up iSCSI on the hardware or software solution of your choice will differ by vendor, so always follow the methods each vendor defines.

Below is a quick overview of how the storage is set up using FreeNAS. We won't delve into every detail of configuring FreeNAS for iSCSI; however, the basic settings for presenting a couple of iSCSI targets to our Hyper-V hosts are shown below. Remember to start the iSCSI service in FreeNAS and to configure the storage network to match what you intend for the Hyper-V hosts.

Here we see the base IQN setup for the targets we will create.

Here we set up a Portal in FreeNAS to listen for iSCSI traffic.

Next, we set up our iSCSI target names. For our Hyper-V cluster, we are setting up a quorum volume as well as a volume for VM storage. What is the quorum volume used for? To begin, let's talk about what quorum is exactly. Quorum is the mechanism a Windows cluster uses to ensure that, if communication is lost between parts of the cluster, a majority of cluster resources remains available for the cluster to keep functioning. Starting with Windows Server 2012, every node in the cluster has a single quorum vote by default. By adding an extra vote with a file share witness, a disk witness, or the cloud witness (an Azure storage account) introduced in Windows Server 2016, one part of the cluster can always claim more than 50 percent of the quorum votes by owning the witness.

With Windows Server 2012 R2, the recommendation changed to always configure a disk or file share witness. The witness (file share, disk, or cloud storage account) only receives a vote if there is an even number of nodes; with an odd number of nodes, the witness does not get a vote and isn't used.
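Once the cluster exists, the witness can be configured with the failover clustering cmdlets. A sketch showing the three witness types (the disk resource name, share path, and storage account details are placeholders):

```powershell
# Disk witness using a small shared LUN (example cluster disk resource name)
Set-ClusterQuorum -DiskWitness "Cluster Disk 1"

# Or a file share witness (example UNC path)
Set-ClusterQuorum -FileShareWitness "\\fs01\ClusterWitness"

# Or, new in Windows Server 2016, a cloud witness in an Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```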

Below in our FreeNAS appliance we have created two target names – a quorum volume to be used as a disk witness as well as a volume to be used for our cluster shared volume to house our VMs. We will discuss the cluster shared volume topic later.

Next, we need to add Extents, which in the example below are mapped to the individual disks we have physically assigned in our FreeNAS appliance.

Finally, we associate each Target with its Extent so the targets are mapped to the storage in FreeNAS.
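On the host side, once the targets exist in FreeNAS, they can be connected with the built-in iSCSI initiator cmdlets. A sketch assuming a portal at 10.1.20.5 (an example storage-network address):

```powershell
# Start the iSCSI initiator service and set it to start automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the FreeNAS portal (example IP) and discover its targets
New-IscsiTargetPortal -TargetPortalAddress 10.1.20.5
Get-IscsiTarget

# Connect each discovered target, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

Each host connects to the same targets; the shared disks will later be brought online as cluster storage.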

Thoughts

In planning our Hyper-V cluster, there are many items to verify in the pre-cluster planning phase, including host configuration, network planning, and storage target configuration. In the next part of the series, we will look at installing the Hyper-V role and creating our Windows Failover Cluster.