Creating and Configuring an iSCSI Distributed Switch for VMware Multipathing

In an earlier post I configured my Synology DS1513+ storage server for iSCSI and enabled it for multipathing. In this post I will show you how to create and configure a vDS (vSphere Distributed Switch) for iSCSI use, and how to enable multipathing so that more than one network path to your storage is used (thereby increasing throughput).

Using the vCenter client, you can see in the screenshot above that I have 4 Intel NIC ports on my hosts that aren't currently assigned to a vSwitch.

Click on Inventory \ Networking \ Add a vSphere Distributed Switch

As this is a vSphere 5.5 lab I went for the 5.5.0 version of the switch, click Next

For ease of management I have named my switch iSCSI, click Next

Select the NICs from each host to add to the switch, click Next

Un-tick Automatically create a default port group, as we are going to add the port groups ourselves next, click Finish

Right click your new switch and choose New Port Group

We are going to create 4 new Port Groups. I have gone for 3 ports per Port Group because I only have 3 hosts. Name your Port Group (in this case iSCSI-1) and define the number of ports, click Next

Click Finish, now repeat for every new port group required.

Here we can see that I now have 4 different Port Groups configured.

Right click iSCSI-1 and click Edit Settings

Click the Teaming and Failover setting. We can see that currently all 4 uplinks are active; for iSCSI port binding, each port group needs exactly one active uplink.

Because this is iSCSI-1, I am using Uplink 1 as the active uplink; move the rest of the uplinks to Unused Uplinks

Here are the iSCSI-2 settings, I have used uplink 2 as the active uplink.

Here are the iSCSI-3 settings, I have used uplink 3 as the active uplink.

Here are the iSCSI-4 settings, I have used uplink 4 as the active uplink.

Go to Hosts and Clusters and click on your host, browse to Configuration \ Networking, then click Manage Virtual Adapters

Click Add

Choose New virtual adapter, click Next

Click Next

Select the required port group (iSCSI-1) and click Next

The host's IP address is 192.168.2.30. Because I have used the 192.168.5.x network for my iSCSI traffic (on a dedicated switch), I went with 192.168.5.31 for this first vmkernel NIC, click Next.

Repeat for the remaining adapters.

In this screenshot you can see all 4 adapters added.
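If you prefer the command line, the same vmkernel adapters can be created per host with esxcli. This is just a sketch of the equivalent commands: the vDS name (iSCSI), the dvPort ID, the vmk number, and the IP address below are examples from my lab, so substitute your own values (you can read the free dvPort IDs from the port group view in vCenter).

```shell
# Create a vmkernel interface on a free dvPort of the iSCSI vDS
# (the dvport-id value is an example - look it up in vCenter first)
esxcli network ip interface add --interface-name=vmk1 --dvs-name=iSCSI --dvport-id=10

# Give it a static IP on the dedicated iSCSI network
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.5.31 --netmask=255.255.255.0 --type=static

# Repeat for the remaining vmkernel NICs, then confirm they are all present
esxcli network ip interface list
```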

Clicking each Port Group we can see that each one is linked to an individual Uplink as mentioned previously.

Click the iSCSI Software Adapter, if it’s not present simply click the Add button on the top right of the screen to add the iSCSI Software Adapter.
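The software iSCSI adapter can also be enabled from the command line. A quick sketch; the vmhba name vCenter assigns varies per host (vmhba33 is just a common example):

```shell
# Enable the software iSCSI initiator on this host
esxcli iscsi software set --enabled=true

# List iSCSI adapters to find the vmhba name it was given (e.g. vmhba33)
esxcli iscsi adapter list
```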

You can see here that we have a single path configured to our storage (it’s currently using the LAN port group for connectivity), click Properties from the lower pane to bring up the iSCSI Initiator properties.

Click Network Configuration

Now we need to add the new Port Groups, Click Add

Add each required port group individually. Once all 4 have been added remove the LAN port group.

We now have 4 port groups added to the iSCSI Initiator, click the Dynamic Discovery tab
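Port binding can likewise be done per host with esxcli. A sketch assuming the software adapter is vmhba33 and the four vmkernel NICs created earlier are vmk1 to vmk4 (adjust to your own names):

```shell
# Bind each iSCSI vmkernel NIC to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba33
```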

Add the IP or host name for your iSCSI storage (in the previous post I configured my Synology with 3 addresses in the 192.168.5.x network range), then click on the Static Discovery tab and remove any unneeded entries there. Click Close
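The equivalent esxcli commands for dynamic discovery, again assuming vmhba33 and using an example target address on the 192.168.5.x network (substitute your storage server's actual addresses):

```shell
# Add a send-targets (dynamic discovery) entry for the storage
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.5.1:3260

# Rescan the adapter so the new paths are discovered
esxcli storage core adapter rescan --adapter=vmhba33
```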

Click Yes

We now have 12 paths to the iSCSI storage server, but only a single path back to our storage would actually be used, so we need to change our Path Selection setting from MRU (Most Recently Used) to Round Robin.

Right click the LUN and choose Manage Paths…

Change the Path Selection to Round Robin. It's worth pointing out that although the Status of each path currently shows Active, only a single path is used for issuing I/O to the LUN.

Changing to Round Robin has made all of the Paths Active and all of them are now being used to issue I/O to the LUN.
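If you have many LUNs, setting the policy from the command line is quicker than clicking through the GUI. A sketch; the naa identifier below is a placeholder, so list your devices first to find the real one:

```shell
# Show devices with their current Path Selection Policy
esxcli storage nmp device list

# Switch a device to Round Robin (replace the naa ID with your LUN's)
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

# Confirm all paths to the device are now active for I/O
esxcli storage core path list --device=naa.xxxxxxxxxxxxxxxx
```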

Repeat the above steps on all hosts.

It's worth noting that by default the Round Robin Path Selection Policy (PSP) will send 1,000 I/Os down a path before moving on to the next one. In a busy production environment that's not much of an issue, but if you're using this in a home lab you may want to reduce it to a single I/O per path. If you want to go down that route, please have a look at Cormac Hogan's site here for instructions on how to configure that.
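For reference, the Round Robin IOPS limit is changed per device with esxcli. A sketch with a placeholder naa ID (use your LUN's real identifier):

```shell
# Send only 1 I/O down each path before switching to the next
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1

# Check the Round Robin settings for the device
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx
```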
