VMware iSCSI multipath (Round Robin) for Equallogic Part 2

Earlier I posted an entry about manually creating what you need for iSCSI multipath on vSphere 4.1 with an Equallogic storage array.

In the comments a Dell guy pointed out that the EqualLogic Multipathing Extension Module (MEM) installation utility can be used to set up multipath iSCSI even if you don’t have the vSphere Enterprise or Enterprise Plus licensing that allows for third-party storage plugins.

Get the script

Log into support.equallogic.com and go to downloads/VMware Integration. Click on Version 1.0.0 under “EqualLogic Multipathing Extension Module for VMware® vSphere” then click “EqualLogic Multipathing Extension Module.” Note: don’t bother downloading the user manual as it’s included in the .zip file.

The file you want is setup.pl, found in the folder of the downloaded zip. The MEM manual suggests using the vMA, but you could also use the VMware vSphere CLI (the Perl-based remote CLI, not the PowerShell one). The vMA is a good appliance to have; you can get it from here. If you are not familiar with the vMA, read the guide also available at that link.

Put the script where you can run it

To copy setup.pl to the vMA, use an SCP utility such as WinSCP. Using WinSCP, connect to the vMA using the IP address assigned to the appliance. The default user name for the vMA appliance is “vi-admin”, and the appliance makes you set a password the first time you turn it on. Drag and drop setup.pl into the vMA (by default you are connected to the home directory of vi-admin).

While still in WinSCP, once the script is copied, select it, right-click, and choose Properties. You’ll want to set the permissions to include execute or you will not be able to run it – one way is to change the octal value to 0777.
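If you prefer the command line, the same permission change can be done from the vMA shell itself; a quick sketch (0755 is enough if you don’t need the world-write that 0777 grants):

```shell
# Make setup.pl executable from the vMA shell
# (0755 grants execute without world-write access)
chmod 0755 ~/setup.pl

# Verify: the mode column should now show -rwxr-xr-x
ls -l ~/setup.pl
```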

Run the script

Once setup.pl is copied to the vMA appliance and the permissions have been changed, connect to the vMA using an SSH client such as PuTTY, or just use the vSphere Client and open the console of the vMA.

Note: If you’re using the VMware CLI, just copy the setup.pl script to a directory on the machine with the CLI installed and run it there. My examples will be using the vMA but it works the same.

Put your selected host into maintenance mode (if you only have one host, use either the local CLI or run the vMA in VMware Workstation, which is what I do). Make a note of the NICs you’ll be configuring for iSCSI multipath as well as the IP addresses you’ll be using (you want one per NIC). Execute the setup script using:
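The maintenance-mode step can also be scripted from the vMA with vicfg-hostops; a sketch, using the example host address from later in this post:

```shell
# Enter maintenance mode before reconfiguring iSCSI networking
vicfg-hostops --server 192.168.176.201 --operation enter

# ...run setup.pl, then take the host back out of maintenance mode
vicfg-hostops --server 192.168.176.201 --operation exit
```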

[vi-admin@vMA ~]$ ./setup.pl

If you don’t include any parameters, you will get a list of the available options.

To start the configuration, add the parameter for the server to be configured (hostname or IP) and the script will walk you through the rest:

[vi-admin@vMA ~]$ ./setup.pl --configure --server=192.168.176.201
Use of the vMA fastpass is recommended, see the ‘vifp’ command for more information.
You must provide the username and password for the server.
Enter username: root
Enter password:

Do you wish to use a standard vSwitch or a vNetwork Distributed Switch (vSwitch/vDS) [vSwitch]:

I left the default, vSwitch. I love the vDS, but let’s keep this simple, shall we?

Found existing switches vSwitch0. vSwitch Name [vSwitchISCSI]:

When in doubt, leave the default. Nice useful name, too.

Which nics do you wish to use for iSCSI traffic? [vmnic1]: vmnic1,vmnic2

Here is where you enter the NICs, using commas to separate them. Note the script lists the first unused NIC by default.

IP address for vmknic using nic vmnic1: 10.10.10.2
IP address for vmknic using nic vmnic2: 10.10.10.3
Netmask for all vmknics [255.255.255.0]:

Remember, you want all the iSCSI traffic on the same broadcast network and separate from other traffic, for both performance and security. You also do not want iSCSI traffic on the same broadcast domain as any other VMkernel traffic. VMware has 3 types of VMkernel ports (management, FT, vMotion) but 4 types of VMkernel traffic (the same plus iSCSI). iSCSI traffic will use whatever VMkernel port it can to access the iSCSI targets, which may not be this fancy multipath setup we are building here!

What MTU do you wish to use for iSCSI vSwitches and vmknics? Before increasing the MTU, verify the setting is supported by your NICs and network switches. [1500]: 9000

Yeah yeah, leave the defaults – except here. Jumbo frames are the default on the EqualLogics, just not on VMware. Don’t forget to set the MTU on your physical switches as well.
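Afterwards you can sanity-check that jumbo frames actually make it end to end with vmkping from the ESX console; the group IP here is the example from this walkthrough:

```shell
# -s 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP header overhead
# -d sets the don't-fragment bit, so the ping fails if any hop can't pass jumbo frames
vmkping -d -s 8972 10.10.10.10
```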

What prefix should be used when creating VMKernel Portgroups? [iSCSI]:

De-fault.

What PS Group IP address would you like to add as a Send Target discovery address (optional)?: 10.10.10.10

Saves you from that 5-second step in the GUI!

Configuring iSCSI networking with following settings:
Using a standard vSwitch ‘vSwitchISCSI’
Using NICs ‘vmnic1,vmnic2’
Using IP addresses ‘10.10.10.2,10.10.10.3’
Using netmask ‘255.255.255.0’
Using MTU ‘9000’
Using prefix ‘iSCSI’ for VMKernel Portgroups
Using SW iSCSI initiator
Adding PS Series Group IP ‘10.10.10.10’ to Send Targets discovery list

The following command line can be used to perform this configuration:

/home/vi-admin/setup.pl --configure --server=192.168.176.201 --vswitch=vSwitchISCSI --mtu=9000 --nics=vmnic1,vmnic2 --ips=10.10.10.2,10.10.10.3 --netmask=255.255.255.0 --vmkernel=iSCSI --nohwiscsi --groupip=10.10.10.10

Nice summary, and you can copy that command for your documentation, edit it a little, then run it on any other server. Nice touch, E.

Great post. Do you have test results comparing VMware native Round Robin with the EqualLogic MEM? What I heard was that the EqualLogic MEM does true load balancing across all active paths (active/active), while Round Robin only uses one path at any given time.

With this setup, how many paths does your datastore on the EqualLogic show? I only see two paths per volume, one from the first iSCSI NIC on each ESX host, but I do not see the second iSCSI NIC from either host making a connection on the EqualLogic. I have a two-member EqualLogic group with two volumes as extents for one datastore. The ESX storage configuration also shows only two paths, and I did enable Round Robin and rescanned. Also restarted the host to no avail. When I go into Manage Paths on the datastore, I only see one path listed. I’ll try changing the second host, which is still set to the “Fixed” path policy. Thanks

I double-checked the settings following TR1049 “Configure vSphere SW iSCSI with PS Series SAN v1 2”, and both NICs on each server have their own VMkernel with different IPs, one set to unused and the other active. When this was set up initially, apparently both VMkernels had the second NIC set to unused. I already corrected this and rescanned, but it shows no difference. The NICs not having an active connection (when viewing volumes on the EqualLogic) are the second NICs on both hosts. ESX 4.1 U2 with the latest updates. The EqualLogic (EL) also has the latest firmware. Under the iSCSI software adapter in ESX, the Dynamic Discovery tab lists the EL group’s IP, and Static Discovery shows 2 targets, namely the EL group’s IP with the two volumes’ IQNs.

Can you put screenshots up somewhere? It sounds like it wasn’t configured right, and a rescan should have shown the extra paths after the fix. Are both VMkernels on the same switch, same VLAN ID, same IP domain, same subnet mask? Jumbo frames on or off? If this isn’t prod, can we disable the NIC or delete the VMkernel currently in use and see if the other can connect?
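For troubleshooting like this, the path state can also be checked from the CLI on each host, which makes comparing the two hosts easier; a sketch for ESX/ESXi 4.x:

```shell
# List every storage path and its state (active/dead) on ESX/ESXi 4.x
esxcfg-mpath -l

# Show which path selection policy (Fixed vs Round Robin) each device is using
esxcli nmp device list
```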

So going back to VMware licensing: if the Enterprise edition is required to leverage the Storage APIs for Array Integration and Multipathing, is VMware truly allowing use of the MEM plugin? If so, does VMware support this type of configuration? It seems odd to me that you could install a third-party plugin without being licensed properly.

One thing to remember is that without MEM you need to change the pathing for all EQL volumes to VMware Round Robin AND change the IOs per path from 1000 to 3. This will enhance performance. This script will do that for you:

Solution Title: HOWTO: Change IOPs value / Round Robin for MPIO in ESXi v5.x

Solution Details: This is a script you can run to set all EQL volumes to Round Robin and set the IOPs value to 3.
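The KB script itself isn’t reproduced in this comment; a minimal sketch of what it does on ESXi 5.x, assuming your EqualLogic volumes show up with the usual naa.6090a0 device prefix:

```shell
# For each EqualLogic volume (EQL NAA IDs start with naa.6090a0),
# set the path policy to Round Robin and lower the IOPS-per-path limit to 3
for dev in $(esxcli storage nmp device list | grep -oE '^naa\.6090a0[0-9a-f]+'); do
  esxcli storage nmp device set --device "$dev" --psp VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set --device "$dev" --type iops --iops 3
done
```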