esxcli

One of the major storage enhancements introduced in vSphere 5.1 as part of the new I/O Device Management (IODM) framework was the addition of SMART (Self Monitoring, Analysis And Reporting Technology) data for monitoring FC, FCoE, iSCSI and SAS protocol statistics, which is especially useful for monitoring the health of an SSD device. Historically, there was no public vSphere API to consume this information and customers had to rely on ESXCLI, which is not very friendly from a programmatic standpoint.

One of the nice enhancements that was introduced in vSAN 6.6 from an API standpoint is that you can now access SMART data using the vSAN Management 6.6 API. The other really cool thing about this enhancement is that although this API was added under the vSAN Management API, you do not actually have to be using vSAN to be able to use this new API!

There are two methods in which you can access the SMART data:

vCenter Server - When connecting to a vCenter Server, you can access the VsanQueryVcClusterSmartStatsSummary() method, which is available as part of the VsanVcClusterHealthSystem, and you simply provide it the name of a vSphere Cluster.

ESXi Host - When connecting directly to an ESXi host, you can access the VsanHostQuerySmartStats() method which is available as part of the HostVsanHealthSystem.

Sometimes it is the small updates that improve an existing feature or enhance the current user experience that I most appreciate in a new vSphere release. One area that I recently came across while working with vSphere 6.5 is just how easy it is now to retrieve the ESXi installation date, which can be useful for troubleshooting or auditing purposes. This previously required you to decode the ESXi UUID to reconstruct the original installation date, as outlined in this VMware KB 2144905 article.

With ESXi 6.5, you can now quickly retrieve the ESXi installation date simply by using this new ESXCLI command:

esxcli system stats installtime get

Note: ESXCLI can be executed either locally within the ESXi Shell or remotely using vCLI or PowerCLI.

In case that was not enough, the Engineer who added this capability was also kind enough to add a native vSphere API to retrieve the ESXi installation date programmatically. Under the existing HostImageConfigManager there is now a new vSphere 6.5 API called installDate() which returns the installation date in UTC.

To demonstrate this new vSphere API, I have created a small PowerCLI function called Get-ESXInstallDate which can be downloaded from here.

Here is an example of retrieving the installation date for a specific ESXi host:

One of my all-time favorite features of VSAN is still the ability to "bootstrap" a VSAN Datastore starting with just a single ESXi node. This is especially useful if you would like to bootstrap vCenter Server on top of VSAN out of the box without requiring additional VMFS/NFS storage. This bootstrap method has been possible and supported since the very first release of VSAN, which I have written about in great detail here and here.

With the release of VSAN 6.1 (vSphere 6.0 Update 1), an all-flash VSAN configuration was also now possible in addition to a hybrid configuration which uses a combination of SSDs and MDs. One observation that was made by a few folks including myself was that you could not configure an all-flash diskgroup using ESXCLI which was one of the methods that could be used to bootstrap VSAN. If you tried to create an all-flash diskgroup using ESXCLI, you would get the following error:

Unable to add device: Can not create all-flash disk group: current Virtual SAN license does not support all-flash

This turned out to be a bug and the workaround at the time was to add the ESXi host to a vCenter Server, which would then allow you to create the all-flash diskgroup. This usually was not a problem, but for those wanting to bootstrap VSAN, it would require an already running vCenter Server instance.

While setting up my new VSAN 6.2 home lab last night, I found that this issue has actually been resolved in the upcoming release of VSAN 6.2 (vSphere 6.0 Update 2) and you can now create an all-flash diskgroup using ESXCLI, including doing so from the vSphere API as well. For those interested, you can find the list of commands required to bootstrap an all-flash VSAN configuration below:

Step 1 - You will need to change the default VM Storage Policy on VSAN to allow "Force Provisioning" since you only have a single node to start with (we will change this back to default once you have deployed vCenter Server):
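The exact policy string can vary between releases, but the commands should look roughly like the following for the vdisk and vmnamespace policy classes:

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"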

Step 2 - Ensure that you enable the VSAN traffic type on a specific VMkernel interface. In this example, I am using vmk0. Run the following command:

esxcli vsan network ipv4 add -i vmk0

Step 3 - Create a new VSAN Cluster by running the following command:

esxcli vsan cluster new

Step 4 - Run the following command to identify the SSD devices you plan to use for your "Caching" and "Capacity" Tier. Specifically, you will need to make a note of the device you plan to use for "Capacity" as we will need to tag that device in the next step.

vdq -q

Step 5 - To tag the specific SSD device as "Capacity", run the following command and substitute the ID of the SSD device from the previous step:
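The tagging command should look roughly like this, substituting the capacity device ID you noted in Step 4:

esxcli vsan storage tag add -d [CAPACITY-DEVICE-ID] -t capacityFlash

Step 6 - Create the all-flash diskgroup by specifying the "Caching" device (-s) and the newly tagged "Capacity" device (-d). The device IDs below are placeholders for the IDs noted in Step 4:

esxcli vsan storage add -s [CACHE-DEVICE-ID] -d [CAPACITY-DEVICE-ID]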

Step 7 - If everything was configured correctly, we can view our new all-flash diskgroup by running the following command:

esxcli vsan storage list

At this point, you are now ready to provision vCenter Server on top of the VSAN Datastore and once that is set up, you can then use the vSphere Web Client to add the remaining VSAN nodes as you normally would.

Note: If you do NOT plan on running a *single* VSAN node (not recommended, but it is possible), then remember to change the VM Storage Policy settings back to their defaults once you have set up your vCenter Server by running the following ESXCLI commands on that first ESXi node used to bootstrap VSAN:
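The revert should look roughly like the following, simply dropping the forceProvisioning rule:

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1))"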

VSAN 6.0 includes a large number of new enhancements and capabilities that I am sure many of you are excited to try out in your lab. One of the challenges with running VSAN in a home lab environment (non-Nested ESXi) is trying to find a platform that is both functional and cost effective. Some of the most popular platforms that I have seen customers use for running VSAN in their home labs are the Intel NUC and the Apple Mac Mini. Putting aside the memory constraints in these platforms, the number of internal disk slots for a disk drive is usually limited to two. This would give you just enough to meet the minimal requirement for VSAN by having at least a single SSD and MD.

If you wanted to scale up and add additional drives for either capacity purposes or testing out new configurations, you are pretty much out of luck, right? Well, not necessarily. During the development of VSAN 6.0, I came across a cool little nugget from one of the VSAN Engineers: USB-based disks can be claimed by VSAN, which can be quite helpful for testing in a lab environment, especially on the hardware platforms that I mentioned earlier.

For a VSAN home lab, cheap consumer USB-based disks, several TBs of which can be purchased for less than a hundred dollars or so, combined with USB 3.0 connectivity are a pretty cost-effective way to enhance hardware platforms like the Apple Mac Mini and Intel NUC.

Disclaimer: This is not officially supported by VMware and should not be used in Production or for evaluation of VSAN, especially when it comes to performance or expected behavior, as this is not how the product is designed to be used. Please use supported hardware found on the VMware VSAN HCL for official testing or evaluations.

Below are the instructions on how to enable USB-based disks to be claimable by VSAN.

Step 1 - Disable the USB Arbitrator service so that USB devices can be seen by the ESXi host by running the following two commands in the ESXi Shell:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

Step 2 - Enable the following ESXi Advanced Setting (/VSAN/AllowUsbDisks) to allow USB disks to be claimed by VSAN by running the following command in the ESXi Shell:

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1

Step 3 - Connect your USB-based disks to your ESXi host (this can actually be done prior) and you can verify that they are seen by running the following command in the ESXi Shell:

vdq -q

Step 4 - If you are bootstrapping vCenter Server onto the VSAN Datastore, then you can create a VSAN Cluster by running "esxcli vsan cluster new" and then contribute the storage by adding the SSD device and the respective USB-based disks using the information from the previous step in the ESXi Shell:
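The storage command should look roughly like the following, repeating the -d option for each USB-based disk you want to add (the device IDs below are placeholders):

esxcli vsan storage add -s [SSD-DEVICE-ID] -d [USB-DEVICE-ID-1] -d [USB-DEVICE-ID-2]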

If we take a look at the VSAN configuration in the vSphere Web Client, we can see that we now have 4 USB-based disks contributing storage to the VSAN Disk Group. In this particular configuration, I was using my Mac Mini which has 4 x USB 3.0 devices connected and providing the "MD" disks, along with one of the internal drives which is an SSD. Ideally, you would probably want to boot ESXi from a USB device and then claim one of the internal drives along with 3 other USB devices for the most optimal configuration.

As a bonus, there is one other nugget that I discovered while testing out the USB-based disks for VSAN 6.0: another hidden option to support iSCSI-based disks with VSAN. You will need to enable the option called /VSAN/AllowISCSIDisks using the same method as the USB-based disk option (the command is shown below). This is not something I have personally tested, so YMMV, but I suspect it will allow VSAN to claim an iSCSI device that has been connected to an ESXi host and let it contribute to a VSAN Disk Group, which is another way of providing additional capacity to VSAN on platforms that have a restricted number of disk slots. Remember, neither of these solutions should be used beyond home labs and they are not officially supported by VMware, so do not bother trying to do anything fancy or running performance tests, you are just going to let yourself down and not see the full potential of VSAN 🙂
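The command should look just like the USB example earlier, just swapping in the iSCSI option name:

esxcli system settings advanced set -o /VSAN/AllowISCSIDisks -i 1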

In vSphere 5.1, one of the major storage enhancements that was part of the new I/O Device Management (IODM) framework was the addition of SMART (Self Monitoring, Analysis And Reporting Technology) data for monitoring FC, FCoE, iSCSI and SAS protocol statistics, which is especially useful for monitoring the health of an SSD device. The SMART data is provided through a SMART daemon which lives inside of ESXi, runs every 30 minutes to gather statistics and diagnostic information from the underlying storage devices, and provides the information through the following ESXCLI command:

esxcli storage core device smart get -d [DEVICE]

If you would like to learn more about IODM and SMART, be sure to check out Cormac Hogan's in-depth article here.

The default polling interval for the SMART daemon in vSphere 5.1 was not configurable and 30 minutes was the system default. For most customers, the out of the box configuration should be sufficient. However, for customers who wish to have greater flexibility in the polling frequency, the default can now be adjusted in vSphere 6.0. The smartd process now includes a new -i option which specifies the polling interval.

If you wish to change the default, you will need to modify the /etc/init.d/smartd init script and include the interval option. One issue that I have found is that changes to the init script do not persist across reboots, as modifications to these files are not meant to be performed by users. In the case of adjusting the polling interval, we need to add the additional option to the smartd startup.

We can still accomplish this by adding a few lines to /etc/rc.local.d/local.sh which make the necessary adjustments and restart the smartd process:
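The exact lines depend on how the smartd init script is structured in your ESXi build, but the general idea is something along these lines (the sed pattern is only an illustration and needs to be matched against your init script):

/etc/init.d/smartd stop
sed -i 's/smartd /smartd -i 35 /' /etc/init.d/smartd
/etc/init.d/smartd start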

Note: The -i option is only visible when the smartd process is not running.

If you wish to see the change take effect immediately, you can run the /etc/rc.local.d/local.sh script once; otherwise this will happen automatically when ESXi boots up. If we perform a process lookup using "ps", we can see that our smartd is now configured to poll every 35 minutes instead of the default 30.
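A quick way to check is something like:

ps | grep smartd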

There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as consumer grade SSDs (eMLC) continue to drop in price and the endurance levels of these devices last much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, last year at VMworld the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration proving this was not only cost effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.

One of the coolest features in VSAN 6.0 is the support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek at VMware Partner Exchange a couple of weeks back on what they were able to accomplish with VSAN 6.0 using an All-Flash configuration. They achieved an impressive 2 Million IOPS; for more details take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6 and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.

To easily demonstrate this new feature, I will be using Nested ESXi, but the process to configure an All-Flash VSAN configuration is exactly the same for a real physical hardware setup. Nested ESXi is a great learning tool to understand and walk through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of 3 Nested ESXi hosts and they should be configured with at least 6GB of memory or more when working with VSAN 6.0.

Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.

In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get it working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were MDs by creating claim rules using ESXCLI. Though this method worked, VSAN itself assumed the capacity tier was made up of regular magnetic disks and hence its operations were not really optimized for anything but magnetic disks. With VSAN 6.0, this is now different and VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. In VSAN 6.0, there is a new property called IsCapacityFlash that is exposed and allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.

Step 1 - We can easily view the IsCapacityFlash property by using our handy vdq VSAN utility which has now been enhanced to include a few more properties. Run the following command to view your disks:

vdq -q

From the screenshot above, we can see we have two disks eligible for VSAN and that they are both SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.

Step 2 - Identify the SSD device(s) you wish to use for your capacity tier. A very simple way to do this is by using the following ESXCLI snippet:
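One simple option is to grep the core device list output for the display name and size fields (adjust the pattern to taste):

esxcli storage core device list | grep -iE 'Display Name|Size:'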

We can quickly get a list of the devices and their ID along with their disk capacity. In the example above, I will be using the 8GB device for SSD capacity

Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device and all of them provide the same end result. Personally, I would go with Option 2 as it is much simpler to remember than the claim rule syntax 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 would be great as you can perform this step from a single location.

Option 1: ESXCLI Claim Rules

Run the following two ESXCLI commands for each device you wish to mark for SSD capacity:
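These should look roughly like the following, substituting your own device ID:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d [DEVICE-ID] -o enable_capacity_flash
esxcli storage core claiming reclaim -d [DEVICE-ID]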

Step 4 - To verify the changes took effect, we can re-run the vdq -q command and we should now see our device(s) marked for SSD capacity.

Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi host into the cluster or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN, for more details take a look here.

One thing that I found interesting is that in the vSphere Web Client, when setting up an All-Flash VSAN configuration, the SSD(s) used for capacity will still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 GAs.

If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device ID, we can see that both devices are in fact SSDs.

Hopefully for those of you interested in an All-Flash VSAN configuration, you can now quickly get a feel for it by running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.
