
vSphere 6.5 UNMAP Improvements With DellEMC XtremIO

Two years ago, my manager at the time and I visited the VMware HQ in Palo Alto. We had one agenda item in mind: the UNMAP functionality found in vSphere. The title of the presentation I gave was “small problems lead to big problems”, and it had a photo similar to the one above. The point we were trying to make was that the more customers use AFAs to which VMware does not release back the unused capacity, the bigger the problem gets, because on a $-per-GB basis, each GB matters… They got the point, and we ended the conversation with a promise to ship them an XtremIO X-Brick to develop automated UNMAP on XtremIO, something the greater good would benefit from as well, not just XtremIO.

If you are new to the UNMAP “stuff”, I encourage you to read the multiple posts I wrote on the matter.

VMware has just released vSphere 6.5, which includes enhanced UNMAP functionality at both the volume (datastore) level and the in-guest level. Let’s examine both.

Volume (datastore) level

Using the UI, you can now enable automated UNMAP at the datastore level and set it to either “low” or “none”. “Low” basically means that once in a 12-hour interval, the ESXi crawler will run space reclamation at the datastore level. However, you can set different thresholds using the ESXi CLI, as shown below.
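For reference, this is roughly what the CLI side looks like. A sketch, assuming a vSphere 6.5 host and a hypothetical VMFS 6 datastore labeled “DS01”; substitute your own datastore label:

```shell
# Show the current automatic UNMAP (reclamation) settings for a datastore:
esxcli storage vmfs reclaim config get --volume-label=DS01

# Change the reclamation priority (the UI only exposes "low" and "none"):
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=low
```

These commands run on the ESXi host itself (SSH or the ESXi shell), not from the vSphere client.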

Why can’t you set it to “high” in the UI? I assume that since space reclamation is a relatively heavy I/O operation, VMware wants you to ensure your storage array can actually cope with the load, and the CLI is less visible than the UI itself. Note that you can still run an ad hoc space reclamation at the datastore level like you could in vSphere 5.5/6.0; running it manually will finish quicker but is the heaviest option.

If you DO choose to run it manually, the best practice for XtremIO is to set the chunk size (reclaim unit) used for the reclamation to 20000, as seen below.
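A sketch of the manual, ad hoc run from the ESXi CLI, again assuming a hypothetical datastore labeled “DS01”:

```shell
# Manually reclaim dead space on a VMFS datastore.
# --reclaim-unit is the number of VMFS blocks unmapped per iteration
# (20000 per the XtremIO guidance above).
esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=20000
```

Because this walks the whole datastore in one pass, schedule it for a quiet window if the array is busy.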

In-Guest space reclamation

In vSphere 6.0, you could already run the “defrag” (optimization) task at the Windows level; Windows Server 2012, 2012 R2 and 2016 were supported as long as you set the VMDK to “thin” and enabled the option at the ESXi host
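To the best of my knowledge, the host-level switch in question is the /VMFS3/EnableBlockDelete advanced setting; a sketch of flipping it from the ESXi CLI:

```shell
# Allow the ESXi host to process UNMAPs issued from inside guests
# (/VMFS3/EnableBlockDelete: 1 = enabled, 0 = disabled).
esxcli system settings advanced set --option /VMFS3/EnableBlockDelete --int-value 1

# Verify the current value:
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
```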

and ran the latest VM hardware version, which in vSphere 6.0 was “11” and in vSphere 6.5 is “13”, as seen below.

So, what’s new?

Linux! In the past, VMware didn’t support the SPC-4 standard, which is required to enable space reclamation inside the Linux guest OS. Now, with vSphere 6.5, SPC-4 is fully supported, so you can run space reclamation inside the Linux OS either manually from the CLI or via a cron job. To check that the Linux OS does indeed support space reclamation, run the “sg_vpd” command as seen below and look for the “LBPU: 1” output.
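A sketch of the check, assuming the sg3_utils package is installed and the virtual disk shows up as the hypothetical device /dev/sdb:

```shell
# Query the Logical Block Provisioning VPD page of the virtual disk:
sg_vpd --page=lbpv /dev/sdb
# In the output, look for a line like:
#   Unmap command supported (LBPU): 1
```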

Running the “sg_inq” command will show whether SPC-4 is enabled at the Linux OS level.
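Again a sketch against the same hypothetical device:

```shell
# Run a standard SCSI INQUIRY; the version field reports which SCSI
# standard the device claims conformance to:
sg_inq /dev/sdb
# Look for a version line similar to:
#   version=0x06  [SPC-4]
```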

In order to run the space reclamation inside the Linux guest OS, simply run the “rm” command. Yes, it’s that simple.
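A minimal sketch, assuming the guest filesystem is mounted with the “discard” option at a hypothetical mount point /mnt/data (without “discard”, an explicit fstrim run, manual or from cron, achieves the same):

```shell
# With the filesystem mounted with "discard", deleting a file sends
# UNMAPs for the freed blocks automatically:
rm /mnt/data/old-scratch-file.bin

# Without "discard", trim the free space explicitly instead:
fstrim -v /mnt/data
```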

You can see the entire flow in the following demo that was recorded by Tomer Nahum from our QA team, thanks Tomer!

P.S

Note that at the time of writing this post, we have identified an issue with Windows guest OS space reclamation; AFAIK, it doesn’t work with many (all?) of the arrays out there, and we are working with VMware to resolve it. Also note that you must use the full web client (NOT the H5 client) when formatting the VMFS 6 datastore; it appears that when using the embedded H5 client, the volume is not aligned with the right offset.

naa.xxxx01 was created via the web client; naa.xxxx02 was created via the embedded H5 client.