I ran into this gotcha on site and thought I’d share it on the blog for others who may be affected and unaware of it. In this particular case, an environment that had been upgraded from vSphere 4 to vSphere 5 was experiencing slower than expected Storage vMotion performance. Examining the environment gave me the information I needed to make suggestions that ultimately resolved the issue. This post covers a brief intro to VMFS block sizes, along with the recommendations I made to increase Storage vMotion performance and leverage VAAI.

A Brief Introduction to VMFS Block Sizes

VMFS3 block size has historically been debated on several fronts. I remember first working with vSphere and setting the block size of each datastore based on the largest file it might ever need to hold, without really understanding what the potential design impacts were. So for an 800 GB LUN I might have chosen a 4 MB block size, while formatting a 1500 GB LUN with an 8 MB block size.

I ran into complications with virtual machines whose disks were spread across different datastores while using a snapshot-based backup method in vSphere 4.0. It boiled down to the configuration file datastore, which is where the snapshot files would go, having a smaller block size than some of the data VMDKs sitting on larger block size datastores. I later ended up choosing a unified block size of 8 MB for all of my datastores.

It also seemed that the VMware community had reached a consensus that the “best” way forward was simply to format all VMFS3 datastores with an 8 MB block size, as it had no real influence on performance and also avoided these snapshot snafus. I would imagine that a lot of folks are using 8 MB across the board.

Fast Forward to VMFS5

With VMFS5 there is no longer a choice in block size. All newly created VMFS5 datastores are formatted with a 1 MB block size. I’m quite happy this was done, as I really saw no solid advantage to having a choice and it often confused many administrators (myself included!).

The one gotcha comes when upgrading from VMFS3 to VMFS5: the old block size is kept. And that’s just one of the downsides to upgrading in place.

So in this case, we have a number of “legacy” upgraded VMFS5 datastores at 8 MB and all new VMFS5 datastores are at 1 MB. As more new VMFS5 datastores are added to the cluster, the number of mismatched datastores increases. Prior to the upgrade, all datastore block sizes matched (they were all VMFS3 with an 8 MB block size). Why exactly does this matter for Storage vMotion?
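To make the mismatch concrete, here’s a quick Python sketch that flags datastore pairs whose block sizes differ. The inventory below is entirely made up for illustration; in a real environment you’d pull each datastore’s block size from the host’s view of the storage (or a CLI tool such as vmkfstools).

```python
# Hypothetical inventory of datastore block sizes in MB. These names and
# values are invented for the example; collect real ones from the host.
datastores = {
    "legacy-vmfs5-01": 8,   # upgraded from VMFS3, kept its 8 MB block size
    "legacy-vmfs5-02": 8,
    "new-vmfs5-01": 1,      # freshly created VMFS5, always 1 MB
    "new-vmfs5-02": 1,
}

def mismatched_pairs(inventory):
    """Return (source, destination) pairs whose block sizes differ.

    A Storage vMotion between any such pair falls back to the legacy
    FSDM datamover instead of FS3DM (and therefore cannot use VAAI).
    """
    names = sorted(inventory)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if inventory[a] != inventory[b]
    ]

for src, dst in mismatched_pairs(datastores):
    print(f"{src} ({datastores[src]} MB) <-> {dst} ({datastores[dst]} MB)")
```

Note how every new 1 MB datastore added to this cluster creates a mismatched pair with every legacy 8 MB datastore, so the problem compounds over time.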

Storage vMotion Loves Unified Block Sizes

Storage vMotion has two engines available for accomplishing its task: FSDM and FS3DM. For reference, I typically turn to this older VMware KB article on reclaiming null blocks:

When a different blocksize is used for the destination, the legacy datamover (FSDM) is used. When the blocksize is equal, the new datamover (FS3DM) is used. FS3DM decides if it will use VAAI or just the software component.

This was the part that was overlooked: Storage vMotion was using the legacy FSDM datamover due to the mismatched block sizes (a situation that only arose after the vSphere 5 upgrade, as new datastores were added). This seems like a common occurrence, as a datastore’s block size isn’t exactly staring you in the face – you have to drill down into the host’s view of the storage to see it.
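The selection logic described in the KB excerpt can be sketched in a few lines of Python. This is just a model of the decision as the KB describes it, not VMware’s actual implementation; the function name and return labels are my own.

```python
def pick_datamover(src_block_mb, dst_block_mb, vaai_capable=False):
    """Model of the datamover selection described in the KB excerpt.

    - Different block sizes -> legacy FSDM datamover.
    - Equal block sizes     -> FS3DM, which then offloads to the array
      via VAAI when supported, or falls back to its software component.
    """
    if src_block_mb != dst_block_mb:
        return "FSDM (legacy)"
    return "FS3DM (hardware/VAAI)" if vaai_capable else "FS3DM (software)"

# Upgraded 8 MB datastore -> new 1 MB datastore: legacy path, no VAAI,
# even though the array itself is VAAI capable.
print(pick_datamover(8, 1, vaai_capable=True))   # FSDM (legacy)

# Matched 1 MB datastores on a VAAI-capable array: offloaded copy.
print(pick_datamover(1, 1, vaai_capable=True))   # FS3DM (hardware/VAAI)
```

The first case is exactly what was happening in this environment: the array supported VAAI, but the block size mismatch forced the legacy datamover before VAAI was ever considered.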

This problem will only grow as new VMFS5 datastores are introduced as storage targets. This highlights the need to weed out the datastores with mismatched block sizes, especially if you want to leverage VAAI, as it requires matching block sizes from source to target.

Thoughts

Sometimes I think it’s worth the trouble of getting back to basics. As I see more and more environments make the move to vSphere 5, I would be willing to bet that many of them are still using mismatched block sizes. The migration process is a great opportunity to wipe the slate clean of any block size mismatches and create new VMFS5 datastores that come with all the bells and whistles.

I understand that this isn’t possible for all environments, but you should definitely be aware of the caveats – unless you were already using a 1 MB block size with VMFS3 (which seems unlikely).

Have you gone through your environment to see how the block sizes are configured and want to share? Did you end up destroying your old VMFS3 datastores and creating fresh ones with VMFS5 during a vSphere 5 upgrade? Please feel free to comment below!

I took the opportunity to migrate our Exchange mailbox VMs to VMFS5 datastores after the 5.0 U1 upgrade.
The majority of our other datastores are 1 MB, but migrating them to new LUNs with VMFS5 gives us the opportunity to do some other array housekeeping too.

Great article! I’ve seen this exact same thing in our dev lab when Storage vMotioning VMs from an older VMFS5 cluster that had been upgraded from VMFS3 to a newer VMFS5 one, even when both storage subsystems support VAAI. Having the block sizes matched makes quite a remarkable difference.

One advantage of keeping one VMFS3 LUN around is that the older Storage vMotion engine can re-thin your thinly provisioned disks when moving between datastores. I haven’t seen this happen with new VMFS5 volumes due to the standard block size. So it doesn’t hurt to keep one legacy volume kicking about, just in case.