I am investigating some performance issues myself and have noticed that VMs with fixed-size VHDX files are experiencing significant performance degradation. Using SQLIO as the test tool, 64K random blocks are written at 40-60 MB/s on fixed disks but at 400-600 MB/s on dynamic disks. I can also reproduce this on demand by converting and compacting the fixed disks. I am trying to find out why we are experiencing this problem.
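For anyone who wants to sanity-check numbers like these without SQLIO, a rough analogue of the 64K random-write test can be sketched in Python. This is only an illustration of the methodology (file path, file size, and write count are arbitrary placeholders, and a plain file inside the guest is not the same as a raw SQLIO run):

```python
import os
import random
import tempfile
import time

BLOCK = 64 * 1024             # 64K blocks, as in the SQLIO test
FILE_SIZE = 64 * 1024 * 1024  # small 64 MB target file, for illustration only
WRITES = 256                  # number of random-offset writes (16 MB total)

def random_write_throughput(path):
    """Write 64K blocks at random aligned offsets; return MB/s."""
    # Preallocate the target file so every write lands on existing blocks.
    with open(path, "wb") as f:
        f.truncate(FILE_SIZE)
    block = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for _ in range(WRITES):
            # Pick a random 64K-aligned offset inside the file.
            offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
            f.seek(offset)
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before the timer stops
    elapsed = time.perf_counter() - start
    return (WRITES * BLOCK) / elapsed / (1024 * 1024)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        print(f"random 64K writes: {random_write_throughput(path):.1f} MB/s")
    finally:
        os.remove(path)
```

Numbers from a sketch like this will not match SQLIO exactly, but running the same script inside a fixed-disk VM and a dynamic-disk VM is a quick way to confirm whether the gap follows the VHDX type.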

I would also add that the behavior has been the same on both an iSCSI CSV-based cluster and VMs hosted on an SMB 3 share. Any information on this would be great. Thanks.

A Cluster Shared Volume (CSV) is a mount point, not the same volume as C:, so checking the queue depth for C: tells you nothing about your iSCSI device. You'll have to find the physical disk that corresponds to the CSV, not to C:, if you want that data.

I would suspect that your iSCSI disk isn't giving you the throughput you need.

According to the Windows Server Performance Team blog, fixed VHDs perform better than dynamic VHDs in most scenarios by roughly 10% to 15%, with the exception of 4K writes, where fixed VHDs perform significantly better.
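The fixed-versus-dynamic difference ultimately comes down to allocation: a fixed VHDX has every block allocated up front, while a dynamic VHDX declares its full logical size but allocates blocks on first write. A rough file-level analogue of that distinction (Python; this models the allocation behavior with an ordinary sparse file, not VHDX itself, and `st_blocks` is POSIX-only):

```python
import os
import tempfile

SIZE = 8 * 1024 * 1024  # 8 MB demo files

tmpdir = tempfile.mkdtemp()
fixed = os.path.join(tmpdir, "fixed.bin")
dynamic = os.path.join(tmpdir, "dynamic.bin")

# "Fixed" analogue: all blocks written, and therefore allocated, up front.
with open(fixed, "wb") as f:
    f.write(b"\0" * SIZE)

# "Dynamic" analogue: logical size declared; blocks allocated on first write.
with open(dynamic, "wb") as f:
    f.truncate(SIZE)

# Both files report the same logical size...
assert os.path.getsize(fixed) == os.path.getsize(dynamic) == SIZE

# ...but the sparse file typically occupies fewer on-disk blocks until it is
# actually written to (st_blocks is absent on Windows, hence the getattr).
blocks = getattr(os.stat(dynamic), "st_blocks", None)
print("logical sizes equal; sparse file blocks allocated:", blocks)
```

The usual expectation is that paying the allocation cost up front (fixed) avoids per-write allocation work later, which is why results like the ones reported in this thread, where dynamic disks win by 10x, suggest something else is wrong in the storage path rather than the VHDX format itself.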


Did you ever find a resolution to this issue? I'm having a similar problem: after converting from VMware to Hyper-V, my disk performance decreased severely (with the same backend storage), and after switching from dynamic to fixed disks it got even worse.
