General VMM Feedback

Do you have an idea or suggestion based on your experience with VMM? We would love to hear it! Please take a few minutes to submit your idea in one of the forums available on the right, or vote up an idea submitted by another VMM customer. All of the feedback you share in these forums is monitored and reviewed by the Microsoft engineering teams responsible for building VMM.

This forum (General Feedback) is used for any broad feedback related to VMM. Be specific in your feedback: the more specific it is, the easier and quicker it is for us to review. Please be concise! For more information, please see the UserVoice How-to FAQ.

Currently, when you create a VM in VMM, it adds a prefix and suffix to the role name created in Failover Cluster Manager, for example "SCVMM *ServerName* Resources".

When I name a VM, I want the name to stay the same no matter where I am looking (Hyper-V Manager, Failover Cluster Manager, or VMM). If I name a server "JoesWebServer", I don't want it to change in FCM to "SCVMM JoesWebServer Resources".
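As a stopgap, the auto-generated cluster role can be renamed by hand from an elevated PowerShell session on a cluster node. This is only a sketch with example names, and VMM may reapply its own naming on a later refresh:

```powershell
# Hypothetical workaround: rename the auto-generated cluster role back
# to the plain VM name. VMM may rename it again on a later refresh.
Import-Module FailoverClusters
$group = Get-ClusterGroup -Name 'SCVMM JoesWebServer Resources'
$group.Name = 'JoesWebServer'
```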

The requirement to run sysprep when preparing VM templates (with Windows) looks a bit confusing, because as soon as you start converting a finalized VM, the status changes to 'Sysprepping...'.

If VMM could run sysprep on its own, I'd like to be able to skip this step in my task sequence. One of the last steps I perform applies some hardening GPOs, which in turn cause sysprep to succeed only after a subsequent reboot (and automating that reboot is quite a challenge, as MDT apparently doesn't like something that is touched by the hardening policies).
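For reference, the generalize pass behind VMM's 'Sysprepping...' status corresponds to running something like the following inside the guest (the unattend file path is a placeholder for your own answer file):

```powershell
# Roughly what the 'Sysprepping...' phase does inside the guest.
# The unattend path is a placeholder; omit /unattend: to use defaults.
& "$env:SystemRoot\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown /unattend:C:\unattend.xml
```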

We should be able to configure the LCM (Local Configuration Manager) for a tenant or service provider HTTPS pull server. Optionally, it should allow a DSC configuration to be included as a custom resource, compiled, and placed locally during the configuration process.
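A minimal sketch of what such an LCM meta-configuration could look like, assuming a hypothetical pull server URL and registration key:

```powershell
# Sketch only: the server URL and registration key are placeholders.
[DSCLocalConfigurationManager()]
configuration TenantPullClient {
    Node localhost {
        Settings {
            RefreshMode        = 'Pull'
            ConfigurationMode  = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $true
        }
        ConfigurationRepositoryWeb TenantPullServer {
            ServerURL       = 'https://pull.tenant.example:8080/PSDSCPullServer.svc'
            RegistrationKey = '00000000-0000-0000-0000-000000000000'
        }
    }
}

# Compile the meta-configuration, then apply it to the local node.
TenantPullClient
Set-DscLocalConfigurationManager -Path .\TenantPullClient -Verbose
```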

I am not able to fix whatever causes error 23352 to be thrown after:
- Creating a new generation 2 virtual machine
- Booting, installing and configuring the machine
- Shutting down the machine
- Creating a VM template from the VM
- Enabling "Create a differencing disk using the specified disk as parent"
- Trying to create new virtual machines from the template.

After the VHDX is deployed to the host(s) and the differencing disks are about to be created, error 23352 ("VMM cannot find the device or this device is not valid for a boot device") is thrown.

KB2955362 suggests manually setting the boot device to the disk that contains the OS. I used that command, but it did not fix the issue.
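For reference, the workaround from KB2955362 sets the template's first boot device explicitly; the template name here is an example:

```powershell
# KB2955362 workaround: point the generation 2 template's first boot
# device at the SCSI disk holding the OS (bus 0, LUN 0 in this example).
Get-SCVMTemplate -Name 'Gen2-Template' |
    Set-SCVMTemplate -FirstBootDevice 'SCSI,0,0'
```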

There is no hint on what exactly may be the cause of this error.

Sometimes it helps to create a new VM template after changing the name of the VHDX file of the original VM, but sometimes it does not.
In the forum topic mentioned above, it was reported that this issue still occurs in SCVMM 2016.

All options on the memory tab in any VM's properties are inactive (grayed out), even though virtual machines running Windows 10 and Windows Server 2016 support hot add and hot remove of memory. It works through Hyper-V Manager, but it does not work in SCVMM.
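For comparison, the same operation works through the Hyper-V cmdlets against a running 2016 guest; the VM name and size here are examples:

```powershell
# Runtime memory resize via Hyper-V on a running Windows Server 2016 /
# Windows 10 guest, even while the VMM memory tab is grayed out.
# VM name and size are examples.
Set-VMMemory -VMName 'JoesWebServer' -StartupBytes 8GB
```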

We have specific properties that we pass up and down to the VM via KVP. It would be useful to have 'managed custom properties' with an 'add existing KVP attribute' option; these could then be selected to show on the main view.

Currently, we have a non-trivial WMI script that queries and updates same-named VMM custom properties with this data, so we have a single pane of glass for these values.

For example, we typically bring up drive usage (to compare against VHDX size), the escalation manager, and particular system component information (the result of a SQL query) to view within VMM, so they can be actioned directly rather than switching to other views or dashboards.

We also pass down certain information, like the VMM ID (as we have cloned VMs), along with the automatic hostname, VM name, etc. This is obviously scripted in the background, but it would be useful to confirm that it had been passed in, or to make it settable via trivial SCVMM cmdlets.
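The kind of WMI-plus-VMM glue described above could be sketched roughly as follows; the VM name is an example, and the whole thing assumes it runs on the Hyper-V host with the VMM console installed:

```powershell
# Sketch: read guest KVP items on the Hyper-V host, then mirror them
# into same-named VMM custom properties. VM name is an example.
$vmName = 'JoesWebServer'
$wmiVm  = Get-WmiObject -Namespace root\virtualization\v2 `
            -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
$kvp    = $wmiVm.GetRelated('Msvm_KvpExchangeComponent') | Select-Object -First 1

foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    # Each item is an XML fragment with 'Name' and 'Data' properties.
    $xml   = [xml]$item
    $name  = ($xml.INSTANCE.PROPERTY | Where-Object NAME -eq 'Name').VALUE
    $value = ($xml.INSTANCE.PROPERTY | Where-Object NAME -eq 'Data').VALUE

    # Only mirror values for which a same-named custom property exists.
    $prop = Get-SCCustomProperty -Name $name -ErrorAction SilentlyContinue
    if ($prop) {
        $vm = Get-SCVirtualMachine -Name $vmName
        Set-SCCustomPropertyValue -InputObject $vm -CustomProperty $prop -Value $value
    }
}
```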

When I select "Start the virtual machine after deploying it" in the Migrate VM Wizard, the VM is not started. I have not tried PowerShell, and I only have one installation to test with. All Hyper-V targets are 2016, fully patched. Source hosts are 2012 R2.
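Until the wizard checkbox behaves, the same result can be scripted; the VM and host names here are examples:

```powershell
# Workaround sketch: migrate the VM, then start it explicitly.
# VM and host names are examples.
$vm     = Get-SCVirtualMachine -Name 'JoesWebServer'
$target = Get-SCVMHost -ComputerName 'HV16-01'
$moved  = Move-SCVirtualMachine -VM $vm -VMHost $target
Start-SCVirtualMachine -VM $moved
```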

Make VMM support 4K-native VHDXs. VM creation will fail in VMM if the VM has a 4K-native VHDX. 2012 R2 Hyper-V supports 4K-native VHDXs just fine, but VMM fails on them.
Before you say "use the VHDX format, VMM supports it", please re-read carefully and check: the default VHDX is 512e; a VHDX is 4K-native only when you explicitly set the logical sector size to 4K during its creation. And if you use a 4K-native VHDX, VMM fails on it (though it works fine with a 512e VHDX).
Tested on VMM 2012 R2 UR10 and Windows 2012 R2 File Server Storage Pools: a) PhysicalSectorSize: 4096 / LogicalSectorSize: 4096 and b) PhysicalSectorSize: 4096 / LogicalSectorSize: 512.

Why this is a problem:
The newest HDDs currently supplied by hardware vendors are no longer 512e; they are truly 4K-native, so the Storage Pools have to be 4K-native as well.
Sure, a lot of stock is still 512e disks, but at least for all new deployments the disks are 4K-native. We have already had about 10 years of 512e transition period.
Let's make VMM support 4K before it's too late.
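To reproduce, a 4K-native VHDX can be created with the Hyper-V cmdlets and then attached to a VM; the path and size are examples:

```powershell
# Create a truly 4K-native dynamic VHDX (logical and physical sector
# sizes both 4096), then verify the sector sizes. Path/size are examples.
New-VHD -Path 'C:\VHDs\native4k.vhdx' -SizeBytes 60GB -Dynamic `
        -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096
Get-VHD -Path 'C:\VHDs\native4k.vhdx' |
    Select-Object LogicalSectorSize, PhysicalSectorSize
```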

Currently we are using SMB Scale-Out File Server storage with Hyper-V clusters. We constantly get "?" when we look at the SMB file share storage in the cluster properties, and no explanation is given as to what the issue is. It would be nice to get some possible reasons for the status to help with troubleshooting.

Today we are able to automatically prevent virtual machine deployments on the same host in a cluster with availability sets.
When deploying VMs across two datacenters (or two clusters), there is no such mechanism.
It would be nice if availability sets could be set up to prevent VMs from being located in the same datacenter or host group.
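Within a single cluster, this is done today by giving the VMs a shared availability set name; the request is for an equivalent that spans datacenters or host groups. A sketch of the current mechanism, with example names:

```powershell
# Current single-cluster mechanism: VMs sharing an availability set
# name are placed on different hosts where possible. Names are examples.
$vm = Get-SCVirtualMachine -Name 'Web01'
Set-SCVirtualMachine -VM $vm -AvailabilitySetNames @('WebTier')
```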

We have different support groups that require different levels of access to the VMs created on Hyper-V hosts. Global ops should be able to manage all VMs, AD ops only domain controller VMs, and SAP ops only SAP VMs. We can do it by assigning resources to each role, but it would be much easier if we could have VM groups that could be used to apply security and some other configurations. VMware uses folders to organize VMs into groups.

Patching and updating a VM template should be possible from the SCVMM console, starting from the current VM template.
I mean it should be possible to convert the current VM template to a VM so we can apply patches and updates, and then convert the VM back to a template.
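The second half of that round trip can already be scripted: after deploying a VM from the template and patching it, the VM can be captured back into the library as a new template. A sketch, with hypothetical names and library paths:

```powershell
# After deploying a VM from the template and patching it, capture it
# back into the library as a new template. Names and paths are examples,
# and this consumes the source VM.
$vm = Get-SCVirtualMachine -Name 'template-servicing-vm'
New-SCVMTemplate -Name 'Win2016-Base-v2' -VM $vm `
    -LibraryServer (Get-SCLibraryServer -ComputerName 'vmm-lib01') `
    -SharePath '\\vmm-lib01\MSSCVMMLibrary\Templates'
```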

For VMs that use an application cluster, like SQL Always On or Exchange DAGs, it would be a nice feature to mark these machines and let VMM be aware that they should not be placed on the same Hyper-V host when you put a host in maintenance mode, similar to "Preferred Owners" in Failover Cluster Manager.
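Failover Clustering already exposes anti-affinity at the cluster level; the request is to surface it in VMM. A sketch of the cluster-side setting, with example group and class names:

```powershell
# Cluster-level anti-affinity: roles that share an AntiAffinityClassNames
# entry are kept on different nodes where possible. Names are examples.
Import-Module FailoverClusters
$aac = New-Object System.Collections.Specialized.StringCollection
$aac.Add('SQL-AG1') | Out-Null
(Get-ClusterGroup -Name 'SQLNODE1').AntiAffinityClassNames = $aac
```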