Why do you need logging on a separate partition? Skip the partitioning altogether. I would only provision what you need, and do it all on the R10. No need to set up blank VMs unless you have a very specific reason to. I would use all the R10 space as needed; then, if you run out of storage there, you could start chipping away at the R1. Typically you'd just buy all the same hard drives and have OBR10 instead of two different arrays; this would give you better performance, reliability, etc.

So I can create the VMs, no problem.
But I'm not so clear on how I give the VMs a C:\ drive from part of the R1 volume. I think I'm overcomplicating how best to get it done.

You would simply create a new disk and designate it as coming from that array (on that VM). On XenServer or ESXi you can specify what storage to use when creating disks; I'm almost 100% positive that this is doable on Hyper-V as well.

When you create the VM in HyperV Manager, it asks you where you want to create the .vhdx disk for the VM... choose the C drive of the host. Then after the VM is created, you can create another disk for the VM on the RAID10 in the VM settings.
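
For anyone who'd rather script it than click through the wizard, here's a minimal PowerShell sketch of that layout. It assumes the host's RAID 1 is C: and the RAID 10 is mounted as D:; the VM name, sizes, and paths are placeholders, not anything from this thread.

```powershell
# Sketch only: the VM's OS disk lands on the host's C: (RAID 1),
# a second data disk lands on D: (RAID 10). Adjust names/paths/sizes.
New-VM -Name "APP01" -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\Hyper-V\APP01\APP01-OS.vhdx" -NewVHDSizeBytes 80GB

# Create the data disk on the RAID 10 volume, then attach it to the VM.
New-VHD -Path "D:\Hyper-V\APP01\APP01-Data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "APP01" -Path "D:\Hyper-V\APP01\APP01-Data.vhdx"
```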

As for what you are doing, your tech team is almost 100% wrong and they don't understand virtualization. You need to send them here.

I'm specifically talking with @Joel in PM; unfortunately he is outsourced IT in this case, and the client wants this system yesterday.

He understands what is wrong, and I've guided him to get things in line with what we'd do.

Their tech team doesn't understand the benefits of OBR10, then, or why splitting arrays like this was never a good thing.

Running VMs on the same partition as your hypervisor, and having noisy-neighbor issues impact the hypervisor's ability to perform, can cause interesting race conditions. Now, if your hypervisor is embedded (can run from RAM once loaded), this isn't a big deal, but in the case of Hyper-V (which has a god-awful huge footprint) I wouldn't call this a good idea.

So there's no misunderstanding, I'm using the terms "above" and "below" as in, hardware is at the bottom, and VMs are at the top.

Windows Server runs on top, and every VM runs beside it. The only thing Windows Server can do is manage the VMs using various components.

The Hyper-V Hypervisor is only 20 MB. It runs in memory. Not sure what you mean by "god awful footprint"?

To say you can run a VM on the same partition as the hypervisor is wrong. You can't do it.

Nobody is suggesting to stash a VM on the same partition as the hypervisor. What we are saying is to have one big RAID 10, with multiple partitions on it. And if one VM is so busy it's slowing down the rest... then that needs to be addressed separately. Nothing like that was mentioned.
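
If it helps, here's a rough sketch of carving that single RAID 10 into volumes from the host, assuming the array shows up as disk 1 (check Get-Disk first; the sizes, drive letters, and labels are placeholders):

```powershell
# Sketch only: split the RAID 10 (disk 1 here) into two NTFS volumes.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -Size 500GB -DriveLetter D |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```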

This disk race condition is hypervisor agnostic, and happens between two or more VMs if one is too noisy.

If you have a super busy, high disk I/O VM running on the same physical disk as another VM, it's going to slow down the other VM for sure unless you enable storage QoS.
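
And for the record, Hyper-V has had per-VHD storage QoS since 2012 R2. A quick sketch (the VM name, controller location, and IOPS cap are made-up examples):

```powershell
# Sketch only: cap the noisy VM's data disk so it can't starve its neighbors.
Set-VMHardDiskDrive -VMName "SQL01" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
    -MaximumIOPS 2000
```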

If that management VM is a pure control plane, then I can reboot or patch it without impacting network or storage I/O on the other VMs, the same way I can restart the management agents on ESXi or KVM, right? If Hyper-V is handling the network and storage traffic 100% on its own, then surely it must have its own driver stack and not depend on the management VM for those functions, right?
Unless this has changed, you previously lost every VM on a host from a simple reboot of the management VM.

The race condition happens because of I/O components running on top of the lower level; if they lose communication with the scheduler, you get a race condition (this is arguably 10x worse on VSA systems, though). This is far more of an issue in systems that pass I/O through a VM than in ones where the I/O and networking driver stack is 100% in the hypervisor.

You expect Microsoft to do things rationally, or correctly? That'd be a nice change of pace.

My point is that things in the I/O path go through that management VM. They didn't want to write a full I/O driver stack for Hyper-V, so they use the VM for that. Compute/memory doesn't go through it (that I know of), but network and disk I/O do. (Otherwise Perfmon wouldn't work as a monitoring solution on the host.)

AFAIK only ESXi uses a microkernel that has a fully isolated management agent plane (It's actually just a busybox shell).

XenServer does the same thing too.

Where did I claim that anyone gets it right? Looks to me like only ESXi gets it with this particular issue.

I think you have some misconceptions or misunderstandings regarding the Hyper-V architecture and components... or Hyper-V stack...

The claim that network and disk I/O always go through the management VM is not true at all; it actually depends on the OS running in the VM.

Operating systems that already have the integration components baked into their kernel (enlightened VMs) use their own hypercalls to communicate directly with the hypervisor, and from there with the physical hardware.

Only for non-supported (older) operating systems does the "parent partition" intercept the VM's communication, emulating hypercalls. In that case there is a performance penalty, as the management OS has to act as a bridge for the VM to reach the hardware.

To note, this is why it's important for VMs to be running with the latest IC version.
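
A quick way to check that from the host (just a sketch using the standard Hyper-V PowerShell module; the VM name is a placeholder):

```powershell
# Sketch only: report integration services state/version for every VM,
# then list the individual integration services for one of them.
Get-VM | Select-Object Name, IntegrationServicesState, IntegrationServicesVersion
Get-VMIntegrationService -VMName "APP01"
```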