Begin the journey to a private cloud with datacenter virtualization

Do you use the Distributed Switch?

I am at VMworld 2010 Copenhagen this week and have noticed, through informal customer surveys and conversations, that use of the Distributed Switch is low compared to other vSphere features. This confirms what I discovered at the VMworld 2010 show in SF a few weeks back, as well as the results of other, more formal surveys of our customer base.

Why is this the case? One idea I have is that users don't really get the value proposition of the Distributed Switch. I discussed two of these areas (less setup, and capturing a VM's network state across vMotion) in a blog post last year. In addition to those benefits, I believe that defining each port group just once, rather than on every host, brings huge OpEx savings. Another element that is important to remember is that the loss of vCenter itself will not cripple the function of the switch. Some users believe this could cripple the environment, but that is not the case. You can read more about that in an article I found online recently.

Finally, using the Distributed Switch with vSphere 4.1 opens up the possibility of using the new Network I/O Control (NIOC) feature. The six network flow types get equal priority for network resource access by default, and can then each be assigned a higher priority that determines which flows get network resources in a congested environment. The graphic below shows this type of setup and also refers to the new load-based NIC teaming that can be used with NIOC to balance load across two 10 GbE NICs. NIOC becomes especially important in this type of environment.
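As a rough illustration of how shares-based prioritization resolves contention, here is a minimal sketch in Python. The flow names follow the six vSphere 4.1 flow types, but the share values and the allocation function are illustrative assumptions, not VMware's actual implementation or defaults:

```python
# Illustrative sketch of shares-based allocation on a congested uplink.
# Each active flow type receives link bandwidth in proportion to its
# configured shares. Share values below are made up for the example.

def allocate_bandwidth(link_gbps, flows):
    """Divide link capacity among active flows in proportion to shares."""
    total_shares = sum(shares for _, shares in flows)
    return {name: round(link_gbps * shares / total_shares, 2)
            for name, shares in flows}

flows = [
    ("VM traffic", 100),
    ("vMotion", 50),
    ("iSCSI", 100),
    ("NFS", 50),
    ("FT logging", 50),
    ("Management", 50),
]

if __name__ == "__main__":
    # On a congested 10 GbE uplink, VM traffic and iSCSI (100 shares each)
    # get twice the bandwidth of the 50-share flows.
    for name, gbps in allocate_bandwidth(10, flows).items():
        print(f"{name}: {gbps} Gbps")
```

The point of the shares model is that these ratios only bite under congestion; when the link is idle, any flow can use the full capacity.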

12 thoughts on “Do you use the Distributed Switch?”

We’re still on 4.0. When we moved from 3.5 to 4.0 we decided not to use DVS for a simple reason. I couldn’t put different networks on a single DVS in different folders. As such, I had no easy way to assign permissions for who could join VMs to which networks. For this reason we had to stick with standard vswitches.
If anyone knows if this issue is fixed in 4.1, I’d love to hear about it.

To me a more interesting question would be: of those using vSphere Enterprise Plus, the only edition that offers this feature, how many are using the Distributed Switch? Did you get any indication that the low usage of this feature was because people weren't using Enterprise Plus?

We use vDS on new clusters built with vSphere from the beginning. On older clusters we haven’t started migration to vDS yet.
One big downside is that you cannot vMotion a VM from a vSwitch to a vDS, even if you name the port groups identically.

Most of my customers optioned up to Enterprise Plus when moving to vSphere, but it is mixed whether they ask us to design and integrate Host Profiles or vDS. In one of our larger customer cases we have run into rather serious challenges with the vDS. In the event of a “catastrophic” site-wide power failure, if your ESX hosts return to service prior to your vCenter (which happens in this case because vCenter is a VM), we have experienced loss of network settings on the hosted VMs. All of them. It is pretty devastating to have to “touch” every single VM in your datacenter to reset its port group and re-enable its vNIC. This has happened several times, and we have determined that it “may” be resolved by keeping one physical domain controller (for DNS), keeping vCenter physical (with a local SQL instance), and delaying the power-on of ESX hosts in the event of a power outage (even one gracefully handled by the UPS). This dampens any interest in deploying the vDS-dependent services (NIOC, Nexus 1000V, etc.), as our customer wants us to “take that out…. NOW!”. We are pursuing a solution with VMware but have hit nothing but dead ends as of late.

Have you seen the price? I could purchase another physical server for the cost of a 2-vCPU Enterprise Plus license.
Hyper-V is going to eat into your SMB market and then work its way into the enterprise if you’re not careful.
Stop looking at the clouds; focus on your hypervisor.

@JR: with vDS you get port persistence across vMotion, which you don’t get with vSS, i.e., the port moves with the VM. With vSS you are required to configure both vSwitches identically in advance.
@Tomi: By location “catastrophic” failure, did you mean complete cluster failure (all hosts failing)? And were these stateless hosts?

Yes, for VM Networks only. Reasons are:
1) iSCSI multipath I/O is not supported on a dvSwitch as of vSphere 4.1. Apparently this support is coming in a future release.
2) If your vCenter is a VM, then you’re asking for trouble if you lose the NICs on that VM. It can end up really badly for you, trust me on this. I run a mixed configuration with a standard vSwitch handling vMotion and VMkernel traffic. This switch also provides a VM Network port group for vCenter only. If we have the Nexus 1000V, then I usually put the VSM on this switch as well, on the same subnet as vCenter and the ESX(i) hosts. Apparently support for vCenter on a dvSwitch is coming in 4.2.

Hi,
I am about to design a new virtual infrastructure network, and I am really confused about choosing between the vNetwork Distributed Switch and the standard vSwitch. I wonder whether I should move the hosts and their VMs off the normal vSwitches and connect them to the vDS, or leave the hosts and their VMs on the normal vSwitches and still connect them to a vDS. How can I recover from a vCenter failure if I use the vDS? Thanks a lot.

Hi there, just wanted to give you a brief heads-up that a few of the images aren’t loading properly. I’m not sure why, but I think it’s a linking issue. I’ve tried it in two different internet browsers and both show the same results.