VMworld 2012 is fast approaching, and I wanted to give you a quick update on some of the key networking sessions presented by VMware folks. Naturally, I will be one of the presenters, and I would like to invite you to that session more than any other! (Just kidding.) Networking is definitely going to be one of the hot topics of discussion at VMworld 2012. This is not only because of the Nicira acquisition but also because of the number of new networking capabilities and features we are going to announce during the course of the conference... Continue reading

I would like to clarify a few things in this blog entry about the port mirroring feature available on the vSphere Distributed Switch (VDS). This feature is similar to the port mirroring capability available on physical switches. Network administrators can use it to troubleshoot network-related issues in the virtual infrastructure and to monitor virtual machine-to-virtual machine traffic flowing on the same ESXi host. Network administrators use a network analyzer tool, which captures traffic, along with the port mirror feature to perform monitoring and troubleshooting activities. In the physical network, depending on where the analyzer or... Continue reading

Hi JPG,
Good question.
Virtual switches (VDS or VSS) don't participate in spanning tree because they don't create loops. So when connecting virtual switch uplinks to the same switch or to different switches, you don't have to worry about loops.
Here are some architectural differences between virtual and physical switches.
A virtual switch doesn't build its forwarding table from the source and destination MAC addresses of packets. It populates its forwarding table when a VM is connected to the virtual switch. Also, when a broadcast packet is received from a VM, that packet is sent out on only one uplink, thus avoiding any loop issues.
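To make the contrast concrete, here is a small conceptual sketch in Python. This is not VMware code, and the class and attribute names are my own illustration; it only models the two behaviors described above: a physical switch learns MACs from traffic, while a virtual switch knows its table up front and floods a broadcast out of a single uplink.

```python
class LearningSwitch:
    """Physical-style switch: learns MACs from the source address of frames."""
    def __init__(self):
        self.fdb = {}  # MAC -> port, learned from observed traffic

    def receive(self, src_mac, in_port):
        self.fdb[src_mac] = in_port


class VirtualSwitch:
    """vSwitch-style behavior: the forwarding table is populated when a VM
    connects to a virtual port, not by learning from packets."""
    def __init__(self, uplinks):
        self.fdb = {}          # MAC -> virtual port, known a priori
        self.uplinks = uplinks

    def connect_vm(self, vm_mac, vport):
        self.fdb[vm_mac] = vport

    def flood(self, frame):
        # A broadcast from a VM leaves on exactly ONE uplink, so it can
        # never loop back in through another uplink of the same host.
        return [self.uplinks[0]]


vswitch = VirtualSwitch(uplinks=["vmnic0", "vmnic1"])
vswitch.connect_vm("00:50:56:aa:bb:cc", "vport7")
print(vswitch.flood("ARP broadcast"))  # ['vmnic0'] -- one uplink only
```

The point of the sketch is simply that there is no learning loop to go wrong, which is why spanning tree isn't needed on the virtual switch side.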
In this case the two uplinks are terminated on two separate access-layer switches, and both links will be utilized.
Hope this explains it. Let me know if you have any other questions.

Rack Server with Two 10 Gigabit Ethernet network adapters The two 10 Gigabit Ethernet network adapter deployment model is becoming very common because of the benefits it provides through I/O consolidation. The key benefits include better utilization of I/O resources, simplified management, and...

Hi FY,
The only concern I have heard about static port binding is the overhead of managing the number of ports in a port group. Customers don't have to worry about the number of ports in a port group when using ephemeral port binding.
However, with the Auto Expand feature explained in this blog post http://blogs.vmware.com/vsphere/2012/02/automating-auto-expand-configuration-for-a-dvportgroup-in-vsphere-5.html, customers don't have to worry about running out of ports, or about keeping track of those ports, with static binding either.
Now, I have heard that in VDI deployments some customers prefer ephemeral port binding because they have VMs that get created and destroyed at a frequent rate. If you use static port binding in such deployments, a lot of static ports will be created and never recycled. In my other blog post on demystifying port limits http://blogs.vmware.com/vsphere/2012/04/demystifying-configuration-maximums-for-vss-and-vds.html, I talk about the configuration maximums.
If you look at the vCenter Server and host limit numbers, I don't see any issue using static binding in VDI deployments either.
In my opinion, ephemeral binding doesn't provide any benefits when you weigh the downside of losing visibility into the virtual ports.
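Here is a minimal sketch of the trade-off I'm describing. This is not VMware code; the two classes are my own illustration of the behaviors discussed above: static binding assigns a dvPort that persists (with Auto Expand growing the port group when it fills up), while an ephemeral port exists only while the VM is powered on.

```python
class StaticPortgroup:
    """Static binding: a port is assigned when the VM is attached and the
    binding persists across power cycles."""
    def __init__(self, num_ports, auto_expand=True):
        self.num_ports = num_ports
        self.auto_expand = auto_expand
        self.bound = {}                    # vm -> port id

    def attach(self, vm):
        if len(self.bound) == self.num_ports:
            if not self.auto_expand:
                raise RuntimeError("out of ports")
            self.num_ports += 1            # Auto Expand grows the port group
        self.bound[vm] = len(self.bound)


class EphemeralPortgroup:
    """Ephemeral binding: the port exists only while the VM is powered on,
    so there is no persistent port to monitor or track."""
    def __init__(self):
        self.live = set()

    def power_on(self, vm):
        self.live.add(vm)

    def power_off(self, vm):
        self.live.discard(vm)


pg = StaticPortgroup(num_ports=2)
for vm in ("vdi-01", "vdi-02", "vdi-03"):
    pg.attach(vm)                          # third attach triggers Auto Expand
print(pg.num_ports, len(pg.bound))         # 3 3
```

The static port group keeps a stable vm-to-port mapping you can monitor; the ephemeral one leaves nothing behind after power-off, which is exactly the visibility loss mentioned above.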
But would love to hear what others think.

In the last post, Demystifying port limits..., I discussed the virtual port limits on the vSphere Standard Switch (VSS) and Distributed Switch (VDS). While discussing the VDS limits, I talked about the three different port-binding options available when you configure a port group on a VDS. The port binding option describes how a virtual port on the virtual switch binds with a virtual machine or a VMkernel NIC. In this post, I would like to highlight why you should choose static port binding over ephemeral port binding. As per the definition of ephemeral binding, there is no port binding with this choice. When... Continue reading

In this blog entry, I will spend some time discussing the configuration maximums related to the vSphere Standard Switch (VSS) and vSphere Distributed Switch (VDS). I always get this question: what will happen when you cross those configuration maximum limits? Especially with the vSphere Distributed Switch configuration maximums, where there are vCenter Server-level limits as well as host-level limits. I would like to clarify some things regarding these limits in this post. Here are the configuration maximums for vSphere 5.0 as they pertain to hosts, VSS, and VDS. Host Maximums (these apply to both VSS and VDS):... Continue reading
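A quick way to think about the two levels of limits is as two separate tables you check a planned deployment against. The sketch below does exactly that; the limit values are taken from my reading of the vSphere 5.0 Configuration Maximums document, so please verify them against the document for your release before relying on them.

```python
# vCenter Server-level limits (values assumed from the vSphere 5.0
# Configuration Maximums doc -- double-check for your release)
VC_LIMITS = {"vds_ports_per_vcenter": 30000,
             "vds_per_vcenter": 32,
             "hosts_per_vds": 350}

# Host-level limits (apply regardless of VSS or VDS)
HOST_LIMITS = {"total_ports_per_host": 4096,
               "active_ports_per_host": 1016}


def exceeded_limits(plan):
    """plan: dict using the same keys as the limit tables above.
    Returns the names of any limits the plan would exceed."""
    limits = {**VC_LIMITS, **HOST_LIMITS}
    return [name for name, cap in limits.items()
            if plan.get(name, 0) > cap]


plan = {"vds_ports_per_vcenter": 12000, "vds_per_vcenter": 2,
        "hosts_per_vds": 64, "total_ports_per_host": 300,
        "active_ports_per_host": 300}
print(exceeded_limits(plan))  # [] -- within the limits above
```

The useful habit is checking both tables: a design can be fine per host yet still blow past a vCenter-level limit as the cluster grows.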

Timi,
You are right, blades do provide a pass-through option. I didn't cover that as part of the deployments.
In my opinion the topology will be similar to that of rack servers with multiple 1 Gigabit ports, which are connected directly to an external access switch instead of to a switch blade within the chassis.
So take a look at the following post : http://blogs.vmware.com/networking/2011/11/vds-best-practices-rack-server-deployment-with-eight-1-gigabit-adapters.html
Let me know if this helps.

Blade Server in Example Deployment Blade servers are server platforms that provide higher server consolidation per rack unit along with the benefits of lower power and cooling costs. Blade chassis that host the blade servers have proprietary architectures, and each vendor has its own way of managing...


Roman,
I would avoid using multiple VDSs as far as possible, because as you add more switches you increase your management tasks.
Another reason for not going with more than one VDS is the additional requirement for uplink ports (a minimum of two per switch). Especially with 10 Gigabit deployments, this is difficult to do.
With two VDSs you are again creating siloed networks, which works against flexibility in resource management.

Rack Server in Example Deployment After looking at the major components in the example deployment and key virtual and physical switch parameters, let’s take a look at the different types of servers that customers can have in their environment. Customers deploy ESXi host either on a Rack Server o...

Recently, there were some changes made to the VMware blog site. In that process we have moved the networking blog from its old site blogs.vmware.com/networking to the new site blogs.vmware.com/vsphere/networking. From now on, all new posts will be published on the new site. I think the content of the old site will remain there; I will see if I can have the old blog posts moved to the new site. So please update your bookmarks. Also, I wanted to let you know that the VDS best practices paper is now available to download here. Continue reading

Multi-NIC vMotion achieves the goal of providing a bigger pipe (2 Gigabit with two NICs) for vMotion. This 2 Gigabit pipe is dedicated to vMotion traffic only. In other words, both uplinks will be used to carry vMotion traffic.
So I am not sure why you want LBT enabled here; you have already achieved the goal of increasing the bandwidth for the vMotion process.
Also, as I mentioned in an earlier comment, for LBT to work you need more than one active uplink assigned to the port group.
Send me an email at deshpandev@vmware.com if you need to discuss more on this.

Loren,
Thanks for reading through these long posts.
You can definitely add another NIC and use multi-NIC vMotion, but in that case the LBT algorithm is not in use. If you look at Table 4, the hybrid design configuration, you can see that uplink 5 and uplink 6 are used for vMotion. One thing to note here is that the uplinks are not teamed together: in port group PG-B1, uplink 5 is active and uplink 6 is standby; similarly, in PG-B2, uplink 6 is active and uplink 5 is standby.
In multi-NIC vMotion the vSphere platform utilizes the two uplinks. LBT can't be used here because there is no teaming of uplinks within a port group.
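The active/standby layout I described can be sketched and sanity-checked like this. This is an illustrative model, not VMware code; the port group and uplink names follow the example in the post.

```python
# Mirrored active/standby layout for the two vMotion port groups
PG_B1 = {"active": ["uplink5"], "standby": ["uplink6"]}
PG_B2 = {"active": ["uplink6"], "standby": ["uplink5"]}


def validate_multi_nic_vmotion(portgroups):
    """Check the layout: exactly one active uplink per port group (so LBT,
    which needs two or more active uplinks, never applies), and each uplink
    is active in exactly one port group."""
    for pg in portgroups:
        assert len(pg["active"]) == 1, \
            "multi-NIC vMotion uses one active uplink per port group"
    active = [u for pg in portgroups for u in pg["active"]]
    assert len(active) == len(set(active)), \
        "an uplink must not be active in two port groups"
    return True


print(validate_multi_nic_vmotion([PG_B1, PG_B2]))  # True
```

Because each port group has a single active uplink, both physical NICs carry vMotion traffic (one per VMkernel port) without any teaming, which is why LBT has no role here.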
Hope this clarifies.

Stacy,
The pass-through term that I have used refers to a blade chassis with no built-in network switch.
The design options described assume that the blade chassis has a built-in network switch, so they are not in pass-through mode.
I think you are referring to the two different network architectures that HP Virtual Connect offers (tunneled and mapped VLAN modes). There, pass-through means something different.
Please let me know if I am missing something here.

So in your case the management traffic is not low, and you have to make sure that appropriate bandwidth is allocated. Just curious about your deployment in terms of the number of network adapters: are you using 10 Gigabit or multiple 1 Gigabit interfaces?

After a long break I am back again on this forum. Over the last couple of months I was busy interacting with various customers as part of the VMworld US and Europe conferences, trying to understand how customers are deploying the distributed switch in their environments. I have also spent a lot of t...

Before I introduce you to the new networking features in vSphere 5, I want to take a moment and introduce myself first. My name is Venky and I work in the Technical Marketing group at VMware. I am responsible for technical marketing activities around vSphere Networking features. I am really exci...

J,
All the new features are available on the Distributed Switch only, so you need an Enterprise Plus license to use them.
On your question about SPAN: we have an option that allows you to replace the VLAN with an encapsulation VLAN. Here you can configure the VLAN to which you want the mirrored packet to be sent. This is not the same as RSPAN, where an extra VLAN tag is inserted.
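A tiny sketch of the difference in terms of the frame's VLAN tag stack may help. This is my own illustration of the tag handling described above, not real packet-processing code; a tag stack is modeled as a simple list with the outer tag first.

```python
def vds_encap_mirror(tags, encap_vlan):
    """VDS mirror option: the encapsulation VLAN REPLACES the outer VLAN."""
    return [encap_vlan] + tags[1:]


def rspan_mirror(tags, rspan_vlan):
    """RSPAN: an extra VLAN tag is PUSHED on top of the existing stack."""
    return [rspan_vlan] + tags


original = [10]                            # VM frame tagged with VLAN 10
print(vds_encap_mirror(original, 200))     # [200]     -- original tag replaced
print(rspan_mirror(original, 200))         # [200, 10] -- original tag kept
```

So with the VDS encapsulation VLAN the mirrored copy loses the original VLAN tag, whereas RSPAN preserves it underneath the transport tag.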

Hi Jake,
Currently, we don't collect packet-based time stamps, so it is not possible to measure latency.
Also, in a NetFlow V5 record there is only a start-of-flow time stamp, so we can't send per-packet time stamp information across to the collectors.
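To illustrate why per-packet latency can't be recovered from flow records, here is a minimal flow-aggregation sketch. The field names are illustrative, not the exact NetFlow v5 record layout: packets are grouped by flow key, and only the flow's first time stamp is kept, so the individual packet time stamps are gone by export time.

```python
def aggregate(packets):
    """packets: list of (flow_key, timestamp, size) -> flow records."""
    flows = {}
    for key, ts, size in packets:
        if key not in flows:
            flows[key] = {"first": ts, "packets": 0, "bytes": 0}
        rec = flows[key]
        rec["packets"] += 1
        rec["bytes"] += size
        # the per-packet time stamp 'ts' is discarded here for later packets
    return flows


pkts = [("flowA", 1.00, 100), ("flowA", 1.25, 100), ("flowA", 1.90, 100)]
print(aggregate(pkts)["flowA"])  # {'first': 1.0, 'packets': 3, 'bytes': 300}
```

Once the three packets are collapsed into one record, there is nothing left for a collector to compute packet-level latency from.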

As part of the Network Monitoring and Troubleshooting features, vSphere 5 provides NetFlow and Port Mirroring capabilities. In this blog entry I will discuss the NetFlow feature that is available in vSphere 5. NetFlow NetFlow is a networking protocol that collects IP traffic information as recor...