My ramblings about all things technical

vSphere 5.1 Announced with Distributed Switch Enhancements

With the release of vSphere 5.1, VMware brings a number of powerful new features and enhancements to the networking capabilities in the vSphere platform. These new features enable customers to manage their virtual switch infrastructure with greater efficiency and confidence. The new capabilities can be categorized into three main areas: operational improvements, monitoring and troubleshooting enhancements, and improved scalability and extensibility of the VMware vSphere Distributed Switch (VDS) platform. Following are some of the key features:

1) Network Health Check – Helps detect mismatched VLAN, MTU and teaming configurations between virtual and physical switches

2) Configuration Backup and Restore – Lets customers back up, restore and replicate the VDS configuration

3) Rollback and Recovery – Automatically rolls back management network misconfigurations and allows per-host recovery through the DCUI

4) LACP Support – Brings standards-based link aggregation to the VDS

5) Netdump – Provides ESXi hosts without disks (stateless/Auto Deploy) the ability to core dump over the network

6) Improved scaling numbers

Network Health Check

Network Health Check helps detect common configuration errors such as mismatched VLAN, MTU and teaming settings between the virtual and physical switches.

This tool is very helpful in organizations where network administrators own the physical network switches and vSphere administrators own the vSphere hosts. In such organizations, vSphere admins can pass network-related warnings on to the network admins and help identify issues quickly.
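The health check itself is configured per VDS in the vSphere Web Client; there is no dedicated esxcli namespace for it. However, the host-side settings it validates can be inspected manually from the ESXi shell, which is useful when chasing down a reported mismatch. A quick sketch, assuming SSH access to the host:

```shell
# List the VDS instances this host participates in, including MTU and uplinks
esxcli network vswitch dvs vmware list

# Physical NIC link state and MTU, to compare against the upstream switch ports
esxcli network nic list
```

Mismatches found by the health check surface in vCenter as per-category (VLAN, MTU, teaming) warnings on the VDS.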

Configuration Backup and Restore

VDS configuration is managed through vCenter Server, and all the virtual network configuration details are stored in the vCenter database. Previously, in the event of database corruption or loss, customers could not recover their network configuration and had to rebuild the virtual networking configuration from scratch. There was also no easy way to replicate the virtual network configuration in another environment, or to go back to the last working configuration after an accidental change to virtual networking settings.

All of the above concerns are addressed through the VDS configuration backup and restore feature, which lets customers:

Back up a VDS configuration

Restore a port group configuration
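These operations are driven from the vSphere Web Client, but they can also be scripted. A minimal sketch using the PowerCLI 5.1 VDS cmdlets (the vCenter address, switch name, datacenter and file path below are placeholders; verify the cmdlet parameters against your PowerCLI version):

```powershell
# Connect to vCenter (address and credentials are placeholders)
Connect-VIServer -Server vcenter.example.com

# Export the VDS configuration to a backup file
Get-VDSwitch -Name "dvSwitch01" | Export-VDSwitch -Destination "C:\backup\dvSwitch01.zip"

# Later, recreate the switch from the backup in the same or another environment
New-VDSwitch -BackupPath "C:\backup\dvSwitch01.zip" -Location (Get-Datacenter "DC01")
```

The exported file can be kept under version control, which also gives a simple way to return to a last known good configuration.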

Rollback and Recovery

The management network is configured on every host and is used to communicate with vCenter Server, as well as to interact with other hosts during vSphere HA configuration. It is critical to centrally managing hosts through vCenter Server: if the management network on a host goes down or is misconfigured, vCenter Server can't connect to the host and thus can't centrally manage its resources.

If there is any issue with the management network, hosts can't reach vCenter Server, and vCenter Server in turn can't make network changes and push them to the hosts.

In such a situation, the only way for the customer to recover is to go to each individual host and build a standard switch with a proper management network configuration. Once all the hosts have their management networks attached to a standard switch, vCenter Server can manage the hosts and reconfigure the VDS.

With the rollback and recovery option, customers no longer have to take the standard-switch route to recover from a management network failure.

The Automatic Rollback and Recovery feature addresses these concerns about running the management network on a VDS. First, the automatic rollback feature detects any configuration change on the management network; if the change causes the host to lose connectivity to vCenter Server, it is not allowed to take effect. Second, customers have the option to reconfigure the management network of the VDS per host through the DCUI: they connect to each host and, through the DCUI, change the management network parameters of the VDS.

LACP

Link Aggregation Control Protocol (LACP) is a standards-based link aggregation method that controls the bundling of several physical network links into one logical channel for increased bandwidth and redundancy. LACP allows a network device to negotiate automatic bundling of links by sending LACP packets to its peer. As part of the vSphere 5.1 release, VMware now supports this standards-based link aggregation protocol.
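LACP has to be enabled on both ends of the links: on the VDS uplink port group (via the vSphere Web Client) and on the physical switch ports facing the host. An illustrative Cisco IOS-style fragment for the physical side (interface names and channel-group number are assumptions, not from the source):

```
interface range GigabitEthernet1/0/1 - 2
 channel-group 10 mode active    ! "active" initiates LACP negotiation
interface Port-channel10
 switchport mode trunk
```

On the vSphere side, the uplink teaming policy is typically set to "Route based on IP hash" when LACP is in use.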

SR-IOV

Single Root I/O Virtualization (SR-IOV) is a standard that allows one PCI Express (PCIe) adapter to be presented as multiple separate logical devices to VMs. The hypervisor manages the physical function (PF), while the virtual functions (VFs) are exposed to the VMs. SR-IOV-capable network devices offer the benefits of direct I/O, including reduced latency and reduced host CPU utilization. The VMware vSphere ESXi platform's VMDirectPath (pass-through) functionality provides similar benefits, but requires a physical adapter per VM. With SR-IOV, pass-through functionality can be provided from a single adapter to multiple VMs through VFs.
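In practice, VFs are enabled by setting the driver module's max_vfs parameter and rebooting the host; each VF can then be attached to a VM as a pass-through PCI device. A sketch assuming an Intel ixgbe-based adapter with two ports and eight VFs per port (driver name and VF counts are assumptions for illustration):

```shell
# Enable 8 virtual functions on each of the two ixgbe ports
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# Verify the parameter, then reboot the host for it to take effect
esxcli system module parameters list -m ixgbe
```

After the reboot, the VFs appear as pass-through PCI devices that can be assigned to VMs; as with other pass-through configurations, the VM needs a full memory reservation.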

BPDU Filter

BPDUs are data messages or packets that are exchanged across switches to detect loops in a network. These packets are part of the Spanning Tree Protocol (STP) and are used to discover the network topology. The VMware virtual switches (VDS and VSS) do not support STP and thus do not participate in BPDU exchange across external physical access switches over the uplinks.

The BPDU filter feature available in this release allows customers to filter BPDU packets generated by virtual machines, preventing a denial-of-service situation. This feature is available on both the VMware vSphere Standard and Distributed Switches, and can be enabled by changing the advanced "Net" settings on the ESXi host.
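The filter is off by default and is toggled through the Net.BlockGuestBPDU advanced setting, for example from the ESXi shell:

```shell
# Enable filtering of VM-generated BPDUs (1 = on, 0 = off; default is 0)
esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1

# Confirm the current value
esxcli system settings advanced list -o /Net/BlockGuestBPDU
```

The setting is per host, so it needs to be applied consistently across the cluster.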

Port Mirroring and NetFlow Enhancements

To address the network administrator’s need for visibility into virtual infrastructure traffic, VMware introduced port mirroring and NetFlow features as part of the vSphere 5.0 release. These features provide necessary and familiar tools to network administrators that help them in monitoring and troubleshooting tasks. In vSphere 5.1, the port-mirroring feature is enhanced through the additional support for RSPAN and ERSPAN capability.

IPFIX, or NetFlow version 10, is an advanced and flexible protocol that allows customers to define the NetFlow records collected at the VDS and sent to a collector tool. Following are some key attributes of the protocol:

Customers can use templates to define the records

Template descriptions are communicated by the VDS to the collector engine

Can report IPv6, MPLS and VXLAN flows

VDS Management Plane Scalability

Following are the scalability numbers for the VDS management plane:

Static dvPortgroups go up from 5,000 to 10,000

Number of dvPorts goes up from 20,000 to 60,000

Number of hosts per VDS goes up from 350 to 500

Number of VDSs supported per vCenter Server goes up from 32 to 128

Netdump

Netdump is a vSphere ESXi platform debug feature that sends the VMkernel core dump to a server on the network. In the vSphere 5.1 release, netdump support is extended to ESXi hosts without local disks, also termed stateless ESXi or Auto Deploy environments.

In vSphere 5.0, enabling netdump on an ESXi host with the management network configured on a VDS was not allowed. In vSphere 5.1, this limitation has been removed: users can now configure netdump on ESXi hosts whose management network is on a VDS.
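Netdump is configured per host through the esxcli system coredump network namespace; the vmkernel interface name, collector address and port below are placeholders:

```shell
# Point the host at a network dump collector reachable via vmk0
esxcli system coredump network set --interface-name vmk0 \
    --server-ipv4 192.168.1.50 --server-port 6500

# Enable network coredump and verify connectivity to the collector
esxcli system coredump network set --enable true
esxcli system coredump network check
```

The collector itself runs as a service on the vCenter Server (or vCenter Server Appliance) side, listening on the configured port.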