SR-IOV: A Trend in Enhanced Performance

In my previous blog, I briefly discussed the Network Functions Virtualization (NFV) movement and the reasons why traditional Service Providers (SPs) are adopting NFV. One of the major requirements for NFV is to achieve virtualized performance and scalability comparable to what is now considered the norm on dedicated hardware. Among the various trends that will define I/O virtualization, the two most relevant for NFV are:

Platform or server performance is increasing with the availability of multi-core CPUs.

10 GbE is being widely adopted as the basic I/O.

In the virtualized environment, Layer 2 (L2) performance and throughput match line rate even at 10G, thanks to L2 switching performance improvements in virtual switches. In NFV, and in this blog, the performance discussion concerns Layer 3 (L3) services: firewalls, routing, load balancers, etc. The question is how higher packet throughput can be achieved when a combination of such services is configured on a virtual networking device, i.e. a virtual machine that provides services such as firewalling, routing, and load balancing, to name a few.

The performance degradation for L3 services in the virtual environment can be attributed to the components that sit between the Virtual Machine (VM) interface and the server’s hardware Network Interface Card (NIC).

Figure 1: A generic depiction of the layers between the VM interface and physical NIC.

These layers generally consist of the hypervisor's virtual switches, system-level drivers, hypervisor OS drivers, and so on. I/O techniques such as Peripheral Component Interconnect passthrough (PCI passthrough) and Single-Root I/O Virtualization (SR-IOV) bypass these components and tie the NIC directly to the interfaces of the virtual machine for higher throughput and performance.

Let’s look at SR-IOV as an I/O technique, using Intel’s architecture, to achieve the target 10G performance for NFV. SR-IOV is defined by a PCI-SIG specification, with Intel acting as one of the major contributors. The main idea is to replicate resources so that each VM gets its own memory space, interrupts, and DMA (Direct Memory Access) streams, and to connect each VM directly to the I/O device so that the main data movement can occur without hypervisor involvement.
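As a concrete illustration of the prerequisites on a Linux host (the PCI address 03:00.0 is an assumed placeholder for the 10G NIC), the platform's IOMMU support and the NIC's SR-IOV capability can be checked before any VFs are created:

```shell
# An IOMMU (Intel VT-d) must be enabled in the BIOS and kernel so the
# platform can safely remap DMA from the device into guest memory
dmesg | grep -i -e DMAR -e IOMMU

# Confirm the NIC advertises the SR-IOV PCI capability
# (03:00.0 is an assumed PCI address; adjust for your hardware)
lspci -vvv -s 03:00.0 | grep -i "Single Root I/O Virtualization"
```

If the second command prints nothing, the NIC or its driver does not expose SR-IOV and PCI passthrough of the whole device is the fallback.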

In traditional I/O architectures, a single core has to handle the Ethernet interrupts for all incoming packets and deliver them to the different virtual machines running on the server. Two interrupts are required: one to service the interrupt on the NIC (for the incoming packet) and determine which VM the packet belongs to, and a second on the core assigned to the VM, to copy the packet to its destination VM. This adds latency, as the hypervisor handles every packet destined for the VMs.

Figure 2: SR-IOV yields higher packet throughput and lower latency

To achieve some of the stated benefits in Figure 2, SR-IOV introduces the idea of a Virtual Function (VF). Virtual Functions are ‘lightweight’ PCI function entities that contain the resources necessary for data movement. With Virtual Functions, SR-IOV provides a mechanism by which a single Ethernet port can be configured to appear as multiple separate physical devices, each with its own configuration space. The Virtual Machine Manager (VMM on hypervisor) assigns one or more VFs to a VM by mapping the actual configuration space to the VM’s configuration space.
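On Linux hosts, for example, VFs are typically created through the PF's sysfs entries (the interface name eth0 and the VF count here are illustrative; the exact maximum depends on the NIC and driver):

```shell
# How many VFs can this physical function (PF) support?
cat /sys/class/net/eth0/device/sriov_totalvfs

# Carve the PF into 4 VFs; each appears to the OS as its own PCI device
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The new VFs show up as separate PCI functions with their own config space
lspci | grep -i "Virtual Function"
```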

Figure 3: A physical NIC is carved into multiple VFs, which are assigned to guest VMs (Intel).

When a packet comes in, it is placed into a specific VF pool based on its MAC address or VLAN tag. This enables direct DMA transfer of packets to and from the VM, bypassing the hypervisor and the software switch in the VMM. Because the hypervisor is not involved in moving the packet between the hardware interface and the VM, the bottlenecks in that path are removed.
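As a sketch of how that steering is configured on a Linux host with iproute2 (the interface name, MAC address, and VLAN tag are illustrative values):

```shell
# Pin a MAC address and VLAN tag to VF 0; the NIC uses these to place
# incoming packets directly into that VF's receive queues
ip link set eth0 vf 0 mac 52:54:00:aa:bb:cc vlan 100

# Show the per-VF settings configured on the PF
ip link show eth0
```

The VM attached to VF 0 then receives its traffic via DMA without the packet ever traversing the hypervisor's virtual switch.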

SR-IOV is supported on most hypervisors, including Xen, KVM, Hyper-V 2012, and ESXi 5.1. Newer servers and 10G NICs with SR-IOV support are required (and virtualization is a must for NFV). While SR-IOV is one way to achieve high packet throughput, actual deployments will give rise to discussions regarding hardware support (hypervisor, NICs), traffic separation, and so on. Another issue is that even though the physical NIC can be carved into multiple VFs (the number depends on the NIC and the hypervisor), there are practical limits on how many VMs can be deployed to share the NIC. Techniques for VM mobility, that is, moving a VM from a VF on one server to a VF on another, have yet to be explored. The open question is whether customers will require VM mobility, or whether we can assume these VMs are immobile and tied to the SR-IOV server they were initially deployed on. While SR-IOV promises to deliver high packet throughput, the exact nature of these improvements for L3 services will ultimately depend on a vendor's software networking architecture or on vendor-enforced license throttling.

Like NFV, SR-IOV is a new and exciting paradigm shift. In the months to come, there will be lots of activity to define the solutions and use cases that lend themselves to the deployment of virtual networking software. As NFV gathers momentum, it will be interesting to learn what the DevOps environment will look like; perhaps that will be the topic of my next blog.

