HPE Helion Carrier Grade (HCG) OpenStack enables Virtual Machines (VMs) to take advantage of advanced hardware capabilities to improve VM performance and operation. Workloads requiring particular CPU, I/O, or memory capabilities can run on the most appropriate platforms and gain additional benefits from features built into the system. These Enhanced Platform Awareness (EPA) features recognize platform capabilities and make them available at the VM level for efficient workload placement.

HPE demonstrated Enhanced Platform Awareness at Mobile World Congress Barcelona in February 2016 by placing the Brocade vRouter VNF on the HCG OpenStack platform, running on HPE ProLiant servers with Intel Xeon processors and Ethernet controllers. A Spirent traffic generator was used to send load to two Brocade VNFs deployed with optimal and sub-optimal placements, to show the performance difference between the two. As demonstrated, predictable performance and significant improvements are realized for a VNF when enhanced platform awareness is applied during VNF deployment, utilizing the advanced hardware features of the compute server.

Helion Carrier Grade OpenStack fulfills a user’s request to provision a virtual machine by installing it onto a server from a pool of compute servers. The resources allocated to the VM are determined by “flavors,” specified by parameters such as desired memory, storage space, virtual CPUs, and NUMA zone. All the compute servers periodically publish their hardware capabilities, available resources, and status to the Nova database. The Nova filter scheduler uses this data to match the flavor to an available server with the required characteristics.
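The matching step can be sketched in a few lines of Python. This is an illustrative toy, not the Nova scheduler code: the host fields and the flavor shape are assumptions made for the example.

```python
# Hypothetical sketch of how a Nova-style filter scheduler matches a flavor
# to a compute host; field names are illustrative, not the real Nova schema.

def filter_hosts(hosts, flavor):
    """Return the names of hosts that satisfy every requirement in the flavor."""
    matches = []
    for host in hosts:
        if (host["free_vcpus"] >= flavor["vcpus"]
                and host["free_ram_mb"] >= flavor["ram_mb"]
                and flavor.get("numa_node") in host["numa_nodes"]):
            matches.append(host["name"])
    return matches

# Each compute server periodically publishes its capabilities and free resources.
hosts = [
    {"name": "compute-1", "free_vcpus": 2, "free_ram_mb": 4096, "numa_nodes": [0, 1]},
    {"name": "compute-2", "free_vcpus": 8, "free_ram_mb": 16384, "numa_nodes": [0, 1]},
]

# A flavor asking for 5 vCPUs, 2048 MB of RAM, and placement on NUMA node 0.
flavor = {"vcpus": 5, "ram_mb": 2048, "numa_node": 0}

print(filter_hosts(hosts, flavor))  # only compute-2 has enough free vCPUs
```

The real scheduler applies a chain of such filters (RAM, CPU, PCI, NUMA, and so on) and then weighs the surviving hosts; the principle is the same.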

The experiment showcases the enhanced performance metrics for a routing VNF provisioned intelligently by selecting a target compute server with a specific PCI device and NUMA (Non-Uniform Memory Access) zone. Additionally, another vRouter instance is provisioned on the same target compute server with no EPA consideration. A physical Spirent box connected to the environment generates traffic to both vRouter instances.

NUMA is a method of configuring a cluster of microprocessors so that they can share memory locally, improving performance. Under NUMA, a processor accesses its own local memory faster than non-local memory, such as memory local to another processor. Here, a CPU (or socket) and its memory are combined to form a NUMA node.

The data NICs of the compute server are connected to the NUMA-0 node. Two of the cores on NUMA-0 are assigned to the Accelerated vSwitch. The vRouter instance with EPA is launched on NUMA-0 by defining a flavor extra spec, and the other vRouter instance is launched randomly. All other characteristics, such as vCPUs, huge page configuration (needed by the Data Plane Development Kit that accelerates the Brocade vRouter), memory, and storage, are the same for both vRouter instances. Thus, the only difference between the two instances is that one is properly placed on NUMA-0 and the other is randomly placed on NUMA-1. This placement is ensured by using appropriate Heat scripts.
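The exact extra specs used in the demo are not shown in this post. As an illustration, stock Nova exposes flavor extra specs such as hw:numa_nodes, hw:cpu_policy, and hw:mem_page_size for this kind of placement; a minimal sketch of an EPA flavor along those lines (the Helion Carrier Grade keys may differ) could look like:

```python
# Hypothetical flavor mirroring the demo's EPA instance. The extra-spec keys
# below are stock Nova flavor extra specs; the exact keys used by Helion
# Carrier Grade OpenStack may differ.
epa_flavor = {
    "name": "vrouter-epa",
    "vcpus": 5,
    "ram_mb": 2048,
    "disk_gb": 50,
    "extra_specs": {
        "hw:numa_nodes": "1",          # confine the guest to a single NUMA cell
        "hw:cpu_policy": "dedicated",  # pin each vCPU to a physical core
        "hw:mem_page_size": "large",   # back guest RAM with huge pages (for DPDK)
    },
}

def requires_numa_affinity(flavor):
    """True if the flavor constrains the guest's NUMA topology."""
    return any(key.startswith("hw:numa") for key in flavor["extra_specs"])

print(requires_numa_affinity(epa_flavor))  # True
```

Note that hw:numa_nodes only fixes the number of NUMA cells the guest spans; steering the instance to the specific node that owns the data NICs is what the Heat scripts in the demo take care of.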

Features in the demo include:

1. Accelerated vSwitch

2. Huge Pages

3. NUMA

4. vCPU pinning to cores

Instance details below:

Instance         Image                    Flavor
Brocade-Numa-0   Vyatta-kvm_4.0R1_amd64   5 vCPUs, 2048 MB memory, 50 GB disk space
Brocade-Numa-1   Vyatta-kvm_4.0R1_amd64   5 vCPUs, 2048 MB memory, 50 GB disk space

Workload placement: The cores on socket-0 are assigned to the Brocade VNF with proper placement, whereas the cores on socket-1 are assigned to the Brocade VNF with random placement. Traffic from the Spirent TestCenter to the properly placed Brocade VNF does not cross the Intel QuickPath Interconnect (QPI), so there is no loss of traffic, whereas traffic to the randomly placed Brocade VNF has to cross QPI twice. This can be pictorially described as below:

QPI is a point-to-point interconnect used in modern Intel servers.
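The effect of placement on QPI traversals can be captured in a toy model. This is purely illustrative, assuming (as in the demo) that the data NICs are attached to NUMA-0:

```python
# Toy model of QPI crossings for VNF traffic; illustrative, not a measurement.
# The data NICs sit on NUMA-0, so a VM on NUMA-0 keeps its I/O local, while a
# VM on NUMA-1 forces each packet across QPI on ingress and again on egress.

NIC_NODE = 0  # NUMA node the data NICs are attached to

def qpi_crossings(vm_node):
    """Number of QPI traversals for one packet in and out of the VM."""
    return 0 if vm_node == NIC_NODE else 2  # once inbound, once outbound

print(qpi_crossings(0))  # properly placed VNF: 0 crossings
print(qpi_crossings(1))  # randomly placed VNF: 2 crossings
```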

Performance: Spirent ports are reserved and configured to send traffic to the two Brocade VNFs. The traffic flowing through NUMA-0 to the VNF launched correctly on NUMA-0 has zero packet drops, while the traffic sent to the randomly placed Brocade VNF has packet drops of roughly 20% to 23%. Refer to the statistics from the Spirent TestCenter, which show the drop in traffic for the NUMA-1 flows.

DETAILED TRAFFIC STREAM RESULTS

Name/ID       Tx Port Name  Rx Port Name  Tx Count (Frames)  Rx Count (Frames)  Dropped Count (Frames)  Dropped Frame Percent
Numa0/65545   Port //1/5    Port //1/6    9830442            9830442            0                       0.000
Numa0/65546   Port //1/5    Port //1/6    9830442            9830442            0                       0.000
Numa1/65547   Port //1/5    Port //1/6    9830442            7677968            2152474                 21.896
Numa1/65548   Port //1/5    Port //1/6    9830443            7863666            1966777                 20.007
Numa1/65549   Port //1/5    Port //1/6    9830443            7673152            2157291                 21.945
Numa1/65550   Port //1/5    Port //1/6    9830443            7619281            2211162                 22.493
Numa1/65551   Port //1/5    Port //1/6    9830444            7555188            2275256                 23.145
Numa1/65552   Port //1/5    Port //1/6    9830444            7500334            2330110                 23.703
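The dropped-frame percentages follow directly from the transmitted and received frame counts; a quick sanity check against the first NUMA-1 flow:

```python
# Recompute a flow's dropped-frame percentage from its frame counts.
def drop_percent(tx_frames, rx_frames):
    return round((tx_frames - rx_frames) / tx_frames * 100, 3)

# Flow Numa1/65547 from the Spirent results: 9830442 sent, 7677968 received.
print(drop_percent(9830442, 7677968))  # 21.896, matching the reported value
```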

Features such as PCIe-based, NUMA-aware I/O scheduling have been supported since the “Kilo” release of OpenStack, adding efficiency and performance for NFV workloads. The EPA implementation merely requires a simple addition to the VM flavor to enable launching specific VMs with enhanced hardware capabilities. With EPA in OpenStack, a higher level of control and configuration is available to IT administrators, telcos, and CSPs.

To explore the other possibilities and features that benefit from EPA, engage with HPE, which provides a standard platform enabling rapid delivery of applications and services that speeds time-to-value.

Want to learn more? Follow us on Twitter at @HPE_NFV & @HPE_CSP

About the Author:

“My name is Shree Duth Awasthi, and I enjoy travelling to new places, meeting people, and finding ways to help them have an uplifting experience. I attribute my success to my ability to plan, schedule, and handle different tasks at once, and work keeps the fire burning for me. I see work as life and throw myself into it with absolute involvement. This helps me to face life more vibrantly.”

*Disclaimer: This is my personal blog. The opinions expressed here represent my own and not those of my employer. All data and information provided here on this site is for informational / demonstration purpose only and not tied to doing any performance benchmarking. HPE or blogger makes no representations as to accuracy, completeness, correctness, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis. Blog posts are not edited or reviewed by the presenters or the respective companies prior to publication.