Troubleshooting packet drop with NFV DPDK

This blog is based on a support case in which we observed a steady frame loss rate of 0.01% on Data Plane Development Kit (DPDK) interfaces on Red Hat OpenStack Platform 10.

Frame loss occurred at the DPDK NIC receive (RX) level: traffic would run cleanly for a few seconds or minutes, followed by a burst of lost packets, then stabilize again for a few seconds or minutes before the next burst. From the following logs, we can confirm that we are losing packets in the RX queue but are not losing packets being transmitted (TX).

As you can see from this screenshot, there are no errors with TX packets, but quite a few dropped RX packets on ports 1 and 2.

Figure: An example of OVS-DPDK configured for a dual NUMA system

PMD Thread Affinity

If the PMD thread doesn't consume the traffic quickly enough, the hardware NIC queue overflows and packets are dropped in batches. There is not much the OS can do about this, as the queue resides on the NIC hardware itself. The pmd-cpu-mask is a core bitmask that sets which cores are used by OVS-DPDK for datapath packet processing.

After pmd-cpu-mask is set, OVS-DPDK will poll DPDK interfaces with a processor core that is on the same NUMA node as the interface. The pmd-cpu-mask is used directly in OVS-DPDK and it can be set at any time, even when traffic is running.

Configure pmd-cpu-mask to pin the PMD threads to the cores set in the mask.

For example, in a 24-core system where cores 0-11 are located on NUMA node 0 and cores 12-23 are located on NUMA node 1, set the following mask to enable one core on each node. Note that the mask is interpreted as hexadecimal: 0x1001 sets bits 0 and 12, i.e. core 0 and core 12.

# ovs-vsctl set Open_vSwitch other_config:pmd-cpu-mask=1001
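Since the mask is a hexadecimal bitmask with one bit per logical core, it can be computed from a list of core IDs. A minimal sketch, assuming bash (the helper itself is illustrative, not part of OVS):

```shell
# Build a pmd-cpu-mask hex value from a list of logical core IDs.
# Cores 0 and 12 match the dual-NUMA example above (one per node).
cores="0 12"
mask=0
for c in $cores; do
  mask=$(( mask | (1 << c) ))   # set the bit for this core
done
printf 'pmd-cpu-mask=%x\n' "$mask"   # prints pmd-cpu-mask=1001
```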

dpdk-socket-mem

The dpdk-socket-mem variable defines how hugepage memory is allocated across NUMA nodes. It is important to allocate hugepage memory to all NUMA nodes that will have DPDK interfaces associated with them. If memory is not allocated to a NUMA node that is associated with a physical NIC or VM, that NIC or VM cannot be used. The value is a comma-separated list of the amount of memory, in MB, to pre-allocate from hugepages on each socket.
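For example, to pre-allocate 1024 MB of hugepage memory on each of two NUMA nodes (the sizes are illustrative; adjust them to your workload):

```shell
# Allocate 1024 MB from hugepages on socket 0 and on socket 1.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
```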

Multiple Poll Mode Driver Threads

With PMD multi-threading support, OVS creates one PMD thread for each NUMA node by default, if there is at least one DPDK interface from that NUMA node added to OVS. However, when there are multiple ports/RX queues producing traffic, performance can be improved by creating multiple PMD threads running on separate cores. These PMD threads can share the workload by each being responsible for different ports/rxq's. Assignment of ports/rxq's to PMD threads is done automatically.
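To see how ports and RX queues were assigned to PMD threads, OVS exposes appctl commands (the output will vary per system):

```shell
# Show which PMD thread polls which port/RX queue.
ovs-appctl dpif-netdev/pmd-rxq-show
# Show per-PMD cycle statistics, useful to spot overloaded threads.
ovs-appctl dpif-netdev/pmd-stats-show
```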

Enable HyperThreading

With HyperThreading, also known as simultaneous multithreading (SMT), enabled, a physical core appears as two logical cores. SMT can be utilized to spawn worker threads on logical cores of the same physical core thereby saving additional cores.

With DPDK, when pinning PMD threads to logical cores, care must be taken to set the correct bits of the pmd-cpu-mask to ensure that the PMD threads are pinned to SMT siblings.

To check the siblings of a given logical CPU, run this command:

$ cat /sys/devices/system/cpu/cpuN/topology/thread_siblings_list
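To list the sibling pairs for every logical CPU at once, a small sketch over sysfs (assuming the standard Linux sysfs layout):

```shell
# Print each logical CPU together with its SMT siblings.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  echo "${cpu##*/}: $(cat "$cpu"/topology/thread_siblings_list)"
done
```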

DPDK Physical Port RX Queues

The following command sets the number of RX queues for a DPDK physical interface. The RX queues are assigned to PMD threads on the same NUMA node in a round-robin fashion.

# ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>

DPDK Physical Port Queue Sizes

If the VM isn't fast enough to consume the packets, the virtio ring buffer runs out of space and vhost-user (DPDK) drops packets. We can try to increase the size of those rings.

Different n_rxq_desc and n_txq_desc configurations yield different benefits in terms of throughput and latency for different scenarios. Generally, smaller queue sizes can have a positive impact for latency at the expense of throughput. The opposite is often true for larger queue sizes.

# ovs-vsctl set Interface dpdk0 options:n_rxq_desc=<integer>

# ovs-vsctl set Interface dpdk0 options:n_txq_desc=<integer>

The above commands set the number of RX/TX descriptors that the NIC associated with dpdk0 will be initialised with.
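For example, to grow both rings to 2048 descriptors (2048 is illustrative; the value must be a power of two within the range the NIC supports):

```shell
# Increase the RX and TX ring sizes of dpdk0 to 2048 descriptors.
ovs-vsctl set Interface dpdk0 options:n_rxq_desc=2048
ovs-vsctl set Interface dpdk0 options:n_txq_desc=2048
```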

Linux Tuning

Isolated/tuned cores for PMD/VM

If the VM isn't fast enough to consume the packets, the virtio ring runs out of buffers and vhost-user drops packets. Isolated cores can then be dedicated to running the VM and the PMD threads. CPU pinning is the ability to run a PMD thread or VM on a specific physical CPU on a given host. vCPU pinning provides advantages similar to task pinning on bare-metal systems. Since virtual machines run as user-space tasks on the host operating system, pinning increases cache efficiency.
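With libvirt-managed guests, vCPU pinning can be inspected and set with virsh (the domain name and CPU numbers below are examples):

```shell
# Show the current vCPU-to-pCPU pinning of a guest.
virsh vcpupin instance-00000001
# Pin vCPU 0 of the guest to host CPU 2.
virsh vcpupin instance-00000001 0 2
```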

Configure hugepage for OVS/VM

Choosing a page size is a trade-off between providing faster access times by using larger pages (fewer TLB misses) and ensuring maximum memory utilization by using smaller pages. Note that using hugepages to back the memory for your guest will also inhibit the overcommit savings from KSM (Kernel Same-page Merging). Learn more about KSM in our Product Documentation.
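As a sketch, 1 GiB hugepages can be reserved on the kernel command line and the allocation verified at runtime (the page count is an example; size it to your memory):

```shell
# Kernel boot parameters (e.g. in GRUB_CMDLINE_LINUX):
#   default_hugepagesz=1GB hugepagesz=1G hugepages=16
# Verify the allocation after boot:
grep -i huge /proc/meminfo
```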

Understand DPDK Port statistics

Let's start with how to get port statistics. To get port statistics, we use the ovs-vsctl utility, which is part of Open vSwitch.

From the source code of OSP 10, we understand the statistics as below (the quoted strings are the DPDK counter descriptions):

rx_errors : "Total number of erroneous received packets" (DPDK ierrors)

rx_dropped : "Total of RX packets dropped by the HW" (DPDK imissed, e.g. when the RX queues are full)

The following is an example of how to troubleshoot a DPDK packet drop issue. First, we dump the statistics of the DPDK port using the ovs-vsctl utility.
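The statistics can be dumped like this (dpdk1 is the interface from this case):

```shell
# Dump the statistics column of the DPDK interface.
ovs-vsctl get Interface dpdk1 statistics
# The same counters also appear in the full record:
ovs-vsctl list Interface dpdk1
```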

From the statistics, there are packet drops and errors on the RX queues. Packet drops can be caused by performance issues or by hardware. Many aspects affect DPDK performance: not enough forwarding cores, not using multiqueue, slow VM response, MTU settings, bad queue-to-core assignment, etc. Here are some steps to help in troubleshooting.

PMD threads were polling dpdk1 with one core (two hyperthread siblings: CPUs 4 and 30), i.e. one CPU core (two PMD threads) isolated for each physical NIC. The physical NIC has two queues (even though only one queue shows the packet drop issue). We try to add more cores to the RX queues to confirm that the packet drop is not caused by insufficient forwarding cores.

# ovs-vsctl set Interface dpdk1 options:n_rxq=4

The above command sets the number of DPDK physical port RX queues to 4, so the PMD threads now poll with two cores (four threads). However, there is no improvement in packet loss.

Tuning compute node

The compute nodes are tuned by Tuned using the cpu-partitioning profile. List all of the cores you want to isolate in /etc/tuned/cpu-partitioning-variables.conf; these isolated cores are then dedicated to the PMD threads and guest VMs, while the remaining cores handle host housekeeping. Additionally, isolcpus and hugepages should be configured on the kernel boot command line. Refer to our solution on access.redhat.com.
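A sketch of the tuned configuration (the core list is an example for the 24-core system discussed earlier):

```shell
# /etc/tuned/cpu-partitioning-variables.conf would contain, e.g.:
#   isolated_cores=2-11,14-23
# Then activate the profile:
tuned-adm profile cpu-partitioning
```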

Confirm that the cores for the VM and PMD threads are isolated from the host.

Check VM CPU affinity

The PMD cores should not be included in the VM's CPU affinity, and the VM's CPUs should be isolated from the host.
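A quick way to verify both sides (the domain name is an example; PMD threads show up in the thread list of ovs-vswitchd):

```shell
# Show every ovs-vswitchd thread and the CPU it last ran on;
# PMD threads are identifiable by their "pmd" comm name.
ps -T -o tid,comm,psr -C ovs-vswitchd
# Show the vCPU pinning of the guest for comparison.
virsh vcpupin instance-00000001
```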