For the above examples to work, an appropriate server socket must be created
at the paths specified (/tmp/dpdkvhostclient0 and
/tmp/dpdkvhostclient1). These sockets can be created with QEMU; see the
vhost-user client section for details.

vHost User uses a client-server model. The server creates/manages/destroys the
vHost User sockets, and the client connects to the server. Depending on which
port type you use, dpdkvhostuser or dpdkvhostuserclient, a different
configuration of the client-server model is used.

For vhost-user ports, Open vSwitch acts as the server and QEMU the client. This
means if OVS dies, all VMs must be restarted. On the other hand, for
vhost-user-client ports, OVS acts as the client and QEMU the server. This means
OVS can die and be restarted without issue, and it is also possible to restart
an instance itself. For this reason, vhost-user-client ports are the preferred
type for all known use cases; the only limitation is that vhost-user client
mode ports require QEMU version 2.7. Ports of type vhost-user are currently
deprecated and will be removed in a future release.

To use vhost-user ports, you must first add said ports to the switch. DPDK
vhost-user ports can have arbitrary names with the exception of forward and
backward slashes, which are prohibited. For vhost-user, the port type is
dpdkvhostuser:
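A minimal sketch of the command, assuming an existing DPDK-enabled bridge named
br0 (an assumed name):

$ ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 \
    type=dpdkvhostuser

OVS then creates a server socket named after the port in its run directory (for
example /usr/local/var/run/openvswitch/vhost-user-1, depending on how OVS was
built and configured). Once added to the switch, the port must also be attached
to the guest by passing vhost-user device parameters on the QEMU command line;
a sketch, assuming that socket path, arbitrary chardev/netdev ids, and a
placeholder MAC address:

-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1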

where vhost-user-1 is the name of the vhost-user port added to the switch.

Repeat the above parameters for multiple devices, changing the chardev path
and id as necessary. Note that a separate and different chardev path
needs to be specified for each vhost-user device. For example, if you have a
second vhost-user port named vhost-user-2, append your QEMU command line with
an additional set of parameters:
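A sketch of such an additional set, following the same assumptions about the
socket directory, ids, and MAC address as above:

-chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user-2
-netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2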

In addition, QEMU must allocate the VM’s memory on hugetlbfs. vhost-user ports
access a virtio-net device’s virtual rings and packet buffers by mapping the
VM’s physical memory on hugetlbfs. To enable vhost-user ports to map the VM’s
memory into their process address space, pass the following parameters to QEMU:
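A sketch of these parameters, in which the 4096M memory size and the
/dev/hugepages mount point are illustrative values that must match the actual
VM memory size and hugepage mount:

-object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem -mem-prealloc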

The vhost-user interface will be automatically reconfigured with the required
number of Rx and Tx queues once the virtio device connects. Manual
configuration of n_rxq is not supported because OVS will work properly only
if n_rxq matches the number of queues configured in QEMU.
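The queue count is requested on the QEMU command line. A sketch of a multiqueue
configuration, reusing the placeholder socket path, ids, and MAC address from
above, where $q stands for the desired number of queues and $v for the number
of interrupt vectors (typically $q * 2 + 2):

-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-1
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce,queues=$q
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mq=on,vectors=$v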

At least two PMDs should be configured for the vswitch when using multiqueue.
Using a single PMD will cause traffic to be enqueued to the same vhost queue
rather than being distributed among different vhost queues for a vhost-user
interface.

If traffic destined for a VM configured with multiqueue arrives at the vswitch
via a physical DPDK port, then the number of Rx queues should also be set to at
least two for that physical DPDK port. This is required to increase the
probability that a different PMD will handle the multiqueue transmission to the
guest using a different vhost queue.

If one wishes to use multiple queues for an interface in the guest, the driver
in the guest operating system must be configured to do so. It is recommended
that the number of queues configured be equal to $q, the number of queues
requested from QEMU.

For example, this can be done for the Linux kernel virtio-net driver with:
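Here <DEVICE> is a placeholder for the guest interface name and $q is the queue
count configured in QEMU:

$ ethtool -L <DEVICE> combined $q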

To use vhost-user-client ports, you must first add said ports to the switch.
Like DPDK vhost-user ports, DPDK vhost-user-client ports can have mostly
arbitrary names. However, the name given to the port does not govern the name
of the socket device. Instead, this must be configured by the user by way of a
vhost-server-path option. For vhost-user-client, the port type is
dpdkvhostuserclient:
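A minimal sketch, assuming a bridge named br0 and an illustrative socket path
of /tmp/dpdkvhostclient0:

$ ovs-vsctl add-port br0 vhost-client-1 \
    -- set Interface vhost-client-1 type=dpdkvhostuserclient \
       options:vhost-server-path=/tmp/dpdkvhostclient0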

Once the vhost-user-client ports have been added to the switch, they must be
added to the guest. Like vhost-user ports, there are two ways to do this: using
QEMU directly, or using libvirt. Only the QEMU case is covered here.
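To attach the port, pass vhost-user device parameters to QEMU with the chardev
opened in server mode. A sketch, reusing the illustrative socket path and port
name from above along with placeholder ids and MAC address:

-chardev socket,id=char1,path=/tmp/dpdkvhostclient0,server
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1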

where vhost-client-1 is the name of the vhost-user-client port added to the
switch and the chardev path matches that port’s vhost-server-path.

If the corresponding dpdkvhostuserclient port has not yet been configured
in OVS with vhost-server-path=/path/to/socket, QEMU will print a log
similar to the following:

QEMU waiting for connection on: disconnected:unix:/path/to/socket,server

QEMU will wait until the port is created successfully in OVS before booting the
VM. One benefit of using this mode is the ability for vHost ports to ‘reconnect’
in the event of the switch crashing or being brought down. Once it is brought
back up, the vHost ports will reconnect automatically and normal service will
resume.

vhost IOMMU is a feature which restricts the vhost memory that a virtio device
can access, and as such is useful in deployments in which security is a
concern.

IOMMU support may be enabled via a global config value,
`vhost-iommu-support`. Setting this to true enables vhost IOMMU support for
all vhost ports when/where available:

$ ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=true

The default value is false.

Important

Changing this value requires restarting the daemon.

Important

Enabling the IOMMU feature also enables the vhost user reply-ack protocol;
this is known to work on QEMU v2.10.0, but is buggy on older versions
(2.7.0 - 2.9.0, inclusive). Consequently, the IOMMU feature is disabled by
default (and should remain so if using the aforementioned versions of
QEMU). Starting with QEMU v2.9.1, vhost-iommu-support can safely be
enabled, even without having an IOMMU device, with no performance penalty.

Post-copy migration is the migration mode where the destination CPUs are
started before all the memory has been transferred. Its main advantage is a
predictable migration time. It is mostly used as a second phase after the
normal ‘pre-copy’ migration in case pre-copy takes too long to converge.

DPDK post-copy migration mode uses the userfaultfd syscall to communicate with
the kernel about page fault handling and uses shared memory based on huge
pages, so the destination host’s Linux kernel must support userfaultfd over
shared hugetlbfs. This support was only introduced in upstream kernel version
4.11.

The post-copy feature has been supported in DPDK since version 18.11.0 and in
QEMU since version 2.12.0. However, it is suggested to use QEMU >= 3.0.1,
because migration recovery for post-copy was fixed in 3.0 and a few additional
bug fixes (such as a userfaultfd leak) were released in 3.0.1.

The DPDK post-copy feature requires that the guest memory not be pre-populated
(the application must not call mlock* syscalls), so the mlockall and dequeue
zero-copy features are incompatible with the post-copy feature.

Note that during migration of a vhost-user device, PMD threads hang while the
faulted pages are downloaded from the source host. Transferring a 1 GB hugepage
across a 10 Gbps link may be unacceptably slow, so the recommended hugepage
size is 2 MB.

The DPDK testpmd application can be run in guest VMs for high speed packet
forwarding between vhostuser ports. DPDK and the testpmd application have to be
compiled on the guest VM. Below are the steps for setting up the testpmd
application in the VM.

Note

Support for DPDK in the guest requires QEMU >= 2.2

To begin, instantiate a guest as described in vhost-user or
vhost-user-client. Once started, connect to the VM, download the
DPDK sources to the VM, and build DPDK:
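The steps below are a sketch, assuming a recent DPDK release that builds with
meson and ninja; the <version> placeholder and the exact build commands depend
on the DPDK release in use:

$ wget https://fast.dpdk.org/rel/dpdk-<version>.tar.xz
$ tar xf dpdk-<version>.tar.xz
$ cd dpdk-<version>
$ meson setup build
$ ninja -C build

With a meson-based build, the testpmd binary is typically produced as
build/app/dpdk-testpmd.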

Normally when dequeuing a packet from a vHost User device, a memcpy operation
must be used to copy that packet from guest address space to host address
space. This memcpy can be removed by enabling dequeue zero-copy like so:
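A sketch, assuming a bridge named br0, a port named dpdkvhostuserclient0, and
an illustrative socket path:

$ ovs-vsctl add-port br0 dpdkvhostuserclient0 \
    -- set Interface dpdkvhostuserclient0 type=dpdkvhostuserclient \
       options:vhost-server-path=/tmp/dpdkvhostclient0 \
       options:dq-zero-copy=true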

With this feature enabled, a reference (pointer) to the packet is passed to
the host instead of a copy of the packet. Removing this memcpy can give a
performance improvement for some use cases, for example switching large packets
between different VMs. However, additional packet loss may be observed.

Note that the feature is disabled by default and must be explicitly enabled
by setting the dq-zero-copy option to true while specifying the
vhost-server-path option as above. If you wish to split out the command
into multiple commands as below, ensure dq-zero-copy is set before
vhost-server-path:
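A sketch of the split form, reusing the assumed port name and socket path from
above and setting dq-zero-copy first:

$ ovs-vsctl set Interface dpdkvhostuserclient0 options:dq-zero-copy=true
$ ovs-vsctl set Interface dpdkvhostuserclient0 \
    options:vhost-server-path=/tmp/dpdkvhostclient0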

A limitation exists whereby if packets from a vHost port with
dq-zero-copy=true are destined for a dpdk type port, the number of tx
descriptors (n_txq_desc) for that port must be reduced to a smaller number,
128 being the recommended value. This can be achieved by issuing the following
command:

$ ovs-vsctl set Interface dpdkport options:n_txq_desc=128

Note: The sum of the tx descriptors of all dpdk ports the VM will send to
should not exceed 128. For example, in case of a bond over two physical ports
in balance-tcp mode, one must divide 128 by the number of links in the bond.
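With two links, for instance, that gives 64 descriptors per member; assuming
hypothetical member ports named dpdkport0 and dpdkport1:

$ ovs-vsctl set Interface dpdkport0 options:n_txq_desc=64
$ ovs-vsctl set Interface dpdkport1 options:n_txq_desc=64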

The reason for this limitation is due to how the zero copy functionality is
implemented. The vHost device’s ‘tx used vring’, a virtio structure used for
tracking used (i.e. sent) descriptors, will only be updated when the NIC frees
the corresponding mbuf. If we don’t free the mbufs frequently enough, that
vring will be starved and packets will no longer be processed. One way to
ensure we don’t encounter this scenario, is to configure n_txq_desc to a
small enough number such that the ‘mbuf free threshold’ for the NIC will be hit
more often and thus free mbufs more frequently. The value of 128 is suggested,
but values of 64 and 256 have been tested and verified to work too, with
differing performance characteristics. A value of 512 can be used too, if the
virtio queue size in the guest is increased to 1024 (available to configure in
QEMU versions v2.10 and greater). This value can be set like so:
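For instance, reusing the dpdkport interface name from the earlier command:

$ ovs-vsctl set Interface dpdkport options:n_txq_desc=512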