+The NFVbench tool provides an automated way to measure the network performance of the most common data plane packet flows on any OpenStack-based NFVi system, viewed as a black box (NFVi Full Stack).

+An NFVi full stack exposes the following interfaces:

+- an OpenStack API

+- an interface to send and receive packets on the data plane (typically through top-of-rack switches)

+

+The NFVi full stack does not have to be supported by the OPNFV ecosystem and can be any functional OpenStack system that provides the above interfaces. NFVbench can be installed standalone (in the form of a single Docker container) and is fully functional without the need to install any other OPNFV framework.

+

+It is designed to be easy to install and easy to use by non-experts (no need to be an expert in traffic generators and data plane performance testing).

+- 2 Ethernet cables between the NIC and the OpenStack pod under test (usually through a top-of-rack switch)

+

+The DPDK-compliant NIC must be one supported by the TRex traffic generator (such as the Intel X710; refer to the `TRex Installation Guide <https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_download_and_installation>`_ for the complete list of supported NICs).

+

+To run the TRex traffic generator (that is bundled with NFVbench) you will need to wire 2 physical interfaces of the NIC to the TOR switch(es):

+ - if you have only 1 TOR, wire both interfaces to that same TOR

+ - if you have 2 TORs and want to use bonded links to your compute nodes, wire 1 interface to each TOR

+

+.. image:: images/nfvbench-trex-setup.svg

+

+

+Switch Configuration

+--------------------

+For VLAN encapsulation, the 2 corresponding ports on the switch(es) facing the TRex ports on the Linux server should be configured in trunk mode (NFVbench will instruct TRex to insert the appropriate VLAN tag).
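+
+For illustration, a trunk-mode configuration for one of these switch ports in a Cisco NX-OS style CLI could look like the sketch below (the exact syntax varies per switch vendor; the port name and VLAN range are placeholders):
+
+.. code-block:: bash
+
+ # hypothetical switch port facing TRex port 0 (repeat for the port facing TRex port 1)
+ interface Ethernet1/10
+   switchport mode trunk
+   switchport trunk allowed vlan 100-199
+   no shutdown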

+

+For VxLAN encapsulation, the switch(es) must support the VTEP feature (VxLAN Tunnel End Point) with the ability to attach an interface to a VTEP (this is an advanced feature that requires an NFVbench plugin for the switch).

+

+Using a TOR switch is more representative of a real deployment; it makes it possible to measure packet flows on any compute node in the rack without rewiring, and it includes the overhead of the TOR switch in the measurement.

+

+Although not the primary targeted use case, NFVbench can also support wiring the traffic generator directly to a compute node without a switch (although that will limit some of the features that involve multiple compute nodes in the packet path).

+

+Software Requirements

+---------------------

+

+You need Docker to be installed on the Linux server.

+

+TRex uses the DPDK interface to interact with the DPDK-compatible NIC for sending and receiving frames. The Linux server must be configured properly to enable DPDK.

+

+DPDK requires a uio (User space I/O) or vfio (Virtual Function I/O) kernel module to be installed on the host to work.
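+
+For example, the vfio kernel module can be loaded and the 2 TRex ports bound to it with the standard DPDK dpdk-devbind.py script (the PCI addresses below are placeholders; TRex can also take care of the binding itself at startup):
+
+.. code-block:: bash
+
+ # load the vfio kernel module
+ sudo modprobe vfio-pci
+ # check the current NIC driver bindings
+ sudo dpdk-devbind.py --status
+ # bind the 2 TRex ports (example PCI addresses) to vfio-pci
+ sudo dpdk-devbind.py -b vfio-pci 0a:00.0 0a:00.1
+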

+Then create an alias to make it easy to execute nfvbench commands directly from the host shell prompt:

+

+.. code-block:: bash

+

+ alias nfvbench='docker exec -it nfvbench nfvbench'

+

+The next-to-last "nfvbench" refers to the name of the container, while the last "nfvbench" refers to the NFVbench binary that is available to run in the container.

+

+To verify it is working:

+

+.. code-block:: bash

+

+ nfvbench --version

+ nfvbench --help

+

+

+NFVbench configuration

+-------------------------

+

+Create a new file containing the minimal configuration for NFVbench. The file can have any name, for example "my_nfvbench.cfg". Paste the following YAML template into the file:

+

+.. code-block:: bash

+

+ openrc_file:
+ traffic_generator:
+     generator_profile:
+         - name: trex-local
+           tool: TRex
+           ip: 127.0.0.1
+           cores: 3
+           interfaces:
+             - port: 0
+               switch_port:
+               pci:
+             - port: 1
+               switch_port:
+               pci:
+           intf_speed: 10Gbps

+

+NFVbench requires an ``openrc`` file to connect to OpenStack using the OpenStack API. This file can be downloaded from the OpenStack Horizon dashboard (refer to the OpenStack documentation on how to

+retrieve the openrc file). The file pathname in the container must be stored in the "openrc_file" property. If it is stored on the host in the current directory, its full pathname must start with /tmp/nfvbench (since the current directory is mapped to /tmp/nfvbench in the container).
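+
+For example, if the openrc file was saved under the name "openrc" in the current directory on the host, the property would be set as follows:
+
+.. code-block:: bash
+
+ openrc_file: /tmp/nfvbench/openrc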

+

+The required configuration is the PCI address of the 2 physical interfaces that will be used by the traffic generator. The PCI address can be obtained using the "lspci" Linux command, as in the illustrative example below (the NIC model and PCI addresses shown are just examples):
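+
+.. code-block:: bash
+
+ # illustrative output - the NIC model and PCI addresses will differ on your server
+ $ lspci | grep Ethernet
+ 0a:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
+ 0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
+
+In this example, the 2 "pci" properties in the configuration file would be set to "0a:00.0" and "0a:00.1".
+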

+The 2 edge interfaces for all service chains will share the same 2 networks.

+

+

+Other Misc Packet Paths

+^^^^^^^^^^^^^^^^^^^^^^^

+

+P2P (Physical interface to Physical interface - no VM) can be supported using the external chain/L2 forwarding mode.

+

+V2V (VM to VM) is not supported but PVVP provides a more complete (and more realistic) alternative.

+

+

+Supported Neutron Network Plugins and vswitches

+-----------------------------------------------

+

+Any Virtual Switch, Any Encapsulation

+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+

+NFVbench is agnostic of the virtual switch implementation and has been tested with the following virtual switches:

+

+- ML2/VPP/VLAN (networking-vpp)

+- OVS/VLAN and OVS-DPDK/VLAN

+- ML2/ODL/VPP (OPNFV Fast Data Stack)

+

+SR-IOV

+^^^^^^

+

+By default, service chains will be based on virtual switch interfaces.

+

+NFVbench provides an option to select SR-IOV based virtual interfaces instead (thus bypassing any virtual switch) for those OpenStack systems that include and support SR-IOV capable NICs on compute nodes.
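+
+For example, a run could request SR-IOV interfaces as sketched below (this assumes the --sriov command line option is available in your NFVbench version and that SR-IOV is properly configured on the compute nodes):
+
+.. code-block:: bash
+
+ # use SR-IOV virtual interfaces instead of vswitch interfaces
+ nfvbench -c my_nfvbench.cfg --sriov
+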

+- service fully parameterized asynchronous run requests using the HTTP protocol (REST/json with polling)

+- service fully parameterized run requests with interval stats reporting using the WebSocket/SocketIO protocol.

+

+Start the NFVbench server

+-------------------------

+To run in server mode, simply pass the --server <http_root_path> option, and optionally the listen address to use (--host <ip>, default is 0.0.0.0) and the listen port to use (--port <port>, default is 7555).
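+
+For example, to start the server with a non-default listen address and port:
+
+.. code-block:: bash
+
+ nfvbench -c nfvbench.cfg --server /tmp/http_root --host 127.0.0.1 --port 7556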

+

+

+If HTTP files are to be served, they must be stored right under the HTTP root path.
+This root path must contain a static folder to hold static files (css, js) and a templates folder containing at least an index.html template file.
+This mode is convenient when you do not already have a web server hosting the UI front end.
+If serving HTTP files is not needed (REST only or WebSocket/SocketIO mode), the root path can point to any dummy folder.
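+
+For example, a minimal HTTP root path with the required folder structure can be created as follows (the content of index.html is up to you):
+
+.. code-block:: bash
+
+ mkdir -p /tmp/http_root/static /tmp/http_root/templates
+ touch /tmp/http_root/templates/index.html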

+

+Once started, the NFVbench server will be ready to service HTTP or WebSocket/SocketIO requests at the advertised URL.

+

+Example of NFVbench server start in a container:

+

+.. code-block:: bash

+

+ # get to the container shell (assume the container name is "nfvbench")

+ docker exec -it nfvbench bash

+ # from the container shell start the NFVbench server in the background

+ nfvbench -c /tmp/nfvbench/nfvbench.cfg --server /tmp &

+ # exit container

+ exit

+

+

+

+HTTP Interface

+--------------

+

+<http-url>/echo (GET)

+^^^^^^^^^^^^^^^^^^^^^

+

+This request simply returns whatever content is sent in the body of the request (only used for testing).
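+
+For example, using curl against a server started locally with the default port (the body content is arbitrary):
+
+.. code-block:: bash
+
+ curl -s -X GET -d 'hello nfvbench' http://127.0.0.1:7555/echo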

+

+<http-url>/start_run (POST)

+^^^^^^^^^^^^^^^^^^^^^^^^^^^

+

+This request will initiate a new NFVbench run asynchronously and can optionally pass the NFVbench configuration to run in the body (in JSON format).

+See "NFVbench configuration JSON parameter" below for details on how to format this parameter.
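+
+For example, using curl to start a run (the "duration_sec" property is just one example of a valid NFVbench configuration item; the server address and port are assumptions):
+
+.. code-block:: bash
+
+ curl -s -X POST -H 'Content-Type: application/json' \
+      -d '{"duration_sec": 30}' \
+      http://127.0.0.1:7555/start_run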

+

+The request returns immediately with JSON content indicating if there was an error (status=ERROR) or if the request was submitted successfully (status=PENDING). Example of return when the submission is successful:

+

+.. code-block:: bash

+

+ {

+ "error_message": "nfvbench run still pending",

+ "status": "PENDING"

+ }

+

+<http-url>/status (GET)

+^^^^^^^^^^^^^^^^^^^^^^^

+

+This request fetches the status of an asynchronous run. It will return in JSON format:

+

+- a request pending reply (if the run is still not completed)

+- an error reply if there is no run pending

+- or the complete result of the run

+

+The client can keep polling until the run completes.
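+
+For example, a simple curl based polling loop could look like this sketch (the server address and polling interval are arbitrary):
+
+.. code-block:: bash
+
+ # poll until the reply no longer reports a PENDING status
+ while true; do
+     reply=$(curl -s http://127.0.0.1:7555/status)
+     echo "$reply" | grep -q PENDING || break
+     sleep 10
+ done
+ echo "$reply"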

+

+Example of return when the run is still pending:

+

+.. code-block:: bash

+

+ {

+ "error_message": "nfvbench run still pending",

+ "status": "PENDING"

+ }

+

+Example of return when the run completes:

+

+.. code-block:: bash

+

+ {
+ "result": {...},
+ "status": "OK"
+ }

+

+

+

+WebSocket/SocketIO events

+-------------------------

+

+List of SocketIO events supported:

+

+Client to Server

+^^^^^^^^^^^^^^^^

+

+start_run:

+

+ sent by the client to start a new run with the configuration passed as argument (JSON).

+ The configuration can be any valid NFVbench configuration passed as a JSON document (see "NFVbench configuration JSON parameter" below).

+This repo will build a CentOS 7 image with testpmd and VPP installed.

+The VM will come with a pre-canned user/password: nfvbench/nfvbench

+

+BUILD INSTRUCTIONS

+==================

+

+Pre-requisites

+--------------

+- must run on Linux

+- the following packages must be installed prior to using this script:

+ - git

+ - qemu-utils

+ - kpartx

+

+Build the image

+---------------

+- cd dib

+- update the version number for the image (if needed) by modifying __version__ in build-image.sh

+- set up your http_proxy if needed

+- bash build-image.sh

+

+IMAGE INSTANCE AND CONFIG

+=========================

+

+Interface Requirements

+----------------------

+The instance must be launched using OpenStack with 2 network interfaces.

+For best performance, it should use a flavor with the following characteristics (an example flavor creation command is shown after this list):

+

+- 2 vCPU

+- 4 GB RAM

+- cpu pinning set to exclusive
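+
+For example, such a flavor could be created as follows (the flavor name and disk size are arbitrary; hw:cpu_policy=dedicated is the standard Nova property for exclusive CPU pinning):
+
+.. code-block:: bash
+
+ openstack flavor create nfvbenchvm --vcpus 2 --ram 4096 --disk 4 \
+     --property hw:cpu_policy=dedicated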

+

+Auto-configuration

+------------------

+The nfvbench VM will automatically find the two virtual interfaces to use, and will use the forwarder specified in the config file.

+

+If testpmd is used, testpmd will be launched in MAC forwarding mode, with the destination MACs rewritten according to the config file.
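+
+A rough sketch of such a testpmd invocation, using the traffic generator MAC addresses from the config file shown below (the core and memory options are illustrative):
+
+.. code-block:: bash
+
+ testpmd -l 0,1 -n 4 -- --forward-mode=mac \
+         --eth-peer=0,00:10:94:00:0A:00 --eth-peer=1,00:11:94:00:0A:00 \
+         --auto-start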

+

+If VPP is used, VPP will set up an L3 router and forward traffic from one port to the other.
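+
+A rough sketch of the equivalent VPP CLI configuration, using the addresses from the config file below (the interface names are illustrative):
+
+.. code-block:: bash
+
+ set interface state GigabitEthernet0/4/0 up
+ set interface state GigabitEthernet0/5/0 up
+ set interface ip address GigabitEthernet0/4/0 1.1.0.2/8
+ set interface ip address GigabitEthernet0/5/0 2.2.0.2/8
+ ip route add 10.0.0.0/8 via 1.1.0.100
+ ip route add 20.0.0.0/8 via 2.2.0.100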

+

+nfvbenchvm Config

+-----------------

+The nfvbenchvm config file is located at ``/etc/nfvbenchvm.conf``. For example:

+

+.. code-block:: bash

+

+ FORWARDER=VPP

+ TG_MAC0=00:10:94:00:0A:00

+ TG_MAC1=00:11:94:00:0A:00

+ VNF1_GATEWAY_CIDR=1.1.0.2/8

+ VNF2_GATEWAY_CIDR=2.2.0.2/8

+ TG1_NET=10.0.0.0/8

+ TG2_NET=20.0.0.0/8

+ TG1_GATEWAY_IP=1.1.0.100

+ TG2_GATEWAY_IP=2.2.0.100

+

+

+Launching nfvbenchvm VM

+-----------------------

+

+Normally this image will be used together with NFVbench, and the required configuration will be automatically generated and pushed to the VM by NFVbench. If launched manually, no forwarder will be run. Users will then have full control to run either testpmd or VPP via the VNC console.

+

+To check if testpmd is running, you can run a command like the following in the VNC console (a generic process check, assuming standard Linux tooling):
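+
+.. code-block:: bash
+
+ # list any running testpmd process (pid and command line)
+ pgrep -a testpmd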