- <p>This presentation discussed an optimized packet distributor for core-to-core distribution. OPDL decentralizes the distributor while keeping all packets in order and handling them atomically. It addresses the high-volume distribution needs of small packets well. </p>

- <p>This presentation discussed the new 25GbE Ethernet features in DPDK: how to transition from 10GbE to 25GbE using Intel Ethernet, device personalization, and NFV use cases such as the VF Daemon and the Adaptive VF Guest Interface.</p>

- <p>This presentation discussed Tencent cloud data center's security needs, why it moved from dedicated chips to x86/DPDK paths, and how the multi-process model was used to design the security service, which led to adoption across thousands of servers.</p>

- <p>This presentation discussed the evolution of network appliance development across multiple generations: from kernel to user space, from MIPS to x86, and from an integrated to a distributed model. In addition, this presentation discussed how to construct an NFV system on a dual-socket server.</p>

- <p>This talk is an extension of a talk presented at DPDK Summit Userspace 2016 in Dublin, where NXP presented a case for expanding DPDK towards non-standard (SoC) devices. That required a large number of fundamental changes in the DPDK framework to untangle it from PCI-specific code/functionality. In this talk we delve into the current upstream design of 1) the bus 'driver', 2) the mempool 'driver', and 3) the device driver, and how these layers tie together to provide the device model in the DPDK framework. </p>

- <p>This presentation is about using DPDK as firmware on an Intelligent NIC (OCTEON TX). It will cover the firmware architecture and how DPDK fits in that architecture. It will discuss the hurdles faced and solutions used as part of this exercise.</p>

- <p>The Ethernet speed upgrade path was clearly defined as 10G->40G->100G. However, new developments in data center indicate the latest path for server connections will be 10G->25G->100G with potential for 10G->25G->50G->100G. This is because 25G provides a more efficient use of hardware and a more logical upgrade path to 100G.</p>

- <h3>Implementation of Flow-Based QoS Mechanism with OVS and DPDK</h3>

- <p>The project objective is to implement 'Flow-based QoS' for an SDN-NFV platform using OVS and DPDK on Intel architecture. We will apply this QoS mechanism on the Wipro vCPE platform and demonstrate performance improvement for real-time traffic.</p>

- <p>This session is a primer on the prominence of P4 as a high-level, domain-specific language for data path applications. While there are a few ASIC vendors, like Barefoot Networks, who are coming up with compilers for their platforms, we are looking at expanding the reach of P4 to virtual infrastructure / software-based data paths by showcasing how P4 can become a choice for writing DPDK applications, thus enhancing portability. </p>

- <p>Subscriber gateways, such as BNG nodes, have unique requirements and challenges as compared to traditional routers. They need to be feature rich while also supporting high scale and throughput. This talk will provide an overview of a typical dataplane for subscriber gateways and highlight some of the design challenges in realizing the goals and the trade-offs to be considered.</p>

- <p>The talk begins with an introduction to developing feature-rich data plane Virtual Network Functions (VNFs) using optimized DPDK libraries, including the ip-pipeline packet framework, while taking advantage of basic x86 architecture features. It covers the concept of developing data plane applications that run in RTC (Run To Completion) mode or pipeline mode with just a configuration change. It also covers generic Best Known Methods for developing optimized data plane applications on x86 architecture, with specific code examples from the samplevnf project in OPNFV. Finally, it concludes with a call to action for the community to contribute to the samplevnf project in OPNFV. </p>

- <p>FD.io (Fast Data) is architected as a collection of sub-projects and provides a modular, extensible user space IO services framework that supports rapid development of high-throughput, low-latency and resource-efficient IO services. At the heart of fd.io is Vector Packet Processing (VPP) technology. This session will give an overview of VPP, its architecture and how it pushes packet processing to extreme limits of performance and scale.</p>

- <p>DPDK has become the ubiquitous user space framework on which prominent open source switching software, Open vSwitch and FD.io, run, and is widely integrated in OPNFV. This session discusses OpenDaylight (ODL) based SFC on both OVS-DPDK and FD.io with DPDK, and provides a comparative study of the architecture, performance and latency of the SFC use case on ARM SoCs.</p>

- <p>In this talk, we would like to take you through Red Hat's effort to provision an OpenStack cluster with an OVS-DPDK/SR-IOV datapath with the needed EPA parameters. We will describe the deployment steps and the need for composable roles to handle today's VNF deployment scenarios. </p>

- <p>This presentation addresses the question of how packets must be steered from the kernel bypass mechanism to the user space applications. We investigate the following two questions: (i) Should packets be distributed to cores in hardware or in software? (ii) What information in the packet should be used to partition packets to cores?</p>

- <p>This presentation describes the cryptodev API, a framework for processing crypto workloads in DPDK. The cryptodev framework provides crypto poll mode drivers as well as a standard API that supports all these PMDs and can be used to perform various cipher, authentication, and AEAD symmetric crypto operations in DPDK. The library also provides the ability for effortless migration between hardware and software crypto accelerators.</p>

- <p>The DPDK bus infrastructure has been updated over the last few releases. Although these changes should not affect user applications, they are worth mentioning. In this talk, I will summarize the bus changes and the modifications required in drivers. </p>

- <p>There are various kinds of HW accelerators available with SoCs. Each accelerator may support different capabilities and interfaces. Many of these accelerators are programmable devices. In this talk we will discuss various ways to support such accelerators in a generic manner.</p>

- <h3>Let’s Hot Plug: Using the uevent Mechanism in DPDK</h3>

- <p>Hot plug is a key requirement for live migration. So far, the hot plug and fail-safe implementation is still not friendly for PCIe devices. This talk proposes adding a general uevent mechanism to DPDK, including a uevent monitor and a failure handler, to make it easy for DPDK users to implement hot plug.</p>

- <p>Devices on the PCI bus are found by the bus probe function. For each device, the list of registered drivers (PMDs) is searched until one (and only one) is found for the device. This presentation proposes a mechanism to share a PCI device between multiple PMDs. It may also be extendable to non-PCI devices. </p>

- <p>This talk is about the current status and planned development of VMBus support for DPDK. It also gives an overview of how DPDK applications are enabled on Azure Accelerated Networking using the Fail-Safe, TAP and existing drivers. It will cover some of the requirements and plans for the future.</p>

- <p>Encryption in today's networks is becoming ubiquitous. However, running crypto on general purpose CPUs is costly. In this talk we present joint work by NXP, Intel and Mellanox on offloading protocol processing to hardware providing better utilization of host CPU for packet processing.</p>

- <p>This presentation focuses on the new QoS Traffic Management API for Ethernet devices that was introduced in DPDK release 17.08, as well as the new QoS Traffic Metering and Policing API planned for DPDK release 17.11. We describe the API, the device drivers currently supporting it, and a software fall-back strategy using the SoftNIC PMD. </p>

- <p>Service cores is a library that abstracts the platform, providing an app with a consistent environment. Service cores allows switching of SW and HW PMDs with no application threading changes. This talk introduces service-cores, and opens discussion on how to enable DPDK with service cores.</p>

- <p>There are many large InfiniBand clusters in the HPC market; they too would like to gain DPDK's user space high packet rate processing advantage, in addition to RDMA capabilities. I will present the basic differences between InfiniBand/IPoIB and Ethernet, and present results from a live POC on a 20-node cluster running DPDK over IPoIB. </p>

- <p>The userspace summit is a good place to make a yearly summary of community changes and interactions. It is also important to describe how DPDK interacts with other communities. The last part will be about community processes (repositories, distributed CI, bug tracking, tooling, website, mailing lists and Linux Foundation). </p>

- <p>This session will be a panel discussion of the future direction of ABI stability & LTS/Stable releases. In particular it will look at the request for a yearly xx.11 LTS release with a 2 year duration.</p>

- <p>In the presentation we will describe VFd, a hypervisor for SR-IOV NICs jointly developed by AT&T and Intel, which uses DPDK and acts as policy enforcement software allowing advanced configuration of SR-IOV capable network interfaces. We will provide an overview of the use cases and the new DPDK APIs to support them. </p>

- <p>When working in SR-IOV mode, we would prefer to let the majority of the traffic pass in hardware directly between the wire and the VFs, while the OVS-DPDK application only handles exception packet flows on the PF. To support this mode we present a new Representor Ports model of the HW switch, which can be controlled from DPDK.</p>

- <p>This presentation introduces a port representor framework to DPDK. The framework, based around a virtual representor PMD and a representor broker plugin for physical function devices, provides the infrastructure to allow SR-IOV virtual function ports to be configured, managed and monitored within a single control application.</p>

- <p>This talk will focus on improving VNF safety with Virtio and the Vhost-user backend. Maxime will first describe a VNF architecture relying on Virtio/Vhost-user. Then, he will talk about IOMMU support for the Vhost-user backend. Finally, Maxime will provide benchmark results and discuss ways to improve both performance & safety. </p>

- <p>The packed ring layout is the next-generation ring layout standard for Virtio, designed for high performance and still at the proposal stage. This talk will give a quick introduction to the new ring layout definition and summarize the current status, findings and benchmark results of the prototype in DPDK. </p>

- <p>A drive to deliver OPEX savings and performance where and when they are needed. Enter a new era of power-optimized packet processing. This talk reviews new and existing DPDK extensions for policy-based power control proposed in August and the associated performance benefits. </p>

- <p>pfSense is an open source firewall/VPN appliance based on FreeBSD, started in 2006, with over 1M active installs. We are basing pfSense release 3.0 on FD.io's VPP, leveraging key DPDK components including cryptodev, while adding a CLI and RESTCONF layer and leveraging FRRouting and Strongswan.</p>

- <p>This talk is about our framework libmoon, a wrapper for DPDK that makes building DPDK prototypes simple and fast. We've used it for multiple research prototypes as well as our packet generator MoonGen (presented last year here). </p>

- <p>In our presentation, we share the lessons learned from our experience using DPDK with Go to implement the software router Lagopus2 (https://github.com/lagopus/vsw). We'll explain how we carefully designed the DPDK binding in Go to guarantee type safety and performance at the same time. </p>

- <p>T4P4S is a P4 compiler supporting flexible re-targetability without sacrificing high performance packet processing. To achieve this goal, it is split into hardware dependent and independent components. This talk will show the architecture of T4P4S and the design decisions made to support DPDK. </p>

- <p>Our advanced Container Network Interface combines the benefits of containers with DPDK's ultra-low latency and fast packet processing; the results show 28x higher performance with SR-IOV and with DPDK using vhost-user with OVS-DPDK and VPP.</p>

- <p>Introduction to the event, including a review of the agenda, logistics and expectations. An update from the Governing Board on who the Governing Board are, what their responsibilities are, progress to date, future priorities/challenges for the project.</p>

- <p>We conducted a survey of the DPDK community, soliciting input on a variety of topics including DPDK usage, roadmap, performance, patch submission process, documentation and tools. This session will present the results of the survey, which will help to guide the future direction of the project.</p>

- <p>While DPDK is a widely-adopted software package for high-performance networking applications, there are a number of ways in which it is harder to use than it otherwise needs to be. This is especially true when it comes to integrating DPDK with an existing legacy codebase. This presentation will look at some of the issues and provide an update on current development and prototyping work to simplify DPDK integration with existing code. </p>

- <p>The current command line interface for DPDK, called cmdline, has a number of limitations and a complex user design. The next command line for DPDK, called CLI, is more dynamic with a simple directory-style design. The directory-style design allows commands to be placed in a hierarchy for easy integration, plus it supports a simple argc/argv function interface. Using these features reduced the LOC in the test-pmd cmdline file from 12K to ~4K. The presentation includes an example usage. </p>

- <p>Recently, DPDK has enabled applications to use dynamically load-balanced pipelines with the introduction of libeventdev. In addition to using eventdev for CPU-to-CPU pipelines, devices such as ethdev, cryptodev and timers need to be able to inject events into eventdev. Currently, we are in the process of upstreaming extensions to eventdev, called eventdev adapters, for each of these devices, which allow applications to configure event input from these devices to the event device. We will discuss each of the adapter APIs and show example code that allows event-based applications to be written in a platform-independent manner.</p>

- <p>A major part of packet processing has to be done on a per-packet basis, such as switching and TCP/IP header processing. The overhead of the per-packet routines, however, exerts a significant impact on the performance of network processing. Generic Receive Offload (GRO) and Generic Segmentation Offload (GSO) are two effective techniques for mitigating the per-packet processing overhead by reducing the number of packets to be processed. Specifically, GRO merges received packets of the same flow in RX, while GSO delays packet segmentation in TX.</p>

- <p>Network bandwidth is precious and milliseconds matter for many user-mode applications and virtual appliances running on both Linux and Windows. In order to get the best network throughput to process and forward packets, developers need direct access to the NIC without going through the host networking stack. Until now, only developers on Linux and FreeBSD platforms were able to use DPDK to obtain these performance benefits, but we are happy to announce that we have an implementation of DPDK for the Windows platform!</p>

- <p>Unbinding Linux kernel drivers to allow userland IO through VFIO has a number of disadvantages, such as another large, sensitive code base to deal with the hardware, the loss of standard Linux tools (ifconfig, ethtool, tcpdump, SNMPd...) and the impossibility of accelerating container networking. Mediated devices, introduced in Linux kernel 4.10 for GPUs with provisions for additional devices, hold the promise of collaboration between kernel drivers and userland applications in need of direct datapath steering.</p>

- <h3>DPDK with KNI – Pushing the Performance of an SDWAN Gateway to Highway Limits! </h3>

- <p>An SDWAN gateway is usually built with x86 commercial off-the-shelf (COTS) hardware that often runs a variant of the Linux operating system and requires high throughput for connecting a corporate branch network with its data centers. However, owing to the inherent limitations of standard 4K-sized pages without dedicated resource allocations in a general-purpose Linux kernel, it has been seen that even high-end SDWAN gateway hardware cannot forward traffic to its full potential.</p>

- <p>To provide high performance for the ICT (Information and Communications Technology) area, we use DPDK as a microservice in container networking. We used the primary/secondary process model, rte_ring, shared memory and so on to improve the performance of the datapath. We achieved bidirectional zero copy between containers, in contrast to the dequeue-only zero copy of vhost-user/virtio-user.</p>

- <p>Clear Containers is a great technology for securing a container with a fast and lightweight hypervisor, and very different types of workloads may run inside Clear Containers: some require a high packet processing rate (PPS) and some require massive data transfer (BPS). Given Clear Containers' much higher density than virtual machines, a high-performance virtual switch is critical and demand for one is rapidly emerging, but currently available virtual switches still fall far behind those demands.</p>

- <p>DPDK revolutionized software packet processing initially for discrete appliances and then for Virtual Network Functions. Containers and µServices technology are extensively used as a means to scale up and out in the Cloud. These technologies now include Comms Service Providers among their advocates, and embracing these technologies with their scaling model and resiliency is the new frontier in software packet processing.</p>

- <p>SPDK (Storage Performance Development Kit, http://spdk.io) is an open source library used to accelerate storage services (e.g., file, block), especially for PCIe SSDs (e.g., 3D XPoint SSDs). The foundation of SPDK is its user space, asynchronous, polled-mode drivers (e.g., IOAT and NVMe), whose idea is similar to DPDK's.</p>

- <p>Debugging network problems is often hard, and further complicated when a guest O/S is provided with an SR-IOV VF bound to a DPDK driver because tools running on the physical host (e.g. tcpdump) lose visibility to the interface. Hardware mirroring of traffic to another VF provides the ability to regain visibility and to help facilitate the troubleshooting process.</p>

- <p>The NFV promise is to be able to instantiate or even live-migrate VMs on different platforms and have applications benefit from whatever acceleration is available. As a result, application developers should not make compilation choices or define the application architecture based on what they expect from the runtime environment. ODP and DPDK have in common the concept of "device" APIs (Ethernet, crypto, events, IPsec, compression...) with distinct approaches.</p>

- <p>SafetyOrange is a portable (4.3 liter) and silent Xeon computer. Well, it is larger than 'DPDK in a box', but it supports two NICs (currently sporting 2 XL710 cards), has 32G of memory and 14 cores. We have been using it for testing both native and virtualized DPDK appliances as well as whole virtual routers, and it has served as a traffic generator for performance tests (DPDK pktgen), too. It is also a brilliant development environment. And at the end of the day it still fits into a regular backpack.</p>

- <p>There are various kinds of HW accelerators available with SoCs. Each of the accelerators may support different capabilities and interfaces. Many of these accelerators are programmable devices. In this talk we will discuss the rte_raw_device and implementing a sample driver with it for NXP AIOP generic programmable accelerator. </p>

- <p>Fully programmable SmartNICs allow new offloads like OVS, eBPF, P4 or vRouter, and the Linux kernel is changing for supporting them. Having these same offloads when using DPDK is a possibility although the implications are not clear yet. We present Netronome’s perspective for adding such a support to DPDK mainly for OVS and eBPF.</p>

- <h3>Flexible and Extensible support for new protocol processing with DPDK using Dynamic Device Personalization </h3>

- <p>Dynamic Device Personalization allows a DPDK application to enable identification of new protocols, for example, GTP, PPPoE, QUIC, without changing the hardware. The demo showcases a DPDK application identifying and spreading traffic on GTP and QUIC. Dynamic Device Personalization can be used on any OS supported by DPDK, for example we showcase a QUIC protocol classification demo on Windows OS. </p>

- <p>Cloud architectures and business models are driving the need to ensure that all server compute resources have a revenue tie-in, heralding the march towards the serverless dataplane. This session presents a unique way to harness the power of DPDK to accelerate packet processing by pushing the data plane into a SmartNIC. We will discuss the motivation, benefits and challenges of implementing a DPDK based data plane running on the compute resources embedded in a SmartNIC.</p>

- <p>This presentation will look at the challenges faced in leveraging hardware acceleration in DPDK-enabled applications, addressing some of the problems posed in creating consistent hardware-agnostic APIs to support multiple accelerators with non-aligned features, and the knock-on implications this can have for application designs.</p>

- <p>In this talk we present joint work by NXP, Intel and Mellanox on offloading security protocol processing to hardware, providing better utilization of the host CPU for packet processing. This talk provides an overview of the new enhancements in the rte_security APIs to support various features of IPsec offload, as inline or lookaside offload.</p>

- <p>The FPGA allows a wide variety of features to be supported in DPDK. We observe that programmable HW is useful for packet-processing pipelines. For example, consider a pipeline of multiple match-action operations, in which actions may also specify generic packet modifications that are carried out by accelerators. In this case, the CPU is only involved at the beginning (transmission) or end (reception) of the pipeline, while the accelerator invocations are initiated by NIC matching operations.</p>

- <p>Although packet forwarding with VPP and DPDK can now scale to tens of millions of packets per second per core, the lack of alternatives to kernel-based sockets means that containers and host applications cannot take full advantage of this speed. To fill this gap, functionality was recently added to VPP that is specifically designed to allow containerized or host applications to communicate via shared memory if co-located, or via a high-performance TCP stack between hosts.</p>

- <p>To have apples-to-apples comparisons, developers need a common ground of base-level metrics. That common ground is the ability to identify the basic DPDK building blocks of importance (as well as relevance to the workload), e.g., producer/consumer rings, and to measure the cycle cost associated with basic operations like enqueue/dequeue, bulk versus single.</p>

- <p>SDN is at the foundation of all large scale networks in the public cloud, such as Microsoft Azure. But how do we make a software network scale to an era of 40/50+ gigabit networks and provide great performance for network applications and NFV in VMs? In this presentation, Daniel Firestone and Madhan Sivakumar will detail Azure Accelerated Networking for Linux with DPDK, using Azure's FPGA-based SmartNICs to accelerate Linux workloads using SR-IOV. </p>

- <p>To truly achieve the vision of a high-performance software-based network that is flexible, lower-cost, and agile, a fast and carefully designed NFV platform along with a comprehensive SDN control plane is needed. Our high-performance NFV platform, OpenNetVM, exploits DPDK and enables high bandwidth network functions to operate at near line speed, while taking advantage of the flexibility and customization of low cost commodity servers.</p>

- <p>Achieving network function parity across purpose-built ASIC implementations and virtual implementations is not straightforward, irrespective of differences in performance capability between purpose-built and virtual environments. Functional disfiguration represents a significant obstacle to operators’ adoption of virtualization, as it implies a dependency on access/aggregation network topology and configuration.</p>

- <p>Telcos and cloud providers are looking for higher performance and scalability when building next-generation data centers for NFV & SDN deployments. While running OVS over DPDK reduces the CPU overhead of interrupt-driven packet processing, CPU cores are still not completely freed from polling packet queues.</p>

- <p>Network Functions Virtualization (NFV) deployments are happening at a rapid pace. This is driving the need to more efficiently consolidate compute, storage and communication workloads. NFV enables Communications Service Providers to migrate their fixed-function networking elements to general purpose servers; however, there is a need to preserve the existing performance and latency. To support such workloads, a vSwitch that enables both high throughput and low latency is a must. </p>

- <p>In this talk we will present the new DPDK Membership Library. This library is used to create what we call a “set-summary”, a new data structure that summarizes a large set of elements. It is a generalization and extension of traditional filter structures (e.g., Bloom filter, cuckoo filter) that efficiently tests whether a key belongs to a large set.</p>

- <p>Some applications are written from the ground up with DPDK in mind, but Open vSwitch is not one of them. This talk will look at how Open vSwitch integrates and uses DPDK. It will look at various aspects such as DPDK initialization, threading, and the usage of DPDK PMDs and libraries. It will also discuss DPDK usability aspects such as LTS and API/ABI stability and the effect they have on Open vSwitch with DPDK. </p>

- <p>In this talk, we introduce a new open source router implementation called Lagopus Router. It is an extensible microservice architecture router that consists of a DPDK router dataplane, router agents, and a pub/sub-based centralized configuration manager. These modules are written in Go and C and are loosely coupled to each other by gRPC.</p>

- <p>You must be registered to post on these mailing lists. It is a spam countermeasure.</p>

- <p>If you are having trouble using the lists, please contact <a href="mailto:admin@dpdk.org">admin@dpdk.org</a>.</p>

- <h3>Best practices</h3>

- <p>As advised by Eric S. Raymond in "<a href="http://www.catb.org/~esr/faqs/smart-questions.html">How To Ask Questions The Smart Way</a>", a good title conveys what you're asking while not being too long. Then try to be descriptive in the email body.</p>

- <p>In-line replies are preferred because they are easier to read. See <a href="http://en.wikipedia.org/wiki/Posting_style">posting styles</a> for more details.</p>

- <p>HTML emails are forbidden because they are difficult to view in the archives.</p>

- <p>As this is a public mailing list, confidential disclaimers are not allowed.</p>