Nowadays users expect highly dependable network connectivity and services. However, several recent episodes demonstrate that software errors and operator mistakes continue to cause undesired disturbances and outages. SDN (Software Defined Networking) is a new kind of network architecture that decouples the control plane from the data plane---a vision currently embodied in OpenFlow. By logically centralizing the control plane computation, SDN provides the opportunity to remove complexity from our networks and to introduce new functionality into them. However, as network programmability increases and software plays a greater role, so does the risk that buggy software will disrupt an entire network.
In this talk, I will present efficient, systematic techniques for testing the SDN software stack at both its highest and lowest layers. That is, our testing techniques target, at the top layer, the OpenFlow controller programs and, at the bottom layer, the OpenFlow agents---the software that each switch runs to enable remote programmatic access to its forwarding tables.
Our NICE (No bugs In Controller Execution) tool applies model checking to explore the state space of an unmodified controller program composed with an environment model of the switches and hosts. Scalability is the main challenge, given the diversity of data packets, the large system state, and the many possible event orderings. To address this, we propose a novel way to augment model checking with symbolic execution of event handlers (to identify representative packets that exercise code paths on the controller), and effective strategies for generating event interleavings likely to uncover bugs. Our prototype tests Python applications on the popular NOX platform. In testing three real applications, we uncovered eleven bugs.
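The core idea of exploring event orderings can be illustrated with a toy sketch. This is not NICE's actual implementation: the handler names (`packet_in`, `link_down`), the state representation, and the safety property below are all invented for illustration. The sketch exhaustively enumerates orderings of two controller events and checks a property in each final state, exposing an ordering-dependent bug of the kind NICE targets.

```python
from itertools import permutations

# Hypothetical event handlers for a toy controller (names invented).
def packet_in(state, pkt):
    # Unmatched packet arrives: controller installs a forwarding rule.
    state["rules"].add(pkt)

def link_down(state, _):
    # A link fails: every rule installed so far becomes stale.
    state["stale"] |= state["rules"]

def property_holds(state):
    # Example safety property: no stale rule remains installed.
    return not (state["rules"] & state["stale"])

events = [(packet_in, "pktA"), (link_down, None)]

# Model-checking-style exploration: try every interleaving of events
# from a fresh initial state and count property violations.
violations = 0
for order in permutations(events):
    state = {"rules": set(), "stale": set()}
    for handler, arg in order:
        handler(state, arg)
    if not property_holds(state):
        violations += 1

print(violations)  # -> 1: one ordering leaves a stale rule installed
```

In the ordering where the packet arrives before the link failure, the installed rule is never removed and the property is violated; the reverse ordering is safe. NICE's contribution is making this exploration tractable for real controllers by using symbolic execution to pick representative packets instead of enumerating all of them.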
Our SOFT (Systematic OpenFlow Testing) tool automates testing the interoperability of OpenFlow switches. Our key insight lies in automatically identifying the testing inputs that cause different OpenFlow agent implementations to behave inconsistently. To this end, we first symbolically execute each agent under test in isolation to derive which set of inputs causes which behavior. We then crosscheck all distinct behaviors across the different agent implementations and evaluate whether a common input subset causes inconsistent behaviors. Our evaluation shows that our tool identified several inconsistencies between the publicly available Reference OpenFlow switch and Open vSwitch implementations.
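The crosschecking step can be sketched as follows. Again this is only an illustration, not SOFT itself: the two agent functions and the message format are invented stand-ins for real agent implementations, and SOFT derives the input classes via symbolic execution rather than by concrete enumeration as done here.

```python
# Two hypothetical OpenFlow-agent implementations (names invented)
# that should behave identically on the same protocol message.
def agent_ref(msg):
    # Stand-in for a reference switch: rejects zero-length messages.
    return "error" if msg["len"] == 0 else "ok"

def agent_ovs(msg):
    # Stand-in for a second implementation: silently accepts them.
    return "ok"

# Crosscheck: feed both agents the same inputs and collect every
# input on which their observable behavior diverges.
inputs = [{"len": 0}, {"len": 64}]
inconsistent = [m for m in inputs if agent_ref(m) != agent_ovs(m)]

print(inconsistent)  # -> [{'len': 0}]: the input exposing divergence
```

The flagged input is exactly the kind of interoperability witness SOFT reports: a concrete message on which two implementations of the same specification disagree.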