At the last Open Networking User Group (ONUG) meeting in Boston, organized with partner Fidelity Investments, it became very clear that there are two killer SDN applications: network virtualization and network visualization. Some argue that network virtualization used for VM-to-VM networking is not an SDN technology, but I beg to differ. Sure, there are closed and open approaches to network virtualization, but over the next business cycle, the integration of OpenStack and OpenFlow will make it clear that network virtualization is an SDN application, especially as it’s extended to physical networks. The second killer app is network visualization, that is, the ability to monitor network traffic and tweak application performance. At ONUG, Rich Groves, Principal Architect at Microsoft, presented the SDN-based approach to network visualization he designed with Big Switch Networks, cPacket, IBM, Arista and other suppliers. Why is network visualization an SDN killer app? Because it uses SDN technology and, most importantly, lowers capital cost significantly compared to existing network visualization approaches while also delivering an entirely new level of self-service flexibility. Companies such as Big Switch Networks, Gigamon, Arista, Cisco, NetScout, Ixia/Anue, cPacket and others are now positioning their network visualization offerings within an SDN context, and for good reason. In this Lippis Report Research Note, I focus on network visualization as an SDN killer app.

The network visualization market has been on a tear over the past four years as IT business leaders seek to monitor and optimize application performance by gaining visibility into the networks applications flow over. The first generation of network visualization products tapped into networks either via physical taps or via port mirroring, known as Switched Port Analyzer or SPAN. These taps provided access to traffic for monitoring, collection and steering toward analytic engines, such as network intrusion detection systems, VoIP recording, traffic analytics, packet sniffers, Big Data analytics, etc. The devices that collected this traffic provided processing functions, such as terminating mirror ports, filtering, de-duplicating packets, load balancing, etc.

But as the need to know what’s flowing inside networks grew, the ability to tap and access traffic became limited by the number of available SPAN ports. Network visualization vendors responded with 1 and 10GbE packet brokers that sit in the line of traffic to capture flows, but this is expensive. Enter Software-Defined Networking, or more precisely, Software-Defined Monitoring.

A new model has emerged in which SDN lowers capital cost, allows traffic to be captured broadly without consuming switch ports, takes monitoring packet brokers out of the line of traffic, speeds the flow of data to troubleshooters and gives business groups self-service access to data.

Presentations at ONUG showed that Software-Defined Monitoring (SDM) architectures repurpose commodity Ethernet switches to provide basic monitoring and visibility functions, which lowers capital cost by 95% (yes, 95%); put another way, existing in-line monitors are 20x more expensive. SDM aims to aggregate and distribute traffic to troubleshooters quickly, at ultra scale and affordably, and to enable business-unit self-service access to monitoring data. An SDM network consists of the following components:

Overlay Network: One of the key aspects of SDM is that packet brokers are not in line with traffic, meaning they are not part of the production network. This allows SDN experimentation without potential disruption to the production network if something were to go wrong.

OpenFlow Controller: An overlay of OpenFlow switches is programmed via an OpenFlow controller, such as the Big Switch Controller. Big Switch also offers Big Tap to program port-based roles, such as filter, delivery, service and core. This model scales economically to the largest of cloud computing facilities. Topology information is obtained by sampling network control traffic, and statistics, such as flows per policy, are provided. In short, Big Tap defines flow filters while the Big Switch Controller programs the OpenFlow switches.

Mux Layer: At the multiplexer (Mux) layer, traffic from all filter switches is aggregated and directed to either “service nodes” or delivery interfaces. This is where service chaining per policy occurs, meaning that flows matching multiple policies are processed.

Service Nodes: Aggregated traffic from the Mux layer is forwarded to service nodes for specialized packet processing; in essence, packet brokers are service nodes. Packet processing includes deep layer 7 filtering, time stamping, frame slicing, encapsulation removal for tunnel inspection, payload removal for compliance purposes, etc. Most of these functions are so specialized that custom ASICs are required, and this is where existing network visualization firms offer tremendous value. Post-service-node flows are sent back to the Mux layer for delivery to tooling, network analytics, Big Data analytics, etc.

Delivery Layer: The delivery layer is flexible in that either 1:n or n:1 delivery is accommodated, whether for local tools or for tunneling to remote tools, analytics and security devices.
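The flow of a tapped packet through these port roles can be illustrated with a toy model. This is a minimal sketch, not any vendor’s implementation: the class, policy structure and “frame slicing” service function are all hypothetical, standing in for behavior that real fabrics program into switch hardware via a controller.

```python
# Toy model of SDM packet steering: filter match -> optional service
# node -> delivery interface. Illustrative only; names are invented.

FILTER, SERVICE, DELIVERY = "filter", "service", "delivery"

class SDMFabric:
    def __init__(self):
        # Each policy: (match function, needs service node?, delivery port)
        self.policies = []

    def add_policy(self, match_fn, delivery_port, needs_service=False):
        self.policies.append((match_fn, needs_service, delivery_port))

    def ingest(self, packet):
        """A filter switch taps a packet; the Mux layer aggregates it and
        steers it to a service node and/or a delivery interface."""
        out = []
        for match_fn, needs_service, port in self.policies:
            if match_fn(packet):
                delivered = self.service_node(packet) if needs_service else packet
                out.append((port, delivered))
        return out

    @staticmethod
    def service_node(packet):
        # Example of specialized processing: payload removal for compliance.
        sliced = dict(packet)
        sliced["payload"] = b""
        return sliced

fabric = SDMFabric()
# Mirror all HTTP traffic to the tool on delivery port 1, stripping
# payloads at a service node first.
fabric.add_policy(lambda p: p["tcp_dst"] == 80, delivery_port=1, needs_service=True)

pkt = {"tcp_dst": 80, "payload": b"GET / HTTP/1.1"}
print(fabric.ingest(pkt))
```

The key property the sketch captures is that the production packet is untouched: the fabric works on mirrored copies, so a bad policy degrades monitoring, not the network.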

The beauty of the SDM architecture is that, through Big Tap, network engineers can easily program monitoring services, such as sending certain traffic to a Wireshark system, an open-source network packet analyzer. For example, a policy can be created using the CLI or an API to “forward all traffic matching TCP dest 80 on port 1 of filter1 to port 1 of delivery1.” The SDM application creates flows through the Big Switch Controller API. The controller pushes flow entries to filter1, the Mux layer and the delivery switch, using available downstream links, and traffic is then directed to Wireshark. The policy is independent of filter-switch location, so a specific switch need not be identified. The programming environment is simple enough to support self-service provisioning by business units.
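A policy of this shape could be expressed as a JSON payload sent to the controller’s API. The field names and endpoint below are hypothetical, sketched only to show the kind of declarative request involved; they are not the actual Big Switch API.

```python
import json

# Hypothetical policy payload for "forward all traffic matching TCP
# dest 80 on port 1 of filter1 to port 1 of delivery1". Field names
# are illustrative, not Big Tap's real schema.
policy = {
    "name": "http-to-wireshark",
    "match": {"ip-proto": "tcp", "tcp-dst-port": 80},
    "filter-interfaces": [{"switch": "filter1", "port": 1}],
    "delivery-interfaces": [{"switch": "delivery1", "port": 1}],
}

# POSTing this to a controller endpoint (hypothetical) would cause it
# to push flow entries to the filter, Mux and delivery switches over
# available downstream links.
print(json.dumps(policy, indent=2))
```

Because the request names roles and match criteria rather than physical switch ports fabric-wide, the same payload works regardless of where the filter switch sits, which is what makes business-unit self-service practical.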

The service node function becomes increasingly important as self-service enables multiple, and potentially duplicate, filters and monitoring requests. The service node then needs to support advanced filtering, such as parallel flow maps, action triggers and multi-tenancy. Packet slicing, masking, de-duplication, tunneling and header stripping of MPLS, VLAN, ISL, VNTAG, GTP, VXLAN, NVGRE, STT, et al, become important to manage a large set of overlapping and redundant monitoring requests.
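De-duplication, for instance, exists precisely because overlapping self-service filters can mirror the same packet more than once. A minimal sketch of the idea, assuming a simple digest-based approach (real packet brokers do this in hardware with timed windows):

```python
import hashlib

def dedup(packets, window=1024):
    """Drop duplicate packets so overlapping monitoring policies don't
    deliver the same packet to a tool twice. Toy version: hash each
    packet and skip digests already seen within a bounded window."""
    seen = set()
    out = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest not in seen:
            seen.add(digest)
            out.append(pkt)
        if len(seen) > window:  # bound state; real brokers age entries by time
            seen.clear()
    return out

stream = [b"pktA", b"pktB", b"pktA", b"pktC", b"pktB"]
print(dedup(stream))  # -> [b'pktA', b'pktB', b'pktC']
```

Without this step, two business units each filtering on overlapping criteria would double every matching packet delivered downstream, inflating tool load and skewing analytics.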

So what do existing network visualization firms do in the age of SDM? Focus on service nodes and software. There are many packet-processing functions, such as those listed above, that cannot be performed by merchant silicon Ethernet switches today. This is a core value proposition for network visualization firms. Most firms, such as Gigamon, Ixia/Anue, VSS, cPacket, NetScout, et al, are focusing on software value-add so that network monitoring can span both physical and virtual networks and remain network-vendor independent, providing the widest network visibility possible. SDN offers programmability, not only for networks but also for network monitoring. All of the above firms are focusing their software efforts on business-unit self-service, automated monitoring driven directly by analytic engines, and a common network management and monitoring view of both physical and virtual networks.

With SDM off the production network, IT organizations are free to experiment with a first SDN project that will do no harm to their production network. This affords them many benefits, such as SDN skill development and a lower-cost approach to network visibility with greater programming flexibility. That’s why it’s an SDN killer application.