NETWORKING

A Brief Introduction To OpenFlow

OpenFlow is a specification now managed by the Open Networking Foundation, which defines the functions and protocols used to centrally manage switches via a centralized controller.

OpenFlow is a command-and-control protocol: communication between the controller and switches runs over SSL/TLS-protected channels, the controller discovers and configures the features of each device, and it manages the forwarding tables on the switches. The OpenFlow protocol doesn't stipulate how the network is designed or managed; that is up to implementers and vendors to decide.

OpenFlow was also designed to work with existing products; no specialized hardware is required. A number of vendors are offering experimental hardware that runs OpenFlow today and can run both OpenFlow and their native switching/routing software on the same switch by dedicating specific ports to each. Two vendors, Fujitsu and NEC, are shipping OpenFlow switches: Fujitsu's is OpenFlow-only, while NEC's is a hybrid.

The problem with today's network stems from how Ethernet was originally designed as a simple framing protocol. As LANs became more complex, the Spanning Tree Protocol had to be introduced to remove the possibility of broadcast storms from loops in the network, at the cost of reducing the network to a single rooted tree (so not all paths could be used).

Quality of service is implemented on a per-device basis with no context of the surrounding network, resulting in inefficient traffic management. VLANs were introduced to segment traffic and extend Layer 2 networks across a campus or wide area. Link Aggregation (LAG), a.k.a. bonding, was developed to increase capacity between switches over multiple physical interfaces, but often less than 75% of the total capacity can actually be used.

OpenFlow controllers have a holistic view of the network from edge to edge and know all the paths between any two points. OpenFlow can use most fields from Layer 2 to 4 headers to match flows (a unidirectional set of frames between two points) and look up the path through the network. You can use multiple forwarding mechanisms to get better load balancing and processing without being limited to the physical topology. The controller is a piece of software and can be dynamically programmed based on changing needs, hence the term software-defined networking. That is the promise anyway.
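To make the matching idea concrete, here's a minimal Python sketch of a flow match with wildcard fields. The field names loosely follow the OpenFlow 1.0 match tuple, but the class and the dictionary packet representation are invented for illustration; this is not the wire format.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified sketch of an OpenFlow-style match: any field left as None
# acts as a wildcard. Field names loosely follow the OpenFlow 1.0
# match tuple; illustrative only.
@dataclass(frozen=True)
class FlowMatch:
    in_port: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    tp_src: Optional[int] = None   # TCP/UDP source port
    tp_dst: Optional[int] = None   # TCP/UDP destination port

    def matches(self, pkt: dict) -> bool:
        """A packet matches if every non-wildcard field agrees."""
        for field in self.__dataclass_fields__:
            want = getattr(self, field)
            if want is not None and pkt.get(field) != want:
                return False
        return True

# Match all TCP traffic (IP protocol 6) destined for port 80, any source.
web = FlowMatch(ip_proto=6, tp_dst=80)
print(web.matches({"ip_proto": 6, "tp_dst": 80, "ip_src": "10.0.0.5"}))  # True
print(web.matches({"ip_proto": 17, "tp_dst": 80}))                       # False
```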

When a new switch is added to the network, it is configured with the credentials and address of the controller. It contacts the controller, which queries the switch for its capabilities and configuration, then pushes any configuration it has to the switch. The controller updates its view of the network and its forwarding policies, updating the existing switches' forwarding tables if necessary and making the new switch available to carry traffic.
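That join sequence can be sketched roughly as follows. The classes and method names (`query_features`, `install_flow`, and so on) are invented stand-ins for OpenFlow's hello/features/flow-mod exchange, not a real controller API.

```python
# Hypothetical sketch of a switch joining an OpenFlow network:
# connect -> feature discovery -> topology update -> policy push.
class Switch:
    def __init__(self, dpid, ports):
        self.dpid, self.ports, self.flows = dpid, ports, []

    def query_features(self):
        # In real OpenFlow this is the features request/reply exchange.
        return {"ports": self.ports}

    def install_flow(self, rule):
        self.flows.append(rule)

class Controller:
    def __init__(self):
        self.topology = {}

    def on_switch_connect(self, switch):
        features = switch.query_features()      # discover capabilities
        self.topology[switch.dpid] = features   # update network view
        for rule in self.rules_for(switch):     # push forwarding policy
            switch.install_flow(rule)

    def rules_for(self, switch):
        # Minimal policy: unmatched traffic goes to the controller.
        return [("table-miss", "send_to_controller")]

ctl = Controller()
sw = Switch(dpid=1, ports=[1, 2, 3])
ctl.on_switch_connect(sw)
print(ctl.topology)   # {1: {'ports': [1, 2, 3]}}
print(sw.flows)       # [('table-miss', 'send_to_controller')]
```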

Think of defining how network traffic is forwarded much as you think about defining access policies in a firewall. You specify the conditions to match traffic on, such as source and destination addresses (wildcards included), and an action to take on a match. In OpenFlow, you define forwarding policies limited only by the capabilities of the controller and your own needs, and as your needs change, so can your policy.
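A minimal sketch of that firewall-style rule table, assuming rules are (priority, conditions, action) tuples where the highest-priority match wins; the field names and action strings are illustrative only.

```python
# Firewall-style flow table: each rule pairs match conditions (missing
# keys act as wildcards) with an action; highest priority wins.
def first_match(rules, packet):
    for _priority, conditions, action in sorted(
            rules, key=lambda r: r[0], reverse=True):
        if all(packet.get(k) == v for k, v in conditions.items()):
            return action
    return "send_to_controller"   # table miss: punt to the controller

rules = [
    (100, {"ip_dst": "10.0.0.80", "tp_dst": 80}, "output:3"),
    (50,  {"ip_dst": "10.0.0.80"},               "output:2"),
    (10,  {},                                    "drop"),  # catch-all
]

print(first_match(rules, {"ip_dst": "10.0.0.80", "tp_dst": 80}))  # output:3
print(first_match(rules, {"ip_dst": "10.0.0.80", "tp_dst": 22}))  # output:2
print(first_match(rules, {"ip_dst": "192.168.1.1"}))              # drop
```

As policy changes, changing the rule list changes forwarding behavior; no individual device reconfiguration is implied.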

In general, we always want traffic to go via the fastest, shortest path. However, when congestion occurs, the fastest, shortest path becomes oversubscribed and we want to prioritize some traffic over others. With an SDN, you can set a policy that prioritizes time-sensitive traffic over bulk traffic. As congestion occurs, you can move some or all of the bulk traffic to a different path, reducing congestion on the shortest path for your time-sensitive traffic.
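As a sketch, such a policy might look like the following, assuming just two precomputed paths, two traffic classes, and an arbitrary 80% utilization threshold; all of these values are made up for illustration.

```python
# Congestion-aware path policy sketch: time-sensitive flows keep the
# shortest path; bulk flows shift to an alternate path once the short
# path's utilization crosses a threshold. Paths and threshold invented.
SHORT_PATH = ["s1", "s2", "s4"]
LONG_PATH = ["s1", "s3", "s5", "s4"]

def choose_path(traffic_class, short_path_utilization, threshold=0.8):
    if traffic_class == "bulk" and short_path_utilization >= threshold:
        return LONG_PATH   # move bulk traffic off the congested path
    return SHORT_PATH      # time-sensitive traffic keeps the short path

print(choose_path("voip", 0.9))  # ['s1', 's2', 's4']
print(choose_path("bulk", 0.9))  # ['s1', 's3', 's5', 's4']
print(choose_path("bulk", 0.3))  # ['s1', 's2', 's4']
```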

While OpenFlow has a centralized controller, that doesn't mean each new flow has to result in a controller lookup. If a new flow matches an existing rule, it is processed according to that rule's actions. Rules can be pre-populated, reducing the number of lookups that occur, and intelligent policy development should mean fewer controller lookups overall. In addition, rules have a time-to-live associated with them, so if the switch is disconnected from the controller for some reason, it can still process existing and new flows; only those flows that would require a controller lookup would fail.
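The aging behavior can be sketched as below. OpenFlow flow entries do carry idle and hard timeouts, but this class and the timeout values are invented for illustration.

```python
import time

# Sketch of flow-entry aging: an entry expires when it has been idle
# longer than its idle timeout, or has existed longer than its hard
# timeout. Until then the switch forwards matching traffic on its own,
# with no controller round trip. Timeout values are arbitrary.
class FlowEntry:
    def __init__(self, match, action, idle_timeout=10, hard_timeout=60):
        self.match, self.action = match, action
        self.idle_timeout, self.hard_timeout = idle_timeout, hard_timeout
        self.installed = self.last_hit = time.monotonic()

    def touch(self):
        self.last_hit = time.monotonic()   # called on each matched packet

    def expired(self, now=None):
        now = now if now is not None else time.monotonic()
        return (now - self.last_hit > self.idle_timeout or
                now - self.installed > self.hard_timeout)

entry = FlowEntry({"ip_dst": "10.0.0.80"}, "output:2", idle_timeout=5)
print(entry.expired())                            # False: just installed
print(entry.expired(now=entry.installed + 100))   # True: hard timeout hit
```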

Controller technology is not new either. Enterprises have been using controller-based wireless and network access control for years successfully.

Naturally, any critical system will have to be built with high availability in mind, and this requirement is not lost on OpenFlow controller vendors. HA functions are not part of the OpenFlow protocol definition and will have to be implemented independently of the protocol. I'd expect to see HA implemented in configurations beyond simple active/standby, but how it is actually implemented will vary by vendor.

Obviously, a network controller is a potentially high-value target for attackers, because if they get control of the controller, then they manage your network. However, an OpenFlow controller really doesn't present more of a target than any other critical network, system or hypervisor management system.

The controller needs to be protected from attack and needs to have strong authentication built in, rights management to control who can do what, an audit log to track and roll back changes, and all the other features you'd expect to protect a controller. Not having those features is a non-starter.

Do you need OpenFlow to manage your network? No. Can OpenFlow controllers provide features and functions better than what's available today via existing standards? Absolutely. An OpenFlow controller can, within its OpenFlow management network, potentially replace most of the management protocols running in your existing network. You don't need to worry about loops, VLANs become optional rather than necessary, and you can use all of your capacity between any OpenFlow-enabled switches. You can potentially design the network of your dreams completely in software and deploy it with the push of a button. If we sound breathless, it's because the potential to unlock the power of your network is very real. The breadth of that power depends on the capabilities of the controller.

An OpenFlow controller simply defines how frames are forwarded through the network, and the controller has an end-to-end view. It can potentially make more intelligent decisions based on the goals you want to achieve and the capabilities of the switch hardware, and can respond to changes in demand.

Not all applications are created equal. We already showed how VoIP traffic can maintain its SLA requirements, even under congestion, by dynamically moving lower-priority traffic to other paths. Similarly, you can define multiple paths with varying priorities so that if a primary path fails, a secondary path can be selected immediately with a lower failover time than with traditional L2/L3 methods.
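A rough sketch of that pre-installed failover idea, in the spirit of OpenFlow's fast-failover groups: the highest-priority path whose links are all up carries the traffic, with no controller round trip on failure. The link names and priority values are made up.

```python
# Pre-installed paths with priorities; failover is a local selection,
# not a controller lookup. Names and priorities are illustrative.
paths = [
    (200, ["s1-s2", "s2-s4"]),           # primary path
    (100, ["s1-s3", "s3-s5", "s5-s4"]),  # pre-installed backup
]

def active_path(paths, down_links):
    """Return the highest-priority path with no failed links."""
    for _prio, links in sorted(paths, key=lambda p: -p[0]):
        if not any(link in down_links for link in links):
            return links
    return None   # no usable path remains

print(active_path(paths, down_links=set()))       # primary
print(active_path(paths, down_links={"s2-s4"}))   # backup
```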

Since the OpenFlow controller controls the network, it becomes the integration point for anything network-related, such as hypervisors, applications, security functions and load balancing. Integration moves from individual switches to the controller.

This all leads to software-defined networking, where the network is designed and deployed in software. This leads to rapid changes with potentially fewer configuration errors and faster recovery times from errors. More importantly, all of your software can signal the network of the parameters it requires, and the controller can set forwarding policies based on business need versus technical need. Lastly, your network engineers can spend more time on engineering and less time tapping a CLI.

OpenFlow does not commoditize switching. The LAN edge still needs intelligence to configure the switches and switch ports for things like VLAN assignment, management of Power over Ethernet budgets, authentication of hosts via 802.1X, and possibly integration with external switches using traditional Ethernet protocols. In the data center, there is a potential to commoditize switching if all you need is Ethernet access. However, data center networking demands usually involve lower latency, tighter control of the interconnections, and less intelligence at the edge.
