Traditionally, sophisticated programs had always been “built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation.” An open source project, in contrast, was the product of a large and informal community of volunteers who in aggregate “seemed to resemble a great babbling bazaar of differing agendas and approaches.” What was amazing was that open source projects such as Linux not only didn’t fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to cathedral-builders.

Virtualization has had a profound impact on the ability to quickly and cost-effectively deliver and scale applications in the data center. The impact was initially on compute and storage resources, but it is now extending to network resources with the advent of Software-Defined Networking (SDN) technologies.

Recently we have noticed an 8th myth floating around, namely that SDN requires a forklift upgrade. In actuality, SDN does not require a forklift upgrade: you can deploy SDN on your existing network.

In the early days, the SDN approach was to create reactive end-to-end networks. Reactive means that the first packet of every flow is punted to a centralized controller, which decides whether the flow is allowed and, if so, what the optimal end-to-end path for the flow is. End-to-end means that the centralized controller programs a flow entry into each switch on the chosen path.
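The reactive model can be sketched as a toy simulation: a switch with an empty flow table punts the first packet of each flow to a central controller, which then installs per-flow state on every hop. All the class and method names here are illustrative assumptions for the sketch, not a real OpenFlow API.

```python
class Controller:
    """Toy centralized controller (illustrative, not a real SDN API)."""

    def __init__(self, switches):
        self.switches = switches  # ordered hops on the (only) path

    def handle_punt(self, flow):
        # Policy decision plus end-to-end setup: install fine-grained
        # state for this one flow on every switch along the path.
        for sw in self.switches:
            sw.flow_table[flow] = "forward"


class Switch:
    """Toy switch that punts table misses to the controller."""

    def __init__(self, name, controller=None):
        self.name = name
        self.flow_table = {}
        self.controller = controller
        self.punts = 0

    def receive(self, flow):
        if flow not in self.flow_table:
            self.punts += 1                   # first packet of the flow: punt
            self.controller.handle_punt(flow)
        return self.flow_table[flow]


edge = Switch("edge")
core = Switch("core")
ctrl = Controller([edge, core])
edge.controller = ctrl

edge.receive(("10.0.0.1", "10.0.0.2"))  # first packet is punted
edge.receive(("10.0.0.1", "10.0.0.2"))  # later packets hit the flow table
print(edge.punts)            # one punt per flow
print(len(core.flow_table))  # the core also carries per-flow state
```

The punt on every new flow and the per-flow entry on every hop are exactly the two costs discussed next.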

Unfortunately, the reactive end-to-end architecture is not scalable. Punting the first packet of every flow to the centralized controller introduces latency and creates a choke point in the network. Installing fine-grained flows on each switch on the path introduces even more latency and requires massive forwarding tables on the aggregation switches.
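A back-of-the-envelope calculation shows why per-flow state overwhelms aggregation switches. The server, VM, and flow counts below are assumptions chosen for a mid-size data center, not measurements.

```python
# Assumed sizing for a mid-size data center (illustrative numbers).
servers = 1000
vms_per_server = 20
vms = servers * vms_per_server       # 20,000 VMs
concurrent_flows_per_vm = 10         # assumed average

# Reactive end-to-end: aggregation switches carry an entry per flow.
flow_entries = vms * concurrent_flows_per_vm
print(flow_entries)  # 200,000 fine-grained entries

# Hardware flow tables typically hold on the order of tens of
# thousands of entries, so fine-grained end-to-end state can exhaust
# the table long before the fabric itself is saturated.
```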

There are various tricks to alleviate these problems, for example using fine-grained flows at the edge and coarser-grained flows in the core. However, these tricks are simply small steps toward a better solution: proactive overlay networks.
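The "fine at the edge, coarse in the core" trick amounts to prefix aggregation: the edge matches exact flows while the core matches only an aggregate destination prefix. A minimal sketch, with assumed addresses, using Python's standard `ipaddress` module:

```python
import ipaddress

# Edge: one exact entry per flow (100 assumed flows to subnet 10.2.0.0/24).
edge_flows = [("10.1.0.%d" % i, "10.2.0.%d" % i) for i in range(1, 101)]

# Core: a single coarse rule on the aggregate destination prefix.
core_prefix = ipaddress.ip_network("10.2.0.0/24")
core_entries = {core_prefix}

# Every fine-grained edge flow is covered by the one core entry.
assert all(ipaddress.ip_address(dst) in core_prefix
           for _, dst in edge_flows)
print(len(edge_flows), "edge entries ->", len(core_entries), "core entry")
```

This keeps the core tables small, but the edge still carries per-flow state and the controller still sees every first packet, which is why it is only a partial fix.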

Proactive means that the centralized controller installs the forwarding state a priori, before any flows arrive, avoiding the need to punt packets to the controller. Overlay means that the virtual network is separated from the physical network by encapsulating packets in tunnels such as VXLAN or MPLS over GRE.
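To make the overlay concrete, here is a minimal sketch of VXLAN encapsulation as defined in RFC 7348: the original Ethernet frame is prepended with an 8-byte VXLAN header whose 24-bit VNI identifies the virtual network, and the result is carried in UDP (destination port 4789) over the physical underlay. The function name and the sample frame are illustrative.

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned VXLAN port
FLAG_VNI_VALID = 0x08   # "I" flag: the VNI field is valid

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header carrying the 24-bit VNI."""
    assert 0 <= vni < 2**24
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    header = struct.pack("!B3s3sB", FLAG_VNI_VALID, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

packet = vxlan_encap(b"\xaa" * 64, vni=5001)
print(len(packet))        # 72: 8-byte VXLAN header + 64-byte inner frame
print(packet[4:7].hex())  # '001389': VNI 5001 in the header
```

The underlay only ever routes the outer UDP/IP packet between tunnel endpoints; the inner frame, and therefore all per-VM addressing, is invisible to the physical switches.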

Proactively installing forwarding state rather than punting packets to the controller for a reactive decision greatly reduces latency and improves the stability and scalability of the network.

The service layer (the virtual overlay) is cleanly separated from the infrastructure layer (the physical underlay). Any changes to the service layer, such as adding or removing a virtual network or a virtual machine, affect only the virtual switches in the overlay. Not touching the physical underlay for service changes makes service deployment faster and makes the physical infrastructure more stable.

The amount of state in the physical switches is greatly reduced. For example, in a data center the physical switches only need forwarding state for the physical servers; they do not need any forwarding state for virtual machine flows. This greatly improves scaling.
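The state reduction can be quantified with the same assumed sizing as before: the underlay needs one route per physical server (tunnel endpoint), while per-VM state lives only in the virtual switches at the edge. The numbers are assumptions, not measurements.

```python
# Assumed sizing (illustrative numbers).
servers = 1000
vms_per_server = 20
vms = servers * vms_per_server

# Overlay split: the physical underlay forwards only between tunnel
# endpoints, so every physical switch needs just one route per server.
underlay_routes = servers

# Without the overlay, the physical switches would need per-VM state.
flat_entries = vms

print(underlay_routes)  # 1,000 underlay routes
print(flat_entries)     # 20,000 entries avoided: a 20x reduction
```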

The protocols for the virtual overlay and the physical underlay can be independently chosen.

This last point is the key to avoiding forklift upgrades. It makes it possible to reap the benefits of SDN by using SDN protocols such as XMPP or OpenFlow in the virtual overlay, while continuing to use existing routing and switching protocols in the underlay, thereby avoiding disruption to existing services. Note that the physical underlay can be greatly simplified by eliminating protocol features which are no longer needed in the presence of an overlay.

To find out more about the advantages of the "proactive overlay" approach over the "reactive end-to-end" approach, see our new white paper.