Dodging open protocols with open software

I have a hunch, pure speculation in fact, that there may be an even more interesting story developing here with the unveiling of networking software startup Nicira Networks: that of open source networking software minimizing the role of network “protocols” and diminishing the role of standards bodies in building next generation networks.

Nicira Networks’ network virtualization platform (NVP) leverages the Open vSwitch (much of which was developed by Nicira engineers) as the data path for whatever edge networking device it’s installed on. In this case, it’s a server with a hypervisor and a bunch of virtual machines. Though, as Nicira points out in their literature, there’s nothing preventing the Open vSwitch from being installed on other networking devices and form factors, such as firewalls, load balancers, and physical switches. With the Open vSwitch, an open source software project, as the workhorse, one can certainly make the claim that the solution is “open” and not proprietary. Furthermore, the data path of the Open vSwitch uses established and well understood open protocols: 802.1Q, GRE, etc.

Now that you have these Open vSwitches everywhere you need something to centrally configure and control their data path with a forwarding policy. This is where Nicira’s clustered controller comes in, or perhaps a controller provided by some other vendor. The Nicira central controller will control all of the edge Open vSwitches in an elegant way (perhaps a gross understatement). That’s where they’ll make money — selling their controller software and all the professional services you might need to get things working right in your environment.

This is where things get interesting. Most people think that you would need an open “protocol” for the controller to interface with the edge Open vSwitch. And absolutely, you certainly should have that. That way any vendor can supply the controller while using the same Open vSwitches. Right? And if you take a cursory look at the Open vSwitch documentation, as expected you’ll see OpenFlow as the protocol for this purpose.

When you use a protocol, you obviously need to follow the rules of the protocol otherwise you’re not adhering to the standard, and people tend to get really upset about that kind of stuff. So, somebody has to set the rules for others to follow. Which usually involves getting a group of people with inflated egos together to agree on something, be it vendor-led standards bodies such as the IETF, or in the case of OpenFlow a customer-led “foundation” such as the ONF. All of this takes time to get right. Lots of time. Meanwhile, there’s a market out there willing to pay for a solution now.

Think about this for a second — Why do we need to use an open “protocol” for a controller to program a switch? “Well, Brad, that’s obvious, because otherwise the solution would be deemed proprietary, heaven forbid!” True, perhaps, if you’re thinking in terms of the usual paradigm where Vendor-A’s box is running Vendor-A software, connected to Vendor-B’s box running Vendor-B software. This is obviously where we need protocols. But what if Vendor-A’s box was running open source software, and Vendor-B’s box was running the same open source software? Or at least, the communication path between Vendor-A and Vendor-B is through an open source software module. Do you need a “protocol” then?

With that in mind, take a closer look at the Open vSwitch documentation, dig deep, and what you’ll find is that there are means of controlling the configuration of the Open vSwitch other than the OpenFlow protocol.

Take for example the ovs-vsctl “component” of the Open vSwitch. This component can be used to remotely configure the Open vSwitch at a granular level — such as editing tables and records in the vswitch configuration database. It’s one piece of open software talking to another piece of open software over a standard TCP connection — you can’t call that proprietary. And guess what, you don’t need a dinosaur standards body to decide what goes in the code. Is ovs-vsctl a “protocol”? No.
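As a concrete (and hypothetical) illustration of that idea, here is how a management script might compose an ovs-vsctl invocation against a remote switch. The `--db=tcp:HOST:PORT` option is a real ovs-vsctl flag that points the tool at a remote ovsdb-server over plain TCP; the address and bridge name below are made up for the sketch.

```python
# Sketch: driving a remote Open vSwitch with the ovs-vsctl CLI.
# The --db=tcp:HOST:PORT option targets a remote ovsdb-server over TCP
# instead of the local Unix socket. Host/port/bridge names are illustrative.
import subprocess

def ovs_vsctl_cmd(db, *args):
    """Compose an ovs-vsctl invocation against a remote OVS database."""
    return ["ovs-vsctl", "--db=%s" % db, *args]

# Build (but don't run) a command that creates a bridge on a remote switch.
cmd = ovs_vsctl_cmd("tcp:192.0.2.10:6640", "add-br", "br0")
print(cmd)
# To actually apply it, you would run something like:
#   subprocess.run(cmd, check=True)
```

No standards body was consulted in the making of that TCP exchange — just two pieces of open software agreeing on the same code.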

Nicira may or may not be using the OpenFlow “protocol” to control the Open vSwitch in their current deployments. There’s enough evidence to suggest they had that choice to make. Perhaps the OpenFlow 1.0 spec was just too limited for what their customers needed at that time. If so, what’s wrong with coding around the limitations in an open software platform?

The point here isn’t to blow a standards dodger whistle, but rather to observe that, perhaps, a significant shift is underway when it comes to the relevance and role of “protocols” in building next generation virtual data center networks. Yes, we will always need protocols to define the underlying link level and data path properties of the physical network — and those haven’t changed much and are pretty well understood today.

However, with the possibility of open source software facilitating the data path not only in hypervisor virtual switches, but in many other network devices, what then will be the role of the “protocol”? And what role will a standards body have in such a case, when the pace of software development far exceeds that of protocol standardization?

Disclaimer: The author is an employee of Dell, Inc. However, the views and opinions expressed by the author do not necessarily represent those of Dell, Inc. The author is not an official media spokesperson for Dell, Inc.

About Brad Hedlund

Brad Hedlund is a member of the technical staff at Amazon Web Services. Brad’s background in data center networking began in the mid-1990s and spans roles as an IT customer and systems integrator, architecture and technical strategy roles at Cisco, Dell, and VMware, and speaking at industry conferences. CCIE Emeritus #5530.

Comments

Certainly agree with you on the protocol, but wouldn’t ovs-vsctl be considered an API or “language” of sorts. If you wanted to add VendorB to your VendorA controller network, VendorB would at least need to speak the same control language. No?

Daniel,
Yes! This underscores the point I attempted to make — you need the same control language on both ends. In networking today, that language has always been defined by standards bodies (love ’em or hate ’em). However, in a world where you have open source software controlling the network, the control language is defined by the programmers working on the code. Big difference.

Brad, nice write up. Now I fully understand where you were going with your tweets earlier. Great perspective.

Some thoughts…

When we normally think protocols (in the network world), they are most of the time L2/L3 protocols, right? If it’s a control plane or management plane “interface,” i.e. protocol or API, it’s higher up in the stack obviously, but we still have standards today. Maybe it’s these standards that go away in the shorter term.

But, is it less important to be a standard if it’s at the management plane of a network? Maybe we use a new way to manage/configure/update flows to an OVS day 1 (today)? That’s okay because this just means really cool management systems. After all, what great NMSs are out there today? Network guys will like that…and they’ll even get to keep standards like OSPF/BGP to calculate the FIB keeping the same guys semi-happy without too much change at once.

Shortly after, it will be VERY easy to modify OSPF or create a totally new “flow protocol,” for lack of a better term, which will be another new proprietary protocol, this time at the control plane. This is where it gets interesting from a standards perspective. So the more I think about this, you’re definitely onto something, and standards bodies may have less of a role here?

Brad, I think you’re confusing Protocols with Standards.
Manipulating ovs-vsctl over TCP is a protocol. Unofficial, de facto, but still a protocol.
GRE and 802.1Q are official, de jure, standards.
s/protocol/standard/ in your article, and it makes more sense. It also ceases to be a big deal – lots of things aren’t standardized, this is just one more.

Adam,
I see your point. The word “protocol” can mean slightly different things in network speak vs. programmer speak. In the new era of blending networking with programming, this won’t be the last time we see this.
Thanks for the comment.

Great post, as usual, Brad. I was actually having a similar conversation with somebody just the other day. I guess the main purpose of many “Protocols” as we use them today is to allow independent network components to communicate state and to determine an agreement on that state.

Within an autonomous device it is generally up to the manufacturer to determine how to handle the state information. In the OpenFlow model that determination is a function of the OpenFlow controller. It would make sense for the communications channel between the controller and the switch in an SDN environment to use OpenFlow as the protocol of choice, though if the spec doesn’t allow for a full set of features then an open platform and an available specification is the next best thing. Who knows, having “features in the field” may be exactly what is needed to get them worked into the OpenFlow standards…

I think it’s worth remembering that “built on open source” always means “proprietary” and “arguably open” also means “proprietary”, because if something is really open then it doesn’t need to hedge. I don’t mean to diminish what Nicira has accomplished; I think it can survive on its merits and doesn’t need openwashing.

We’ve seen the “who needs standards” attitude quite a bit in the open source world; in many cases it works well but in other cases it creates a sort of “Linux ghetto” where free stuff can’t interoperate with other stuff. For example, people with Cisco boxes probably don’t appreciate hearing “IPSec is bloated, just install OpenVPN”. I think Nicira has been careful to set expectations that you need their gateway and you shouldn’t expect any third-party switches to be able to join the party. Ultimately I expect a standard overlay protocol will be developed (see NVO3) and at that point there will still be an advantage to being standard-compliant instead of “inspired by the standard”.

I think the difference between an API and a protocol is largely a matter of scale of acceptance. The impact of standards bodies on the development of relevant protocols is generally overstated, and a standard usually forms long after the protocol’s natural evolution has run its course. Protocols developed by committee have not usually fared so well. The future of SDN is bright, and the protocol, whether OpenFlow or not, will follow.

And how can you make the Nicira controller talk with a controller of another network that is not Nicira? How do you make it talk to existing infrastructure? How do you cross administrative domain boundaries? This is where standards are needed… The reason the Internet works is that there is a well defined set of protocols for building networks of networks, and adversaries can still exchange traffic and routes. If you get rid of those you build islands… and if you are an enterprise you might be OK with that (you need a network for yourself). But this is not the Internet. This is SNA.

The interface between the controller and the switch is not interesting by itself and it can be anything … Within a single administrative domain, life is simple.

It is sad that people are forgetting why the design choices of the Internet (not just IP) were made, and how big of an impact this had in its exponential growth.

JS,
If the same underlying open source software exists on both ends, presumably each end has the same tools and APIs to interface with. You don’t need a bloated standards body to decide what goes in the code. That’s the difference.

Was voice doomed by IP Telephony? Maybe for some, but did the TDM world that had been around for even longer than IP networks simply die? No, there are gateways and devices alike that are used to interconnect them. We would need something similar for SDN. Similar, but different.

Whenever I hear about new protocols, I always recall reading in a book that “The world is not happy when a new protocol is born”; and I believe that to stand true no matter how open the protocol is. (The book was “Computer Networks” by A. Tanenbaum, I guess.)

From my limited perspective, I see open source components and open APIs as a way of building virtualized networks. As cited by the author, Quantum is an example; however, the very ovs-vsctl CLI utility cited in this post uses an API exposed by the ovsdb-server process, which is a simple JSON-RPC interface to manipulate the OVS layout — and by extension the topology of your virtual networks; in the open source space, CloudStack has a rich API for building virtual networks as well.

Nevertheless, I do believe in the concept of separating data forwarding, management and control planes, which is at the foundation of OpenFlow and Open vSwitch; it is my humble opinion anyway that this will happen independently of OpenFlow.
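To make the JSON-RPC point above concrete: the wire format ovsdb-server speaks is plain JSON-RPC over a socket (later standardized as the OVSDB management protocol), so a client simply serializes a request object. The `list_dbs` method is part of that protocol; the helper name and id value below are illustrative.

```python
# Minimal sketch of the JSON-RPC wire format spoken by ovsdb-server
# (the interface behind ovs-vsctl). This only builds a request; no
# connection is made. "list_dbs" is a method defined by the OVSDB protocol.
import json

def make_request(method, params, req_id):
    """Serialize an OVSDB-style JSON-RPC request object."""
    return json.dumps({"method": method, "params": params, "id": req_id})

msg = make_request("list_dbs", [], 0)
print(msg)  # a one-line JSON object, ready to write to the TCP socket
```

Any program in any language that can produce that JSON and open a socket can talk to the switch — no standards-body sign-off required, which is exactly the dynamic the post describes.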

It is probably like telnet. While telnet is a networking protocol defined in an RFC, the implementation of telnet server and client applications differs from OS to OS and depends on programming languages, OS policies, and a host of other things. So it is possible for a Linux open source telnet client to connect to a Windows telnet server, if configured on the Windows side. I therefore think protocols will always have a place, but maybe new protocols will be adopted in a more democratized setup than the IETF, if such a system ever comes up. Maybe the existing networking protocols are enough to start building open APIs on top of them without going to the standards bodies.

Btw, thanks a bunch for all the great UCS videos you have put up and the excellent writings on your blog. Wish you all the very best.

Hello sir …
Can you help me please? I am looking for documents describing detailed LAN and WAN network architectures, with the different types of interconnection between them and the protocols and security standards… but I can’t find any information.
Thank you in advance

What do you do when people disagree on which piece of code gets integrated, or on project governance, or licensing, and eventually the open source project forks? What kind of interoperability can you expect between two forked variants of the same initial open source project that was using an internal de facto protocol?

Well, networking could have started off with SDN rather than protocols from the very beginning…

Software precedes network standardization… why then did it take so long for networks to be implemented in software? SDN holds good even without considering virtualization… and what about support when network issues arise? When the DC is on fire, will the developer go through the code and analyze the issue? What would be the TAT? Who would be held accountable for network issues? The SDN programmer?

Also, I don’t see the need for hand coding once the dust and din around SDN have settled.

Most companies will go for commercial software like VMware or Microsoft, and there would not be a pressing need for an in-house SDN programmer per se… each and every company using SDN won’t need programmers…

Also, the pay scale would be a lot lower than what CCIEs/JNCIEs are getting now…

I agree that software giants such as Microsoft and VMware will be at the center of the data center networking universe.
As for the salaries of CCIEs, I don’t see that changing drastically. If the salaries do go down, it’s only because the value has shifted more to a different role — such as the virtualization admin — so from a macro viewpoint it’s not a net loss.

Also… do you foresee a production data center network purely on software and server farms with zero network hardware (read: physical switches)? Like a software defined data center?
If not, then we would anyway need physical switches… wouldn’t that obviously point to Cisco/Juniper?

Then we need the traditional data center network at least for the physical part?

And who will own and manage the IP space? The netadmins or the vmadmins?

Also, who is better positioned for a fillip? Netadmins learning VMs, or vmadmins learning networking?

Trackbacks

[…] vibrant exchange on Twitter), Brad Hedlund asks whether Nicira’s Open vSwitch is open – Dodging open protocols with open software ? The story being that of open source networking software minimizing the role of network […]

[…] but it is quite possible to work around its absence if there is open source, a fact once again highlighted by this brilliant post by Brad Hedlund. No, I am nowhere close to claiming that we don’t need to focus on open standards but I am […]

[…] very interesting points of view from my distinguished ex-colleague Brad Hedlund entitled “Dodging Open Protocols with open software“. In his post he tries to dissect both the intentions and impact of a new breed of […]

[…] to define interoperability. But in today’s world, there are as many best-practices being defined via open-source projects as in any standards-body, so companies now need to decide if they value pace-of-change over standardization. IT […]