We started with the obvious question “why would someone want to have NETCONF on a campus switch”, continued with “why would you use NETCONF and not a REST API”, and diverted into “who loves regular expressions”. Joking aside, here’s what we discussed.

Imagine a Flatworld in which railways are the main means of transportation. Its inhabitants used horses and pigeons in the past, and experimented with underwater airplanes, but railways won because they were cheaper than anything else (for whatever reason, price always wins over quality or convenience in that world).

As always, there were multiple manufacturers of railroad tracks and trains, and everyone tried to use all sorts of interesting tricks to force customers to buy tracks and trains from the same vendor. Different track gauges and heptagonal wheels that worked best with grooved rails were the usual tricks.

Just FYI: a week after I wrote this (don't forget to go through the comments), VMware made it official:

…we’ve found that VMware’s native virtual switch implementation has become the de facto standard for greater than 99% of vSphere customers today. … Moving forward, VMware will have a single virtual switch strategy that focuses on two sets of native virtual switch offerings – VMware vSphere® Standard Switch and vSphere Distributed Switch™ for VMware vSphere, and the Open virtual switch (OVS).

Ansible network modules (at least in the way they’re implemented in Ansible releases 2.1 and 2.2) were one of the more confusing aspects of my Building Network Automation Solutions online course (and judging by what I’m seeing on various chat sites, we weren’t the only ones confused).

One of the engineers watching my Data Center 3.0 webinar asked me why we need session stickiness in load balancing, what its impact is on load balancer performance, and whether we could get rid of it. Here’s the whole story from the networking perspective.
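
To make the concept concrete, here’s a minimal Python sketch (not taken from any real load balancer) of the simplest form of stickiness: hashing the client’s source IP address so the same client always lands on the same backend server. The server names and the client addresses are made up.

```python
# Minimal stickiness sketch: deterministic hash of the client source IP.
# The same client always maps to the same backend, so session state stored
# on that backend keeps working across connections.

import hashlib

BACKENDS = ["web1", "web2", "web3"]   # hypothetical server pool

def pick_backend(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

print(pick_backend("192.0.2.11"))   # always the same backend for this client
print(pick_backend("192.0.2.42"))
```

Real load balancers more commonly use cookie insertion or a stickiness table keyed on the client IP or a session cookie; maintaining and consulting that state for every connection is roughly where the performance impact comes from, and simple hash-based schemes break down whenever the server pool changes.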

You mentioned that the 3-tier architecture was dictated primarily by port count and throughput limits. I can understand that port density was a problem, but can you elaborate on why throughput is also a limitation? Do you mean that a core switch like the Catalyst 6500 is also not suitable for building a 2-tier network in terms of throughput?

As always, the short answer is “it depends”, in this case on your access port count and bandwidth requirements.
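
As a rough illustration, here’s the kind of back-of-the-envelope arithmetic involved. The numbers below are invented for the example, not taken from the original discussion.

```python
# Back-of-the-envelope math (assumed numbers): how many access-layer uplinks
# a two-tier design needs, and whether the aggregate bandwidth still fits a
# single pair of core switches.

access_switches    = 60     # wiring-closet / ToR switches
ports_per_switch   = 48     # 1 Gbps access ports each
uplinks_per_switch = 2      # 10 Gbps uplinks each

oversubscription    = (ports_per_switch * 1) / (uplinks_per_switch * 10)
core_uplink_ports   = access_switches * uplinks_per_switch
core_bandwidth_gbps = core_uplink_ports * 10

print(f"Access oversubscription:      {oversubscription:.1f}:1")
print(f"10G ports needed in the core: {core_uplink_ports}")
print(f"Aggregate core bandwidth:     {core_bandwidth_gbps} Gbps")

# If the core switch pair cannot supply ~120 ten-gig ports (or the matching
# forwarding throughput), you're forced to add an aggregation layer,
# which is the third tier.
```
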

In autumn 2016 I embarked on a quest to figure out how TCP really works and whether big buffers in data center switches make sense. One of the obvious stops on this journey was a chat with Thomas Graf, Linux Core Team member and a founding member of the Cilium project.

When Cisco ACI was launched it promised to do everything you need (plus much more, and in a multi-hypervisor environment). It quickly became obvious that you can’t do all that on ToR switches alone, and that you need control of the virtual switch (the real network edge) to get the job done.

We are very much committed to automation and use Ansible to create configurations and provision our SP and data center networks. One of our principles is that we rely solely on data available in external resources (databases and REST endpoints) and avoid fetching information or views from the network itself, because that would create a loop.
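
As an illustration of that principle, here’s a minimal Python sketch in which the device configuration is rendered purely from an external source of truth. The endpoint URL, data model, and template below are hypothetical, not the reader’s actual setup.

```python
# Illustrative sketch only: configuration is built entirely from an external
# source of truth (an IPAM/CMDB REST endpoint), never from state read back
# from the network devices themselves.

import requests
from jinja2 import Template

IPAM_URL = "https://ipam.example.com/api/devices/leaf01"   # hypothetical endpoint

TEMPLATE = Template("""\
hostname {{ name }}
{% for intf in interfaces %}
interface {{ intf.name }}
 ip address {{ intf.ip }}
{% endfor %}
""")

device = requests.get(IPAM_URL, timeout=5).json()   # the only data source
print(TEMPLATE.render(**device))
```
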

The featured webinar in March 2017 is the SDN Use Cases webinar describing over a dozen different real-life SDN use cases. The featured videos cover four of them: a data center fabric by Plexxi, microsegmentation (including VMware NSX), an SDN-based Internet edge router built by David Barroso, and Fibbing, an OSPF-based traffic engineering approach developed at the University of Louvain.

To view the videos, log into my.ipspace.net, select the webinar from the first page, and watch the videos marked with a star.

It’s uncommon to find an organization that succeeds in building a private OpenStack-based cloud. It’s extremely rare to find one that documented and published the whole process like Paddy Power Betfair did with their OpenStack Reference Architecture whitepaper.

One of the challenges of designing a controller-based solution is the transport network used to exchange information between controller and controlled devices. Can you do that in-band or is it better to have an out-of-band network (built with traditional components)? Terry Slattery explained some of the pros and cons in the Monitoring SDN Networks webinar.

I often get questions from engineers wondering whether my webinars or courses would be too tough for them. Here’s a question I got from an engineer who wanted to attend my Building Next-Generation Data Center course: “What specific prior experience do you expect for this workshop?”

Last year Cisco launched a new series of Nexus 9000 switches with table sizes that didn’t match any of the known merchant silicon ASICs. It was obvious they had to be using their own silicon – the CloudScale ASIC. Lukas Krattiger was kind enough to describe some of the details last November, resulting in Episode 73 of Software Gone Wild.

The author

Ivan Pepelnjak (CCIE#1354 Emeritus), Independent Network Architect at ipSpace.net, has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced internetworking technologies since 1990.