Juniper recently launched their Tomahawk-based switch (QFX5200) and included a lot of information on the switching hardware in one of their public presentations (similar to what Cisco did with Nexus 9300), so I got a non-NDA glimpse into the latest Broadcom chipset.

You’ll get more information on QFX5200 as well as other Tomahawk-based switches in the Data Center Fabrics Update webinar in spring 2016.

You're an old-school, disciplined networking leader who architects networks based on rock-solid, time-tested designs. But it seems that the prevailing fashion in network design and availability goes against your traditional design principles: inter-site firewall clustering, inter-site vMotion, DCI, etc.

Not so fast, my young padawan.

Let’s define prevailing fashion first. You might define it as Kool-Aid peddled by snake oil salesmen, or cool network designs by people who know what they’re doing. If we stick with the first definition, you’re absolutely right.

Now let’s look at the second camp: how people who know what they’re doing build their networks (Amazon VPC, Microsoft Azure or Bing, Google, Facebook, and a number of other large-scale networks). You’ll find L3 down to the ToR switch (or even the virtual switch), and absolutely no inter-site vMotion or clustering – because they don’t want to bet their service, ads or likes on the whims of a technology that was designed to emulate thick yellow cable.

This isn't the first time that readers have asked you about these technologies, and it won't be the last. Vendors will continue to market them despite their shortcomings, and customers will continue to eat them up.

As long as there’s someone willing to believe in fairy tales and Santa Claus, there will be someone dressed in a red coat and a fake beard yelling “Ho, Ho, Ho!”

Enterprise IT managers sometimes act like small kids. They don’t want to hear that they have people and process problems, and love to believe that the next magical bit of technology will solve whatever it is that bothers them. Vendors obviously love to exploit these cravings and sell them ever-more-complex solutions.

I'd like to think that vendors will also continue to work out the kinks and over time the technology will become rock solid and time-tested.

I am positive you can make any technology almost-rock-solid. You can also make pigs fly (see RFC 1925 sect. 2.3). However, have you included the fuel costs in your TCO?

Also, the more complex a technology is, the likelier it is to crash down like a house of cards, and you’ll be left with an incomprehensible mix of bits and pieces that will be impossible to put back together (see also: You can’t reformat your data center).

Nino concluded his comment with a question:

Are you too stuck on past, traditional designs and not open to new ways of building IT? I get that IT is very cyclical, and these new trends may die in the future...or thrive, and the customers may either fail...or succeed.

Marek Majkowski published an awesome real-life story on the CloudFlare blog: users experienced occasional short-term sluggish performance, and while everything pointed to a network problem, it turned out to be a garbage collection problem in the Linux kernel.

Takeaway: It might not be the network's fault.

Also: How many people would be able to troubleshoot that problem and fix it? Technology is becoming way too complex, and I don’t think software-defined-whatever is the answer.

If you’re a host running on an IPv6-only network, you might want to detect the IPv6 prefix used for NAT64 (for example, to transform IPv4 literals a clueless idiot embedded in a URL into IPv6 addresses).
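RFC 7050 describes one way to do that: ask for the AAAA record of the well-known name ipv4only.arpa (which has only the IPv4 addresses 192.0.0.170 and 192.0.0.171) and see what the DNS64 server synthesizes. Here’s a minimal Python sketch of that trick; it assumes the common /96 NAT64 prefix and skips all error handling:

```python
# Hedged sketch of RFC 7050 NAT64 prefix discovery - assumes a /96 prefix
import socket, ipaddress

def detect_nat64_prefix():
    for info in socket.getaddrinfo('ipv4only.arpa', None, socket.AF_INET6):
        addr = ipaddress.IPv6Address(info[4][0])
        # with a /96 prefix the last 32 bits carry the original IPv4 address
        if addr.packed[12:16] in (bytes([192, 0, 0, 170]), bytes([192, 0, 0, 171])):
            return ipaddress.IPv6Network((int(addr) >> 32 << 32, 96))
    return None

print(detect_nat64_prefix())   # e.g. 64:ff9b::/96 on networks using the well-known prefix
```

Once you know the prefix, turning an embedded IPv4 literal into a usable IPv6 address is just a matter of appending the 32-bit IPv4 address to it.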

I don’t like to correct my friends in public, but if someone says “I still believe that flow-based technologies will exceed the capabilities of packet-based technologies” (see Network Break 53), it’s time to revisit the networking fundamentals.

Wow, another year swooshed by. I can’t believe it’s almost gone. Maybe it’s all the travels I had throughout the year, and I MUST start with a huge THANK YOU to whoever is watching over me – there wasn’t a single major SNAFU.

Next, I’d like to thank the people who caused all that travel: attendees of my workshops.

I love stumbling upon new networking-focused blogs. Many of my old friends switched to the dark side (vendors) and stopped blogging, others simply gave up, and it seems like there aren’t that many engineers who would like to start this experiment.

One of the obvious first questions is always “what should I write about” and my reply is always “it doesn’t really matter – make sure it’s useful.”

I love listening to the Datanauts podcast (Ethan and Chris are fantastic hosts), starting from the very first episode (hyper-converged infrastructure) in which Chris made a very valid comment along the lines of “with the hyper-converged infrastructure it’s possible to get so many things done without knowing too much about any individual thing…” and I immediately thought “… and what happens when it fails?”

One of the engineers listening to my DMVPN webinars sent me a follow-up question (yes, I always try to reply to them) asking how to implement direct Internet access from the spoke sites (aka local exit) in combination with the split default routing you have to use in DMVPN Phase 2 or Phase 3 networks.

It’s really simple: either you have a design requirement that requires split default routing, or you don’t.

Not surprisingly, someone already did it. A quick Google search found this tutorial which explains how to run the IP stack over GNU Radio (at speeds that were last experienced with dial-up modems 30 years ago).

Imagine you’d design your network by documenting the desired traffic flow across the network under all failure conditions, and only then do a low-level design, create configurations, and deploy the network… while being able to use the desired traffic flows as a testing tool to verify that the network still behaves as expected, both in a test lab as well as in the live network.
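Purely as a thought experiment, here’s what “desired traffic flows as a testing tool” could look like in a few lines of Python – the flow definitions and the hypothetical check_path() function (which might wrap traceroute, a traffic generator, or a query against a network model) are obviously made up:

```python
# Toy sketch: desired flows (and the failure scenarios under which they must
# still work) are data; a hypothetical check_path() verifies each one against
# the lab or the live network.

DESIRED_FLOWS = [
    # (source, destination, failure scenario that must not break the flow)
    ('web-tier',  'db-tier',  'none'),
    ('web-tier',  'db-tier',  'spine-1 down'),
    ('branch-11', 'dc-edge',  'primary WAN link down'),
]

def verify(flows, check_path):
    failed = [flow for flow in flows if not check_path(*flow)]
    for src, dst, scenario in failed:
        print('FAILED: %s -> %s under "%s"' % (src, dst, scenario))
    return not failed
```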

PI multihoming demonstrably works, but PA multihoming when the upstreams implement BCP 38 filtering requires the deployment of some form of egress routing - source/destination routing in which the traffic using a stated PA source prefix and directed to a remote destination is routed to the provider that allocated the prefix. The IETF currently has no such recommendation, or consensus that it should have.

Here are a few really old blog posts just in case you don’t know what I’m talking about (and make sure you read the comments as well):

One of the reasons I started creating ipSpace.net webinars was to help networking engineers grasp the basics of adjacent technologies like virtualization and storage. Based on feedback from an attendee of my Introduction to Virtual Networking webinar, it works:

I am completely on the Network side of the house and understand what I need to build for Storage/Data replication, but I really never thoroughly understood why. This allowed me to have a coherent discussion with my counterparts in DB and Storage about some of the pitfalls that can occur if we try to cowboy the network design.

I got this question from one of my readers (and based on these comments he’s not the only one facing this challenge):

I was wondering if you can do a blog post on Cisco's new ASA 5585-X clustering. My company recently purchased a few of these with the intent to run them as cross-data-center active/active firewalls, but found out we cannot do this without OTV or a layer-2 DCI.

Content providers have been using centralized traffic flow optimization together with MPLS TE for at least 15 years (some of them started immediately after Cisco launched its early MPLS TE implementation in the 12.0(5)T release), but it was always hard to push the results into the network devices.

PCEP and BGP-LS changed all that – they give you standard mechanisms to extract the network topology and install end-to-end paths across the network, as Julian Lucek of Juniper Networks explained in Episode 43 of Software Gone Wild.

I had fun participating in a discussion focused on whether it makes sense to deploy OTV+LISP in a new data center deployment. Someone quickly pointed out the elephant in the room:

How many LISP VM mobility installs has anyone on this list been involved with or heard of being successfully deployed? How many VM mobility installs in general, where the VMs go at least 1,000 miles? I'm curious as to what the success rate for that stuff is.

I think we got one semi-qualifying response, so I made it even simpler ;)

Many vendors talk about network automation these days, and almost all of them gloss over an important detail: automation works best when you manage to simplify things to the bare minimum needed to get the job done.

One of the vendors that focuses on simplifying network device configuration is Cumulus Networks with Cumulus Linux.

Professor Comer mentioned that IP chose a network attachment point addressing model over an endpoint model because of scalability. He said if you did endpoint addressing it wouldn’t scale. I remember reading a bunch of your blog posts about CLNP (I hope I’m remembering the right acronym) and I believe you liked endpoint addressing better than network attachment point addressing.

During my recent SDN workshops I encountered several networking engineers who use Nexus 1000V in their data center environment, and some of them claimed their organization decided to do so to ensure the separation of responsibilities between networking and virtualization teams.

There are many good reasons one would use Nexus 1000V, but the one above is definitely not one of them.

One of my regular subscribers wondered whether it makes sense to attend a live workshop (like the one we’re running in Miami in a few weeks) instead of listening to my webinars:

I am following your blog posts quite regularly, I’ve been a yearly subscriber for more than 3 years now, and I’m even trying to attend as many webinars as I can in real time. Is there a real benefit in participating in this classroom event if we are already familiar with almost all of your slide decks and videos?

Absolutely. Here’s what one of the attendees of a recent SDN workshop wrote when asking me whether I would be willing to do an on-site event for his company:

Whenever a vendor approaches me touting the benefits of their new gizmo, they want to give me a product demo, or offer me access to online labs… and I always tell them I’m not interested until I see their design and configuration guides.

Needless to say, the feature is rather unique (originally: “totally proprietary”) and not available in most other database products. Improved locking behavior ⇒ improved lock-in.

Moral of the story: Stop yammering. Networking is no different from any other field of IT.

Update: Yep, I goofed up on the proprietary bit (it was one of those “I don’t think this word means what you think it means” gotchas). However, if you think an open-source product can’t have proprietary features or that you can’t get locked into an open-source product, I congratulate you on your rosy perspective. Reality smudged mine years ago.

Every time I’m explaining the intricacies of new technologies to networking engineers, I try to use analogies with older well-known technologies, trying to make it simpler to grasp the architectural constraints of the shiny new stuff.

Unfortunately, most engineers younger than ~35 years have no idea what I’m talking about – all they know are Ethernet, IP and MPLS.

Last week I ran two SDN workshops, and in both of them the participants were busy taking notes as I explained the intricacies of concepts like SDN, NFV and network automation, and tools like OpenFlow or BGP.

However, how often have you revisited notes taken at a presentation and kept wondering “what exactly was he trying to say?” … or felt like the training you attended was like drinking from a fire hose and you missed most of the good stuff?

Another week, another ExpertExpress session, focusing – as is often the case – on two data centers with stretched VLANs spanning both of them. However, this one was particularly irksome, as the customer ran a firewall cluster stretched across two locations.

SD-WAN is all the rage these days (at least according to software-defined pundits), but networking engineers still build DMVPN networks, even though they are supposedly impossibly-hard-to-configure Rube Goldberg machinery.

To be honest, DMVPN is not the easiest technology Cisco ever developed, and there are plenty of gotchas, including the problem of default routing in Phase 2/3 DMVPN networks.

When the situation was manageable it was neglected, and now that it is thoroughly out of hand we apply too late the remedies which then might have effected a cure. There is nothing new in the story. It is as old as the Sibylline Books. It falls into that long, dismal catalogue of the fruitlessness of experience and the confirmed unteachability of mankind. Want of foresight, unwillingness to act when action would be simple and effective, lack of clear thinking, confusion of counsel until the emergency comes, until self-preservation strikes its jarring gong – these are the features which constitute the endless repetition of history.

Obviously Mr. Churchill wasn't talking about IPv6 but about way more serious matters… but it's also obvious he was right about the unteachability of mankind.

Jim Small asked me what I thought about the Future of Networking Packet Pushers podcast with Douglas Comer. I decided to listen to it while driving toward one of my recent hikes, and it was a great decision – it was the best Packet Pushers podcast I’ve listened to in a long while.

A while ago I started discussing the intricate technical details of fibbing (an ingenious way of implementing traffic engineering with traditional OSPF) with Laurent Vanbever and other members of his group, and we decided to record a podcast on this topic.

Things never go as planned in a live chat, and we ended up talking about another one of his projects – the software-defined Internet exchange point (SDX), the topic of Episode 41 of Software Gone Wild.

A year ago I was a firm believer in the unlimited powers of Software-Defined Data Centers and their ability to simplify workload migrations. After all, if you can use an API to create any data center object, what’s stopping you from moving a workload running in one data center to another location?

I got into an interesting discussion with a fellow networking engineer trying to understand the impact of a switch failure in a L2/L3 data center fabric (anything from Avaya’s fabric or Brocade’s VCS Fabric to Cisco’s FabricPath, ACI or Juniper’s QFabric) on MAC and ARP tables.

Bryan would love to get hands-on SDN experience and sent me this question:

I was recently playing around with Arista vEOS to learn some Arista CLI as well as how it operates with an SDN controller. I was wondering if you know of other free products that are available to help people learn.

If you’ve been a networking engineer (or a sysadmin) for a few years, you must be pretty familiar with DHCP and might think you know everything there is to know about this venerable protocol. So did I… until I read the article by Chris Marget in which he answers two interesting questions:

How does the DHCP server (or relay) send a DHCP offer to a client that doesn’t have an IP address (and doesn’t respond to ARP)?

How does the DHCP client receive the DHCP responses if it doesn’t have an IP address?
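The short version of the first answer: the server (or relay) already knows the client’s MAC address from the chaddr field of the DHCPDISCOVER, so it can either broadcast the offer or build the unicast Ethernet frame directly, without ever touching ARP. Here’s a rough Scapy sketch of that idea (all addresses and the interface name are made up):

```python
# Hedged sketch: hand-crafting a DHCP offer without any ARP resolution
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

client_mac = '02:00:00:00:00:42'            # taken from chaddr in the DISCOVER
offered_ip = '192.0.2.50'

offer = (Ether(dst=client_mac) /             # unicast frame, no ARP lookup needed
         IP(src='192.0.2.1', dst=offered_ip) /
         UDP(sport=67, dport=68) /
         BOOTP(op=2, yiaddr=offered_ip,
               chaddr=bytes.fromhex(client_mac.replace(':', ''))) /
         DHCP(options=[('message-type', 'offer'), 'end']))

sendp(offer, iface='eth0')                   # assumed interface name
```

Do read Chris’ article for the full story, including what the client does on its side.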

When I wrote my stretched VSAN post, I thought VSAN uses asynchronous replication across WAN. Duncan Epping quickly pointed out that it uses synchronous replication, and I fixed the blog post.

The “What about latency?” question immediately arose somewhere in my subconscious, but before I could add that thought to the blog post (because travel), Anders Henke wrote a lengthy comment that totally captured what I was thinking, so I’m including it in its entirety:

One of my subscribers asked me: “My subscription is valid till early December. How could I renew it now (due to budgetary reasons)?”

While I already had the process to do just that, there was no link one could use (you had to know the correct URL). I’ve fixed that – you’ll find the renewal link on the first page of my.ipSpace.net.

When I asked “Are there any truly QoS-aware routing protocols out there?” in one of my SD-WAN posts, Marcelo Spohn from ADARA Networks quickly pointed out that they have one – Dynamic Link-State Routing Protocol.

He also claimed that DLSP has no scalability concerns – more than enough reasons to schedule an online chat, resulting in Episode 40 of Software Gone Wild. We didn’t go too deep this time, but you should get a nice overview of what DLSP is and how it works.

Whenever I talk about the various definitions of SDN (ending with “SDN provides an abstraction layer”), old-timers sitting in the audience quickly realize that the SDN products you can deploy in real life aren’t that different from what we did in the past – an SDN controller is often just an overhyped, glorified network services orchestration system.

OK, so why didn’t we have that same functionality for the last 20 years?

One of my kids recently asked me whether I plan to travel somewhere during the autumn. The answer was “a bit” surprising: Boston (just got back), Zurich, Bern, Stockholm, Ljubljana, Heidelberg, Nuremberg, Rome, Miami, Ljubljana, Helsinki, and maybe Munich and/or another trip to Zurich… so I might not be able to blog as frequently as usual.

Most of those trips are public events (hyperlinked). If you’re anywhere close to one of those cities, check them out and drop by.

Another Friday, another short IPv6 video (didn’t have time to create anything more substantial this week). This one describes the basics of IPv6 addressing – I know most of you don’t need it, but do forward the link to friends who are still struggling with IPv6 basics.

How do you capture all the flows entering or exiting a data center if your core Nexus 7000 switch cannot do it in hardware? You take an x86 server, load nProbe on it, and connect the nProbe to an analysis system built with the ELK stack… at least that’s what Clay Curtis did (and documented in a blog post).

Sometimes it seems like the networking vendors try to (A) create solutions in search of problems, (B) boil the ocean, (C) solve the scalability problems of Google or Amazon instead of focusing on real-life scenarios or (D) all of the above.

Bryan Stiekes from HP decided to take a step in the right direction: let’s ask the customers how complex their data centers really are. He created a data center complexity survey and promised to share the results with me (and you), so please do spend a few minutes of your time filling it in. Thank you!

Robin Harris described an interesting problem in his latest blog post: while you can reduce the storage access time from milliseconds to microseconds, the whole software stack riding on top still takes over 100 milliseconds to respond. Sometimes we’re optimizing the wrong part of the stack.

SDN will give more control and flexibility over the network to the customer/user/network-admin. They will be able to program their equipment themselves, they will be able to tweak routing algorithms in the central controller. They get APIs to hook into the heart of the intelligence. They get more config-knobs. It's gonna be awesome.

One of my readers wondered how long my NFV webinar is supposed to take (and I forgot to add that information to my web site), so he sent me this question: “How long is this webinar? An hour? Two hours? If it says "webinar" does that imply a 60 minute duration, so I shouldn't ask?”

Short answer: live webinar sessions usually take between 90 minutes and 2 hours depending on the breadth of the topic, however…

With the advent of layer-3 leaf-and-spine data center fabrics, it became (almost) possible to build pure layer-3-only data center networks… if only the networking vendors would do the very last step and make every server-to-ToR interface a layer-3 interface. Cumulus decided to do just that.

One of my readers sent me a heartfelt email that teleported me 35 years down memory lane. He wrote:

I only recently stumbled upon your blog and, well, it hurt. It's incredible the amount of topics you are able to talk about extensively and how you can dissect and find interesting stuff in even the most basic concepts.

May I humbly ask how on earth you can know all of the things you know, with such attention to detail? Have you been gifted with an excellent memory, a magical diet, or is it just magic?

A few weeks ago I decided to join the SDN group on LinkedIn and quickly discovered the biggest problem of SDN – many people who try to talk authoritatively about it have no idea what they’re talking about. Here’s a gem (coming from a “network architect”) I found in one of the discussions:

The SDN local controller can punt across to remote datacenters using not only IP, but even UDP over MPLS

It’s hard to visit an IT journal web site without stumbling upon an SDN fairy tale. Here’s another one:

The idea is to cut away the manual process of setting up new firewalls, load balancers and other network appliances, and instead open the door to provisioning a new network infrastructure within a few minutes.

35 years ago, mainframes, single-protocol networks (be it SNA or DECnet), and centralized architectures that would make hard-core SDN evangelists gloat with unbridled pride were all the rage. If you’re old enough to remember IBM SNA, you know what I’m talking about.

I found a great explanation for the hodgepodge of kludges found in "organically grown" solutions (legacy precursors to SD-WAN come to mind):

In a long-lived project, components are being replaced. Nice reusable components are easy to replace and so they are. Ugly non-reusable components are a pain to replace and each replacement means both a considerable risk and a considerable cost. Thus, more often than not, they are not replaced. As the years go by, reusable components pass away and only the hairy ones remain. In the end the project turns into a monolithic cluster of ugly components melted one into another.

One of my readers sent me an interesting reliability design question. It all started with a catastrophic WAN failure:

Once a particular volume of encrypted traffic was reached, the data center WAN edge router crashed, and then the backup router took over, which also crashed. The traffic then failed over to the second DC, and you can guess what happened then...

Obviously they’re now trying to redesign the network to avoid such failures.

A month ago I was asked to deliver a short presentation on “something interesting about networking” at my local university. The temptation to talk about network automation and SDN was huge, but I quickly figured out that would make no sense (the audience were students in their freshman year) and decided to talk about a fundamental question: why should a programmer care about networking.

SDx Central is usually a pretty good web site that I love to read, but even they occasionally manage to publish a gem like this one:

The problem with MPLS and similar technologies is that they weren’t designed with today’s business challenges in mind. Today, a company may need to launch an overseas R&D office overnight, or it may acquire a startup and want to immediately network with offices in distant regions and countries. Older technologies just don’t have the flexibility to do this on the fly.

Not surprisingly, the above paragraph triggered a severe case of Deja-Moo.

I can still hardly imagine the business case behind SD-WAN. Any thoughts?

This question is really easy to answer. There’s a huge business case behind SD-WAN products: replacing traditional MPLS/VPN networks with encrypted transport over the public Internet. However…

At least a dozen engineers sent me emails or tweets mentioning Project Calico in the last few weeks – obviously the project is getting some real traction, so it was high time to look at what it’s all about.

TL&DR: Project Calico is yet another virtual networking implementation that’s a perfect fit for a particular use case, but falters when encountering the morass of edge cases.

Yesterday I was arguing with someone who works for a large MPLS provider about LDP label allocation. He kept saying that LDP assigns a label to each next-hop, not to each prefix. Reading your blog, I believe this is the default behavior on Juniper but on Cisco LDP assigns a unique label for each IGP (non-BGP) prefix.

One of the Software Defined Evangelists has declared 2015 as the Year of SD-WAN, and my podcast feeds are full of startups explaining how wonderful their product is compared to the mess made by legacy routers, so one has to wonder: is SD-WAN really something fundamentally new, or is it just another old idea in new clothes?

Geoff Huston published an interesting number-crunching exercise in his latest IPv6-focused blog post: 8% of the value of the global Internet (GDP-adjusted number of eyeballs) is already on IPv6, and a third of the top-30 providers (which control 43% of the Internet value) have deployed large-scale IPv6.

The message is clear: The big players have moved on. Who cares about the long tail?

Christoph Jaggi has just published the third part of his Metro- and Carrier Ethernet Encryptor trilogy: the 2015 market overview. Public versions of all three documents are available for download on his web site:

Elisa Jasinska, Bob McCouch and I were scheduled to record a NetOps podcast with a major vendor, but unfortunately their technical director cancelled at the last minute. Like good network engineers, we immediately found plan B and focused on Elisa’s specialty: open-source tools.

Every other week I stumble upon a high-level SDN article that repeats the misleading “SDN is a centralized control plane” mantra (often copied verbatim from the Wikipedia article on SDN, sometimes forgetting to quote the source).

A while ago someone working for an IT-focused media site approached me with a short list of high-level questions. Not sure when they’ll publish the answers, so here they are in case you might find them interesting:

What can enterprises do to ensure that their infrastructure is ready for next-gen networking technology implementations emerging in the next decade?

Next-generation networks will probably rely on existing architectures and forwarding mechanisms, while being significantly more uniform and heavily automated.

A few days ago I stumbled upon an interesting blog post by my friend J Metz in my RSS feeds. As with all blog posts published on Cisco’s web site, all I got in the feed was a teaser (I know, I shouldn’t complain, I’m doing the same ;), but when I wanted to read more, I was greeted with a cryptic 404 (not even a fancy page full of images saying “we can’t find what you’re looking for”).

What happens when network engineers with strong programming background and focus on open source tools have to implement network automation in a multi-vendor network? Instead of complaining or ranting about the stupidities of traditional networking vendors and CLI they write an abstraction layer that allows them to treat all their devices in the same way and immediately open-source it.

The proponents of microsegmentation are quick to explain how the per-VM-NIC traffic filtering functionality replaces the traditional role of subnets as security zones, often concluding that “you can deploy as many tenants as you wish in a flat network, and use VM NIC firewall to isolate them.”

During the Cumulus Linux presentation Dinesh Dutt had at the Data Center Fabrics webinar, someone asked an unexpected question: “Do you have In-Service Software Upgrade (ISSU) on Cumulus Linux?” and we both went like “What? Why?”

Dinesh is an honest engineer and answered: “No, we don’t do it” with absolutely no hesitation, but we both kept wondering, “Why exactly would you want to do that?”

Network Address Translation (NAT) is one of those stateful services that’s almost impossible to scale out, because you have to distribute the state of the service (NAT mappings) across all potential ingress and egress points.
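To see why, consider a toy model: the translation created on whichever node handles the first outbound packet has to be available on whichever node happens to receive the return traffic. A deliberately silly Python illustration (nothing here resembles a real NAT implementation):

```python
# Toy illustration of the scale-out NAT state problem
nat_nodes = {'nat-a': {}, 'nat-b': {}}         # per-node translation tables

def outbound(node, src, public_ip='203.0.113.1', port=40000):
    nat_nodes[node][(public_ip, port)] = src    # mapping exists on this node only
    return (public_ip, port)

def inbound(node, dst):
    return nat_nodes[node].get(dst)             # reverse lookup on the receiving node

mapping = outbound('nat-a', ('10.0.0.5', 12345))
print(inbound('nat-a', mapping))   # ('10.0.0.5', 12345) - same node, works
print(inbound('nat-b', mapping))   # None - different node, state is missing
```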

If you’re a regular reader of this blog, you know I always prefer knowledge over recipes. Unfortunately, it’s pretty hard to build that knowledge using the widely available training materials, which often just blast you with a barrage of facts that you’re supposed to memorize and deliver at the certification exam.

I helped several customers design scale-out private or public cloud infrastructure. In every case, I tried to start with a reasonably small pod (based on what they’d consider an acceptable loss unit – another great term I inherited from Chris Young), connect the pods to a shared L3 backbone (either within a data center or across multiple data centers), and then address the inevitable desire for stretched layer-2 connectivity.

A while ago Chris Young sent me a few questions about network management in the brave new SDN world. I never focused on network management, but I know a few people who do, including Terry Slattery and Matt Oswalt. Interop brought us all together, and we sat down one evening after the presentations to chat about the challenges of monitoring and managing SDN networks.

We started with easy things like comparing monitoring results from virtual and physical switches (and why they’ll never match, and do we even care), and quickly diverted into all sorts of potential oscillations caused by overly dynamic load balancing driven by flow-label-based ECMP and flowlets.

I was running the first part of the Data Center Fabrics Update webinar last week, mentioned that Brocade VDX 6740 supports Flex ports (a port you can use as Fibre Channel or 10GE port), and someone immediately wrote a comment saying “so does VDX 6940”. I was almost sure Flex ports aren’t available on VDX 6940 yet, and as always turned to vendor documentation to figure it out.

As expected, the data sheet is a bit vague, somewhat reflecting reality, but also veering into the realm of futures instead of features. Here’s what they say:

Open vSwitch Database Management Protocol (OVSDB, RFC 7047) is often mentioned together with other semi-magic SDN tools that will bring everlasting peace to the chaotic world of networking. In reality, it’s just a database access/update protocol (think SQL with JSON encoding) with an interesting twist: a client can request notifications about table or row updates, replacing periodic database polling with a pub-sub solution.
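If you want to see the pub-sub part in action, here’s a minimal sketch that sends an OVSDB monitor request (as defined in RFC 7047) to a local ovsdb-server and dumps whatever comes back – the initial Bridge table contents followed by asynchronous update notifications. It assumes the server listens on TCP port 6640 (ovs-vsctl set-manager ptcp:6640) and skips niceties like answering the server’s echo requests:

```python
# Hedged sketch of an OVSDB (RFC 7047) monitor request over plain TCP
import json, socket

request = {
    'method': 'monitor',
    'id': 1,
    'params': ['Open_vSwitch', None,
               {'Bridge': {'columns': ['name', 'ports']}}],
}

sock = socket.create_connection(('127.0.0.1', 6640))
sock.sendall(json.dumps(request).encode())

while True:                       # initial contents, then "update" notifications
    data = sock.recv(65536)
    if not data:
        break
    print(data.decode(errors='replace'))
```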

One of my readers couldn’t figure out how to combine Link Aggregation Groups (LAG, aka Port Channel) with OpenFlow:

I believe that in LAG, every traditional switch would know how to forward the packet from its FIB. Now with OpenFlow, does the controller communicate with every single switch and populate their tables with one group ID for each switch? Or how does the controller figure out the information for multiple switches in the LAG?

As always, the answer is “it depends”, and this time we’re dealing with a pretty complex issue.
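One of the typical answers relies on OpenFlow select groups: the controller installs a group with one bucket per LAG member port on each switch, and the switch hashes flows across the buckets. Here’s a hedged sketch using the Ryu controller framework – port numbers, group ID and the match are illustrative assumptions, and a real deployment would also need link-liveness handling:

```python
# Hedged sketch: LAG-like load sharing with an OpenFlow 1.3 "select" group (Ryu)
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

class LagSketch(app_manager.RyuApp):
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # one bucket per assumed LAG member port; the switch hashes flows across them
        buckets = [parser.OFPBucket(weight=50, watch_port=ofp.OFPP_ANY,
                                    watch_group=ofp.OFPG_ANY,
                                    actions=[parser.OFPActionOutput(port)])
                   for port in (1, 2)]
        dp.send_msg(parser.OFPGroupMod(dp, ofp.OFPGC_ADD, ofp.OFPGT_SELECT,
                                       group_id=1, buckets=buckets))

        # point a flow entry at the group instead of a single output port
        match = parser.OFPMatch(eth_dst='02:00:00:00:00:01')   # assumed MAC
        inst = [parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionGroup(group_id=1)])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```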

With all the hype around Segment Routing we said: “let’s chat about it, what could possibly go wrong”. The result: Episode 33 of Software Gone Wild. We didn’t get very far into the technical details, but you might still find the overview useful (or not – do tell me how good or useless it is).

In June 2013 I wrote a rant that got stuck in my Evernote Blog Posts notebook for almost two years. Sadly, not much has changed since I wrote it, so I decided to publish it as-is.

In the meantime, the only vendor that’s working on making generic network deployments simpler seems to be Cumulus Networks (most other vendors went down the path of building proprietary fabrics, be it ACI, DFA, IRF, QFabric, Virtual Chassis or proprietary OpenFlow extensions).

Arista used to be in the same camp (I loved all the nifty little features they were rolling out to make ops simpler), but it seems they lost their mojo after the IPO.

What is your opinion on NAC and 802.1x for wired networks? Is there a better way to solve user access control at layer 2? Or is this a poor man's way to avoid network segmentation and internal network firewalls?

Unless you can trust all users (fat chance) or run a network with no access control (unlikely, unless you’re a coffee shop), you need to authenticate the users anyway.

Security groups (or Endpoint Groups if you’re a Cisco ACI fan) are a nice traffic policy abstraction: instead of dealing with subnets and ACLs, define groups of hosts and the rules of traffic control between them… and let the orchestration system deal with IP addresses and TCP/UDP port numbers.
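In other words, the policy is written in terms of groups, and the orchestration system expands it into address-level rules – something like this deliberately trivial Python rendering (group names, addresses and ports are made up):

```python
# Toy rendering of the security-group abstraction into address-based rules
groups = {
    'web': ['10.0.1.10', '10.0.1.11'],
    'db':  ['10.0.2.20'],
}
policy = [('web', 'db', 'tcp', 3306)]          # allow web -> db on MySQL

acl = [(src_ip, dst_ip, proto, port)
       for src_grp, dst_grp, proto, port in policy
       for src_ip in groups[src_grp]
       for dst_ip in groups[dst_grp]]
print(acl)    # the per-address rules the orchestration system would push down
```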

Several of the conversations I had at the recent RIPE70 meeting were focused on career advice (usually along the lines of “which technology should I focus on next”) and inevitably we ended up discussing the benefits of T-shaped skills versus I-shaped skills… and I couldn’t resist drawing a few graphs illustrating them.

It turned out you might use Ravello Systems’ solution to implement disaster recovery, but I got way more excited about the possibility of using their solution for labs or testing. To learn more about that, listen to Episode 32 of Software Gone Wild.

From the automation perspective, the RIPE conference is a dream come true – 30 seconds after you upload your presentation, it appears on the RIPE web site, it’s automatically updated on the podium computer, and the video recording of your talk is published before you even manage to get off the podium – so you can already watch my “SDN - 4 years later (aka Quo Vadis, SDN?)” presentation if you missed it yesterday.

I just went through some Cisco webinar where they were showcasing the use of NX-OS API and Python to add a VLAN. I do some Python myself and have used that API for some simple DevOps-like uses, but for the most part if you are an enterprise and use Prime DCIM to add VLANs, why should you go through the coding process?
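For reference, “the coding process” mentioned above is not much code at all – this is roughly what adding a VLAN through NX-API looks like (a hedged sketch: the switch address, credentials and VLAN details are made up, and the switch needs feature nxapi enabled first):

```python
# Hedged sketch: adding a VLAN through NX-API (ins_api, cli_conf)
import requests

payload = {
    'ins_api': {
        'version': '1.0',
        'type': 'cli_conf',                  # configuration commands
        'chunk': '0',
        'sid': '1',
        'input': 'vlan 100 ;name web-servers',
        'output_format': 'json',
    }
}

response = requests.post('https://10.0.0.1/ins', json=payload,
                         auth=('admin', 'secret'),
                         verify=False)        # lab switch with a self-signed cert
print(response.json())
```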

Great news for everyone trying to deploy IPv6 in OpenStack: the Kilo release has full support for IPv6 in the tenant networks, including SLAAC, stateless and stateful DHCPv6. For more details, read an extensive blog post by Shannon McFarland.

When I finished my SDN workshop @ Interop Las Vegas (including a chapter on OpenFlow limitations), some attendees started wondering whether they should even consider OpenFlow in their SDN deployments. My answer: don’t blame the tool if people use it incorrectly.

Two days later, I discovered HP is one of those companies that knows how to use that tool.

The author

Ivan Pepelnjak (CCIE#1354 Emeritus), Independent Network Architect at ipSpace.net, has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced internetworking technologies since 1990.