Thinking beyond Network Automation

Network architects often seem to be at a loss for words when asked to describe Network Automation. This is understandable, though. Automation is a broad subject: from provisioning to telemetry and monitoring, it has many facets.

What is Network Automation?

For many small to medium enterprises, automation is nothing more than scripting a few repetitive tasks. A handful of YAML files, a cookbook or two, and the deed is done. Sure, this covers part of provisioning, but is that all there is to automation?

Bigger enterprises typically automate a lot more. There is usually a baseline automation piece that is invoked whenever any piece of physical hardware goes into the network. The job of this piece is to ensure that the hardware is initialized in a way that lets the rest of the automation systems communicate with it later.

Then there is a provisioning automation piece, which takes over from that point onwards. Any time a device requires modification – whether you need to reconfigure a switch port, turn up a protocol, or push a snippet of BGP configuration to a ToR switch – the provisioning piece takes care of it.
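The two layers described above can be sketched roughly as follows. This is a minimal illustration, not a real implementation: every function name, task name, and value here is invented, and a production system would use ZTP, NETCONF, or vendor APIs rather than returning config strings.

```python
# Hypothetical sketch of the baseline and provisioning layers.
# All names and values are invented for illustration.

def baseline_init(device):
    """Baseline piece: bring brand-new hardware to a known, reachable
    state so that later automation systems can talk to it."""
    return {
        "device": device,
        "mgmt_ip": "192.0.2.10",       # assumed: handed out via DHCP/ZTP
        "credentials_installed": True,
        "api_enabled": True,
    }

def provision(device_state, task, **params):
    """Provisioning piece: takes over once the baseline is in place.
    Refuses to act on a device the baseline has not prepared."""
    if not device_state.get("api_enabled"):
        raise RuntimeError("baseline not complete; cannot provision")
    if task == "configure_port":
        return f"interface {params['port']}\n description {params['desc']}"
    if task == "enable_bgp":
        return f"router bgp {params['asn']}"
    raise ValueError(f"unknown task: {task}")
```

The point of the split is the contract between the layers: provisioning never touches a device the baseline has not prepared, which is what lets the two pieces evolve independently.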

Need for unified automation frameworks

Larger enterprises apply network automation in several areas. However, there is sometimes a lack of forethought and, more importantly, a lack of a well-structured framework that enables code reuse – negating several of the benefits of said automation.

Here is a very common scenario in an enterprise network. A new Architect walks in, finds a problem area that lacks automation, and writes a tool to address it. Another Architect comes by the following week, identifies another area of need, and builds a second tool. A year later, you have an array of tools performing different tasks in the network. The network is chugging along fine…

And then the vendor identifies a security vulnerability, and a code upgrade is required. But the new code alters a few things – say, it changes the way the device responds to a particular request. This breaks every one of your tools that relies on that particular response.

Not only have we just broken all our automation tools, but we also have to invest developer time and resources (read: COST) in fixing each of those individual tools. How can we solve this?

This is where the ‘forethought’ that I mentioned earlier comes into play. If the suite of automation tools had used common building blocks and shared the underlying framework/code, there would have been just one network-facing module to fix. All the remaining tools would have leveraged these network-facing modules as building blocks, so fixing that one module automatically addresses all the remaining tools. That’s it – one place to modify, one bug to fix, and everything else falls into place. See, wasn’t that easy?

Automation Modules as Building-Blocks?

Let me give you an example to clarify this. You have one tool that queries LLDP neighbors, parses the output, and uses it to drive some cabling automation. You have a second tool that queries LLDP neighbors and uses the output to feed those MAC addresses into a DHCP server. Instead of having your architects write two separate libraries, you first write just one network-facing piece, responsible for querying LLDP – call this your lldp-module.

Then you have your architects go about writing their tools, but the difference is that their tools interface with this lldp-module whenever they need any LLDP information. Now, if a code upgrade alters the way your network node responds to LLDP queries, the changes you make to accommodate that live in just the one lldp-module. And as a side effect, everything else is fixed. Period.
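Here is a minimal sketch of that pattern. Everything in it is hypothetical – the raw response format, the device names, and all function names are invented, and the canned string stands in for an actual SSH/NETCONF/API call – but it shows the one property that matters: only the lldp-module knows the device’s response format, so a format change after a code upgrade is a one-function fix.

```python
# Hypothetical sketch: a single network-facing lldp-module shared by two tools.

def fetch_raw_lldp(device):
    """Stand-in for the actual network call. Returns a canned response;
    the real module would query the device."""
    return ("eth1|spine-01|aa:bb:cc:00:00:01\n"
            "eth2|spine-02|aa:bb:cc:00:00:02")

def get_lldp_neighbors(device):
    """The lldp-module: the ONE place that knows the response format.
    If a code upgrade changes the format, only this parser changes."""
    neighbors = []
    for line in fetch_raw_lldp(device).splitlines():
        local_port, remote_name, remote_mac = line.split("|")
        neighbors.append({"port": local_port,
                          "neighbor": remote_name,
                          "mac": remote_mac})
    return neighbors

# Tool 1: cabling automation -- consumes the parsed structure, never raw output.
def cabling_map(device):
    return {n["port"]: n["neighbor"] for n in get_lldp_neighbors(device)}

# Tool 2: DHCP reservations -- same building block, different purpose.
def dhcp_macs(device):
    return [n["mac"] for n in get_lldp_neighbors(device)]
```

Neither tool ever touches the raw response, so a vendor-side format change never reaches them – exactly the single-fix behavior described above.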

Beyond Network Automation?

The difficult part lies in developing the internal culture to reuse code… to build upon common modules… and to avoid future costs.

It’s hard to provide a convincing answer to a new engineer who asks, “Why do I have to use modules from that tool? It’s better and quicker if I write my own stuff from the ground up.” Some of these questions are justified. Everyone has projects – and those come with deadlines.

My answer usually starts with “True. But you are thinking of just THE PRESENT. When a new platform comes along, we will have to find engineers to modify both your tool and the existing tools to work with it.”

I agree, this takes time for engineers to internalize, and the explaining is laborious – but it is eventually fruitful. We have a network to run, don’t we?