I always point that out when reviewing their solutions and suggest how to minimize or eliminate duplicate data. Not surprisingly, doing that is hard, and one of the attendees started wondering whether the extra effort makes sense:

I’m finding it’s a fine balance between exposing the complexity to the operator (by asking them to specify values in multiple places) or pushing that complexity into the data model rendering and removing “flexibility” for the user.

There’s a difference between flexibility and duplicate data. For example, asking the operator to enter an interface description for every link, instead of using the default value “link to X:Y”, is unnecessary data duplication when the data model already contains the information that the interface is connected to port Y on device X.
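Here’s a minimal sketch of that idea (the data model layout and attribute names are made up for illustration): the interface description is derived from the link data the model already contains, so nobody has to retype it.

```python
# Hypothetical link data model: each link lists the two ports it connects.
links = [
    {"left":  {"device": "S1", "port": "GigabitEthernet0/1"},
     "right": {"device": "S2", "port": "GigabitEthernet0/3"}},
]

def default_description(remote: dict) -> str:
    # The description is derived from data the model already contains,
    # so there's no need to ask the operator to enter it again.
    return f"link to {remote['device']}:{remote['port']}"

for link in links:
    a, b = link["left"], link["right"]
    print(f"{a['device']}:{a['port']} -> {default_description(b)}")
    print(f"{b['device']}:{b['port']} -> {default_description(a)}")
```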

On the other hand, removing the operator’s ability to overwrite the default interface description with a more meaningful text (where needed) reduces flexibility and might be undesirable… keeping in mind that flexibility (aka “nerd knobs”) increases complexity, requires more thorough testing, and in the end increases the development costs. The fine balance is thus “do we really need this flexibility, and what are we getting for the increased complexity?”
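One common way to keep that nerd knob without duplicating data is a default-with-override pattern: the derived value is used unless the operator supplied one. A sketch, assuming an optional description attribute on the port:

```python
def interface_description(local: dict, remote: dict) -> str:
    # The operator-supplied description (the "nerd knob") wins when present;
    # otherwise fall back to the value derived from the link data.
    return local.get("description") or f"link to {remote['device']}:{remote['port']}"

# Default derived from the data model:
print(interface_description(
    {"device": "S1", "port": "Gi0/1"},
    {"device": "S2", "port": "Gi0/3"}))

# Operator override where a more meaningful text is needed:
print(interface_description(
    {"device": "S1", "port": "Gi0/2", "description": "uplink to core"},
    {"device": "C1", "port": "Gi0/7"}))
```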

When asking the operator to deal with more complexity, I typically mitigate this risk with documentation and/or a walk-through session.

One of the goals of introducing network automation should be increased reliability and consistency. Documenting data duplication instead of eliminating it doesn’t bring us closer to that goal, as it still permits operator mistakes. The very minimum you should do in this case is to validate the duplicated values and flag any mismatches before the data is used to configure the network.
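A sketch of such a pre-deployment check (the per-port neighbor attribute duplicating the link definition is made up for illustration):

```python
# Hypothetical example: the operator re-entered the remote end of each link
# as a per-port 'neighbor' attribute that duplicates the link definition.
links = [
    {"left":  {"device": "S1", "port": "Gi0/1", "neighbor": "S2:Gi0/3"},
     "right": {"device": "S2", "port": "Gi0/3", "neighbor": "S1:Gi0/2"}},  # typo
]

def validate_neighbors(links: list) -> list:
    """Flag operator-entered 'neighbor' values that contradict the
    authoritative link definition in the data model."""
    errors = []
    for link in links:
        for local, remote in ((link["left"], link["right"]),
                              (link["right"], link["left"])):
            derived = f"{remote['device']}:{remote['port']}"
            entered = local.get("neighbor")
            if entered is not None and entered != derived:
                errors.append(f"{local['device']}:{local['port']}: neighbor "
                              f"set to {entered!r}, link data says {derived!r}")
    return errors

for problem in validate_neighbors(links):
    print(problem)
```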

The author

Ivan Pepelnjak (CCIE#1354 Emeritus), Independent Network Architect at ipSpace.net, has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced internetworking technologies since 1990.