Why Automate?

My name is Todd Law, and I am the Vice-President of NTAF, as well as a senior product manager at Spirent Communications. I’m happy to have this chance to communicate through the NTAF blog.

Most of the blogs posted to date in this space have focused primarily on NTAF’s formation and organizational progress: recruiting new members and writing standards. In this entry, I want to take a step back and talk about something more fundamental, namely, the reasons for automating in the first place. Put another way, what are the problems that NTAF is attempting to solve?

Of course, the "motherhood and apple pie" reasons to automate are the classic business ones: save money, increase efficiency, improve quality, reduce time to market, and so on. But those high-level reasons often mask the more specific motivations of the engineers doing the work. To shed more light on this topic, I’ve compiled a list of the underlying reasons why test engineers go down the automation path.

To go beyond the functionality available in a GUI – Automation is often (but not always) equated with using a product’s API. And of course, if you write a script or a program, you can make it do many things besides drive a piece of test equipment; for example, you can have the script also control the device under test (DUT), or have it e-mail you the results.

To achieve a level of scale that would be difficult or impossible manually – When tests must scale up to thousands of ports or thousands of emulated protocol sessions, manual configuration becomes tedious, error-prone, or simply too slow.

To do time-dependent testing – Some test cases require an orchestration of events that is best handled via automation. For example, in the access network, a test case might require that a number of PPP sessions come up before traffic is sent over those sessions. Automation makes such tests repeatable.
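The PPP example above boils down to a "wait until condition, then act" pattern. Here is a minimal Python sketch of that orchestration; the `PppEmulation` class and its method names are hypothetical stand-ins for a real tool's API, not any vendor's actual interface.

```python
import time

def wait_until(predicate, timeout=60.0, interval=1.0):
    """Poll `predicate` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

class PppEmulation:
    """Hypothetical stand-in for a traffic generator's PPP emulation object."""
    def __init__(self, session_count):
        self.session_count = session_count
        self._up = 0

    def connect_all(self):
        # A real tool would bring sessions up asynchronously over time.
        self._up = self.session_count

    def sessions_up(self):
        return self._up

def run_test(ppp, start_traffic):
    """Bring up all PPP sessions, then start traffic -- the same order every run."""
    ppp.connect_all()
    all_up = lambda: ppp.sessions_up() == ppp.session_count
    if not wait_until(all_up, timeout=120):
        raise RuntimeError("PPP sessions failed to come up in time")
    start_traffic()
```

Because the wait is expressed explicitly in the script, every run enforces the same event ordering, which is exactly what makes the test repeatable.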

To see trends over the long term – Following from the previous reason, for results to be meaningful from run to run, the test must be applied in a consistent way. The classic example is a regression test bed, where test managers want to verify that existing functionality is not broken when a new software release comes out. Conversely, managers may want to see where bugs cluster, with a view to identifying root causes.

To optimize investment in equipment – Automation can be used to connect multiple traffic generators to multiple devices under test in various configurations using programmable layer 1 switches.

To achieve re-use – A test case written once can be re-used over and over again. If it is written well, with a properly designed environment, it can be used in a different context. In contrast, a script written for a very specific purpose in a very specific environment is essentially a one-time investment that gets thrown away quickly.

To achieve a consistent workflow – If an automation team is working with many different tools, efficiency can be gained by giving the automation components a consistent structure. A typical strategy is to wrap native API commands in higher-level commands that achieve some degree of consistency (though unlikely full interoperability) across vendors.

To consolidate low-level functionality – Some APIs are very granular and low-level in nature, which means many steps are required to achieve basic tasks. As with the previous point, wrappers around native API commands can be used to achieve basic tasks, reducing, for example, 20 lines of low-level code down to a single line.
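The wrapper idea in the last two points can be sketched as follows. The `LowLevelApi` class below is an illustrative stand-in for a granular vendor API (the call names are invented, not any real product's); the single `add_stream` wrapper replaces the handful of low-level calls a test engineer would otherwise repeat everywhere.

```python
class LowLevelApi:
    """Hypothetical granular vendor API; records each call for illustration."""
    def __init__(self):
        self.log = []

    def create_stream(self, port):
        self.log.append(f"create_stream {port}")
        return f"stream-{port}"

    def set_frame_size(self, stream, size):
        self.log.append(f"set_frame_size {stream} {size}")

    def set_rate(self, stream, pps):
        self.log.append(f"set_rate {stream} {pps}")

    def set_dst_mac(self, stream, mac):
        self.log.append(f"set_dst_mac {stream} {mac}")

    def commit(self):
        self.log.append("commit")

def add_stream(api, port, frame_size=128, pps=1000, dst_mac="ff:ff:ff:ff:ff:ff"):
    """One high-level call wrapping the five low-level steps a stream needs."""
    stream = api.create_stream(port)
    api.set_frame_size(stream, frame_size)
    api.set_rate(stream, pps)
    api.set_dst_mac(stream, dst_mac)
    api.commit()
    return stream
```

A team that writes one such wrapper per tool also gets the consistent workflow described above: test cases call `add_stream` regardless of which vendor's API sits underneath.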

To insulate against change – Most APIs do not change from release to release, because most vendors have learned that customers have invested a great deal in scripts that are expected to continue to work well into the future. However, tool vendors new to automation may still be learning this lesson…

To insulate against variations in vendor implementation – The last point was about intra-vendor variation; this one is about inter-vendor variation. In reality, all test tools are quite different, and their functionality overlaps only partially. For the part that is common, automation can sometimes be used to increase re-use.

To create solutions consisting of multiple products working together – Many test cases require more than one test tool for complete coverage. For example, one tool may excel at (or be cheaper for) providing background traffic, while another may excel at simulating complex user behaviour with realistic applications. Automation can get the two tools working together to provide the best of both worlds.

To integrate into a bigger system – Tools are often part of a larger ecosystem. Beyond other test tools and devices under test, the environment also includes test management and reporting systems, version control software, bug-tracking software, and inventory management systems.

As we can see, automation is motivated by many different underlying reasons! No wonder it is a topic where confusion can reign and discussions quickly end up at cross-purposes. But these are the problems that NTAF is solving – or has already solved in part!