14 Automation Testing Myths, Debunked | Facts and the Fiction

Automation testing is a concept that is heavily marketed today. There has been a real convergence of tools and approaches in automation in recent years, and it is increasingly considered integral to project delivery, rather than something that exists only to cover business-as-usual regression testing after project completion. Faster releases, increased test coverage, frequent test execution and faster feedback to the development team are just a few of the benefits commonly attributed to test automation. Automation is being portrayed as the silver bullet of testing technology. But not everything is so ideal. Not every organization (or client) is reaping the actual benefits of test automation. Certain automation testing myths must be addressed in order to apply it in the most effective and efficient manner. In this article we examine some of the most common automation testing myths and how they prevent organizations from succeeding at test automation.

Note: This article is not about bashing test automation. If you are a tester frustrated at having to do test automation, reading this will not bring you any solace. However, if you are a CEO, CTO, product manager or someone genuinely interested in finding a value proposition through test automation, you may find these automation testing myths a good read.

Automation Testing myths

Test Automation = Automating Test Case Execution

While a lot of what we do in automation is focused on automating test execution, it can also support other project work. For example: creating, sourcing and verifying test data; writing macros to execute repetitive or complex tasks in Microsoft Office; accelerating manual test execution by automating time-consuming processes; and generating reporting data and graphs. All of these are examples of automation that support you in your work and provide a lot of value, even if they only cover a small part of your role.
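As a small illustration of automation beyond test execution, here is a minimal sketch (in Python, with hypothetical field names) that generates synthetic user test data as a CSV file:

```python
import csv
import random
import string

def generate_test_users(path, count=10):
    """Write `count` rows of synthetic user test data to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "age"])
        for _ in range(count):
            name = "".join(random.choices(string.ascii_lowercase, k=8))
            writer.writerow([name, f"{name}@example.com", random.randint(18, 90)])

generate_test_users("users.csv", count=5)
```

A small script like this can save a manual tester hours of hand-crafting data, even though it automates no test execution at all.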

Automate everything | Setting Realistic Expectations

Possibly the most challenging aspect of any test automation endeavor is understanding the limitations of automated testing and setting realistic goals and expectations to avoid disappointment. 100% test automation is not possible. We can increase test coverage by running automated tests with more data and more configurations, covering a whole variety of operating systems and browsers, but achieving 100% is still an unrealistic goal. For example, automation cannot test for user experience, or for unpredictable results that require human cognitive skills to assess correctness. Instead of aiming for full coverage, focus on the areas of functionality that are most crucial to the business.

Quick ROI | Analyze the cost-effectiveness

Implementing a test automation solution involves many more interrelated development activities than just scripting. Normally an automation framework needs to be developed to support bespoke capabilities such as data-driven execution, reusability, scripting and reporting. The development of an automation framework is a project in its own right: it requires skilled developers and takes time to build. Even when a fully functional framework is in place, scripting an automated check initially takes longer than executing the same test manually. However, the ROI is realized in the long run, when we need to execute the same tests at regular intervals.
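To make "data-driven" concrete, here is a minimal sketch of the idea using a hypothetical function under test: the check logic is written once, and coverage grows by adding data rows rather than new scripts:

```python
def apply_discount(price, pct):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - pct / 100), 2)

# Data-driven checks: the test logic is written once, only the data varies.
test_data = [
    # (price, discount_pct, expected)
    (100.00, 10, 90.00),
    (59.99, 0, 59.99),
    (20.00, 50, 10.00),
]

for price, pct, expected in test_data:
    actual = apply_discount(price, pct)
    assert actual == expected, f"apply_discount({price}, {pct}) = {actual}, expected {expected}"
print("all data-driven checks passed")
```

A real framework layers reporting, data sourcing and reuse on top of this loop, which is exactly why it becomes a development project of its own.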

Higher Rate of Defect Detection | Define the Goals

As stated earlier, automated testing is confirmatory whereas manual testing is exploratory. Ironically, people expect automated testing to find lots of bugs because of allegedly increased test coverage, but in reality this is not the case. An automation tool is only as smart as the human tester who programs it. Automated checks only check what they have been programmed to check by the person who wrote the script. No tool can compete with the intelligence of a human tester who can spot unexpected anomalies in the application while exploring or executing a set of tests (including ad-hoc tests). Automated tests are good at catching regression issues, but the number of regression issues, in most cases, tends to be far smaller than the number of issues in new functionality being developed.
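A toy example of this limitation, using a hypothetical page-rendering function: the automated check passes because it only verifies what it was told to verify, while an obvious defect that any human would spot goes unnoticed:

```python
def render_page(user):
    # Imagine a regression: the page renders fine, but the greeting is garbled.
    return {"status": 200, "body": f"Welcom, {user}!!"}  # typo a human would catch

response = render_page("alice")

# The automated check passes; it has no opinion on anything it wasn't told to verify.
assert response["status"] == 200
assert "alice" in response["body"]
print("automated check: PASS (the garbled greeting went unnoticed)")
```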

Test Automation is worthless | Identify the best practices

This is one of the side effects when test automation goes wrong. You can spend many hours developing a perfect test automation solution, using the best tools and best practices, but if the automated checks do not help the team, the effort is worthless, and the manual test team and management will conclude that test automation is worthless. Always aim for a clean and reliable automation suite that gives correct indications of the health of the application.

Automation is better than Manual Testing | Strategize

Automated testing is confirmatory, i.e. a checking of facts. By running automated checks, we confirm our existing understanding of the system. Manual testing, on the other hand, is an exploratory investigation whose aim is to identify defects for better software quality. Neither is superior to the other. Both methods are required to gain insight into the quality of the application.

Test Automation is Cheap & easy | Calculate the costs

Many managers think that every project should implement automation. But automation has significant upfront costs that are difficult to avoid. Proprietary frameworks require large licensing fees. Open source tools require consultants or in-house expertise to realize their full value. But realistically, to extract value from test automation, you need to look at the long-term goal. Automation yields cost-effective quality checks only in the long-run when tests are executed repeatedly.
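The long-run cost argument can be sketched with some back-of-the-envelope arithmetic (all numbers below are illustrative assumptions, not benchmarks):

```python
import math

def break_even_runs(build_hours, maintenance_per_run, manual_per_run, automated_per_run):
    """Return the number of runs after which automation becomes cheaper
    than manual execution, or None if it never pays off."""
    saving_per_run = manual_per_run - (automated_per_run + maintenance_per_run)
    if saving_per_run <= 0:
        return None  # maintenance eats the savings: automation never pays off
    return math.ceil(build_hours / saving_per_run)

# Assumed figures: 40 h to build the suite, 0.5 h maintenance per run,
# 4 h of manual regression vs 0.5 h of automated execution per cycle.
print(break_even_runs(40, 0.5, 4, 0.5))  # 14 runs before the investment pays off
```

Notice that if per-run maintenance rises above the manual effort saved, the function returns None: the suite never recoups its cost, which is exactly the trap the next paragraph describes.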

As for record-and-playback: every time I've used it, the resulting scripts wouldn't perfectly execute the test, and I've had to write custom code to make them work. Automated tests can be brittle: you have to keep modifying scripts to keep them running as the system under test changes. Given the cost of automating, automate only what you are going to run over and over again, because otherwise the investment doesn't pay off.

Faster software release cycles | Maintain the Quality

Test automation requires diligence and time to write, manage and interpret the results. It is not the automation that is faster, but the execution. In an agile development environment practicing continuous delivery, end-to-end automation does indeed speed up delivery. But expecting test automation to speed up release cycles misses a key point: the goal of automation should be improving software quality rather than pushing software out the door faster. Properly applied, automation accomplishes deeper and broader code coverage.

Anyone can automate | Focus on Training

Automation testing requires training. Often, users without sufficient training use a tool briefly to record and play back test scenarios, then abandon it without ever taking advantage of its more powerful capabilities. Every automation tool provides a set of modules and components to ease the process, but believe it or not, no tool can do everything. Automation scripts are nothing but software code, so they require development talent and a software engineering approach. Writing a test case is different from scripting it in a particular programming language.

Click & Done | Automation vs. Automagic

The general perception of automation is: “The automation suite should work like this – I click Run >> it executes all the test scripts >> logs defects with screenshots >> and then gives me the report.” It is not that simple: you have to configure the tool, build a framework, script the test cases, prepare test data, and only then execute. Quite often automation is an ongoing activity that requires a lot of maintenance of both the framework and the scripts. It’s automation, not automagic.

Automation to replace manual testing | Strategize

A machine can find a problem, if it is programmed to find it, and it can do so without any intervention, with efficiency and reliability. A human performing the same repetitive checks will be inefficient and unreliable. But a human being can do what a machine cannot: ideate and anticipate. A human can find a problem that was never thought of in the product specification. Even the most thoughtful and comprehensive frameworks, and automated tests with the fullest coverage, still do not make it possible to find all the bugs; some things must still be done by hand. No matter how complete automation is, it does not negate the fact that manual testing is also needed.

Automation is only for regression testing | Identify the key areas

This opinion of automation has persisted from the early days of test automation. While there will always be cases where the project environment limits the scope of automation to regression testing only, advances in test automation tools and approaches, together with changes in development methodologies, are enabling automation for a far wider range of work. For example, highly collaborative agile teams developing small chunks of functionality allow for sharing of code and automation work, and for automation to develop in parallel with application development.

Automation is always ready-to-execute | Plan accordingly

If you are an automation engineer, you must have heard many times after a software change: “Can we run the automation suite and report the results?” And the automation tester is thinking: “What! The software has just been changed. How can I run the suite? First I need to update my test scripts, which unfortunately will take some time.” People think automation is always ready-to-execute.

Automation needs a lot of maintenance. Even if the test environment is the same, the suite might not be ready to go at a moment’s notice. There are often setup tasks to perform before the automation can run, or test scripts need to be updated with the latest changes.
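Those setup and teardown tasks can at least be made explicit and repeatable. A minimal sketch (the `SUITE_WORKDIR` environment variable is a hypothetical setting the suite would read):

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def suite_environment():
    """Set up a scratch workspace before the suite runs; always tear it down."""
    workdir = tempfile.mkdtemp(prefix="suite-")
    os.environ["SUITE_WORKDIR"] = workdir  # hypothetical config the suite reads
    try:
        yield workdir
    finally:
        shutil.rmtree(workdir, ignore_errors=True)
        os.environ.pop("SUITE_WORKDIR", None)

with suite_environment() as workdir:
    print("running automation suite in", workdir)
```

Codifying setup like this does not make the suite instantly runnable after every software change, but it removes one class of "it worked on my machine" surprises.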

Running automation suite is enough | Plan accordingly

Many automation testers wish that when automated tests are run, they all pass. That’s because when there’s a failure, they have to comb through the automation logs to find out what happened, figure out what the automation was doing when it failed, and log into the software to try to recreate the problem manually. Often the failure is a bug in the automation itself that must be fixed; only when it’s a legitimate product bug does it get logged in the bug tracker. Just running the automation isn’t the end of it. Every failure needs debugging and analysis.

Conquering the Automation Testing myths

There is no doubt that automation adds great value to the overall QA process, but a lack of knowledge and understanding about automation can also cause a nightmare. Automated software testing is not a cure-all. Expecting a test automation tool to act as a “silver bullet” to circumvent staffing, schedule, or budget issues virtually guarantees failure.

There is a need to educate QA managers about these automation testing myths and to separate the facts from the fiction. Comprehending the limitations and setting realistic expectations is imperative in conquering such automation testing myths. Test automation is a long-term investment: it takes time and expertise to develop and maintain test automation frameworks and automated scripts. Test automation is not a one-off effort where you deliver a solution and let it run; it needs constant monitoring and updating. Spend some time up front doing research about the tools, benefits, techniques and limitations of automation.

We have covered only some of the automation testing myths; there are many more. If you know of any others, please mention them in the comments below…
