I notice that the tag wiki for the "automated testing" tag contains the following sentence: "Commonly, test automation involves automating a manual process already in place that uses a formalized testing process."

This might be common, but is it good?

What pitfalls are there with directly automating manual test scripts, especially where those writing the automation have no knowledge of the system or domain? Is a test originally designed to be run manually going to be a good design for automation?

8 Answers

My considered answer to this is a firm ‘No’, and I’d go further: I’m of the opinion that blindly automating manual tests may be an automated-testing anti-pattern.

Manual test scripts are executed by testers who are able to make decisions, judgements and evaluations. As a result, manual tests are often able to cover large areas of the system under test, hitting a large number of test conditions. A manual test can easily become a large, sprawling description covering many areas of the application, and it can be a very useful manual test. However, this would not be an advisable design for an automated test.

Automated tests that attempt to cover every point that a manual test covers tend to be brittle. They break more often and, annoyingly, an automated test will often stop completely when it hits a failure or an error, meaning that later steps do not get run. This can mean that, in order to complete a testing run, some minor problem with a larger script must first be resolved. In my experience, it’s far easier to put these assertions into separate tests, so that the other tests can be run independently.
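To make that concrete, here is a minimal pytest-style sketch of the split (the driver module, fixture and assertions are invented for this example, not taken from the answer):

```python
import pytest

# Hypothetical driver for the system under test, invented for this sketch.
from myproject.driver import start_app

@pytest.fixture
def app():
    # Shared setup replaces the early steps of the manual script.
    a = start_app()
    a.login("user", "secret")
    return a

# Each assertion from the sprawling manual script becomes its own test,
# so one failure no longer prevents the later checks from running.
def test_search_returns_results(app):
    assert app.search("widget").result_count > 0

def test_checkout_confirms_order(app):
    assert app.checkout().order_confirmed
```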
Over time I have found that large automated tests that attempt to literally replicate manual test scripts become a considerable maintenance headache, particularly as they tend to fall over just when you really want to run them. The frustration can lead to major disillusionment with the automated testing effort itself.

Blindly attempting to automate entire manual tests can also block gaining the maximum benefit from the test automation effort. A manual test may not be easily automatable as a whole, but individual parts of it may be, and by automating those, the manual test script can be reduced in size.

Manual test scripts tend to be most efficient when they hit as many areas of the application as possible in the shortest amount of time. In contrast, automated test scripts tend to be most efficient when they hit as few areas of the application as needed.

It could be argued that one of the reasons why automation often ‘involves automating a manual process already in place that uses a formalized testing process’ is that automation is often introduced for the first time onto existing projects that already have manual tests, which almost inevitably leads to the temptation to simply automate the existing manual scripts. I would ask myself, though: if the test had been automated from the outset, would I have designed it the same way as I would a manual test? I feel that my honest answer is ‘No’.

So although existing manual tests can clearly be useful documentation for automated testing, in that they show what is currently being tested and how, they should not dictate the design of the automated tests themselves. Good manual test designs explain to a human how to test the features of the application most efficiently; good automated test designs make it easy for machine logic to exercise important features quickly. In my opinion, these are very different engineering problems.

Marking this as the accepted answer because I think it's a great description of problems with automating manual tests directly, and while the wiki answer also covers a number of these points, I think this answer also deserves a bit of attention!
– testerab♦ Jun 24 '11 at 19:46

Not necessarily. Here are some of the reasons why automating manual tests might not be advisable:

Certain tests may be easier to execute manually because they are very quick to execute manually but would take a disproportionate amount of time to code as automated tests

Certain tests may be easier to execute manually because they are in a part of the system that is still changing pretty rapidly, so the cost of maintaining the automated tests would outweigh the ability to execute them quickly

Many tests may be more worthwhile to execute manually because the human brain can (and should!) be asking many kinds of questions that would not be captured in an automated test (e.g., Is this action taking longer than it should? Why does half the screen get inexplicably covered up with a big black square at this point? Hmmm, THAT seemed unusual; I'm going to check it out once I finish executing this test, etc.)

To put the point above another way: when they are testing, humans can (and should) use "rich oracles" (above and beyond the written "Expected Results") far more easily than the coded logic within automated tests can

Certain tests (many, in fact) are, when considered together with the other tests already existing in a test set, extremely inefficient (e.g., because they are (a) highly repetitive of combinations that have already been tested and (b) lacking in much additional coverage); such inefficient tests should generally neither be executed manually nor turned into automated tests

Testers may lack the skill or time to automate the manual tests well. In this case, it may be better to continue using manual scripts than invite the maintenance burden of poorly-written automation.

Automated manual tests are vulnerable to slight shifts in the product that aren't actual bugs, so they can become a maintainability problem. When they fail, they aren't very diagnostic, whereas tests that are more "white box" can be very diagnostic.

Tests that are easy for humans might be next to impossible to automate. "Is the sound in this video clear?" is a test even a fairly drunk human can do well, but a computer program to do it is nearly science-fiction level programming.

Tests that require hard copy or specific hardware interaction might not be good candidates for automation. Both can be simulated using other software, but then that simulation software itself needs to be validated to make sure it is properly simulating the hardware.

If you need to do a highly complex set of configuration steps in order to automate a single test case, the manual test may be easiest. It also is probably an indicator of a rather rare test case which might not be worth putting into an automation suite.

Just as with development work, any artifacts created during the testing process must be evaluated for whether or not they are reusable. If an automated test is not reusable beyond the first run, you've essentially spent resources to create something that is "thrown away" after use. A manual test can probably be executed once, with a quick set of instructions in a document, far more efficiently than spending the resources to automate it.

I hope this answer helps; I invite others to improve this incomplete list.

Conversely, signs that a manual test might be good to automate directly would be:

If the test has a very detailed script, including precise checks

If interesting bugs are rarely or never found outside of those specific checks when the manual scripts are run by skilled testers, OR if interesting bugs would generally be found much faster through ad-hoc testing without scripts

If the feature doesn't change frequently in ways that would disrupt automation

If executing the manual scripts takes many hours of tester time

If automating those scripts won't take longer than the amount of time expected to be spent running them manually, and there are currently no higher priority tasks

If executing the manual scripts is described as BORING by the testers running them, but not running them is not an option, even with ad-hoc testing for that feature. This is a strong sign that SOME sort of automation should be considered, possibly supplemented with ad-hoc testing, since bored testers are often bad testers. However, look to the other points to determine if it should be a direct port of the manual cases or a new test suite designed with automation in mind.

If the manual test is a mission- or product-critical item that must be regression-tested beyond the initial release date. Note that not all automation is regression automation, but regression tests are one place where automation gets a great boost in ROI.

Along with the above point, even if the test is not a regression item, if the creation of an automated version of the manual test will add value down the line for other projects and/or processes, it's worth creating the automation. Any artifacts created by the testing process that are re-usable more than once are not wasted effort.

If the manual tests are a list of smoke tests to execute with every build (see the sketch below)
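On that last point, here is a minimal sketch of what "smoke tests on every build" often looks like in practice, assuming pytest and a project-defined smoke marker (the test and its URL are invented placeholders):

```python
import pytest
import urllib.request

# With the "smoke" marker declared in pytest.ini, a build pipeline can
# run just the smoke subset via:  pytest -m smoke

@pytest.mark.smoke
def test_service_health_endpoint():
    # Hypothetical health check; the URL is a placeholder for this sketch.
    with urllib.request.urlopen("http://localhost:8000/health") as resp:
        assert resp.status == 200
```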

The first point here is ESPECIALLY important when it comes to automating GUI validations. Does the check box have the right color, is everything spelled right, does the tab order on screen work right, etc.? All this stuff takes a LONG time to code for, when a human can quickly pop up the screen, do a few simple tasks, and then sign off.
– TristaanOgre May 24 '11 at 15:11

There's nothing good about simply automating a set of manual test cases, particularly without domain knowledge.

Some of the factors that go into deciding whether or not to automate are:

Return on investment. There's a high ROI on automating tests that are tedious, highly repetitive, or mission-critical, that cover actions users will perform frequently, that involve heavy data validation, or any combination of these factors. By comparison, GUI checks and set-and-forget configuration are much lower ROI.

Data dependency. If the tests are highly data-dependent but require relatively little change to how the data is entered, they're usually good candidates for automation. For instance, it makes sense to automate SQL data validation after performing a series of tests. If the tests can be made data-driven, that makes them even more effective, since a new test case can be added by simply adding to the test data stores (see the sketch after this list).

External interfaces. While it's possible to simulate input from external devices, it's not necessarily practical or possible. Similarly, it's often a whole lot easier to print something and run the printout by the Mach 1 Eyeball than to print to file and check programmatically, especially when you're dealing with closed interfaces or limited-life test versions of the application you're interfacing to.

Prerequisite setup. Sometimes it can take a great deal of work to automate the setup for what ends up being a simple test that could be done much more easily by manual testing.
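As a sketch of the data-driven idea from the "Data dependency" point above (the calculate_tax function and the rates in the table are invented for illustration), pytest's parametrize makes adding a test case a matter of adding a row of data:

```python
import pytest

# Hypothetical function under test, invented for this sketch.
from myproject.tax import calculate_tax

# Adding a new test case is just adding a row to this table.
TAX_CASES = [
    # (amount, region, expected_tax) -- illustrative values only
    (100.00, "US-CA", 7.25),
    (100.00, "US-OR", 0.00),
    (250.00, "US-CA", 18.13),
]

@pytest.mark.parametrize("amount,region,expected", TAX_CASES)
def test_tax_calculation(amount, region, expected):
    assert calculate_tax(amount, region) == pytest.approx(expected, abs=0.01)
```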

Ultimately, the goal should be to first identify test cases that are good candidates for automation (in any situation involving tax calculations, it doesn't take long for automation to start looking very attractive!), and continually re-evaluate based on the list above as well as your organizational and application needs.

I wouldn't advocate automation by someone without domain knowledge because domain knowledge is critical to determining what should be automated. If someone without domain expertise is automating, I'd hope they'd be working with a domain expert to determine what to automate.

I think it's a great starting point, as long as you understand, or plan to create, a process to manage how the automated tests grow, how they're stored and structured, and how it all links together with your current development and testing practices. This is something I've struggled with in the past.

For example, say you're using cukes and build a huge feature/scenario test case. It's really useful, as it configures a whole section full of drop-downs, tick boxes, fields, values, etc. It also confirms that the front-end shows you the right options when configured correctly.

The best thing to do in this case is write and build the test (during QA milestones in the development process), add it to the library (so it runs and performs useful testing) and make sure that it gets picked up and turned into its own method called validateOptions (or whatever) down the line. It becomes a sort of built-in optimisation procedure, with regular reviews of the test cases and structure.

That way, in future, you only have to call one method in your tests instead of a full scenario again, and you'll be monitoring and maintaining your new library.
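As a rough sketch of that idea, written in Python rather than Gherkin for brevity (the page structure and browser driver API are invented, with the method name taken from the answer):

```python
# Hypothetical page-object-style helper; the browser driver API and field
# names are placeholders for whatever the original scenario configured.
class SettingsPage:
    def __init__(self, browser):
        self.browser = browser

    def validateOptions(self, options):
        """Configure the section and confirm the front-end shows the
        right options -- the single reusable method the answer imagines."""
        for field, value in options.items():
            self.browser.fill(field, value)
        self.browser.click("save")
        return all(self.browser.is_displayed(field) for field in options)

# Later tests call one method instead of replaying the whole scenario:
#   assert SettingsPage(browser).validateOptions({"notifications": "on"})
```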

It's not easy to put into practice, but does help in the long run.

If you already have a Manual Test Library stored and documented, I can think of no better place for automation to begin. It also gives you your metrics for delivery and scope.

It sounds like you actually change the tests quite a lot when you automate them, rather than just recreating each manual test as an automated one - does this require understanding of the system under test?
– testerab♦ May 24 '11 at 15:09

Not really, we don't change them 'much', but then saying I have an organised test library is also a bit of a lie. It's often easier for someone to ask me "How do I test this?" and for me to dump them a fresh new test, but that's my personal experience and shouldn't taint the advice.
– DiscoMcDisco Jun 22 '11 at 18:40

I've just spent the last couple of weeks writing tests for part of our app, based around a test scenario our PO had put together. It was a great manual test scenario, but in order to automate it, I pretty much tore it apart and rewrote it as a series of smaller tests, building towards a larger end-to-end scenario. I could do that because I knew the area extremely well, and the PO too, so I knew what he intended and I knew what I'd want to see tested. I don't think following it "straight" would have worked at all well. (And to be fair, I don't think he expected me to, either.)
– testerab♦ Jun 24 '11 at 19:40

I only automate repetitive tasks that really only require a result, or cases where I am only looking for text validation, as with Selenium, FitNesse or web tests that do not require much else going on. Automation is good when you don't care about the GUI. Although you can do screen compares, I do not like to automate those tests, since they can become brittle later on. I am usually fine, though, with automating something like a search or a user profile check that will ONLY EVER bring up a defined text for validation. Those kinds of tests are going to be rare.
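For concreteness, here is what such a defined-text check might look like with Selenium WebDriver's Python bindings (the URL, locator and expected text are invented for this sketch):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Minimal defined-text validation; the placeholders stand in for a real
# user profile page in the application under test.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/profile/42")
    name = driver.find_element(By.ID, "profile-name").text
    # The only check: the page brings up exactly the defined text.
    assert name == "Jane Example"
finally:
    driver.quit()
```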

I've found manual testing gives me exposure to the application and an understanding of what can, and should, be automated; I don't script something and then immediately make it an automated test. Sure, you can, but unless you really understand what your goal is for that automated test, you should not do it.

I touched on this in my answer to another question. Automating a manual test can give you a good automated test, but there are costs beyond the initial coding effort. For example, automated tests need to be documented. Often, once a test is automated, its intent is unknown to everyone except the original coder. You might be able to document them in the same place that you document your manual tests, but then you have to keep the documentation up to date with the code. By default, this won't happen. Alternatively, you can document the tests as comments in the code, but again, the comments have to be kept up to date, and by default they will not be, just like the comments in product code.

By default, if the automation is not scrupulously documented, it becomes a leaky abstraction. Instead of the automation being a "Test of feature X as of release Y", it becomes "Test of Feature X", or even "The Thing that Proves That Feature X Works". Once the test becomes a black box, it will be misused, and in that sense an automated test can be worse, rather than better, than a manual test.
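One lightweight mitigation, sketched here with invented feature, release and test-case names: keep the intent in the test itself, so it remains a "Test of feature X as of release Y" rather than a black box. The docstring still has to be kept current, which is exactly the maintenance cost described above:

```python
import pytest

# make_order is a hypothetical setup helper, invented for this sketch.
from myshop.testing import make_order

def test_discount_applied_to_bulk_orders():
    """Test of bulk-discount pricing as of release 2.3.

    Intent: orders of 10+ items get the 5% discount introduced in
    release 2.3. Manual-test reference: TC-1482 (hypothetical ID).
    """
    order = make_order(items=12, unit_price=10.00)
    assert order.total == pytest.approx(114.00)  # 120.00 minus 5%
```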

Not necessarily, although I tend to do the opposite: I manually dry-run future automated tests.
Manual tests will usually skip very long repetitive tasks, avoid heavy textual analysis, and can't handle checks that need short response times.
Another question is what you call a "test script": is it the full detailed steps of execution, or just a higher-level description of what to test and the expected results? The first will not fit automation; the latter might fit well.

It's a yes: automating your manual test cases gives you good automated results. I am speaking here with reference to the "CONTIZEE" tool, which I used for automating test cases; zero coding knowledge is needed and it saves time. Executing a simple test case and a complicated test case needs just the same effort in this tool.

Can you explain your use of the tool to support your answer? As per veteran tester James Bach, a good manual test cannot be automated. If you think otherwise, please explain.
– milinpatel17 Nov 2 '14 at 8:26