In every system, some tests can be automated while others cannot. Of those that can be automated, we need to decide which ones to automate first; we will not be able to automate all of them at once, not even all of the ones that could be automated.

This is where thoughtful prioritization of automation comes in, based on what each test is worth. We need to choose candidates for automation carefully, particularly in light of how frequently they are executed. Some tests in our kitty will never pay for their automation: automating them would take more effort and time than running them manually, simply because of the total number of times they will ever be run.

For instance, suppose a test takes 10 minutes to run manually and is normally run once a month, for a total of 120 minutes a year, or just 2 hours. If it would take 10 hours to automate this 10-minute test, the test would have to be run every month for 5 years before the automation paid for itself.
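The break-even reasoning above can be written as a small calculation. This is a minimal sketch using the article's example figures; the function name and numbers are illustrative, not from any real tool.

```python
# Illustrative payback calculation for the 10-minute monthly test above.
# All figures are the article's example numbers, not measurements.

def payback_years(manual_minutes, runs_per_year, automation_hours):
    """Years of runs needed before automation effort equals manual effort."""
    manual_hours_per_year = manual_minutes * runs_per_year / 60
    return automation_hours / manual_hours_per_year

# A 10-minute test, run monthly, costing 10 hours to automate:
years = payback_years(manual_minutes=10, runs_per_year=12, automation_hours=10)
print(years)  # -> 5.0
```

If the same test were run daily instead of monthly, the same formula gives a payback of well under a year, which is why run frequency dominates this decision.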

There may be other good reasons to automate a short test of, say, 10 minutes. If this 10-minute test happens to be quite tricky to type in, and the software testing engineer needs five or six attempts before getting it right, then automation would pay for itself within a year. Such a test might also be a good candidate simply because of the 'hassle factor': software testing engineers tend to get irritated at having to run such tests, even though they do not really take much time.

Types of tests to be executed:

Several types of tests are executed while testing an application. Some are highly amenable to tool support, while others are not. Let us look at the different types that are potential candidates for test automation.

1) Testing the functionality:

The functionality of the software, i.e. what it actually does, is the prime target for testing, whether the testing is manual or automated.

Functionality tests are generally the most straightforward of tests: something is done or typed on the screen, and the results are clearly visible on the screen. Such tests can be easily automated.
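The shape of such a test is simple: perform an action, then compare the visible result against the expected one. A minimal sketch follows; the function under test and its expected output are hypothetical, invented purely for illustration.

```python
# Hypothetical function under test: format_invoice and its expected
# output are assumptions for illustration, not from any real system.
def format_invoice(amount):
    return f"Total: ${amount:.2f}"

def test_invoice_shows_two_decimals():
    # A functionality test: perform the action, check the visible result.
    assert format_invoice(5) == "Total: $5.00"

test_invoice_shows_two_decimals()
print("functionality test passed")
```

Because both the action and the expected result are explicit, a tool can repeat this comparison unattended, which is exactly what makes functionality tests easy to automate.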

2) Testing the Performance:

Performance testing may involve measuring response times under various stressful conditions, during normal or abnormal traffic on the system. Such tests are quite difficult to perform manually, especially when a test performed earlier needs to be repeated. For instance, to simulate the system under 200 active users, it would be difficult or virtually impossible to find 200 volunteers, even if we happened to have the equipment for them to use. And reproducing exactly what even one user did earlier is impossible if we want to repeat the exact timing intervals. This type of testing is therefore a perfect candidate for automation.

However, a performance test is not the same as a function test. In performance testing, we do not care about the individual outcomes of all the transactions. A performance test passes not on whether it gives correct output, but on whether the system ran quickly enough to cope with the traffic (even if it happens to give erratic results).
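A hedged sketch of this idea: simulate many concurrent users with threads against a stand-in transaction, and pass or fail on elapsed time rather than on the transactions' outputs. The transaction function, user count, and time budget here are all illustrative assumptions; a real load test would drive the actual system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for one user transaction; a real test would call the
    # system under test here instead of sleeping.
    time.sleep(0.01)
    return "ok"

def run_load_test(users=200, max_seconds=5.0):
    """Simulate `users` concurrent transactions; pass on timing alone."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(), range(users)))
    elapsed = time.monotonic() - start
    # The verdict depends on elapsed time, not on individual results.
    return elapsed <= max_seconds, elapsed

passed, elapsed = run_load_test()
print(passed)
```

Note how the pass/fail criterion never inspects `results`: that is the distinction from a function test made in the paragraph above.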

3) Testing the Non-functional qualities:

Systems that perform their functions correctly, even within their performance criteria, still may not be called successful. Besides performance, systems are required to meet other non-functional qualities such as maintainability, portability, testability and usability if they are to have a long and useful life. Tests need to be designed for these non-functional quality attributes as well.

Some non-functional tests are perfect candidates for automation, while others, such as surveying users' opinions of a new interface, are not. Usability tests often require manual verification, for instance to see whether colors and fonts are displayed correctly. An automated test could verify that something was displayed as color number 465, but it could not judge whether color 465, as displayed on a particular terminal, appeared acceptable.

How to prioritize the automation?

Which tests should we automate first? That remains a million-dollar question. We should bear in mind that we do not have to automate everything to draw tangible benefits from test automation. By the Pareto principle, if a vital few of the tests, say 10%, are executed, say, 90% of the time, automating them alone may be a worthwhile effort.

As a general rule of thumb, the following factors deserve consideration when deciding what to automate first:

1) Automate the most important tests;
2) Automate a set of breadth tests, sampling every system area;
3) Automate the tests related to important functions;
4) Automate the tests that are convenient to automate;
5) Automate the tests whose payback can be realized quickly;
6) Automate the tests that are run frequently.
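One way to act on factors like these is a simple weighted score over the candidate tests. The sketch below combines just two of the factors, importance and run frequency; the candidate names, fields, and weights are all illustrative assumptions, not a prescribed formula.

```python
# Illustrative ranking of automation candidates by two of the factors
# above: importance and run frequency. All data and weights are
# assumptions for the example.

candidates = [
    {"name": "login_smoke",   "importance": 5, "runs_per_month": 30},
    {"name": "report_layout", "importance": 2, "runs_per_month": 1},
    {"name": "payment_flow",  "importance": 5, "runs_per_month": 8},
]

def score(test, w_importance=2.0, w_frequency=1.0):
    # Higher score = automate sooner.
    return w_importance * test["importance"] + w_frequency * test["runs_per_month"]

for t in sorted(candidates, key=score, reverse=True):
    print(t["name"], score(t))
```

However the weights are chosen, making the ranking explicit forces the team to discuss why one test deserves automation effort before another.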

Software testing engineers often tend to automate all the tests in a given set. Overall, it is better to automate a subset of the tests for each program, but to have more programs with automated tests.

Software testing experts recommend that automated tests include in their documentation a clear statement of the relative importance of each test. Some tests are more important than others, and those are the ones that should be run every time something changes. Other tests may only need running when a particular function changes. Automating the most important tests first, across many programs, will more quickly yield an automated test suite with greater potential for payback.

Software testing experts consider many factors when answering the question of what to automate first, and what is appropriate for one organization may not apply to another. For instance, field-validation tests may take a long time to automate if the software is relatively unstable, since too many other errors would be encountered while trying to test the field validation. That would indicate they are not a good set to automate.

On the other hand, if the field validation changes quite frequently while the underlying processing remains stable, it may take very little effort to edit an automated set of field-validation tests. If only a few fields are volatile and the rest are stable, the validation of the stable fields could be automated, leaving the varying validation to manual testing. In this case, the stable field validation is a good set for automation.

As a general rule of thumb, a breadth test of core functionality is probably a better set to automate early in the life cycle, as such tests matter most to the end users.

Avoid automating too much and too soon:

One of the most common mistakes software testing engineers make is trying to automate too much, too early. It is tempting to show rapid progress by automating as much as possible as quickly as possible, but this is not good practice. It takes considerable time for the best ways of doing things to be proven in actual practice. If we automate too many tests initially, we will run into a series of problems when we later discover a better way to organize the tests.

Identify the areas of quick results:

Try to identify the areas where automating tests would have the greatest impact, and quickly. This need not be a large-scale effort; it is better to mobilize something on a smaller scale that helps overcome some of the frustration many testers experience.

For instance, automating performance tests, soak tests or tests of client/server communication may be quite easy to set up, and it lets us run tests that would be fairly difficult to perform manually.

How should we select the automated tests, and when should we execute them?

Sometimes software testing engineers may want to execute all of the automated tests in one go. For instance, if it is desired to confirm that all tests still pass after a last-minute fix, then tests may be set to run overnight or over a weekend.

However, generally software testing engineers prefer to be selective about which tests to execute. Even with automation, executing many tests may take a long time.

Very rarely will we wish to execute all the tests in one go. Most of the time we will want to select a subset, based on one of several different selection criteria.

A few examples of the different types of selection we may wish to make:

a) An individual test:

We may need to reproduce a single test in isolation, possibly to help isolate a fault. Of course, a single test could be run manually, but sometimes the software behaves differently under manual testing, and a defect caused by slight differences in timing may not appear unless the test is run automatically.

b) A range of tests:

We may wish to run a set of tests that perform a complete business function, for instance opening an account, adding and withdrawing from it, and closing it, or we may wish to open a series of different accounts.

c) By level:

We may want to re-run the unit tests for a module that has just been changed. We may wish to run the integration tests if we have installed an upgrade to a third-party product, or we may want to run only the acceptance tests to check overall system integrity.

d) Tests specific to a subsystem:

If major enhancements have been made to a subsystem, we may wish to test that area separately, without running any other tests for now.

e) By type of test:

It could be very useful to be able to run all of the performance tests or stress / volume tests, for instance if new network software has just been installed. It could also be useful to be able to run a breadth test, or selected depth tests, or all the bug fix tests.

f) Based on the length of time to run:

If there is limited time available, we may wish to select the shortest tests to run, since a long test would be more likely to be thrown off the system before it had completed.
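Selecting the shortest tests that fit a time budget is a small greedy computation. A sketch follows; the test names and durations are invented for illustration.

```python
# Sketch of time-budgeted selection: given limited time, pick the
# shortest tests that fit. Durations (minutes) are illustrative.

tests = {"smoke": 5, "regression": 90, "soak": 240, "field_validation": 15}

def fit_in_budget(durations, budget_minutes):
    # Greedy: take tests shortest-first while they fit the budget.
    chosen, used = [], 0
    for name, mins in sorted(durations.items(), key=lambda kv: kv[1]):
        if used + mins <= budget_minutes:
            chosen.append(name)
            used += mins
    return chosen

print(fit_in_budget(tests, budget_minutes=60))  # -> ['smoke', 'field_validation']
```

Shortest-first maximizes the number of tests completed before time runs out, which matches the reasoning above: a long test is the one most likely to be cut off before finishing.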

g) Tests not yet run:

Run only the tests that have not yet been run in the current test cycle.

h) Failed tests only:

This is a very useful subset to run. When a number of defects have been found by a set of tests, they would typically go back to development for fixing. If a new version of the software with all of these defects supposedly fixed is then supplied, the first thing we need to do is to confirm that by re-running all of the tests that failed last time.
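Re-running only the previously failed tests amounts to filtering the last cycle's results. A minimal sketch, where the result-record format and test names are assumptions for illustration:

```python
# Sketch of "failed tests only" selection: re-run just the tests that
# failed in the previous cycle. The results format is an assumption.

last_results = {
    "open_account": "pass",
    "withdraw_overdraft": "fail",
    "close_account": "pass",
    "interest_calc": "fail",
}

def select_failed(results):
    # Keep only the tests whose last recorded outcome was a failure.
    return [name for name, outcome in results.items() if outcome == "fail"]

print(select_failed(last_results))  # -> ['withdraw_overdraft', 'interest_calc']
```

Once these confirm the fixes, the wider suite can be re-run to check that the fixes did not break anything else.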

Apart from the above-mentioned example criteria for selecting which tests to run, software testing engineers devise many others that may be more appropriate to their situation, such as the database team's tests or client-supplied tests.