Automating Product Testing Using AutoMate

Applies To:
AutoMate 6, AutoMate 5

Published: 8/2/06, modified May 9, 2008

Learn how Quality Assurance professionals can use AutoMate to perform high-level and reiterative testing

Product testing can be a time-consuming process for even the most efficient Quality Assurance departments. The resources required for a truly thorough testing pass can be burdensome to a smaller software development house. However, the importance of testing a product can never be overstated, and the quality of the product can be directly tied to the amount of testing the product has undergone. Automating these processes can provide better results in less time and with much less effort, allowing a better product to be delivered in a shorter period and with less investment, especially between software revisions of the same product.

Here at Network Automation, we use our product, AutoMate, to automatically test itself. An automation tool such as AutoMate can be configured to run a number of automated tasks (software testing is just one of a multitude of possibilities, but it's the use we'll concentrate on in this article). Most automation platforms provide a simple means of constructing highly configurable macros or tasks that can be used to test the interface, product stability, and system output, to name a few.

Automating High Level Testing

Automation tools can be used at practically any level during the product development and testing cycle. A multitude of other testing tools exist, especially for code unit testing and for source code management and verification. However, high-level automation tools such as AutoMate excel at testing GUIs and at overall product verification. The testing normally begins by verifying the user experience against rules established in the product specification and/or the Quality Assurance department's goals. A set of tasks can be created to test everything from the look and feel of the interface, to assuring quality input and output, to proper handling of errors, both expected and unexpected.

The power of automated testing may be best explained by example. We've created a very simple program that converts degrees of Fahrenheit to degrees of Centigrade. The application contains two edit boxes, one for the Fahrenheit value to convert, and another, read-only edit box which will contain the converted value. The user enters a value, clicks the Convert button, and the "Centigrade" box is populated with the correct Centigrade value.
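For reference, the conversion under test follows the standard formula C = (F − 32) × 5 ⁄ 9. The sample application could be written in any language; this short Python sketch simply makes the expected outputs easy to verify by hand:

```python
def fahrenheit_to_centigrade(f):
    """Convert degrees Fahrenheit to degrees Centigrade."""
    return (f - 32) * 5 / 9

# 32 F is the freezing point of water, which is 0 C
print(fahrenheit_to_centigrade(32))    # 0.0
print(fahrenheit_to_centigrade(63.5))  # 17.5
```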

The first thing to note is that, since we are testing only the GUI in this example, it doesn't matter which programming language was used to create the application. In fact, most of the testing involves ensuring that input is handled correctly, whether it comes from human interaction or from another computer program. The testing suite works equally well regardless of programming language, programming environment and, in the case of testing websites and Web Services, the platform itself.

Our small application has only a few simple requirements. First, the Fahrenheit box must contain text before the Convert button is clicked, and the text should contain only numbers, with the exception of the minus sign and the decimal point (thus allowing negative and floating-point numbers). If an invalid number is entered, a dialog box should appear to warn the user, and focus should return to the Fahrenheit box so the user can correct the mistake. Second, clicking the Convert button should convert the Fahrenheit value to its Centigrade equivalent, and of course, the conversion should be correct. Finally, the user should not be allowed to enter any text into the Centigrade edit box, so verification that the control is read-only should be performed.
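The input rule above (digits, an optional leading minus sign, and at most one decimal point) can be expressed as a small validator. This is a hypothetical Python sketch of the check the application is expected to perform, not AutoMate's own implementation:

```python
import re

# Digits, an optional leading "-", and at most one decimal point
VALID_NUMBER = re.compile(r'^-?(\d+(\.\d*)?|\.\d+)$')

def is_valid_fahrenheit(text):
    """Return True if the text is an acceptable Fahrenheit value."""
    return bool(VALID_NUMBER.match(text))

print(is_valid_fahrenheit("32"))     # True
print(is_valid_fahrenheit("-40.5"))  # True
print(is_valid_fahrenheit("Test"))   # False
print(is_valid_fahrenheit(""))       # False
```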

Normally, such testing would be done by a QA technician and/or the programmer, who would be forced to sit in front of the machine, start the application, and one by one go through some kind of checklist or compliance document to verify that all the stated requirements were met correctly. Instead, we're going to automate this entire process by creating an Automated Test Suite that not only tests these requirements, but is also flexible and robust enough to add any future test cases or requirements as our project grows in breadth and complexity.

The first iteration of our testing suite will work by starting the test application and running through a series of simulated user interactions, logging each test result to a comma-delimited file. The results are logged as test name and test result pairs separated by a comma. The last step of the test suite will be to open the CSV file in Excel, which will automatically separate the test names and test results into columns for easy viewing. (Of course, any text viewer can be used for this final step; I just chose Excel for simplicity. We could just as easily have posted the test results to a database using SQL statements, but that is beyond the scope of this article.)

We start by creating a task that starts the application and waits for it to become available for user input. This is trivial using AutoMate: we simply use the Run action and specify the name of the application, and the flow of the automated task will wait until the application becomes ready for processing. We'll then use the Set Text action to set the text in the Fahrenheit box to the number 32.
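The task itself is assembled from AutoMate actions rather than written as code, but the logging step is easy to illustrate. This hypothetical Python sketch writes the same kind of name/result pairs to a CSV buffer that the task writes to its file:

```python
import csv
import io

def log_result(writer, test_name, passed):
    # One row per test: the test name, then "Success" or "Failure"
    writer.writerow([test_name, "Success" if passed else "Failure"])

# In the real suite this would be a file on disk, opened later in Excel;
# an in-memory buffer keeps the sketch self-contained.
buffer = io.StringIO()
writer = csv.writer(buffer)

log_result(writer, "Convert 32 F to 0 C", True)
log_result(writer, "Reject invalid input", False)

print(buffer.getvalue())
```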

Next, we instruct AutoMate to click the Convert button. Using the Get Text action, we can verify that the contents of the Centigrade box are 0. Depending on the value of this text box, we write Success or Failure to the CSV file, along with the name of the test case. Because a failure here is not fatal, we instruct the task to continue on to the next set of tests regardless of this test's result.

Our next section of the task ensures that floating-point numbers are handled correctly. Using the same methodology as above, we set the Fahrenheit edit box to 63.5, click the Convert button, and verify that the result that appears in the Centigrade box is 17.5.
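When the expected value is read back from a GUI text box, it arrives as a string, so a robust comparison parses it and allows a small rounding tolerance rather than matching characters exactly. A hypothetical helper (the name and tolerance are illustrative):

```python
def matches_expected(displayed_text, expected, tol=0.05):
    """Compare the text shown in the GUI against the expected number,
    tolerating the rounding the display may apply."""
    try:
        return abs(float(displayed_text) - expected) <= tol
    except ValueError:
        # Non-numeric text in the box is always a failure
        return False

print(matches_expected("17.5", 17.5))     # True
print(matches_expected("17.6", 17.5))     # False
print(matches_expected("garbage", 17.5))  # False
```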

Finally, we should ensure that entering invalid text in the Fahrenheit box causes a dialog box to appear informing the user of the invalid value, as laid out in our requirements. The last section once again sets the contents of the edit box, this time to "Test", and clicks the Convert button. Using the Wait For Window action, we have the task pause until our dialog box appears with the proper title text. We can then log whether or not the window appeared.
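Under the hood, a step like Wait For Window amounts to polling a condition with a timeout. This generic Python sketch shows the pattern; the names and timings are illustrative and are not AutoMate's API:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.
    Returns True if the condition was met, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()

# Simulate a "window" that appears after a short delay
appeared_at = time.monotonic() + 0.3
print(wait_for(lambda: time.monotonic() >= appeared_at, timeout=2.0))  # True
print(wait_for(lambda: False, timeout=0.3))                            # False
```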

Now we can run the completed test suite. Once it concludes, we'll have a report that tells us each of the requirements tested and their results. It is then trivial to run the same set of tests on different versions of Windows at the same time and post the collected results to a network share or database, where they can be easily viewed from any workstation for further processing.

Simplifying Reiterative Testing

Automated testing really shines as a project continues along its life cycle. Testing is best done in stages, before small problems grow into behemoths of pain further down the line. Automated testing suites make it a snap to add test cases, ensuring that from beginning to end the product is fully and completely tested. They also help ensure that new bugs are not introduced by new features or fixes, and that project requirements are met along the entire development path from inception to end of life.

Let's take our simple example a step further by releasing Sample App v2.0. Now our program should allow the user to clear all the edit boxes with the click of a button, so a button labeled "Clear" has been added along the bottom. We've also given the user an "OK" button, which closes the application when clicked. Because our requirements have changed, we'll need to take several new test cases into consideration. Luckily, this can be done by simply adding the new test cases to our existing test suite.
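One way to keep a growing suite manageable is a data-driven layout: each requirement becomes one more row of test data, so extending the suite for v2.0 means appending cases rather than rebuilding tasks. A hypothetical Python sketch of the idea, reusing the conversion formula from earlier:

```python
def fahrenheit_to_centigrade(f):
    # Stand-in for the application under test
    return (f - 32) * 5 / 9

# Each case: (test name, simulated text box input, expected Centigrade value).
# Adding a requirement means adding a row, not rewriting the loop.
CASES = [
    ("Freezing point", "32", 0.0),
    ("Floating point", "63.5", 17.5),
    ("Negative value", "-40", -40.0),
]

results = []
for name, text_input, expected in CASES:
    got = fahrenheit_to_centigrade(float(text_input))
    results.append((name, "Success" if got == expected else "Failure"))

for name, outcome in results:
    print(f"{name},{outcome}")
```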

Comments (1)

3/27/07 by James Chen

Very nicely explains the importance of automated testing from the QA perspective, with a simple example. However, it's a shame that a limitation of AutoMate prevents running test case tasks as an ordered batch (in other words, AutoMate Administrator does not have the ability to schedule tasks in a specific order); therefore, all the test cases must be included in one task suite, making them hard to manage.