Automated Testing Detail Test Plan

This Automated Testing Detail Test Plan (ADTP) identifies the specific tests to be performed to ensure the quality of the delivered product. System/Integration Test ensures that the product functions as designed and that all parts work together. This ADTP covers automated testing during the System/Integration Phase of the project and maps to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document. This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and it identifies the roles and responsibilities of the Automated Test Team so that they can execute the test. The objectives of this ADTP are:

Describe the test to be executed.

Identify and assign a unique number for each specific test.

Describe the scope of the testing.

List what is and is not to be tested.

Describe the test approach detailing methods, techniques, and tools.

Outline the Test Design including:

Functionality to be tested.

Test Case Definition.

Test Data Requirements.

Identify all specifications for preparation.

Identify issues and risks.

Identify actual test cases.

Document the design point.

Test Identification

This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.

Test Purpose and Objectives

Automated testing during the System/Integration Phase, as referenced in this document, is intended to ensure that the product functions as designed, directly from customer requirements. The testing goal is to assess the quality of the structure, content, accuracy and consistency, selected response times and latency, and performance of the application as defined in the project documentation.

Assumptions, Constraints, and Exclusions

Factors that may affect the automated testing effort and may increase the risk associated with the success of the test include:

Completion of development of front-end processes

Completion of design and construction of new processes

Completion of modifications to the local database

Movement or implementation of the solution to the appropriate testing or production environment

Stability of the testing or production environment

Load Discipline

Maintaining recording standards and automated processes for the project

Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid

Entry Criteria

The ADTP is complete, excluding actual test results. The ADTP has been signed off by the appropriate sponsor representatives, indicating approval of the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place.
The environment for testing, including databases, application programs, and connectivity, has been defined, constructed, and verified.

Exit Criteria

In establishing the exit/acceptance criteria for Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point. All automated test cases have been executed as documented. The percentage of successfully executed test cases meets the defined criteria. Recommended criteria: no Critical or High severity problem logs remain open, all Medium problem logs have agreed-upon action plans, and the application has been executed successfully to validate the accuracy of data, interfaces, and connectivity.

Pass/Fail Criteria

The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). The actual results are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results. If the actual results match the expected results, the Test Case can be marked as a passed item, without logging the duplicated results.
A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results produced by its execution do not match the expected results. The source of failure may be the application under test, the test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure. Any bugs or problems will be logged in the DEFECT TRACKING TOOL.
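For illustration, the following is a minimal Winrunner TSL sketch of how a script can compare an actual result against the expected result documented in the ADTP and record a pass or fail step in the test results; the window name, field name, and expected value here are hypothetical placeholders, not items from this project.

# Hypothetical check: compare an on-screen total against the documented expected result.
set_window("Confirmation", 10);             # wait up to 10 seconds for the window
edit_get_text("total", actual_total);       # read the actual value shown to the user

expected_total = "125.00";                  # expected result as documented in the ADTP

if (actual_total == expected_total)
    tl_step("verify_total", 0, "Actual result matches expected result");   # 0 = pass
else
    tl_step("verify_total", 1, "Expected " & expected_total & ", got " & actual_total);   # 1 = fail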
The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified, and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new findings. It is then returned to the responsible application personnel for correction and test.
Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The standard Severity Codes used for identifying defects are listed in Table 1.

Table 1: Severity Codes

Severity | Name | Description
2 | High | The test case or procedure can be completed, but produces incorrect output when valid information is input.
3 | Medium | The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (e.g., if the specifications allow no special characters but the system lets the user continue when a special character is part of the test, this is a Medium severity).
4 | Low | All test cases and procedures pass as written, but minor revisions, cosmetic changes, etc. may be desired. These defects do not impact the functional execution of the system.

The use of the standard Severity Codes produces four major benefits:

Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test. Time spent in discussion about the appropriate priority of a problem is minimized.

Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented in the requirements and design documents.

Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation with an appropriate level of detail throughout.

Use of the standard Severity Codes promotes effective escalation procedures.

Test Scope

The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.

Items to be tested by Automation (PRODUCT NAME …)

Items not to be tested by Automation (PRODUCT NAME …)

Test Approach

Description of Approach
The mission of Automated Testing is to identify recordable test cases through all appropriate paths of a website, create repeatable scripts, interpret test results, and report to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing performed on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether links inside and outside the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.
Content testing focuses on whether the content of the pages match what is supposed to be there, whether key phrases exist continually in changeable pages, and whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today’s copies of the pages download the same as yesterday’s, and whether the data presented to the user is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, and whether parts of the site are so slow that the user stops working. Although Loadrunner provides the full measure of this test, various ad hoc time measurements will be taken within certain Winrunner scripts as needed.
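As a sketch of such an ad hoc measurement (assuming a hypothetical SUBMIT button, results window, and a 10-second target), a Winrunner script can time the interval between the SUBMIT and the appearance of the results window using the TSL get_time function, which returns whole seconds:

# Hypothetical ad hoc response-time measurement around a SUBMIT.
start = get_time();                         # current time in seconds
button_press("Submit");                     # send the request
set_window("Search Results", 60);           # wait for the results window to appear
elapsed = get_time() - start;               # elapsed seconds; coarse but adequate for ad hoc checks

if (elapsed <= 10)                          # assumed target; adjust to the project's requirement
    tl_step("submit_response_time", 0, "Response in " & elapsed & " seconds");
else
    tl_step("submit_response_time", 1, "Response took " & elapsed & " seconds");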
Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the application.
Completion of automated test cases is denoted in the test cases with indication of pass/fail and follow-up action.

Test Definition

This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation, the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.

Test Functionality Definition (Requirements Testing)

The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation and to facilitate tracking and monitoring of test progress.
As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test.

Test Case Definition (Test Design)

Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.

Test Data Requirements

The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or OTS Automation Test Analyst.

Automation Recording Standards

Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into automation test environment
4. Code frozen between loads into automation test environment
5. Minimum acceptable system stability

Winrunner Menu Settings

1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless Loadrunner is operating in same environment and then must increase appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, since debugging is easier. However, the entire script, or portions of scripts, can be added together for long runs once the environment has greater stability.
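Where a script needs to confirm or apply these settings at run time, the same values can be read and set from within Winrunner using getvar and setvar. A minimal sketch follows; the option names delay_msec and timeout_msec are the usual Winrunner testing options corresponding to items 4 and 5 above, but should be confirmed against the local installation.

# Apply the run-time equivalents of menu settings 4 and 5.
setvar("delay_msec", "1000");       # delay for window synchronization
setvar("timeout_msec", "1000");     # timeout for checkpoints and CS statements
report_msg("sync delay = " & getvar("delay_msec") & " ms, timeout = " & getvar("timeout_msec") & " ms");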

Winrunner Script Naming Conventions

1. All automated scripts will begin with GE abbreviation representing the Generic Project and be filed under the Winrunner on LAB11 W Drive/Generic/Scripts Folder.
2. GE will be followed by the Product Path name in lower case: air, htl, car
3. After the automated scripts have been debugged, a date for the script will be attached: 0710 for July 10. When significant improvements have been made to the same script, the date will be changed.
4. As incremental improvements are made to an automated script, version numbers will be attached signifying the script with the latest improvements, e.g., XX0710.1, XX0710.2; the .2 version is the most up to date.

Winrunner GUIMAP Naming Conventions

1. All Generic GUI Maps will begin with XX followed by the area of test, e.g., the XXpond GUI Map represents all pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each Object, etc., on the site, they are under constant revision when the site is undergoing frequent program loads.

Winrunner Result Naming Conventions

1. When beginning a script, allow default res## name to be filed
2. After a successful run of a script where the results will be used toward a report, move the file to results and rename it: XX for project name, res for Test Results, 0718 for the date the script was run, your initials, and the original default number for the script, e.g., XXres0718jr.

Winrunner Report Naming Conventions

1. When the accumulation of test result files for the day is formulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The daily report file will be named as follows: XXdaily0718 (XX for project name, daily for daily report, and 0718 for the date the report was issued).
2. When the accumulation of test result files for the week is formulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The weekly report file will be named as follows: XXweek0718 (XX for project name, week for weekly report, and 0718 for the date the report was issued).

Winrunner Script, Result and Report Repository

1. LAB 11, located within the XX Test Lab, will house the original Winrunner Script, Results and Report Repository for automated testing within the Generic Project. WRITE access is granted to Winrunner Technicians, and READ ONLY access is granted to those who are authorized to run scripts but not to make any improvements. This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner related documents, etc for XX automated testing.
3. Project file folders for the Generic Project represent the initial structure of project folders utilizing automated testing. As our automation becomes more advanced, the structure will spread to other appropriate areas.
4. Under each Project file folder, a folder for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under the Winrunner on LAB11 W Drive/Generic/Scripts Folder and moved to the ARCHIVE SCRIPTS folder as necessary.
6. All GUI MAPS generated will be filed under Winrunner on LAB11 W Drive/Generic/Scripts/gui_files Folder.
7. All automated test results are filed under the individual Script Folder after each script run. Results will be referred to and reports generated utilizing applicable statistics. Automated Test Results referenced by reports sent to management will be kept under the Winrunner on LAB11 W Drive/Generic/Results Folder. Before work on evaluating a new set of test results is begun, all prior results are placed into Winrunner on LAB11 W Drive/Generic/Results/Archived Results Folder. This will ensure all reported statistics are available for closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under the Winrunner on LAB11 W Drive/Generic/Reports Folder.

Roles and Responsibilities

Ensures all aspects of the project are being addressed from the CUSTOMERS' point of view. (Name, Phone)

COMPANY NAME Development Manager: Manage the overall development of the project, including obtaining resources, handling major issues, approving technical design and overall timeline, and delivering the overall product according to the Partner Requirements.

The automation test team is responsible for the following:

Write and receive approval of the ADTP from Generic Project management.

Manually test the cases in the plan to make sure they actually work before recording repeatable scripts.

Record appropriate scripts and file them according to the naming conventions described within this document.

The initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script, scripts testing all paths will be kicked off, as sketched below. Once an appropriate number of PNRs are generated, GenericCancel scripts will be used to automatically take the inventory out of the test profile and system environment. During the automation test period, requests for testing of certain functions can be accommodated as necessary, as long as these functions have the ability to be tested by automation.
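A minimal TSL sketch of that order of execution is shown below; the GUI Map and script names follow the naming conventions in this document but are otherwise hypothetical, as is the W: drive path.

# STARTUP: load the GUI Maps once, then kick off the path scripts.
GUI_load("W:\\Generic\\Scripts\\gui_files\\XXlogin.gui");
GUI_load("W:\\Generic\\Scripts\\gui_files\\XXpond.gui");

call "GEair0710" ();        # one recorded script per product path
call "GEhtl0710" ();
call "GEcar0710" ();

call "GenericCancel" ();    # back out the PNRs created during the run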

The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. This is required to maintain the pristine condition of the master scripts in our data repository.

Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the Winrunner tool marketed by Mercury Interactive.

Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management.

Test Issues and Risks

Issues
The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process.

Issue | Impact | Target Date for Resolution | Owner
COMPANY NAME test team is not in possession of market data regarding what browsers are most in use in CUSTOMER target market. | Testing may not cover some browsers used by CLIENT customers. | Beginning of Automated Testing during System and Integration Test Phase | CUSTOMER TO PROVIDE
OTHER | | |

Risks

The table below identifies any high impact or highly probable risks that may impact the success of the Automated testing process.

Risk Assessment Matrix

Risk Area | Potential Impact | Likelihood of Occurrence | Difficulty of Timely Detection | Overall Threat (H, M, L)
1. Unstable Environment | Delayed Start | HISTORY OF PROJECT | Immediately |
2. Quality of Unit Testing | Greater delays taken by automated scripts | Dependent upon quality standards of development group | Immediately |
3. Browser Issues | Intermittent Delays | Dependent upon browser version | Immediately |

Risk Management Plan

Risk Area | Preventative Action | Contingency Plan Action | Trigger | Owner
1. Meet with Environment Group | | | |
2. Meet with Development Group | | | |
3. | | | |

Traceability Matrix

The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's completion.
Each business requirement must have an established priority as outlined in the Business Requirements Document.
They are:
Essential – Must satisfy the requirement to be accepted by the customer.
Useful – Value-added requirement influencing the customer's decision.
Nice-to-have – Cosmetic non-essential condition, makes product more appealing.
The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional requirements, and automated test cases are subject to change and new requirements can be added. However, if new requirements are added or existing requirements are modified after the Business Requirements document and this document have been approved, the changes will be subject to the change management process.
The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy will be added to the project notebook.

Functional Areas of Traceability Matrix

Definitions for Use in Testing

Test Requirement
A scenario is a prose statement of requirements for the test. Just as there are high level and detailed requirements in application development, there is a need to provide detailed requirements in the test development area.

Test Case
A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response.

Test Procedure
Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding the loading of data and executables into the test system, directions regarding sign in procedures, instructions regarding the handling of test results, and anything else required to successfully conduct the test.

Automated Test Cases

NAME OF FUNCTION Test Case

Project Name/Number: Generic Project / Project Request #
Date:
Test Case Description: Check all drop down boxes, fill in boxes and pop-up windows operate according to requirements on the main Pond web page.
Build #:
Run #:
Function / Module Under Test: B1.1
Execution:
Retry #:
Test Requirement #:
Case #: AB1.1.1 (A for Automated)
Written by:
Goals: Verify that Pond module functions as required.
Setup for Test: Access browser, Go to .. .
Pre-conditions: Login with name and password. When arrive at Generic Main Menu…

Step | Action | Expected Results | Pass/Fail | Actual Results if Step Fails
 | Go to | From the Generic Main Menu, click on the Pond gif and go to | |
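A sketch of how the first step of the automated case above might look in TSL is shown below; the window and object names are hypothetical placeholders for the entries in the project GUI Map.

# AB1.1.1 (sketch): from the Generic Main Menu, open the Pond page and
# confirm it displays before its drop down boxes and pop-up windows are checked.
set_window("Generic Main Menu", 30);
obj_mouse_click("pond_gif", 10, 10, LEFT);            # click the Pond gif

if (win_exists("Pond Main Page", 30) == E_OK)
    tl_step("open_pond_page", 0, "Pond page displayed from the Generic Main Menu");
else
    tl_step("open_pond_page", 1, "Pond page did not appear");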

Identify what Servers and Databases the automation will run against. This {Project name} will use the following Servers:
{Add servers}
On these Servers it will be using the following Databases:
{Add databases}

Naming standards for test procedures, cases and plans

The naming standards for this project are:

Recording standards and scripting standards

In order to ensure that scripts are compatible on the various clients and run with minimum maintenance, the following recording standards have been set for all recorded scripts (a short sketch illustrating several of them follows this list):
1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Favor main menu selections over double clicks, toolbar items, and pop-up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this prevents the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this prevents scripts from dying on slow client/server calls.
7. Do not use the mouse for drop down selections; whenever possible, use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts easier when the UI changes in the future.
9. Create a template header file called testproc.tpl. This file will insert template header information at the top of all recorded scripts. This template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in the initial login scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser always start your script by opening the browser tree and selecting your activity (this ensures that the activity window will always be in the same position); likewise, always end your scripts by collapsing the browser tree.
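The sketch below illustrates several of the standards above in one place (items 1, 2, 6, 8, and 11); the application, window, and script names are hypothetical.

# Assisting script opens the application (standard 1); a global constant carries data (standard 2).
public login_user = "testuser01";           # global constant shared between scripts

call "open_application" ();                 # assisting script

if (win_exists("Platform Browser", 30) != E_OK)    # window existence test (standard 6)
{
    tl_step("open_platform_browser", 1, "Main window not found; stopping script");
    texit(1);                               # stop cleanly instead of dying on a slow call
}

win_max("Platform Browser");                # maximize the MDI main window (standard 11)
type("<kTab>");                             # navigate with the keyboard, not the mouse (standard 8)

call "close_application" ();                # assisting script closes the application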

Describe what components of the product will be tested. This project will test the following components:
The objective is to: