Automation increases the reliability of testing, as it makes greater test coverage possible.

It is very important to consider possible opportunities for automation within test projects. Automation needs to be carefully planned and executed using the right tools. We will look at tool support for testing in the next section.

Tool Support for Testing

Tools used to support test activities are referred to as test tools.

Generally, tools are designed to address one or more test activities such as test case:

generation

execution

monitoring

analysis.

Two common terms for tool support in testing are CAST and CASE.

Computer Aided Software Testing Tools (CAST)

Tools used for test activities in the software life cycle are called Computer Aided Software Testing or CAST tools.

An example of a CAST tool is QuickTest Professional, which is used for automated functional testing.

Computer Aided Software Engineering Tools (CASE)

Tools used in the software development process are called Computer Aided Software Engineering or CASE tools.

Testing Tools

Tools are classified based on the testing activities or areas they support. For example, some tools support management activities, while others support static testing. A few tools perform a specific and limited function; such a tool is called a 'point solution.' Many commercial tools provide support for several different functions.

For example, a test management tool may provide support for testing that includes progress monitoring, configuration management of testware, incident management, requirements management, and traceability.

A few tools can provide both coverage measurement and test design support. Tools may be used in several ways:

To perform testing. For example, QuickTest Professional carries out functional testing.

To support or assist testing. For example, the Data Maker tool creates high-quality test data, which can be used for test execution.

To support the testing process. For example, Quality Center.

To explore an application. For example, Test Explorer.

Let us discuss the classification of testing tools in the next section.

Testing Tools—Classification

Testing is supported by multiple tools. Some of the common tools are as follows:

Testing Tools in the V-Model

As seen in the image below, requirement testing tools are used for requirement analysis, functional design, technical design, and coding activities.

These tools often help in capturing requirements using intuitive models. Once requirements are recorded, the tool can generate high-level architecture design and code. The inbuilt logic of the tool helps in identifying gaps in the requirements.

Test design and test data preparation tools can be used once requirements are finalized. These tools can automatically generate test cases from requirements and support the automatic creation of test data.

Static analysis tools become useful during the design and coding phases by identifying gaps in the code. Coverage measurement tools and debugging tools can be used during component testing.

Test harnesses and drivers, and dynamic analysis tools, are used during integration testing. Test running and comparison tools are used at all levels of testing to compare results. Performance measurement tools are used in system and acceptance testing to measure system performance.

Features provided by a requirements management tool include storing requirement statements, generating unique identifiers, checking the consistency of requirements, prioritizing requirements for testing purposes, and managing traceability through levels of requirements.

Examples include Analyst Pro, CaliberRM, and WinA&D.
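A minimal Python sketch of how a requirements management tool might model these features — unique identifiers, prioritization, and traceability. All class and field names here are illustrative, not taken from any specific tool:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Requirement:
    req_id: str                   # unique identifier generated by the tool
    statement: str                # the stored requirement text
    priority: int = 3             # lower number = higher testing priority
    parent: Optional[str] = None  # traceability link to a higher-level requirement


class RequirementStore:
    def __init__(self):
        self._reqs = {}
        self._counter = 0

    def add(self, statement, priority=3, parent=None):
        # Generate a unique identifier for the new requirement.
        self._counter += 1
        req_id = f"REQ-{self._counter:04d}"
        # Consistency check: a traceability link must point at a known requirement.
        if parent is not None and parent not in self._reqs:
            raise ValueError(f"unknown parent requirement: {parent}")
        self._reqs[req_id] = Requirement(req_id, statement, priority, parent)
        return req_id

    def trace(self, req_id):
        # Walk the traceability chain up through the levels of requirements.
        chain = []
        while req_id is not None:
            req = self._reqs[req_id]
            chain.append(req.req_id)
            req_id = req.parent
        return chain

    def by_priority(self):
        # Order requirements for testing purposes.
        return sorted(self._reqs.values(), key=lambda r: r.priority)
```

A tester could then ask the store for the highest-priority requirements first, or trace a low-level requirement back to the business requirement it derives from.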

Configuration Management Tools

Features of configuration management tools include storing information about versions and builds of the software and testware, traceability between software and testware, release management, baselining, and access control.

Examples include Visual SourceSafe and ClearCase.

Incident Management Tools

An incident management tool is also known as a defect-tracking, defect-management, bug-tracking, or bug-management tool. Incident management tools make it much easier to keep track of incidents over time.

Tool Support for Static Testing

Review tools provide support to the review process. Typical features include planning and process support for reviews, storing review comments and communicating them to relevant people, tracking review comments, collecting metrics, and reporting key factors.

Static Analysis Tools

Static analysis tools are generally used by developers as a part of the development process and for component testing. The key aspect of these tools is that the code is not executed or run.

Features of static analysis tools include support to calculate metrics such as cyclomatic complexity or nesting levels, enforce coding standards, analyze structures and dependencies, aid in code understanding, and identify anomalies or defects in the code.

Examples include CodeSonar and the Klocwork Suite for Java.

Modeling Tools

Modeling tools support the validation of software or system models. Modeling tools are typically used by developers. An advantage of both modeling and static analysis tools is that they can be used before dynamic tests.

Features include support for identifying inconsistencies and defects within the model, helping to identify and prioritize testing areas of the model, predicting system response and behavior under various situations, helping to understand system functions and identifying test conditions using a modeling language such as Unified Modeling Language or UML.

An example is the Rational Suite for Extensible Markup Language or XML.

Let us focus on tool support for test specification in the next section.

Tool Support for Test Specification

The tools that support test specification include test design and test specification tools.

Test Design Tool

A test design tool supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository.

For example, if the requirements are kept in a requirements management or test management tool, or in a CASE tool used by developers, then it is possible to identify the input fields, including the range of valid values.

If the valid range is stored, the tool can distinguish between values that are accepted and values that generate an error message. If the error messages are stored, then the expected result can be checked. If the expected result is known, then it can be included in the test case.

Other types of test design tools help select combinations of possible factors to be used in testing. Some of these tools use orthogonal arrays and can easily identify the tests that exercise all the elements, such as input fields, buttons, and branches.
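A minimal Python sketch of two of these ideas: deriving test inputs from a stored valid range (values that should be accepted versus values that should generate an error message), and combining possible factors. All names are illustrative; real test design tools use more sophisticated techniques such as orthogonal arrays rather than the exhaustive combination shown here:

```python
from itertools import product


def boundary_values(lo, hi):
    # Values inside the stored valid range [lo, hi] should be accepted;
    # values just outside it should produce an error message.
    return {
        "valid": [lo, lo + 1, hi - 1, hi],
        "invalid": [lo - 1, hi + 1],
    }


def all_combinations(**factors):
    # Exhaustively combine possible factors (input fields, buttons, branches).
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]
```

For example, `all_combinations(browser=["Chrome", "Firefox"], role=["admin", "user"])` yields four test configurations; an orthogonal-array tool would instead pick a much smaller subset that still covers every pair of values.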

Test Data Preparation Tool

Test data preparation tools enable data to be selected from an existing database; or created, generated, manipulated, and edited for use in tests. The most sophisticated tools can deal with a range of files and database formats.
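A small Python sketch of what "generating and editing data for use in tests" can look like in practice. The record layout and file format here are assumptions chosen for illustration; real tools handle many file and database formats:

```python
import csv
import random
import string


def generate_rows(n, seed=0):
    # Generate synthetic customer records for use in tests.
    # The generator is seeded so that repeated runs produce the same data.
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        rows.append({"id": i + 1, "name": name, "balance": rng.randint(0, 10_000)})
    return rows


def write_test_data(path, rows):
    # Export the generated data in a format the test run can consume.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "balance"])
        writer.writeheader()
        writer.writerows(rows)
```

Selecting data from an existing database would follow the same shape, with a query replacing the random generator.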

Tool Support for Test Specification – Characteristics

Following are the characteristics of test design and test data preparation tools:

In the following section, we will discuss tool support for test execution and logging.

Tool Support for Test Execution and Logging

Test execution tools use a scripting language, which is a programming language used to drive the tool. Therefore, any tester who wishes to use a test execution tool needs programming skills to create and modify the scripts.

The advantage of programmable scripting is that tests can repeat actions for different data values, take different routes depending on the outcome of a test, and be called from other scripts, giving some structure to the set of tests.
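These scripting advantages can be sketched as a small data-driven test in Python. The `login` function is a hypothetical stand-in for the application under test; the script repeats the same actions for different data values and branches on the outcome:

```python
def login(username, password):
    # Hypothetical stand-in for the application under test.
    return "welcome" if password == "s3cret" else "error"


# Data-driven script: the same actions repeated for different data values.
test_data = [
    ("alice", "s3cret", "welcome"),
    ("bob",   "wrong",  "error"),
]


def run_suite():
    results = []
    for user, pwd, expected in test_data:
        actual = login(user, pwd)
        # Take a different route depending on the outcome of the step.
        results.append((user, "PASS" if actual == expected else "FAIL"))
    return results
```

Because `run_suite` is an ordinary function, it can also be called from other scripts, which is what gives a scripted test suite its structure.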

A test harness is a test environment composed of the stubs and drivers needed to execute a test.

A unit test framework tool provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides support for the developer, such as debugging capabilities.

The framework, or the stubs and drivers, supplies information needed by the tested software, for example, an input from a user. It also receives information sent by the software, for example, a value to be displayed on a screen. Stubs can be referred to as 'mock objects.' In the following section, we will look at the characteristics of tools that support test execution and logging.
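A short sketch of these ideas using Python's `unittest` framework and its `Mock` class. The `fetch_greeting` function and the `gateway` dependency are illustrative names; the mock plays the stub role (supplying information the component needs) and also records what the component sent to it:

```python
import unittest
from unittest.mock import Mock


def fetch_greeting(gateway, user_id):
    # Component under test: it depends on an external gateway,
    # which the test replaces with a stub.
    name = gateway.lookup_name(user_id)
    return f"Hello, {name}!"


class FetchGreetingTest(unittest.TestCase):
    def test_greeting_uses_gateway(self):
        # The mock object acts as a stub: it supplies the information
        # the tested component needs, in place of the real gateway.
        gateway = Mock()
        gateway.lookup_name.return_value = "Ada"
        self.assertEqual(fetch_greeting(gateway, 42), "Hello, Ada!")
        # The mock also records what the component sent to it.
        gateway.lookup_name.assert_called_once_with(42)
```

The test class can be run with `python -m unittest`, which is the framework's built-in test runner.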

Tool Support for Test Execution and Logging—Characteristics

Following are the characteristics of test execution:

Test Execution

Enables tests to be run automatically or semi-automatically; uses static or stored inputs and expected outcomes for comparing the results; a few tools support a record-and-replay facility; logs defects automatically.

Characteristics of test harnesses and unit test framework tools include support for testing the application as a whole or at the component level, and testing of components through stubs even when the entire program is not available for testing.

In the next section, we will understand the concept of test comparator.

Test Comparator

A test comparator is a test tool used to perform automated test comparison. Test comparison is the process of identifying differences between the actual results produced by the component or system under test and the expected test results.

There are two ways in which actual test results can be compared with the expected test results: during test execution, which is called dynamic comparison, or after test execution.

The other way is a post-execution comparison, where the comparison is performed after the test has finished executing and the software under test is no longer running.

Features of test comparator include the following:

It compares files, data and test results.

Test comparators are built into most test tools. However, a separate tool may be required for post-execution results comparison.
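A minimal sketch of post-execution comparison in Python: the actual results are logged to a file while the test runs, and compared with the expected results after the software under test has stopped. The file layout is an assumption for illustration:

```python
import difflib


def post_execution_compare(expected_path, actual_path):
    # Compare the logged actual results with the expected results
    # after the test has finished executing.
    with open(expected_path) as e, open(actual_path) as a:
        expected, actual = e.readlines(), a.readlines()
    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile="expected", tofile="actual"))
    return {"match": not diff, "diff": diff}
```

Dynamic comparison would instead check each actual value against its expected value at the moment the test step produces it, while the software is still running.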

Coverage Measurement Tools

Characteristics of coverage measurement tools include support for measuring code coverage while executing test cases.

For example, AdaTEST measures the code coverage of Ada code. These tools can be intrusive or non-intrusive.
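An intrusive coverage measurement can be sketched in a few lines of Python using the interpreter's trace hook: the hook records each line of the function under test that actually executes. This is a simplified illustration of the principle, not how production coverage tools are implemented:

```python
import sys


def measure_line_coverage(func, *args):
    # Intrusive coverage measurement: a trace hook records every
    # executed line of the function under test, which slows it down.
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always remove the hook
    return result, executed
```

Running the same function with different inputs shows different sets of executed lines, which is exactly the gap a coverage tool reports: branches the current test cases never reach.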

In the next section, we will discuss the uses and features of security testing.

Security Testing

Security testing includes a set of techniques that are used to check for security breaches, such as:

Identifying computer viruses; detecting intrusions such as denial of service attacks; simulating various types of external attacks.

Probing for open ports or other externally visible points of attack.

Identifying weaknesses in password files and passwords; and security checks during operation.

An example of the security testing tool is IBM AppScan.

The image below depicts the way security testing can be planned and carried out against typical project test phases.

Security requirements should consider scenarios from past security breaches, and these should be planned for during the requirements phase.

Security test planning should be done during the design phase. If applicable, a security automation plan should be made part of the test plan. Test environment setup for security testing starts at the test planning stage and completes when coding starts.

Security testing can be started during the coding phase and can continue until the system is moved to production, and sometimes after.

In the next section, we will focus on the tools that support dynamic analysis.

Tool Support for Dynamic Analysis

A dynamic analysis tool provides runtime information on the state of the software code.

The information provided includes:

Allocation, use, and de-allocation of resources

Flagging of unassigned pointers or pointer arithmetic faults

A dynamic analysis tool identifies defects only while the software is running. It is also used for component testing and component integration testing.

For example, BoundsChecker, which looks for memory leaks.
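The leak-hunting idea can be sketched with Python's standard `tracemalloc` module: take a snapshot of allocations, exercise the running code repeatedly, and flag allocation sites whose memory keeps growing. This is a simplified illustration of dynamic analysis, not a substitute for a dedicated tool:

```python
import tracemalloc


def find_allocation_growth(workload, repeats=3):
    # Dynamic analysis works on the running program: snapshot memory,
    # exercise the code, and flag allocations that keep growing.
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(repeats):
        workload()
    after = tracemalloc.take_snapshot()
    stats = after.compare_to(before, "lineno")
    tracemalloc.stop()
    # Allocation sites with a positive size_diff are leak suspects.
    return [s for s in stats if s.size_diff > 0][:5]
```

Note that the defect only becomes visible because the code is executed; a static analysis tool could not observe this growth.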

In the following section, we will focus on the tools that support performance and monitoring.

Tool Support for Performance and Monitoring

Features or characteristics of performance-testing tools include:

Supporting load generation on the system under test;

Measuring the timing of specific transactions as the load on the system varies; measuring average response times;

Producing graphs or charts of responses over time.
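The core measurements can be sketched in Python: concurrent "virtual users" generate load while the tool records the timing of each transaction, from which averages and maxima are derived. The `transaction` callable stands in for whatever operation is being measured; real tools scale to thousands of users and produce the graphs mentioned above:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def run_load_test(transaction, users=10, iterations=5):
    # Generate load with concurrent virtual users and measure the
    # timing of each transaction as the load on the system varies.
    timings = []

    def one_user():
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(one_user)
        # Leaving the 'with' block waits for all virtual users to finish.

    return {
        "transactions": len(timings),
        "average_s": statistics.mean(timings),
        "max_s": max(timings),
    }
```

Plotting `timings` over the run would give the response-over-time chart that commercial performance tools produce.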

Examples include LoadRunner and Silk Performer. Monitoring tools are used to verify, analyze, and report the behavior of system resources.

Features or characteristics of monitoring tools include:

Support for identifying problems

Sending an alert message to the administrator, such as a network administrator

Logging real-time and historical information

Finding optimal settings

Monitoring the number of users on a network; monitoring network traffic, which can be done either in real time or covering a given duration of operation with the analysis performed afterward.

Tools for Usability Issues and Data Quality Assessment

Following are the tools used for usability issues and data quality assessment:

Usability Testing Tools

Usability Testing Tools help in assessing the ease of use of applications from the point of view of end users.

For example, xSort for web usability testing.

Data Quality Assessment Tools

Data quality assessment tools help in

Assessing data quality

Reviewing and verifying data conversion process

Verifying migration rules

In the next section, we will discuss the second topic ‘Effective Use of Tools—Potential Benefits and Risks.’

Effective Use of Tools – Potential Benefits and Risks

In this topic, we will discuss the potential benefits and risks of the tools. We will also focus on how the tools can be used effectively.

Let us begin with the potential benefits, in the next section.

Potential Benefits

Tools, when carefully analyzed and applied in the right context, can improve test productivity dramatically.

There are many benefits of using tools to support testing.

A few common benefits of tools include

Reduce repetitive tasks

Achieve high consistency and repeatability

Provide objective assessment

Easy access to test information

We will begin with reducing repetitive work in the next section.

Benefits – Reduce repetitive work

Repetitive work, when performed manually, can be tedious; tools can handle it more efficiently.

Following are a few examples of repetitive tasks, and tools that can handle them more efficiently.

Let us focus on high consistency and repeatability, in the next section.

Benefits – High Consistency and Repeatability

Manual testing is dependent on the style and nature of the individual performing the test. Hence, it differs from person to person. Tools remove this variation as they can only perform the task they are programmed for.

Following are a few examples where repetitive work can be performed by testing tools with high consistency: debugging and test execution tools for retesting, test execution tools for entering test data, and test design tools for creating test cases.

In the following section, we will look at objective assessment.

Benefits – Objective Assessment

Subjective prejudices of people can often lead to defects being ignored. Test tools eliminate such prejudices because they assess results objectively.

Following are a few examples where tools can be used effectively in objective assessment: traceability tools for test coverage, monitoring tools for system behavior, and test management tools, such as Quality Center, to capture incident information.

In the next section, we will discuss access to information.

Benefits – Access to information

A large amount of data does not guarantee effective communication of information. The human brain registers and interprets visual information more easily.

For example, a chart or a graph is a better way to demonstrate information than a long list of numbers, which is the main reason why charts and graphs in spreadsheets are useful.

Special-purpose tools give visual output for the information they process. Following are a few examples where tools can be used to present data in an easily comprehensible manner.

In the next section, we will understand the potential risks of using testing tools.

Potential Risks

Although there are significant benefits that can be achieved by using tools to support testing activities, there are many organizations that have not achieved the benefits they expected.

A few potential risks of tools are as follows:

Unrealistic expectations from the tool.

Underestimation of the effort required to introduce and benefit from the tool.

For example, the time, cost, and effort required to introduce a tool; the time and effort needed to achieve significant and continuing benefits; and the resources and effort required to maintain the test assets generated by the tool.

Over-reliance on the tool

Risks from the tool vendor, such as the vendor going out of business, selling the tool to a different vendor, retiring the tool, or providing poor service

Compatibility issues with other tools, including requirements management, version control, defect management, and test management tools.

We will focus on special considerations for some types of tools in the next section.

Special Consideration for Some Tools

For each type of tool, a few aspects need to be considered to ensure successful implementation.

When using performance testing tools, ensure coding standards are followed, and take a step-by-step approach to bring existing code in line with the standards.

Test Management Tools

Before considering or selecting a test management tool, check whether it is compatible with other tools, so that you get all the benefits it promises, and whether it can design and generate the test reports for which it will be used.

Let us understand the effective use of tools with an example in the next section.

Effective Use of Tools – Example

ABC Corp invested $1 million towards purchasing a new tool for its test execution process. After this substantial investment, the management team decided to dismiss 50% of its testing team, believing that the tool would compensate for the lost effort.

The remaining team had limited experience in using the tool and hence struggled to use it for their projects. Because of the learning curve, it took them longer than usual to conduct their tests. At the same time, they were burdened with additional work due to the dismissal of 50% of the team members.

The team failed to implement the tool and handle the existing test load in the organization. The Management team blamed the tool for the failure, and it became shelfware in the organization. They had to rehire more resources to ensure the backlog was cleared.

In the next section, we will begin with the third topic ‘Introducing a Tool into an Organization.’

Introducing a Tool into an Organization

In this topic, we will discuss tool selection process, factors in selecting a tool, tool implementation process, and success factors for deploying a tool.

Let us begin with the tool selection process in the next section.

Tool Selection Process

Introducing any new tool into an organization involves two processes.

They are:

Selection

Implementation

The selection process involves building a business case for the tool by defining the problem or need the tool addresses, presenting tool support as a solution, and identifying the constraints of the tool.

In the following section, we will look at the factors in selecting a tool.

Factors in Selecting a Tool

A few common factors to be considered while selecting a tool include the following:

Assessment of the organization's maturity, for example, readiness for change

Evaluation of tools against clear requirements and objective criteria

Prioritization of requirements

Conducting a proof of concept to check whether the product works as desired and meets the requirements and objectives

Evaluation of the vendor regarding reliability, support, and other commercial aspects, or of the open-source network of support

Identification and planning of internal implementation, including training and mentoring for new users

Ease of use and installation of the tool

Compatibility with other tools that are already a part of the organization

Cost of lease or purchase of the tool.

In the next section, we will understand the process of tool implementation.

Tool Implementation Process

Tool implementation process includes the following:

Get management commitment for required support on the decided tool

Introduce the tool to the team, explaining the need for it and how it addresses those needs

Pilot the tool

Evaluate the tool based on pilot findings

Move on to phase-wise implementation; and

Review the implementation regularly.

The pilot project should experiment with different ways of using the tool.

For example, different settings for a static analysis tool, different reports from a test management tool, different scripting, and comparison techniques for a test execution tool or different load profiles for a performance-testing tool.

Before implementing any tool on a large scale, it should be put through pilot implementation.

Following are some of the objectives for the pilot implementation:

Understanding tool features and limitations;

Required updates to the existing process to implement the tool;

Defining a new process to maintain the tool, if required;

Considering the returns on investment; and

Evaluating the pilot project against the objectives.

In the next section, we will discuss the success factors for deploying a tool.

Success Factors for Deploying a Tool

A few factors to be considered for deploying a tool are as follows:

Adopt phased implementation

Ensure the process fits well with the use of the tool

Define guidelines and train new users as required

Monitor the tool benefits

Capture lessons learned and constantly improve.

Let us understand the introduction of a tool with the help of an example in the next section.

Introducing the Tool – Example

ABC Corp invested $1 million towards purchasing a new tool for Test Management Process. After this substantial investment, it did not want to waste time before the tool was released to all teams within the organization.

The teams were also excited about learning something new and hence quickly accepted the implementation.

However, as the teams started to use the tool, each team realized that the tool did not match its project process. Hence, they requested tool workflow customizations.

With multiple requests coming in from teams, the tool support team was unable to define a consistent workflow.

Over a period of time, the main workflow was not followed by any team, and each team had its own exception flow being followed. The maintenance of the tool became difficult and eventually, the tool had to be removed from the organization.

This example shows the importance of conducting a pilot for any new implementation, to ensure a tool is scalable to the needs of the larger organization. With this, we have reached the end of the lesson.

Let us now check your understanding of the topics covered in this lesson.