Glossary – Testing

Context: Testing is a procedural process for critical evaluation; a means of determining the reliability, accuracy, quality and business acceptability of systems or computer programs placed under trial.

The core purpose of testing is to find and fix problems, and so deliver a safe, reliable, error-free product or system.

This glossary section serves to clarify and expand on terms used in related posts on this site. The aim is to explain some of the business jargon, engineering vocabulary and technical terms associated with testing computer-based systems.

.

--- ----- ---

Acceptance Testing (White or Black box testing)

User acceptance testing is performed on a system prior to its delivery to the business. The business users and development team use the system and perform checks to ensure that it adequately meets departmental needs and customer requirements.

In any event, acceptance testing involves running a suite of tests on the completed system to verify basic functionality. Acceptance criteria will have been agreed previously, during the initiation and requirements-gathering phase of the system being designed and built.

The objectives of acceptance testing work are: creating tests, executing them, feeding back issues to the development team, agreeing on business workarounds, and approving product features for delivery.

Acceptance testing is part of the formal approval process for accepting a new system into a business, or it is about making sure that a product is commercially ready for sale into a marketplace.

.

--- ---

ATE (Automatic Test Equipment)

This usually refers to the testing of electronic sub-systems using clip-on or bed-of-nails test assemblies. The testing performed is mostly functional in nature, using back-driving techniques and measurement of currents and voltages to determine the health of the electronic components under test.

.

--- ---

Black Box Testing (internal workings are unknown)

Black box testing is a software testing process in which the internal structure, architecture, design and implementation of the system or item being tested is NOT known.

The tester exercises the product from a user's perspective, using the system or component in the same way that a user would.

Black box testing is about using the system under test in the way that it is designed to be used, and then also trying to use it in ways that the developers did not anticipate. Accessing features through the screen interface, if there is one, or sending commands to the system, if that is how it works, is part of the process of evaluating how reliable or acceptable a system is. As a strategy, one might place custom-designed test data into data drop directories, or compose a sequence of computer commands to send to the system under test. A salesperson or engineer might not know how the system works internally, but they will understand the system's interface and how the system should be operated from a user's perspective. It is all about exercising the system in the way that satisfies its functional or user requirements.

Passing valid or invalid user inputs into the system being tested is part of a strategy that attempts to determine whether the correct, expected behaviour is achieved. We are checking whether the system works as described in the requirements documentation; we want to know whether the product or item is meeting customer expectations or not.

See White box testing for further understanding of the difference between white box and black box testing.
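The valid/invalid input strategy above can be sketched in a few lines of Python. The temperature-conversion routine below is a hypothetical stand-in for the system under test; in true black box fashion, the tester drives it only through its public interface and checks the observable behaviour.

```python
# A minimal black box test sketch. The "system under test" is a made-up
# conversion routine; the tester knows nothing of its internals and simply
# feeds it valid and invalid inputs, checking the outward behaviour.

def celsius_to_fahrenheit(text: str) -> str:
    """Stand-in for the system under test (internals unknown to the tester)."""
    value = float(text)          # raises ValueError on malformed input
    return str(value * 9 / 5 + 32)

# Valid input: expect the behaviour stated in the requirements.
assert celsius_to_fahrenheit("100") == "212.0"

# Invalid input: the system should reject it cleanly, not misbehave or crash.
try:
    celsius_to_fahrenheit("not-a-number")
    rejected = False
except ValueError:
    rejected = True
assert rejected
```

The same shape applies whether the interface is a function call, a command line, or a data drop directory: supply input, observe output, compare against the documented expectation.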

.

--- ---

Diagnostic Testing

Diagnostic tests are usually performed when a product or system malfunction has already been observed and we need to better understand how and why the problem occurs. The first objective is to understand what the product or system normally does, BEFORE any error symptoms manifest themselves. Diagnostic tests are technical in nature and are designed to work out how to fix a problem and to confirm that no component or sub-system is faulty. A diagnostic test may also exercise software components and is usually designed to validate an operation against a known, purpose-planned test result.
For further information, see Software Debug Testing.

.

--- ---

Functional Testing (White or Black box testing)

A set-piece test for a 'product feature', to ensure that the desired operation or outputs are as predicted and thus correct. A functional test is a sequence of test operations performed one after the other until the end of the test procedure is reached.

Repeated functional testing ensures that the deliverable product is reliable and does not suffer from any sort of progressive error or memory leak. Functional testing is designed to trap critical errors that break the product(s) under test.

Functional Testing (Black box testing)

Tests whether a system meets its functional requirements. Black box functional testing does not examine the project's internal software code; instead, it checks whether the system behaves according to expectations.
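A functional test as a fixed sequence of operations can be sketched as below. The shopping-basket feature is invented purely for illustration; the point is the shape: each step runs in order, and each step's output is compared against the predicted result.

```python
# A sketch of a set-piece functional test: a fixed sequence of operations,
# each checked against its predicted output. The basket feature is a made-up
# example; prices are held in pence to keep arithmetic exact.

class Basket:
    """Stand-in for the product feature under test."""
    def __init__(self):
        self.items = {}
    def add(self, name: str, price_pence: int):
        self.items[name] = price_pence
    def remove(self, name: str):
        del self.items[name]
    def total(self) -> int:
        return sum(self.items.values())

def functional_test_basket() -> bool:
    """One test procedure, run from first step to last."""
    basket = Basket()
    assert basket.total() == 0          # step 1: a new basket is empty
    basket.add("tea", 250)
    basket.add("milk", 120)
    assert basket.total() == 370        # step 2: totals accumulate
    basket.remove("milk")
    assert basket.total() == 250        # step 3: removal is reflected
    return True

assert functional_test_basket()
```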

.

--- ---

Integration Testing (White box testing)

Performed in order to test whether two or more systems, or computer software code modules, work and coordinate together properly.

Perhaps a better title for integration testing would be merge testing, because when we install many computer programs together into one complete system (databases, data communications programs, user computer programs, …, perhaps also a web site, etc.), we are attempting to create a production look-alike system that closely matches the complete system used in the business (or marketplace). In essence, integration testing is about creating a complete system that can be subjected to a battery of automated computer tests. These tests are designed to break the system and expose problems, malfunctions or weaknesses that we might not see until the individual parts are integrated into a complete whole.

The purpose of integration testing is to expose faults in the interaction between individual units or sub-systems or even individual computer programs.
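The idea of faults appearing only at the interaction boundary can be shown with two small units that each pass their own tests but must also agree with one another. The formatter/parser pair below is a hypothetical example.

```python
# A small integration test sketch: two units that pass individually are
# exercised together to expose faults in their interaction. The record
# formatter and parser are made-up examples.

def format_record(name: str, age: int) -> str:
    """Unit 1: serialise a record to a line of text."""
    return f"{name},{age}"

def parse_record(line: str) -> tuple:
    """Unit 2: read a record back from a line of text."""
    name, age = line.split(",")
    return name, int(age)

def integration_test() -> bool:
    """The units must coordinate: a round trip through both is lossless."""
    original = ("Ada", 36)
    restored = parse_record(format_record(*original))
    assert restored == original
    return True

assert integration_test()
```

Either unit could pass its own unit tests while the pair still disagreed (for example, on the field separator); only the integrated test exposes that class of fault.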

.

--- ---

Performance Testing (Stress Testing)

Testing the responsiveness of the system under load, in terms of timing, data throughput or the number of connected users. It may refer to the efficiency of either the underlying code or the environment in which the system is running. Communications profiling, or system data usage profiling, is sometimes used as an initial analysis before designing a performance or stress test.
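A timing-based performance check can be sketched as below. The workload and the acceptance limit are assumptions chosen for illustration; in practice the limit comes from the agreed performance requirements.

```python
# A minimal performance test sketch: time a batch of operations and assert
# that they complete within an agreed limit. Both the workload and the
# 5-second limit are invented for illustration.
import time

def workload(n: int) -> int:
    """Stand-in for the operation being profiled."""
    return sum(i * i for i in range(n))

start = time.perf_counter()
for _ in range(100):
    workload(10_000)
elapsed = time.perf_counter() - start

# Performance acceptance criterion (assumed): 100 calls in under 5 seconds.
assert elapsed < 5.0
```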

.

--- ---

Manual Testing (Black box testing)

This is user-intensive testing where people attempt to use the product in the way it was intended to be used. Phase one: test people deliberately attempt to use the product incorrectly, by performing tasks wrongly or in the wrong order. Phase two: often termed black [1] box testing, this requires test people to act as end users and make use of the user computer screens (data entry, administration, reporting…); the testing is purposed to use the product in the way that it was designed to be used. Manual testing is mostly functional testing, although ad hoc performance testing can also be performed if several people group together to carry out a user-intensive stress test of the system or product.

[1] Black box testing: The focus is not on implementation details but on overall business features and functionality compared to product specifications.

.

--- ---

Regression Testing (Repeat Testing)

The repetition of computer tests, sometimes executed a few thousand times. This kind of testing tends to throw up errors that software developers do not see during their ad hoc development testing. Regression testing is often combined with integration testing and is performed after changes have been made to system hardware or computer software source code. It helps to identify incremental errors such as memory leaks or performance degradation.

Script-based regression testing is used during development release testing, and is completed before customer acceptance trials are started.
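The repetition idea can be sketched as a scripted suite re-run many times, so that faults which only accumulate across runs are caught. The counter module below is a made-up unit; the structure of the loop is what matters.

```python
# A sketch of script-based regression testing: the same scripted checks are
# re-run many times, so incremental faults (such as state leaking between
# runs) are caught. The Counter class is an invented example.

class Counter:
    """Unit under test: must always start from zero."""
    def __init__(self):
        self.value = 0
    def increment(self) -> int:
        self.value += 1
        return self.value

def regression_suite(runs: int = 1000) -> bool:
    """Repeat the same scripted test; any run that drifts is a regression."""
    for _ in range(runs):
        counter = Counter()              # a fresh instance each run
        assert counter.increment() == 1
        assert counter.increment() == 2
    return True

assert regression_suite()
```

If `increment` ever carried state over from a previous run (a common class of incremental fault), an early run would pass while a later one failed, which is exactly what the repetition is there to detect.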

.

--- ---

Software Debug Testing

Debug or diagnostic testing is usually performed by the developer using special development tools; these tools allow the developer precise control over, and visibility of, the software source code as each computer instruction is executed. The developer is able to examine (or modify) computer data values as individual computer instructions are executed. One particularly valuable feature is the ability to halt processing at a fixed place, and then make a technical evaluation of what the computer program is attempting to do at that point in time. This is one of the core facilities developers use to fix any malfunctions that manifest themselves during testing.

Special software code can also be injected into the product during development testing. This is usually designed to identify the progress of programmatic operations taking place, in order to ascertain if any incorrect operations have occurred.
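One common form of such injected code is trace instrumentation: extra statements added during development testing that record the progress of an operation so incorrect steps can be spotted afterwards. A minimal sketch, with a made-up function under trace:

```python
# A sketch of injected debug code: trace statements record the progress of
# programmatic operations for later inspection. The discount function is an
# invented example.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("debug-trace")

trace = []   # captured progress markers, examined after the run

def apply_discount(price: float, percent: float) -> float:
    trace.append(f"enter apply_discount({price}, {percent})")
    log.debug("input price=%s percent=%s", price, percent)
    discounted = price * (1 - percent / 100)
    trace.append(f"exit apply_discount -> {discounted}")
    return discounted

result = apply_discount(200.0, 25.0)
assert result == 150.0
assert trace[0].startswith("enter") and trace[-1].startswith("exit")
```

Such instrumentation is normally stripped out, or disabled, before the product is released.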

.

--- ---

System Testing (White or Black box testing)

This evaluates the system’s compliance with its specified requirements, such as performance, compatibility, security, regression testing, reliability, accessibility, and so on.

For some types of product, the difference between system testing and integration testing can be quite subtle; you might say it is the difference between basing your tests on a set of test data or not. For certain products, the time pressure to release into the marketplace can be extreme: business pressure to reach the market before a trade show, or before a competitor's product release, may override the desire for a system test that is distinct from the integration test. Many businesses can, and do, go straight from integration testing into acceptance testing and on to the marketplace.

However, there are many sectors where system testing is critical to ensuring that lives are not lost (car impact testing, lifts that can reach 22 mph going down {travelling 1,083 feet in 33 seconds}, …).

A real-time system must run continuously until it is deliberately powered off. Failure to run properly can mean great economic loss and/or loss of human life.

The defence industry, satellite manufacture and aircraft are all examples where system testing is seen as a critical component of the development release life cycle, and it must not be skipped.

System testing takes place out in the field, with no privileged access to internal systems. Testing in real-life conditions is the ultimate test environment, where there is little control over what influences your system under test. In a development lab, you have complete control over the run-time environment in which your product operates. At sea, in space, or at the top of a mountain, it might not be practical to send out an engineer to swap out a faulty component.

Redundancy: The ability of the system to detect a faulty sub-system, switch it out of service, and switch a secondary backup sub-system into service (within acceptable time parameters).

Recovery Testing: Forcing the system to fail in a variety of ways and witnessing system recovery.

Security Testing: Stressing the protection mechanisms built into the system.

Stress Testing: Confronting the system / program with normal and abnormal situations.

Performance Testing: Verifying that the system operates within its performance limits.
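The redundancy and recovery items above can be combined into one small sketch: deliberately fail a sub-system and witness that the backup takes over. The primary/backup services are hypothetical stand-ins.

```python
# A sketch of recovery testing: force the primary sub-system to fail and
# witness that service is recovered via the backup. Both services are
# invented examples.

class Service:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy
    def query(self) -> str:
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")
        return f"answer from {self.name}"

def query_with_failover(primary: Service, backup: Service) -> str:
    """Redundancy logic under test: fall back when the primary faults."""
    try:
        return primary.query()
    except RuntimeError:
        return backup.query()

# Force the primary to fail, then witness recovery through the backup.
primary = Service("primary", healthy=False)
backup = Service("backup")
assert query_with_failover(primary, backup) == "answer from backup"
```

A fuller recovery test would also check that the switch-over completes within the acceptable time parameters the requirements specify.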

.

--- ---

Test Data (specifically database test data)

Customised test data allows you to audit and validate that a system is performing according to expectations.

Databases are often tested prior to releasing development changes into business use. If a malfunction or problem is detected during product release testing, then, provided that you have managed test datasets, it will be possible to repeat the same test after a fix has been applied. Having managed test data allows you to audit and check that a fix has been successfully applied.

Managed test data is valuable to the business, since it allows regression testing at little extra expense. Such test data needs mechanisms and computer software algorithms that can inject (or replace) data in the systems being tested.

It is best practice for test data to be identifiable as distinctly different from normal business data; the reason is to mitigate the possibility that business functions make ACCIDENTAL USE of test data. Crafted data values can sometimes coexist with business data in a production system, but only if all business functions can clearly identify the test data as distinctly different from normal business data.

For completeness: unmanaged test data has dubious value, since if a test passes or fails, the tester is none the wiser as to whether a fix is required. One design requirement of test data is that data values are designed to induce a deliberate test FAIL (or test PASS), in order to allow validation and checking of a system. The ability to handle BAD data correctly, without crashing, is a common design requirement. Conversely, data values designed to induce a deliberate test PASS are also proof that a system under test does not fail when processing CORRECT data.
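One simple way to make test data unmistakably different from business data is a reserved marker on record identifiers. The "TEST-" prefix convention below is an assumption chosen for illustration:

```python
# A sketch of identifiable test data: a reserved "TEST-" prefix (an assumed
# convention) marks records that business functions must never treat as real.

TEST_MARKER = "TEST-"

def make_test_record(record_id: str, amount: int) -> dict:
    """Create a test record whose id is unmistakably non-business data."""
    return {"id": TEST_MARKER + record_id, "amount": amount}

def is_test_record(record: dict) -> bool:
    """Business functions call this to exclude test data from real processing."""
    return record["id"].startswith(TEST_MARKER)

records = [
    {"id": "INV-1001", "amount": 250},   # genuine business record
    make_test_record("INV-9999", -1),    # deliberate bad-value test record
]

# Business processing filters test data out, so it can safely coexist.
business_records = [r for r in records if not is_test_record(r)]
assert business_records == [{"id": "INV-1001", "amount": 250}]
```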

.

--- ---

Unit Testing (White box testing)

Using white [2] box testing, developers carry out unit testing of software source code in order to check whether a particular module or unit of code is working properly. Code 'unit testing' is implemented at a fundamental level; it is carried out as the unit of code is developed, or when a particular piece of functionality is created.

[2] White box testing: where knowledge of internal structures is used to identify the best ways to test the system.
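A minimal unit test can be sketched with Python's built-in unittest framework. The function under test is invented; in white box style, the test cases are chosen from knowledge of the implementation's branches.

```python
# A unit test sketch using Python's built-in unittest framework. The clamp
# function is a made-up unit; each test case targets one internal branch.
import unittest

def clamp(value: int, low: int, high: int) -> int:
    """Unit under test: three internal branches worth covering."""
    if value < low:
        return low
    if value > high:
        return high
    return value

class TestClamp(unittest.TestCase):
    def test_below_range(self):      # covers the first branch
        self.assertEqual(clamp(-5, 0, 10), 0)
    def test_above_range(self):      # covers the second branch
        self.assertEqual(clamp(99, 0, 10), 10)
    def test_within_range(self):     # covers the fall-through path
        self.assertEqual(clamp(7, 0, 10), 7)

suite = unittest.TestLoader().loadTestsFromTestCase(TestClamp)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Choosing one case per branch is the white box element: the tests are driven by the code's internal structure, not just its external specification.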

.

--- ---

White Box Testing

This is a software testing method in which the internal structure, design and implementation of the item being created are known to the tester.

Testing based on an analysis of the internal structure of the item or system under test.

White-box testing verifies system functions and computer software code against design specifications.

Useful in finding hidden errors in the early phases of development.

Can help to identify and remove unnecessary lines of computer software code.

White box testing disadvantages include:

Requires a skilled engineer who understands computer software and aspects of the system's design and architecture.

Takes a lot of engineering time (expensive labour costs) to create, validate and test each white box test.

Requires specialised test equipment.

Unit test source code needs to be maintained and updated whenever new features or changes are added to the system under test.

Validating and checking unit test software source code is difficult and becomes a work task in itself.

White box testing often requires a dedicated, purpose-designed set of test data in order to test each path or condition of the system under test. Maintaining and versioning that test data is an important task; it also requires considerable design time, and testing time, to validate the custom test data.