Understanding Software Testing

The most common term for a collection of test cases is a test suite. A test suite often also contains more detailed instructions or goals for each group of test cases, and it typically includes a section where the tester records the system configuration used during testing. A group of test cases may also specify prerequisite states or steps, along with descriptions of the tests that follow.

A test case in software engineering normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Defined in its most basic terms, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', while other test cases describe the input scenario and the expected results in more detail. A test case can occasionally be a series of steps (though steps are often kept in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) with one expected result or outcome. Optional fields include a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. Test cases can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated them, and the system configuration used to generate them. These past results would usually be stored in a separate table.
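The fields listed above can be sketched as a simple record. This is a minimal illustration in Python; the field names (case_id, requirement_refs, and so on) are illustrative, not any standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Field names are illustrative, not a standard schema.
    case_id: str                          # unique identifier
    requirement_refs: list = field(default_factory=list)  # design-spec references
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)             # actions to follow
    test_input: str = ""
    expected_result: str = ""
    actual_result: str = ""               # filled in when the test is run

    def passed(self) -> bool:
        # The test passes when the recorded actual result matches
        # the expected result.
        return self.actual_result == self.expected_result

tc = TestCase(
    case_id="TC-001",
    requirement_refs=["REQ-4.2"],
    preconditions=["user account exists"],
    steps=["open login page", "enter credentials", "submit"],
    test_input="valid username/password",
    expected_result="dashboard is displayed",
)
tc.actual_result = "dashboard is displayed"
print(tc.passed())  # True
```

Keeping actual_result as a separate, initially empty field mirrors the point above: the test case records what was expected before the run, and what actually happened after it.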

Manual testing is the oldest and most rigorous type of software testing. It requires a tester to perform test operations on the software under test by hand, without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.

Repetitive manual testing can be difficult to perform on large software applications or on applications with very large dataset coverage. This drawback is compensated for by manual black-box testing techniques such as equivalence partitioning and boundary value analysis, which divide the vast dataset specifications into a more manageable and achievable set of test suites.
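As a sketch of how these two techniques shrink a large input space, suppose a field accepts integers in a range [lo, hi]. The function names below are hypothetical, chosen only for illustration.

```python
def boundary_values(lo: int, hi: int) -> list:
    # Boundary value analysis: test at the edges of the valid range
    # and at the first values just outside it.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo: int, hi: int) -> dict:
    # Equivalence partitioning: one representative value stands in
    # for each class of inputs expected to behave the same way.
    return {
        "below_range": lo - 1,       # invalid partition
        "in_range": (lo + hi) // 2,  # valid partition
        "above_range": hi + 1,       # invalid partition
    }

# For a field accepting ages 18-65, six boundary checks and three
# representatives replace testing every possible integer.
print(boundary_values(18, 65))         # [17, 18, 19, 64, 65, 66]
print(equivalence_partitions(18, 65))
```

Instead of exhaustively testing every value, the tester exercises a handful of points that experience shows are most likely to expose defects.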

There is no complete substitute for manual testing. Manual testing is crucial for testing software
applications more thoroughly.

Test automation is the use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions, and other test control and
test reporting functions. Commonly, test automation involves automating a manual process already in
place that uses a formalized testing process.

Another important aspect of test automation is the idea of partial test automation, or automating
parts but not all of the software testing process. If, for example, an oracle cannot reasonably be
created, or if fully automated tests would be too difficult to maintain, then a software tools
engineer can instead create testing tools to help human testers perform their jobs more efficiently.
Testing tools can help automate tasks such as product installation, test data creation, GUI
interaction, problem detection (consider parsing or polling agents equipped with oracles), defect
logging, etc., without necessarily automating tests in an end-to-end fashion.
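One concrete shape such a tool can take is a test-data generator: it automates nothing end-to-end, it simply produces input a human tester would otherwise create by hand. A minimal sketch, assuming CSV user records are the test data needed; the column names and format are invented for the example.

```python
import csv
import io
import random

def generate_test_users(n: int, seed: int = 0) -> str:
    # Seeded RNG so the generated data is reproducible across runs,
    # which matters when a defect must be reproduced later.
    rng = random.Random(seed)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["username", "age"])
    for i in range(n):
        writer.writerow([f"user{i:03d}", rng.randint(18, 90)])
    return buf.getvalue()

print(generate_test_users(3))
```

The human tester still drives the test session; the tool only removes the repetitive data-entry work.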

Test automation is expensive, and it is an addition to, not a replacement for, manual testing. It can be made cost-effective in the longer term, though, especially in regression testing. One way to generate test cases automatically is model-based testing, in which a model of the system is used for test case generation; research continues into a variety of methodologies for doing so.

In computer programming, unit testing is a method of testing that verifies that individual units of source code work properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, or procedure, while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class.

Ideally, each test case is independent of the others. Test doubles such as stubs, mocks, or fake objects, as well as test harnesses, can be used to assist in testing a module in isolation. Unit testing is typically done by software developers to ensure that the code they have written meets software requirements and behaves as the developer intended.
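A minimal sketch of a test double in Python, using the standard library's unittest.mock: price_with_tax is a hypothetical unit under test, and the Mock stands in for its real collaborator so the unit is exercised in isolation.

```python
from unittest.mock import Mock

def price_with_tax(catalog, item_id: str, rate: float) -> float:
    # Unit under test: it depends on a catalog collaborator to
    # look up the base price.
    return round(catalog.get_price(item_id) * (1 + rate), 2)

# Test double: a Mock replaces the real catalog service, so no
# database or network is needed to exercise the unit.
catalog = Mock()
catalog.get_price.return_value = 10.00

assert price_with_tax(catalog, "sku-1", 0.25) == 12.50
catalog.get_price.assert_called_once_with("sku-1")  # interaction check
print("unit test passed")
```

Because the collaborator is stubbed, a failure in this test points at price_with_tax itself, not at the catalog service behind it.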

The goal of unit testing is to isolate each part of the program and show that the individual parts
are correct. A unit test provides a strict, written contract that the piece of code must satisfy. As
a result, it affords several benefits. Unit tests find problems early in the development cycle.

Test-Driven Development (TDD) is a software development technique consisting of short iterations
where new test cases covering the desired improvement or new functionality are written first, then
the production code necessary to pass the tests is implemented, and finally the software is
refactored to accommodate changes. The availability of tests before actual development ensures rapid
feedback after any change. Practitioners emphasize that test-driven development is a method of
designing software, not merely a method of testing.
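The cycle can be sketched in a few lines. slugify here is a hypothetical example function, chosen only to illustrate the test-first order; it is not something from the text above.

```python
# Step 1 (red): the test is written first, before any implementation,
# and initially fails because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim  Me  ") == "trim-me"

# Step 2 (green): the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

# Step 3 (refactor): with the test in place, the implementation can
# now be restructured safely; the test guards against regressions.
test_slugify()
print("tests pass")
```

The point practitioners stress is the ordering: the test pins down the desired behavior before any production code is written, which is why TDD is described as a design method rather than merely a testing method.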

Test-Driven Development is related to the test-first programming concepts of Extreme Programming, begun in the late 20th century, but has more recently attracted broader interest in its own right.

Load testing generally refers to the practice of modeling the expected usage of a software program by
simulating multiple users accessing the program's services concurrently. As such, this testing is
most relevant for multi-user systems, often ones built using a client/server model, such as web
servers. However, other types of software systems can be load-tested as well. For example, a word
processor or graphics editor can be forced to read an extremely large document; or a financial
package can be forced to generate a report based on several years' worth of data. The most accurate
load testing occurs with actual, rather than theoretical, results.
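A toy load-test harness gives the flavor of the idea: several worker threads play the role of concurrent users, and per-request latencies are collected. fake_service is a stand-in for a real endpoint, not a real one.

```python
import threading
import time

def fake_service() -> None:
    time.sleep(0.01)  # simulated per-request processing time

def load_test(users: int, requests_each: int) -> list:
    latencies = []
    lock = threading.Lock()  # protects the shared latency list

    def user() -> None:
        # Each simulated user issues its requests in sequence.
        for _ in range(requests_each):
            start = time.perf_counter()
            fake_service()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()       # all users run concurrently
    for t in threads:
        t.join()
    return latencies

lat = load_test(users=5, requests_each=3)
print(len(lat), "requests completed")  # 15 requests completed
```

Real load-testing tools work the same way at much larger scale, ramping the number of simulated users up while recording throughput and latency.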

Stress testing is a form of testing that is used to determine the stability of a given system or
entity. It involves testing beyond normal operational capacity, often to a breaking point, in order
to observe the results.

In software testing, stress testing often refers to tests that put a greater emphasis on robustness,
availability, and error handling under a heavy load, rather than on what would be considered correct
behavior under normal circumstances. In particular, the goals of such tests may be to ensure the
software doesn't crash in conditions of insufficient computational resources (such as memory or disk
space), unusually high concurrency, or denial of service attacks.
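A toy sketch of that goal, assuming a bounded queue stands in for a resource-limited system: the test deliberately pushes far past capacity and checks that overload produces graceful rejection rather than a crash.

```python
import queue

# System under stress: a work queue with capacity for only 10 items,
# standing in for any resource-limited service.
work = queue.Queue(maxsize=10)

accepted = 0
rejected = 0
for i in range(1000):  # well beyond normal operational capacity
    try:
        work.put_nowait(i)
        accepted += 1
    except queue.Full:
        rejected += 1  # graceful degradation, not a crash

print(accepted, rejected)  # 10 990
```

The assertion of interest in a real stress test is exactly this: under extreme load the system keeps responding with controlled errors instead of corrupting state or falling over.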

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the general public to maximize feedback from the widest possible pool of future users.

The users of a beta version are called beta testers. They are usually customers or prospective
customers of the organization that develops the software. They receive the software for free or for
a reduced price, but act as free testers.