A module is not complete until you are certain that it works correctly.
The easiest way to ensure that this is the case, and to protect yourself
from introducing bugs in the future, is to create a test suite for your
code. Perl has a fantastic testing culture and many great tools to make
testing easy.

Why write tests?

Writing code without tests is setting yourself up for failure. Even the
best programmers introduce errors into their code without realising it.
Even in this course so far, you have almost certainly made mistakes in
code you thought should work.

While it is possible to run your code by hand a number of times with
different inputs to see whether it behaves as we expect, writing tests
allows us to automate this process. This increases the number of tests we
can run, and prevents us from forgetting any.

As a general rule, our test suite will only ever grow. If we write a test that
exposes a bug, and then fix that bug, we keep the test, just in case a later
change introduces a similar bug.

What can we test?

We can test anything we can run. Typically, however, a lot of your testing will
be testing modules. These are easy. We load the module, and then we write
tests for each and every subroutine, to make sure that they return the expected
result for the inputs we specify.
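As a sketch of what that looks like, here is a minimal test script using Test::More (add() is a stand-in for a subroutine your module would export):

```perl
use strict;
use warnings;
use Test::More tests => 3;

# add() is a stand-in for a subroutine your module would provide.
sub add {
    my ( $x, $y ) = @_;
    return $x + $y;
}

is( add( 2, 3 ),  5, 'add() sums two positive numbers' );
is( add( -2, 2 ), 0, 'add() handles a negative number' );
is( add( 0, 0 ),  0, 'add() copes with zeroes' );
```

Each is() compares the subroutine's actual return value against the value we expect, and reports a named pass or fail.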

Testing the outcome of scripts can be a little more difficult, especially
if they make changes that may need to be reversed; however, the testing
principles remain the same.

Coding with testing in mind

It is possible to write code that is relatively easy to test, or code
that is almost impossible to test, and retrofitting tests to code that
was not written with testing in mind is difficult. We recommend writing
tests at the same time as you write your code, and keeping the following
rules in mind:

Keep each subroutine small: have it do one task, and do that task well.

Throw errors on failure. Use Carp to do this appropriately.

Where possible, return values rather than printing content out to STDOUT.

Write each subroutine as independently as possible.

Pass each subroutine all the arguments it needs, rather than have it rely on
values from elsewhere in the program, or environment. It's okay if it, in turn,
calls other subroutines so long as they too are written independently.
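A sketch of a subroutine written with these rules in mind (mean() is an illustrative example, not part of any particular module):

```perl
use strict;
use warnings;
use Carp;

# mean() follows the rules above: it is small, receives everything it
# needs as arguments, returns a value instead of printing to STDOUT,
# and croaks (via Carp) when called incorrectly.
sub mean {
    my @values = @_;
    croak 'mean() requires at least one value' unless @values;
    my $total = 0;
    $total += $_ for @values;
    return $total / @values;
}
```

Because mean() depends only on its arguments and reports results through its return value, a test can call it with known inputs and check the output directly.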

Testing Strategies

Testing cannot prove the absence of bugs, although it can help identify
them for elimination. It's impossible to write enough tests to prove that
your program is flawless. However, a comprehensive test plan with an
appropriate arsenal of tests can assist you in your goal of making your
program defect-free.

There are two typical testing paradigms, as follows:

Black box testing

You have a specification, and you test that the code meets that
specification. This doesn't require any knowledge of how the code works.
For example, if a date could be input, you might try each of the following:

Today's date

Yesterday, tomorrow

One year from now

One year ago

One hundred years from now

One hundred, two hundred or three hundred years ago

29 February, on a leap year

29 February, not on a leap year

32 January, any year

31 April, any year

Partial dates

Empty dates

Non-dates

Different date formats (20-01-2011 vs 20/01/2011).

Different date arrangements (20-01-2011 vs 01-20-2011)

The specification should state which of the above are valid, which are
invalid, and how each should be handled; the documentation should do the
same. For example, your program may only treat a date as valid if it
occurred in the last 150 years (in which case you'd test 149, 150 and 151
years ago, as well).
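A black-box test plan for dates might be sketched as below. is_valid_date() is hypothetical; a minimal stand-in implementation is included so the sketch runs on its own, but in practice you would test your real code against the specification:

```perl
use strict;
use warnings;
use Test::More;
use Time::Local qw(timegm);

# Hypothetical subroutine under test: a minimal stand-in that accepts
# YYYY-MM-DD and relies on Time::Local to reject impossible dates
# (Time::Local croaks on out-of-range days, including 29 February in
# a non-leap year).
sub is_valid_date {
    my ($date) = @_;
    my ( $y, $m, $d ) = ( $date // '' ) =~ /^(\d{4})-(\d{2})-(\d{2})$/
        or return 0;
    return eval { timegm( 0, 0, 0, $d, $m - 1, $y ); 1 } ? 1 : 0;
}

ok(  is_valid_date('2011-01-20'), 'an ordinary date is valid' );
ok(  is_valid_date('2012-02-29'), '29 February on a leap year' );
ok( !is_valid_date('2011-02-29'), '29 February off a leap year' );
ok( !is_valid_date('2011-01-32'), '32 January is rejected' );
ok( !is_valid_date('2011-04-31'), '31 April is rejected' );
ok( !is_valid_date(''),           'an empty date is rejected' );
ok( !is_valid_date('not a date'), 'a non-date is rejected' );
done_testing();
```

Note how each bullet in the plan above becomes one named test; the test script is simply the plan written down in executable form.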

White box testing

You know how the code works, so you test its edge cases. For example, if
you accept someone's address and then store it in a database field that
accepts up to 120 characters, you'd check:

A zero character address

An address containing 1 character

An address consisting of just 1 database meta-character, such as '

An address of 119 characters

An address of 120 characters

An address of 121 characters

An address containing 120 characters exactly but which also includes
several meta-characters such as ' which will need to be escaped for the
database

You'd then verify that the data you retrieve is exactly the same as the
data you stored, or that an error is raised as appropriate.
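This plan could be sketched as follows. store_address() and fetch_address() are hypothetical; an in-memory stand-in for the 120-character database column is included so the sketch runs without a real database:

```perl
use strict;
use warnings;
use Test::More;

# Hypothetical storage layer: an in-memory stand-in for a database
# column that accepts at most 120 characters.
my %table;

sub store_address {
    my ( $id, $address ) = @_;
    die "address too long\n" if length($address) > 120;
    $table{$id} = $address;
    return 1;
}

sub fetch_address { return $table{ $_[0] } }

# Zero characters, 1 character, and the lengths around the limit.
for my $len ( 0, 1, 119, 120 ) {
    my $address = 'x' x $len;
    store_address( $len, $address );
    is( fetch_address($len), $address, "round-trips $len characters" );
}

ok( !eval { store_address( 121, 'x' x 121 ) },
    'a 121-character address is rejected' );

# Exactly 120 characters, including meta-characters such as ' and "
# that would need escaping in a real database.
my $tricky = q{O'Brien's "address"} . ( 'x' x 101 );
is( length($tricky), 120, 'test data is exactly 120 characters' );
store_address( 'tricky', $tricky );
is( fetch_address('tricky'), $tricky, 'meta-characters round-trip' );
done_testing();
```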

Combining these ideas

It should be clear that a combination of white box and black box testing
strategies is required to give us a robust test suite. For example, in
the white box list above, we're not testing that the address is a real
address, just that our code handles whatever it is given correctly. If we
were to add address validation, however, then we'd look carefully at what
it validates and add extra tests for that (including tests we expect to
fail).

Running our test suite

If we've created a module with module-starter, we will have a number of
tests already created for us. These can be found in the t/ directory of
our distribution, for example My-Module/t. Let's look at the tests we
start with:

00-load.t boilerplate.t manifest.t pod-coverage.t pod.t

00-load.t

This tests that our module can be loaded without errors. It should be the
first test run, hence the name starting with the number 00.
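The generated file looks roughly like this. In your distribution it names your own module (e.g. use_ok('My::Module')); the core Carp module is substituted here so the sketch runs standalone:

```perl
use strict;
use warnings;
use Test::More tests => 1;

BEGIN {
    # In a real distribution this would be your own module, e.g.
    # use_ok('My::Module'); Carp stands in so the sketch runs anywhere.
    use_ok('Carp') || print "Bail out!\n";
}
```

use_ok() loads the module at compile time and fails the test (rather than crashing the whole run) if it cannot be loaded.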

boilerplate.t

This test warns the author against leaving boilerplate documentation in the
README, Changes and lib/module files.

Initially these tests are marked in a TODO block, to prevent their failure
from slowing you down.

manifest.t

This test checks that your MANIFEST file is up to date. The MANIFEST
lists every file your distribution contains, and this information is
essential if you are going to distribute your module to CPAN.
(Dependencies, by contrast, are declared in Makefile.PL rather than in
the MANIFEST.)
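For the My-Module distribution above, the MANIFEST might look like this (the exact file list depends on your Module::Starter version):

```
Changes
MANIFEST
Makefile.PL
README
lib/My/Module.pm
t/00-load.t
t/boilerplate.t
t/manifest.t
t/pod-coverage.t
t/pod.t
```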

pod.t

This tests that the POD in your files is valid.

pod-coverage.t

This tests whether you appear to have provided sufficient documentation
for your code. By default, every public subroutine (one whose name does
not begin with an underscore) must appear in a =head2, =head3 or =item
block.

Depending on your version of Module::Starter you may have additional
tests as well.

When we run perl Makefile.PL it creates a Makefile for us. Amongst other
things, this records the names of the tests that exist in our distribution.
If you have not added or removed any test files since you last ran
perl Makefile.PL then you do not need to run that command again before
running make test.

Conversely, whenever you do add or remove test files, you must remember to
run perl Makefile.PL before you run make test.
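The workflow looks like this (run from the top of the distribution; prove's -l flag, which adds lib/ to the search path, is one common choice):

```shell
# After adding or removing test files, regenerate the Makefile first:
perl Makefile.PL
make test

# prove needs no such step; it picks up whatever is in t/:
prove -l t/
```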

prove, by contrast, always runs the files that are in t/ at the time it
is invoked, so no such step is needed.