Introduction to Ant Part 4: Unit Tests with JUnit

Now that we have dependency management with
Ivy
working along with everything else covered before, we’ve covered almost
everything required to start building real projects with Ant.

Another thing any real project should have is unit tests. Thankfully,
using the scaffolding already put in place in earlier parts of this
series, integrating a JUnit testing task into our
existing build script is really straightforward.

First, we define a classpath dedicated to tests, which includes all the
Ivy-downloaded libraries for testing, as
well as the actual compiled test classes themselves.
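A sketch of such a path definition might look like the following. Note
that the property names (lib.test.dir, build.dir, build.test.dir) and
the compile.classpath reference are illustrative assumptions standing
in for whatever was defined in the earlier parts of the series:

```xml
<!-- Classpath for compiling and running tests.
     Property names here are assumptions; substitute the ones
     defined earlier in your own build script. -->
<path id="test.classpath">
    <!-- everything the main code needs -->
    <path refid="compile.classpath"/>
    <!-- test-only libraries downloaded by Ivy -->
    <fileset dir="${lib.test.dir}" includes="**/*.jar"/>
    <!-- compiled application classes and compiled test classes -->
    <pathelement location="${build.dir}"/>
    <pathelement location="${build.test.dir}"/>
</path>
```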

It may not seem immediately useful to have a dedicated directory/path
for testing libraries; however, before long you'll likely come across
various libraries and frameworks (aside from JUnit itself) which assist
with and facilitate things like mocking, in-memory database testing,
etc., and which you'll need to exclude from your distributable builds.
It's neater to manage these dependencies separately.

Next comes the build target which actually executes the junit
task to run our test
cases.
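A minimal sketch of such a target is shown below. The target and
property names other than those discussed in the text (run-tests,
compile-tests, src.test.dir, test.classpath) are assumptions, not
something fixed by the article:

```xml
<!-- Sketch of a test-execution target; depends-on target name,
     directory properties and classpath id are assumptions. -->
<target name="run-tests" depends="compile-tests"
        description="execute the junit test cases">
    <junit printsummary="yes"
           errorproperty="tests.errors"
           failureproperty="tests.failures">
        <classpath refid="test.classpath"/>
        <!-- plain-text output, console only -->
        <formatter type="plain" usefile="false"/>
        <batchtest>
            <fileset dir="${src.test.dir}">
                <include name="**/*Test.java"/>
            </fileset>
        </batchtest>
    </junit>
</target>
```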

A couple of attributes are being set here, but the most important ones
in this particular script are errorproperty and failureproperty. Ant
will set properties with the names specified in the event of a test
error or failure. We can use these in the test target described below
to alter the behavior of the script based on what happened within the
tests.

The formatter element used here indicates that we want plain-text
output, printed only to the console rather than sent to a file.
Although I'm not using it in this particular script, a common
use-case would be to set the format type to xml, and use the todir
attribute of the batchtest element to have the test reports written to
XML files, which may then be presented as fancy HTML reports by a
build server such as Jenkins, which is
incredibly useful.
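That alternative setup might look something like this sketch, where
the report.dir property is an assumed name for wherever you want the
XML reports written:

```xml
<!-- Alternative: XML reports on disk for a CI server to pick up.
     ${report.dir} is an assumed property name. -->
<formatter type="xml"/>
<batchtest todir="${report.dir}">
    <fileset dir="${src.test.dir}">
        <include name="**/*Test.java"/>
    </fileset>
</batchtest>
```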

Finally, we are using the batchtest tag to indicate we want to run a
batch of tests. The fileset uses the test source directory we defined,
and will only process test classes whose names end in ...Test.java.
This is useful since you may want to include helper, utility and mock
classes within the test sources path, which JUnit would typically
choke on, as it looks for tests in every file within the fileset it is
given.

<!-- Run tests, exiting with status code 1 on error, or 2 on test failure -->
<target name="test" depends="run-tests"
        description="run junit tests, and fail the build on error or failure">
    <fail if="tests.errors" message="Error encountered while executing tests" status="1"/>
    <fail if="tests.failures" message="Tests failed" status="2"/>
</target>

The above is the final change to build.xml, the actual test target itself.

Used in combination with the attributes passed to the junit task
defined in the run-tests target, this target allows us to fail the
build with varying exit codes, letting automated build tools react or
report differently based on the outcome of the tests, without needing
to parse logs or test reports. Mostly this is a convenience feature.

That’s it for the build.xml changes, now a minor adjustment to
ivy.xml to download JUnit for us:
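A sketch of what that ivy.xml could look like is below. The
organisation, module name and JUnit revision are illustrative
assumptions; only the test configuration and the JUnit dependency
itself are the point here:

```xml
<!-- Sketch of ivy.xml with a test configuration.
     organisation, module and rev values are placeholders. -->
<ivy-module version="2.0">
    <info organisation="org.example" module="my-project"/>
    <configurations>
        <conf name="default"/>
        <!-- test-only dependencies live in this configuration -->
        <conf name="test" extends="default"/>
    </configurations>
    <dependencies>
        <dependency org="junit" name="junit" rev="4.12" conf="test->default"/>
    </dependencies>
</ivy-module>
```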

Pretty straightforward, with the addition of the test configuration.
Ivy
Configurations
offer a way of grouping different sets of dependencies for different
purposes. In our case, the test configuration can be used for all
test-related third-party dependencies, including JUnit itself! For
example, if you were developing a database application and wanted your
tests to run against an in-memory Derby database rather than something
external, you'd declare that dependency as part of the test
configuration, meaning it would not be included in the
default libraries used by your (probably) redistributable code.

That’s about all there is to it. You can now execute the following to
run unit tests in your project:
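Assuming the target names sketched above, that command would simply
be:

```shell
# invokes the "test" target, which depends on run-tests
ant test
```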