A wonderful tech talk by Gerard Meszaros, a consultant specialising in agile development processes. In this presentation Gerard describes a number of common problems encountered when writing and running automated unit and functional tests. He calls these problems “test smells”, talks about their root causes, and suggests possible solutions, which he expresses as design patterns for testing. While many of the practices he describes are directly actionable by developers or testers, it’s important to realise that many also require action from a supportive manager and/or system architect in order to be truly achievable.

We use many flavours of xUnit test frameworks in our development group at Talis, and we generally follow a Test First development approach. I found this talk beneficial because many of the issues Gerard describes are problems we have encountered, and I don’t doubt that every development group out there, including ours, can benefit from the insights he provides.

I’ve been doing some work this iteration on getting Selenium RC integrated into our build process, so we can run a suite of automated functional regression tests against our application on each build. The application I’m working on is written in PHP. Normally, when you use Selenium IDE to record a test script, it saves it as an HTML file.

For example, a simple test script that goes to Google and verifies that the text “Search:” is present on the screen and that the title of the page is “iGoogle” looks like this:

<html>

<head>

<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">

<title>New Test</title>

</head>

<body>

<table cellpadding="1" cellspacing="1" border="1">

<thead>

<tr><td rowspan="1" colspan="3">New Test</td></tr>

</thead><tbody>

<tr>

<td>open</td>

<td>/ig?hl=en</td>

<td></td>

</tr>

<tr>

<td>verifyTextPresent</td>

<td>Search:</td>

<td></td>

</tr>

<tr>

<td>assertTitle</td>

<td>iGoogle</td>

<td></td>

</tr>

</tbody></table>

</body>

</html>

You can choose to export the script in several other languages, including PHP, in which case the test script it produces looks like this:
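A Selenium IDE PHP export of the script above looks roughly like the following sketch. The class name, test-method name, browser string and base URL here are the IDE’s defaults and placeholders, so your export may differ slightly:

```php
<?php

require_once 'Selenium.php';
require_once 'PHPUnit/Framework/TestCase.php';

class Example extends PHPUnit_Framework_TestCase
{
    private $selenium;
    private $verificationErrors;

    public function setUp()
    {
        $this->verificationErrors = array();
        // Browser string and base URL are whatever you recorded against
        $this->selenium = new Testing_Selenium("*firefox", "http://www.google.com/");
        $this->selenium->start();
    }

    public function tearDown()
    {
        $this->selenium->stop();
    }

    public function testMyTestCase()
    {
        $this->selenium->open("/ig?hl=en");
        try {
            // verify* commands are exported as asserts wrapped in try/catch,
            // so a failure is recorded in $verificationErrors but does not
            // stop the test
            $this->assertTrue($this->selenium->isTextPresent("Search:"));
        } catch (PHPUnit_Framework_AssertionFailedError $e) {
            array_push($this->verificationErrors, $e->toString());
        }
        $this->assertEquals("iGoogle", $this->selenium->getTitle());
    }
}
?>
```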

The export produces a valid PHPUnit test case that uses the Selenium PHP Client Driver (Selenium.php). While the script is valid and will run, you do need to add a little more to it before the test will correctly report errors. As it stands, all errors captured during the test are added to an array called verificationErrors by catching the assertion exceptions that are thrown when an assert fails; in other words, if you ran this test as it is and it failed, you wouldn’t know! To correct this we need to do two things. Firstly, each assert needs a message added to it, which will be printed in the test report if the assert fails. Secondly, we need to modify the tearDown method so that once a test has run, it checks the verificationErrors array and, if any failures have occurred, fails the test. After making these changes the PHP test script looks like this:
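As a sketch, the two changes look something like this (the test-method name and the assert messages are illustrative, not the generated ones):

```php
public function testGoogleSearchPage()
{
    $this->selenium->open("/ig?hl=en");
    try {
        // A message on each assert so a failure is identifiable in the report
        $this->assertTrue($this->selenium->isTextPresent("Search:"),
            "Expected the text 'Search:' to be present on the page");
    } catch (PHPUnit_Framework_AssertionFailedError $e) {
        array_push($this->verificationErrors, $e->toString());
    }
    $this->assertEquals("iGoogle", $this->selenium->getTitle(),
        "Expected the page title to be 'iGoogle'");
}

public function tearDown()
{
    $this->selenium->stop();
    // Fail the test if any verification errors were collected during the run
    $this->assertEquals(array(), $this->verificationErrors,
        "Verification errors:\n" . implode("\n", $this->verificationErrors));
}
```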

Obviously, I have also given the PHP Class and test function slightly more meaningful names. Now you have a PHP Unit Test case that will use the Selenium PHP Client Driver with Selenium Remote Control to launch a browser, go to the specified URL, and test a couple of assertions. If any of those assertions fail, the tearDown method fails the test … pretty cool, right?

Well, now it gets better. Because the Selenium Client Driver has a published API which is pretty easy to follow, there’s no reason why you can’t just write test cases without using Selenium IDE; those who want to could even incorporate this into a TDD process. But for all this to hang together we need to be able to run a build on a Continuous Integration server which checks out the code, runs unit tests and Selenium regression tests against that code line, and passes the build only if all tests succeed.

We are currently using Ant and CruiseControl to handle our CI/automated build process. When running the automated suite of tests we need to ensure that the Selenium Remote Control server is also running, which creates some complications. The Selenium Remote Control server takes several arguments, which can include the location of a suite of HTML-based Selenium tests; this is really nice because the server will start, execute those tests and then exit. Unfortunately, you can’t invoke the server and pass it the location of a PHP-based test suite. This means you need to find a way to start the server, run your tests, and shut the Selenium server down once they are complete.
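For reference, invoking the server against an HTML-based suite looks something like this (the jar name and file paths will vary with your install):

```
java -jar selenium-server.jar -htmlSuite "*firefox" http://www.google.com tests/TestSuite.html results.html
```

The four arguments to -htmlSuite are the browser to launch, the base URL, the suite file to run, and the file to write results to; once the suite finishes, the server exits.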

Here are the Ant targets I have written to achieve this; if anyone can think of a better way of doing this I’d welcome feedback or suggestions. To run this example you’d simply enter the command “ant selenium”:
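A minimal sketch of the approach is below. The jar location, test-suite file name, port and shutdown command are placeholders you’d adapt to your own setup:

```xml
<target name="selenium" description="Start Selenium RC, run the PHP tests, shut the server down">
    <parallel>
        <!-- Start the Selenium RC server (jar path is a placeholder) -->
        <java jar="lib/selenium-server.jar" fork="true"/>
        <sequential>
            <!-- Give the server a moment to start listening -->
            <sleep seconds="2"/>
            <!-- failonerror must be false so we always reach the shutdown step;
                 the exit code is captured in regressionTest.result instead -->
            <exec dir="tests" executable="php" failonerror="false"
                  resultproperty="regressionTest.result">
                <arg line="AllTests.php"/>
            </exec>
            <!-- Ask the server to shut itself down over HTTP -->
            <get src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
                 dest="shutdown.txt" ignoreerrors="true"/>
            <!-- exit code 1 = test failures, 2 = errors -->
            <condition property="regressionTest.err">
                <or>
                    <equals arg1="${regressionTest.result}" arg2="1"/>
                    <equals arg1="${regressionTest.result}" arg2="2"/>
                </or>
            </condition>
            <fail if="regressionTest.err" message="Selenium regression tests failed"/>
        </sequential>
    </parallel>
</target>
```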

A couple of notes. The reason I use a conditional check at the end of the selenium target is that if the exec task that runs the PHP tests were set to failonerror=true, the build would never reach the next line, which shuts the Selenium RC server down. To ensure that always happens I have to set the exec to failonerror=false, which means I have to check the result of the exec myself: it returns 0 on success, 1 if there are test failures, and 2 if there were any errors (preventing a test from being executed). Hence the conditional check sets regressionTest.err if either of the latter two conditions is true.

Also, the server can take up to a second or so to start, and I can’t be sure precisely how long. I therefore use the Ant parallel task, which starts the server and runs the tests at the same time; the task that runs the tests sleeps for 2 seconds first, which should be more than enough time for the server to start. This all feels a little clunky, but at the moment it works very well.

In a nutshell, that’s how you integrate PHP-based automated Selenium regression tests into a continuous build.

I’m in the process of setting up a continuous integration environment for a new PHP project I’m starting. On our previous project, which was Java-based, we used the following tools to support similar requirements, allowing us to implement the project using a test-driven approach and to automate build generation:

It seems to work quite well; here’s the relatively simple Ant build script that controls it all.

<?xml version="1.0" encoding="UTF-8"?>
<project default="all" name="DummyProject" basedir=".">
    <target name="all" depends="clean, init, test, sniff"/>
    <target name="clean">
        <delete dir="doc/CodeCoverage"/>
        <delete dir="doc/UnitTestReport"/>
    </target>
    <target name="init">
        <mkdir dir="doc/CodeCoverage"/>
        <mkdir dir="doc/UnitTestReport"/>
    </target>
    <target name="test" description="Run PHPUnit tests">
        <exec dir="./" executable="TestRunner.bat" failonerror="true"/>
    </target>
    <target name="sniff" description="">
        <exec dir="./" executable="Sniffer.bat" failonerror="true"/>
    </target>
</project>

I’m currently running this on a Windows machine, although it’s trivial to change it to work in a *nix-based environment, which I’ll probably configure in the next day or so. I had a couple of problems installing PHP_CodeSniffer, though that was because I hadn’t installed PEAR properly. If you have any problems installing PHP_CodeSniffer under Windows, follow these instructions:

To install PEAR under Windows do the following, which assumes you have PHP 5.2.x installed in c:\php:

cd c:\php
go-pear.bat

The interactive installer presents you with some options, if you follow the defaults you should be fine.
Once PEAR has installed you can install PHP_CodeSniffer like this:

cd c:\php
pear install PHP_CodeSniffer-beta

This will download the PHP_CodeSniffer package and install it into your php/PEAR folder.

Once this is done you can check that it has installed by calling phpcs with the -h flag, which will produce the following:

C:\php>phpcs -h
Usage: phpcs [-nlvi] [--report=<report>] [--standard=<standard>]
    [--generator=<generator>] [--extensions=<extensions>] <file> ...
        -n            Do not print warnings
        -l            Local directory only, no recursion
        -v[v][v]      Print verbose output
        -i            Show a list of installed coding standards
        --help        Print this help message
        --version     Print version information
        <file>        One or more files and/or directories to check
        <extensions>  A comma separated list of file extensions to check
                      (only valid if checking a directory)
        <standard>    The name of the coding standard to use
        <generator>   The name of a doc generator to use
                      (forces doc generation instead of checking)
        <report>      Print either the "full" or "summary" report

In our development group at Talis we’ve been thinking a lot about how to test more effectively in an agile environment. One of my colleagues sent me a link to this excellent talk by Scott Ambler, which examines the Role of Testing and QA in Agile Software Development.

Much of the talk is really an introduction to agile development, which is worth listening to because Scott dispels some of the myths around agile and offers his own views on best practices, using some examples. It does get a bit heated around the 45-minute mark when he’s discussing database refactoring; some of the people in the audience were struggling with the idea he was presenting, which I felt was fairly simple. If you really want to skip all that, jump forward to the 50-minute mark where he starts talking about sandboxes. What I will say is that if you’re having difficulty getting agile accepted in your organisation, this might be a video to show your managers, since it covers all the major issues and benefits.

Here’s some of the tips he has with regard to testing and improving quality:

Do Test Driven Development: the unit tests are the detailed design; they force developers to think about the design. Call it just-in-time design.

Use Continuous Integration to build and run unit tests on each check-in to trunk.

Acceptance tests are primary artefacts. Don’t bother with a requirements document; simply maintain the acceptance tests, since the reality is that all a testing team will do is take each requirement and copy it into an acceptance test. Why introduce a traceability issue when you don’t need to? http://www.agilemodeling.com/essays/singleSourceInformation.htm

Use Standards and Guidelines to help ensure teams are creating consistent artefacts.

Code reviews and inspections are not a best practice. They are used to compensate for people working alone, not sharing their work, not communicating, poor teamwork and poor collaboration. “Guru checks output” is an anti-pattern. Working together, pairing, good communication and teamwork should negate the need for code reviews and inspections.

A short feedback loop is extremely important: the faster you can get test results and feedback from stakeholders, the better.

Testers need to be flexible, willing to pick up new skills, and able to work with others: they need to be generalising specialists. The belief emerging in agile is that there is no need for traditional testers.

Scott is a passionate speaker and very convincing; some of the points he makes are quite controversial yet hard to ignore, especially his argument that traditional testers are becoming less necessary. I’m not sure I agree with all his views, but he has succeeded in forcing me to challenge my own, which I need to mull over, and for that reason alone watching his talk has been invaluable.