Wakaleo Consulting is pleased to announce that we will be running a new series of our popular course 'TDD, BDD and Testing Best Practice for Java Developers' in the upcoming months. The three-day intensive training workshops will be held in the following locations on these dates:

Melbourne - 22-24 August

Sydney - 5-7 September

Canberra - 12-14 September

This is a great course for Java developers looking to improve code quality, write more effective tests, focus development on core requirements, and create a higher quality product for the end user. Principal Wakaleo consultant John Smart will be presenting each of these courses.

When you run a Grails application from the command line (using grails run-app, for example), it will run on port 8080 by default. This also applies when Grails runs web tests, using tools such as Selenium, Canoo WebTest or HtmlUnit. If you have web tests in your application, Grails will automatically start up the application before running the tests, which is just what you want.

However, port 8080 is not always what you want. You may need to run your Grails app on a different port if 8080 has already been taken by another application, which is particularly common on build servers. The simplest solution is to set the server.port property on the command line, as shown here:

$ grails run-app -Dserver.port=8888

This is fine as far as it goes, but it has some limitations. Firstly, what if you always want to use port 8888 for your Grails apps? Secondly, if you are using Hudson with the Grails plugin, the plugin doesn't currently let you provide command-line options like this. A work-around is to invoke grails directly from the command line in Hudson, as shown here:

$ grails -Dserver.port=8888 test-app

Note that Grails is finicky about the position of command-line parameters - they need to go after the grails command and before the target.

Another solution is to change the default port used by Grails itself. In Grails 1.1 at least, the trick is to modify the $GRAILS_HOME/scripts/_GrailsSettings.groovy file, where you will find a reference to the default port (8080). Just change it to whatever you need, and Grails will always run on this new port. The modified entry might look like this:
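The modified entry is not shown in the original post; a sketch of roughly what it looks like in Grails 1.1 follows (the exact property name and helper method are from memory, so verify against your own _GrailsSettings.groovy):

```groovy
// In $GRAILS_HOME/scripts/_GrailsSettings.groovy:
// change the default value from 8080 to the port you want (here, 8888).
// server.port set on the command line still takes precedence.
serverPort = getPropertyValue("server.port", 8888).toInteger()
```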

"You fought in the Butler Wars?" "Yes, I was once a Hudson developer, the same as your father..."

Who, in our field, has not heard of Hudson? And who has not heard of the recent fork, which sent echoes of seismic proportions through the Open Source community? Much has been written, by both sides, about this fork, so I won't dwell on the causes and events here. Rather, I want to discuss the implications of the fork for developers, and the future paths of each product.

Hudson rose from a hobby project to become by far the most popular CI tool on the market in the space of just a few years, largely because of its intuitive interface, ease of use, fast development pace, and extensibility. Indeed, despite some (technically valid) criticism of the internal Hudson architecture, Hudson plugin development has proved easy enough for a host of community developers to write over 330 plugins, which have contributed greatly to Hudson's success.

Since the fork, we have two very similar products on the market. In the blue corner, we have Jenkins (née Hudson), led by Kohsuke Kawaguchi (the original author of Hudson) and other members of the Open Source community, commercially backed by CloudBees, and followed by, it would appear, the large majority of the current Hudson developer community. Jenkins is already showing signs of continuing the same tradition of a very fast development pace, innovative features and the broad support of the plugin developer community that the original Hudson enjoyed. And, in the red corner, we have Hudson the elder, backed by Oracle and supported by Sonatype, who are already undertaking some major under-the-hood changes.

It seems that a large number of the developers who wrote these plugins (the same developers who voted overwhelmingly for the name change from "Hudson" to "Jenkins") are now shifting their development focus to Jenkins. In theory, most plugins should continue to work on both versions of the product, and I have no doubt that this will remain a high priority for the development teams of both versions. What remains unclear is whether new plugin releases will be published to both update sites, or only to one (presumably the Jenkins update center).

The big wildcard, however, is Sonatype, who have chosen to side with Oracle and support the Oracle-branded Hudson version. One of the principal reasons behind this decision is certainly that this path makes it easier to integrate the (fairly significant) infrastructural changes that the Sonatype team would like to introduce to Hudson, in order to facilitate Hudson's integration with other Sonatype products, and thus provide a more integrated product suite for their clients. Fair enough indeed.

Indeed, most of the development work currently being done on Hudson (as opposed to Jenkins) seems to be carried out by, and at the initiative of, Sonatype developers. This work involves deep structural changes, in the same spirit as the work done to migrate Maven 2 to Maven 3. Knowing the Sonatype team, the work will no doubt be done with a strong emphasis on backward compatibility, regression testing and stability. However, these infrastructure changes run deep, and I suspect it will take a while for them to bear fruit.

Oracle, on the other hand, is proclaiming very loudly that they are the true representatives of "the community", perhaps a little too loudly to be truly convincing. However, community discussion, tweets, mailing lists, and a recent poll seem to indicate a strong community preference for Jenkins. Indeed, despite coming into official existence less than a month ago, Jenkins gained around 60% of the overall vote, over three times as many votes as the long-standing Hudson name, which came a distant second with around 17% of the votes.

The big question is, however, how many Hudson users have been actively following the Hudson/Jenkins fork, and how many of them will decide to go with Jenkins rather than staying with Hudson. The Jenkins developers have gone out of their way to make upgrading from Hudson to Jenkins easy and painless, but taking no action is always easier than taking action - people need a reason to make a change. The Hudson developers and plugin developers who voted massively for the name change make up only a small proportion of the Hudson user community - how accurately do they represent the broader Hudson user base, who may not be following the blogs, tweets and mailing lists so closely, and who may not even be aware that Jenkins exists? This is the user base that Oracle boldly (and some would say presumptuously) claims to represent.

Time will tell how well-founded this claim is. However, the Hudson user community does not seem to be made of the same stuff as many other user communities - the Hudson user is by definition very technical and close to the development community, more akin to a MySQL DBA than an OpenOffice user.

What proportion of the Hudson/Jenkins user base rely on the approval of managers who would only trust a product with a big name behind it, even for an internal development team? And what proportion are free to choose the tool they feel is most appropriate for their needs? My feeling is that a majority of Hudson users, like the developer community, will stay loyal to the principles that made Hudson so popular - ease of use, a fast development pace, and a broad and active developer community - and will therefore follow Jenkins. And, if Oracle lets them have their way, Sonatype will continue to develop a high-quality, more Maven-centric Hudson variant designed to integrate smoothly with the other Sonatype tools. Time will tell how accurate this picture is, of course.

In the light of these changes, the Continuous Integration with Hudson book, soon to be published by O'Reilly, will be renamed "Jenkins: The Definitive Guide", though most of the material will still apply equally well to both products.

Some other interesting discussions of the Hudson/Jenkins fork, and in particular of Sonatype's role, can be found here and here.

Thucydides version 0.9.268 has just been released, with a few very interesting new features. Thucydides is an open source reporting library that helps you write more effective BDD-style automated acceptance criteria, and generate richer test reports, requirements reports and living documentation. In this article, we will look at some of the new ways this version lets you handle work-in-progress or pending scenarios with Thucydides and JBehave.

In JBehave, a scenario is considered to pass if all of its step definitions are implemented, even if the implementations contain no code. This is because there is no obligation to use step libraries within the step definitions, though it is good practice for more complex tests. Consider the following scenario:

Scenario: Logging on via Facebook
Given Joe is a Frequent Flyer member
And Joe has registered online via Facebook
When Joe logs on with a Facebook token
Then he should be given access to the site

When you execute this with no step definitions, it will be reported as Pending, as illustrated here:

When you implement the steps, they will be considered successful unless an exception is thrown or a step is marked as pending. So the following will indeed pass:
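The passing implementation is not shown in the original post; a sketch of what empty-but-passing step definitions might look like follows, assuming JBehave's annotation-based step definitions (the class and method names are invented for illustration):

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class FacebookLoginSteps {

    @Given("Joe is a Frequent Flyer member")
    public void aFrequentFlyerMember() {}

    // 'And' steps take the type of the preceding step, so this is a @Given.
    @Given("Joe has registered online via Facebook")
    public void registeredViaFacebook() {}

    @When("Joe logs on with a Facebook token")
    public void logsOnWithFacebookToken() {}

    // No assertion and no exception thrown, so JBehave reports this
    // step - and therefore the whole scenario - as successful.
    @Then("he should be given access to the site")
    public void shouldBeGivenAccessToTheSite() {}
}
```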

One of the principal rules of Continuous Integration (and Continuous Delivery) is that you should never knowingly commit code that will break the build. When you practice test-driven development this is easy: you write a failing test (or, more precisely, a failing "executable specification"), make it pass, and then refactor as required. You only commit your code once you have refactored and run all of your unit tests, to ensure that you haven't inadvertently broken anything elsewhere in the code.

But acceptance tests typically require a lot more code than unit tests, and take a lot longer to implement. If you start with a failing automated acceptance test, you may have a failing test for hours or even days.

The general principle of CI still applies to automated acceptance tests - you should never knowingly commit code that breaks one on the build server. When people do this, it inevitably results in a continual stream of broken builds, which get ignored because broken builds are considered the normal state of affairs. There is no easy way to know whether a build is broken because of a regression or because of an "in-progress" acceptance test. In these circumstances, CI has very little value. The status reporting becomes unreliable. If real regressions occur, they are detected and fixed more slowly. And any attempt at Continuous Delivery becomes impossible, since you can never reliably know when a build is ready to be released into production.

Here are a few techniques that teams use to get around this problem:

Tagging the acceptance tests

One common approach used with tools like JBehave, Cucumber and SpecFlow is to tag the acceptance tests that are work in progress, and to configure the Continuous Integration build to only run the stories without the work-in-progress tag. For example, the following JBehave scenario uses the @wip tag to mark a scenario that is work-in-progress:

User Authentication
Narrative:
In order to prevent unauthorized use of member points
As the system admin
I want users to authenticate before they can access their account
Meta:
@wip
Scenario: Successful authentication
Given Jane is a registered Frequent Flyer
When Jane authenticates with a valid email address and password
Then Jane should be given access to her account

This approach manages the living documentation well enough, but some other aspects need to be considered when it comes to actually implementing the features.

Feature Branches

Many teams use short-lived, preferably local, branches to develop new features. This is fast, easy and common practice for teams using git. The Linux code base, for example, relies extensively on feature branches to develop and integrate new features.

For teams still on centralized version control systems, this option is less attractive, as branching and merging with tools like Subversion can be a painful process, and the concept of a local branch generally does not exist. But it can still be viable. The trick is not to let the branch live too long (say, more than a couple of days), because long-lived branches create a risk of integration issues down the track.

At the risk of stating the obvious, feature branches should also include the corresponding automated tests, whether they be unit, integration, acceptance, or any other automated tests that will be run on the build server. These are written and run locally, alongside the application code, and merged back into the master branch along with the application code when the feature is finished.

Incremental implementation

Another, preferable, approach is to break down the new feature into small pieces that can be built and delivered incrementally. Even if short-lived feature branches are often used for this sort of work (simply because they are convenient, and make it easier to experiment safely), the increments are completed quickly, often within a few hours, before being merged back into the master.

For bigger changes, you can use a slightly different approach. This usually involves building the new feature in isolation, maintaining the existing solution until you are ready to replace it completely. For example, suppose you need to replace a payment processing module in your application. This is a large chunk of work that you won't be able to do in one sitting. The first thing you do is to isolate the payment processing module, for example behind an interface (if you are using a dependency injection framework such as Spring or Guice, this may already be done as part of your normal development work). You then build an alternative implementation of the module, according to the new or modified requirements, using TDD to drive the design and implementation. Your new acceptance tests use the new module; once these all pass, you are ready to replace the old implementation with the new.
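The module-isolation idea can be sketched as follows (all of the class names here are hypothetical, not from the article): the interface lets the old and new implementations coexist until the switch-over.

```java
// Hypothetical names for illustration only.
interface PaymentProcessor {
    /** Returns a confirmation code for the processed payment. */
    String process(String accountId, double amount);
}

// The existing implementation stays in production while its replacement is built.
class LegacyPaymentProcessor implements PaymentProcessor {
    public String process(String accountId, double amount) {
        return "LEGACY-" + accountId;
    }
}

// The new implementation is developed and acceptance-tested in isolation;
// once its tests pass, it is swapped in wherever PaymentProcessor is used.
class NewPaymentProcessor implements PaymentProcessor {
    public String process(String accountId, double amount) {
        return "NEW-" + accountId;
    }
}
```

With a dependency injection framework, the final switch-over is then a one-line change to the binding.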

This approach is similar to the idea of "Feature Toggles" promoted by Martin Fowler, but much simpler to implement. It makes it feasible to work directly against the master branch, though it will not reduce the risk of integration issues if the development takes too long.

Conclusion

In both of these cases, the aim of the game is to never commit code that breaks a build, but at the same time to keep your code up to date with the latest changes in the code base.

Data-driven testing is a powerful way of testing a given scenario with different combinations of values. In this article, we look at several ways to do data-driven unit testing in JUnit.

Suppose, for example, you are implementing a Frequent Flyer application that awards status levels (Bronze, Silver, Gold, Platinum) based on the number of status points you earn. The number of points needed for each level is shown here:

| level  | minimum status points | result level |
| Bronze | 0                     | Bronze       |
| Bronze | 300                   | Silver       |
| Bronze | 700                   | Gold         |
| Bronze | 1500                  | Platinum     |
Our unit tests need to check that we can correctly calculate the status level achieved when a frequent flyer earns a certain number of points. This is a classic problem where data-driven tests would provide an elegant, efficient solution.
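The underlying logic can be sketched in plain Java (the class and method names are invented for illustration); the main method walks through the same table of values shown above in data-driven style:

```java
class FrequentFlyerStatus {

    // Thresholds taken from the status table above.
    static String statusLevelFor(int statusPoints) {
        if (statusPoints >= 1500) return "Platinum";
        if (statusPoints >= 700)  return "Gold";
        if (statusPoints >= 300)  return "Silver";
        return "Bronze";
    }

    public static void main(String[] args) {
        // Data-driven check: each row pairs a points total with the expected level.
        int[] points      = {0, 300, 700, 1500};
        String[] expected = {"Bronze", "Silver", "Gold", "Platinum"};
        for (int i = 0; i < points.length; i++) {
            String actual = statusLevelFor(points[i]);
            if (!actual.equals(expected[i])) {
                throw new AssertionError(points[i] + " points: expected "
                        + expected[i] + " but got " + actual);
            }
        }
    }
}
```

A JUnit parameterized runner lets you express exactly this table-plus-loop pattern declaratively, which is what the rest of the article explores.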

Data-driven testing is well supported in modern JVM unit testing libraries such as Spock and Specs2. However, some teams don't have the option of using these languages, or prefer to stay with plain JUnit.

Thucydides is an open source library designed to make practicing Behaviour Driven Development easier. Thucydides plays nicely with BDD tools such as JBehave, or even more traditional tools like JUnit, to make writing automated acceptance tests easier, and to provide richer and more useful living documentation. In this series of two articles, we will look at the tight one- and two-way integration that Thucydides offers with JIRA.

The rest of this article assumes you have some familiarity with Thucydides. For a tutorial introduction to Thucydides, check out the Thucydides Documentation or this article for a quick introduction.

Getting started with Thucydides/JIRA integration

JIRA is a popular issue tracking system that is also often used for Agile project and requirements management. Many teams store their requirements electronically in JIRA, in the form of story cards and epics.

Suppose we are implementing a Frequent Flyer application for an airline. The idea is that travellers will earn points when they fly with our airline, based on the distance they fly. Travellers start out with a "Bronze" status, and can earn a better status by flying more frequently. Travellers with a higher frequent flyer status benefit from advantages such as lounge access, prioritized boarding, and so on. One of the story cards for this feature might look like the following:

This story contains a description following one of the frequently used formats for user story descriptions ("As a... I want... so that"). It also contains a custom "Acceptance Criteria" field, where we can write down a brief outline of the "definition of done" for this story.

These stories can be grouped into epics, and placed into sprints for project planning, as illustrated in the JIRA Agile board shown here:

As illustrated in the story card, each of these stories has a set of acceptance criteria, which we can build into more detailed scenarios, based on concrete examples. We can then automate these scenarios using a BDD tool like JBehave.

The story in Figure 1 describes how many points members need to earn to be awarded each status level. A JBehave scenario for the story card illustrated earlier might look like this:

Frequent Flyer status is calculated based on points
Meta:
@issue FH-17
Scenario: New members should start out as Bronze members
Given Jill Smith is not a Frequent Flyer member
When she registers on the Frequent Flyer program
Then she should have a status of Bronze
Scenario: Members should get status updates based on status points earned
Given a member has a status of <initialStatus>
And he has <initialStatusPoints> status points
When he earns <extraPoints> extra status points
Then he should have a status of <finalStatus>
Examples:
| initialStatus | initialStatusPoints | extraPoints | finalStatus | notes |
| Bronze | 0 | 300 | Silver | 300 points for Silver |
| Silver | 0 | 700 | Gold | 700 points for Gold |
| Gold | 0 | 1500 | Platinum | 1500 points for Platinum |

Thucydides lets you associate JBehave stories or JUnit tests with a JIRA card using the @issue meta tag (illustrated above), or the equivalent @Issue annotation in JUnit. At the most basic level, this will generate links back to the corresponding JIRA cards in your test reports, as illustrated here:

For this to work, Thucydides needs to know where your JIRA server is. The simplest way to do this is to define the following properties in a file called thucydides.properties in your project root directory:
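The property listing is missing from the original post; a minimal sketch of what thucydides.properties might contain follows (the values are placeholders, and the jira.project key is an assumption based on the FH-17 issue key used above):

```
jira.url = https://my-jira-server
jira.project = FH
jira.username = jirauser
jira.password = t0psecret
```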

You can also set these properties in your Maven pom.xml file, or pass them in as system properties.

Thucydides also supports two-way integration with JIRA. You can also get Thucydides to update the JIRA issue with a comment pointing to the corresponding test result.

Feature Coverage

But test results only tell part of the story. If you are using JIRA to store your stories and epics, you can use these to keep track of progress. But how do you know which automated acceptance tests have been implemented for your stories and epics, and, equally importantly, which stories or epics have no automated acceptance tests at all? In agile terms, a story cannot be declared "done" until its automated acceptance tests pass. Furthermore, we need to be confident not only that the tests exist, but that they test the right requirements, and that they test them sufficiently well.

We call this idea of measuring the number (and quality) of the acceptance tests for each of the features we want to build "feature coverage". Thucydides can provide feature coverage reporting in addition to the more conventional test results. If you are using JIRA, you will need to add the thucydides-jira-requirements-provider dependency to the dependencies section of your pom.xml file:
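The dependency snippet is missing from the original post; a sketch of what it might look like follows (the group id is an assumption, so check it and the version against the Thucydides documentation):

```xml
<dependency>
    <groupId>net.thucydides.plugins.jira</groupId>
    <artifactId>thucydides-jira-requirements-provider</artifactId>
    <!-- use the version matching your Thucydides core -->
    <version>...</version>
</dependency>
```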

Now, when you run the tests, Thucydides will query JIRA to determine the epics and stories that you have defined, and list them in the Requirements page. This page gives you an overview of how many requirements (epics and stories) have passing tests (green), how many have failing (red) or broken (orange) tests, and how many have no tests at all (blue):

If you click on an epic, you can see the stories defined for the epic, including an indicator (in the "Coverage" column) of how well each story has been tested.

From here, you may want to drill down into the details about a given story, including what acceptance tests have been defined for this story, and whether they ran successfully:

Both JIRA and the JIRA-Thucydides integration are quite flexible. We saw earlier that we had configured a custom "Acceptance Criteria" field in our JIRA stories. We have displayed this custom field in the report shown above by including it in the thucydides.properties file, like this:

jira.custom.field.1=Acceptance Criteria

Thucydides reads the narrative text appearing in this report ("As a frequent flyer…") from the Description field of the corresponding JIRA card. We can override this behavior and get Thucydides to read the value from a different custom field using the jira.custom.narrative.field property. For example, some teams use a custom field called "User Story" to store the narrative text, instead of the Description field. We could get Thucydides to use this field as follows:

jira.custom.narrative.field=User Story

Conclusion

Thucydides has rich and flexible one- and two-way integration with JIRA. Not only can you link back to JIRA story cards from your acceptance test reports and display information about stories from JIRA in the test reports, you can also read the requirements structure from JIRA, and report on which features have been tested, and which have not.

In the next article in this series, we will learn how to insert links to the Thucydides reports into the JIRA issues, and how to actively update the state of the JIRA cards based on the outcomes of your tests.

Behaviour Driven Development is an increasingly popular Agile development practice that turns testing on its head. It turns automated acceptance testing from a verification activity, done once the development work is done, to a specification activity, with tester involvement starting from the word go.

In this talk, we will look at how Behaviour Driven Development radically changes the traditional tester role in Agile projects, and empowers them to contribute much more to the successful outcomes of the project. We will see how collaboratively written acceptance criteria help reduce assumptions and errors in the early phases of the project, and help ensure that the features being built are both well understood and valuable to the business.

We will look at ways to write more effective, easier to maintain automated acceptance tests. And we will see how automated and manual acceptance test reporting can be combined to provide valuable progress and release preparation reporting.

Behaviour-driven development (BDD) started as an improved variation on test-driven development, but has evolved to become a formidable tool that helps teams communicate more effectively about requirements, using conversation and concrete examples to discover what features really matter to the business. BDD helps teams focus not only on building features that work, but on ensuring that the features they deliver are the ones the client actually needs.

Learn what BDD is, and what it is not

Understand that the core of BDD is around conversation and requirements discovery, not around tools.

Understand the difference and similarities between BDD at the requirements level, and BDD at the coding level.

Learn what BDD tools exist for different platforms, and when to use them

Behavior Driven Development (BDD) is an approach that uses conversations around concrete examples to discover, describe and formalize the behavior of a system. BDD tools such as JBehave and Cucumber are often used to write automated acceptance tests for web applications. But BDD is also an excellent approach to adopt if you need to design a web service. In this article, we will see how you can use JBehave and Thucydides to express and automate clear, meaningful acceptance criteria for a RESTful web service. (The general approach would also work for a SOAP-based web service.) We will also see how the reports (or "living documentation", in BDD terms) generated by these automated acceptance criteria do a great job of documenting the web service.

Web services are easy to model and test using BDD techniques, in many ways more so than web applications. Web services are (or should be) relatively easy to describe in behavioral terms. They accept a well-defined set of input parameters, and return a well-defined result. So they fit well into the typical BDD style of describing behavior, using the given-when-then format:

Given some precondition
When something happens
Then a particular outcome is expected

In the rest of this article we will see how to describe and automate web service behavior in this way. To follow along, you will need Java and Maven installed on your machine (I used Java 8 and Maven 3.2.1). The source code is also available on GitHub. If you want to build the project from scratch, first create a new Thucydides/JBehave project from the command line like this:

mvn archetype:generate -Dfilter=thucydides-jbehave

Enter whatever artifact and group names you like; it doesn't make any difference for this example.

This will create a simple project set up with JBehave and Thucydides. It is designed to test web applications, but it is easy enough to adapt to work with a RESTful web service. We don't need the demo code, so you can safely delete all of the Java classes (except for the AcceptanceTestSuite class) and the JBehave .story files.

Now, update the pom.xml file to use the latest version of Thucydides, e.g.
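The property snippet is not shown in the original post; a sketch of what it might look like follows (the version number is purely illustrative, taken from the release mentioned elsewhere in these posts, so check for the latest one):

```xml
<properties>
    <!-- illustrative only: use the latest Thucydides release -->
    <thucydides.version>0.9.268</thucydides.version>
</properties>
```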

Once you have done this, you need to define some stories and scenarios for your web service. To keep things simple in this example, we will be working with two simple requirements: shortening and expanding URLs using Google's URL shortening service. We will describe these in two JBehave story files. Create a stories directory under src/test/resources, and create a sub-directory for each requirement, called expanding_urls and shortening_urls. Each directory represents a high-level capability that we want to implement. Inside these directories we place JBehave story files (expanding_urls.story and shortening_urls.story) for the features we need. (This structure is a little overkill in this case, but it is useful for real-world projects where the requirements are more numerous and more complex.) This structure is shown here:

The story files contain the BDD-style given-when-then scenarios that describe how the web service should behave. When you design a web service using BDD, you can express behavior at two levels (and many projects use both). The first approach is to describe the JSON data in the BDD scenarios, as illustrated here:

Scenario: Shorten Urls
Given a url http://www.google.com/
When I request the shortened form of this url
Then I should obtain the following JSON message:
{
"kind": "urlshortener#url",
"id": "http://goo.gl/fbsS",
"longUrl": "http://www.google.com/"
}

This works well if your scenarios have a very technical audience (i.e. if you are writing a web service purely for other developers), and if the JSON content remains simple. It is also a good way to agree on the JSON format that the web service will produce. But if you need to discuss the scenario with business people, BAs or even testers, and/or if the JSON you are returning is more complicated, putting JSON in the scenarios is not such a good idea. This approach also works poorly for SOAP-based web services, where the XML message structure is more complex. A better approach in these situations is to describe the inputs and expected outcomes in business terms, and then to translate these into the appropriate JSON format within the step definitions:

Scenario: Shorten URLs
Given a url <providedUrl>
When I request the shortened form of this url
Then the shortened form should be <expectedUrl>
Examples:
| providedUrl | expectedUrl |
| http://www.google.com/ | http://goo.gl/fbsS |
| http://www.amazon.com/ | http://goo.gl/xj57 |

Let's see how we would automate this scenario using JBehave and Thucydides. First, we need to write JBehave step definitions in Java for each of the given/when/then steps in the scenarios we just saw. Create a class called ProcessingUrls next to the AcceptanceTestSuite class, or in a subdirectory underneath this class.

The step definitions for this scenario are simple, and largely delegate to a class called UrlShortenerSteps to do the heavy lifting. This approach makes a cleaner separation of the what from the how, and makes reuse easier - for example, if we need to change the underlying web service used to implement the URL shortening feature, these step definitions should remain unchanged:
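The class itself is not shown in the original post; a sketch of what these step definitions might look like follows, assuming Thucydides' @Steps annotation and JBehave's annotation-based steps (the method names and the UrlShortenerSteps API are invented for illustration):

```java
import net.thucydides.core.annotations.Steps;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Named;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class ProcessingUrls {

    // Thucydides instantiates and instruments the step library for us.
    @Steps
    UrlShortenerSteps urlShortener;

    @Given("a url <providedUrl>")
    public void givenAUrl(@Named("providedUrl") String providedUrl) {
        urlShortener.providesUrl(providedUrl);
    }

    @When("I request the shortened form of this url")
    public void whenIRequestTheShortenedForm() {
        urlShortener.requestsShortenedForm();
    }

    @Then("the shortened form should be <expectedUrl>")
    public void thenTheShortenedFormShouldBe(@Named("expectedUrl") String expectedUrl) {
        urlShortener.shouldHaveShortenedFormOf(expectedUrl);
    }
}
```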

Now add the UrlShortenerSteps class. This class contains the actual test code that interacts with your web service. We could use any Java REST client for this, but here we are using the Spring RestTemplate. The full class looks like this:

The Spring RestTemplate class is an easy way to interact with a web service with a minimum of fuss. In the shorten() method, we invoke the urlshortener web service using a POST operation to shorten a URL:
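The snippet is not shown in the original post; a sketch of what the shorten() method might look like follows, using RestTemplate.postForObject() (the endpoint URL and the longUrl field follow Google's urlshortener API as used in the scenarios above, and the sketch assumes a JSON message converter such as Jackson is on the classpath):

```java
import java.util.HashMap;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class UrlShortener {

    private static final String SHORTEN_URL
            = "https://www.googleapis.com/urlshortener/v1/url";

    // POST a {"longUrl": ...} document and return the raw JSON response.
    public String shorten(String longUrl) {
        Map<String, String> request = new HashMap<String, String>();
        request.put("longUrl", longUrl);
        return new RestTemplate().postForObject(SHORTEN_URL, request, String.class);
    }
}
```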

In both cases, we return the JSON document produced by the web service, and verify the contents in the then step using the JSONAssert library. There are many libraries you can use to verify the JSON data returned from a web service. If you need to check the entire JSON structure, JSONAssert provides a convenient API to do so. JSONAssert lets you match JSON documents strictly (all the elements must match, in the right order), or leniently (you only specify a subset of the fields that need to appear in the JSON document, regardless of order).

The following step checks that the JSON document contains an id field with the expected URL value. The full JSON document will appear in the reports because it is passed as a parameter to this step.
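The step itself is not shown in the original post; a sketch of what a lenient JSONAssert check might look like follows (the method name and parameters are invented for illustration):

```java
import net.thucydides.core.annotations.Step;
import org.json.JSONException;
import org.skyscreamer.jsonassert.JSONAssert;

public class JsonChecks {

    // 'json' is the full document returned by the web service; because it is
    // a step parameter, it will appear in the Thucydides reports.
    @Step
    public void shouldContainShortenedUrl(String json, String expectedUrl)
            throws JSONException {
        // Lenient mode (third argument false): only the fields listed in the
        // expected document must be present and match; extra fields and
        // ordering in the actual JSON are ignored.
        JSONAssert.assertEquals("{\"id\": \"" + expectedUrl + "\"}", json, false);
    }
}
```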

You can run these scenarios using mvn verify from the command line: this will produce the test reports and the Thucydides living documentation for these scenarios. Once you have run mvn verify, open the index.html file in the target/site/thucydides directory. This gives an overview of the test results. If you click on the Requirements tab, you will see an overview of the results in terms of capabilities and features. We call this "feature coverage":

Drill down into the "Shorten URLs" test result. Here you will see a summary of the story or feature illustrated by this scenario:

And if you scroll down further, you will see the details of how this web service was tested, including the JSON document returned by the service:

BDD is a great fit for developing and testing web services. If you want to learn more about BDD, be sure to check out the BDD, TDD and Test Automation workshops we are running in Sydney and Melbourne this May!

John Ferguson Smart is a well-regarded consultant, coach, and trainer in technical agile practices based in Sydney, Australia. A prominent international figure in the domain of behaviour driven development, automated testing and software life cycle development optimisation, John helps organisations around the world to improve their agile development practices and to optimise their Java development processes and infrastructures. He is the author of several books, most recently BDD in Action for Manning.

Due to popular demand, we are now running our very popular Automated Web Testing with WebDriver workshop over two days. This workshop is an excellent way for both developers and testers to get up to speed with high quality automated web testing practices, in either Java or .NET.

Test automation is a vitally important skill for any tester today, but in my experience many testers do not have the training, experience or background to write automated tests as effectively as they could. This course will help them realise this potential. (Testers will need a basic understanding of programming in Java or .NET, or some experience in working with test automation scripts, to get the most out of the course).

This intensive two-day workshop teaches students how to write solid, reliable, and maintainable automated web tests using best-of-breed open source technologies like Selenium WebDriver and Thucydides. Students learn how to write tests more efficiently and more cleanly, in a way that both increases the number of scenarios they can automate and reduces ongoing maintenance costs considerably. Students also learn how to write automated web tests that provide better communication and reporting value, using BDD tools like JBehave and SpecFlow.

We are running the course over the next few months on the following dates:

Engage stakeholders more effectively to discover not just what they ask, but what will really help them!

These are just a few of the benefits of Behavior Driven Development (BDD), a powerful collaborative practice that helps teams not only deliver software that works, but also ensure that the features they deliver provide real business value, reducing errors, wasted effort and rework along the way.

* The "three amigos" meeting is a BDD collaborative practice where a business analyst, a developer and a tester work together to precisely define the expected outcomes of a set of requirements. This simple practice is a surprisingly effective way to eliminate misunderstandings and incorrect assumptions.

We have an exciting training schedule lined up for 2014, with the introduction of a new workshop on Automated Web Testing with Selenium/WebDriver 2 and Thucydides, and updated course material in our other BDD master classes. Here is the line-up of what's on offer over the next few months in Sydney and Melbourne.

BDD Requirements Workshop (1 day)

Behaviour Driven Development, Specification By Examples, Acceptance-Test Driven Development: call it what you will, it is the most effective way we know today to get teams imagining, designing and delivering high value products that will make a difference to your business.

Behaviour Driven Development (BDD) is an approach that helps teams focus on defining and delivering features with demonstrable business value. Teams using BDD think about requirements in terms of

An essential part of a well-written unit test is a well-written assertion. The assertion states the behavior you expect out of the system. It should tell you at a glance what a test is trying to demonstrate. It should be simple and obvious. You should not have to decipher conditional logic or sift through for loops to understand what it is doing. In addition, any non-trivial logic in a test case increases the risk of the test itself being wrong.
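To make this concrete, here is a minimal, dependency-free Java sketch (the list of colours and the helper names are invented for illustration) contrasting an assertion buried in loop-and-flag logic with one that states the expectation directly:

```java
import java.util.List;

public class AssertionStyleDemo {

    // Harder to read: the intent is buried in conditional logic,
    // and the failure message tells you almost nothing.
    public static void checkContainsClumsy(List<String> colours) {
        boolean found = false;
        for (String colour : colours) {
            if (colour.equals("red")) {
                found = true;
            }
        }
        if (!found) {
            throw new AssertionError("test failed");
        }
    }

    // Clearer: the assertion states the expectation in one line,
    // and the failure message describes exactly what went wrong.
    public static void checkContains(List<String> colours) {
        if (!colours.contains("red")) {
            throw new AssertionError(
                "Expected colours " + colours + " to contain 'red'");
        }
    }

    public static void main(String[] args) {
        List<String> colours = List.of("red", "green", "blue");
        checkContainsClumsy(colours);
        checkContains(colours);
        System.out.println("both checks passed");
    }
}
```

In a real test suite you would use a test framework's assertions rather than throwing AssertionError by hand; the point is that the second form can be read, and debugged, at a glance.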

Over recent years there has been a rise in the popularity of tools and techniques that make it easier to write more fluent code, both for production code and for tests. In the testing space in particular, there are many libraries that now support fluent assertions in different languages. Fluent assertions are simply ways of writing assertions in a more natural, more readable and more expressive manner.

There are two main flavors to fluent assertions. The first typically uses the word