
Getting started with Acceptance-Test Driven Development


This article is for you if you want a concrete example of how to get started with acceptance-test driven development on an existing code base. It is part of the solution to technical debt.

This is a real-life example, warts and all, not a polished textbook example. So get your trench boots on. I will stay with just Java and JUnit – no fancy third-party testing frameworks (which tend to be overused).

Disclaimer: I don’t claim that this is The Correct Way, there are many other “flavors” of ATDD out there. Also, there’s not much new or innovative stuff in this article, it’s just well-established practices and hard-earned experience.

What I wanted to do

A few days ago I sat down to build a password-protect feature for webwhiteboard.com (my pet project). People have long been asking for a way to password-protect their online whiteboards, so it was time to get it done.

It sounds like a simple feature, but there are lots of design decisions to be made. So far, webwhiteboard.com has been based on anonymous usage and didn’t have any kind of accounts or sign-in or password stuff. Who should be able to protect a whiteboard? Who should be able to access it? What if I forget my password? How do we keep things simple yet secure enough?

The webwhiteboard code base has decent unit test and integration test coverage, but no acceptance tests; that is, tests that run through an end-to-end flow from the user's perspective.

Design considerations

The main design goal of web whiteboard is simplicity: to minimize the need for logins and accounts and other annoyances. So I set two design constraints for the password feature:

Setting a password on a whiteboard will require user authentication, but accessing a password-protected board will not. That is, a user who opens a protected whiteboard needs to enter the whiteboard password, but doesn’t need to “log in”.

Login will be done using a third-party OpenId/Oauth service provider, initially Google. That way, the user doesn’t have to create yet another user account.

Implementation approach

Lots of uncertainty here. I was unsure of how I wanted it to work, and even more unsure about how I wanted to implement it. So here was my approach (basically ATDD):

Step 1: Document the intended flow at a high level

Step 2: Turn it into an executable acceptance test

Step 3: Make the acceptance test run, but fail

Step 4: Make the acceptance test succeed

Step 5: Clean up the code

This is iterative, so at each step I may decide to go back and tweak a previous step (which I did very often).

Step 1: Document the intended flow

Suppose the feature is Done. An angel came and implemented it while I was asleep. Sounds too good to be true! How would I verify this? What is the first thing I would test manually? This:

I create a new whiteboard

I set a password on it.

Joe tries to open my whiteboard, is asked to enter a password.

Joe enters the wrong password and is denied access

Joe tries again, enters the right password, and gets access. (“Joe” is of course just me, using another web browser…).

Already as I wrote this little test script, I realized there are lots of alternative flows to take into account. But this was the main scenario. If I can get just this to work, I’ve come far.

Step 2: Turn it into an executable acceptance test

Here’s the tricky part. I have no other end-to-end acceptance tests, so how do I even start? This feature will interact with third-party authentication systems (my preliminary decision was to use Janrain) as well as databases, and there’s lots of tricky web stuff involved with popup dialogs and tokens and redirects and such. Ugh.

Time to take a step back. Before solving the problem “how do I write this acceptance test” I need to solve the more basic problem “how do I write acceptance tests at all, in this code base?”

To drive this question, I tried to identify the “simplest possible feature” that I could test, something that already works today.

Step 2.1: Write the Simplest Possible executable acceptance test

Here’s what I came up with:

Try to open a non-existing whiteboard

Check that I didn’t get a whiteboard

How would I implement this test? Which frameworks? Which tools? Should it involve the GUI, or bypass it? Should it involve client code or talk directly with the server?

Lots of questions. The trick is: don’t answer them! Just pretend it’s all been beautifully solved somehow, and write the test as pseudocode. Here it is.

public class AcceptanceTest {

    @Test
    public void openWhiteboardThatDoesntExist() {
        // 1. Try to open a non-existing whiteboard
        // 2. Check that I didn't get a whiteboard
    }
}

I run it, and it succeeds! Hurray! Er, no wait, that’s wrong! The first step in the TDD triangle (“Red-Green-Refactor”) is Red. So I need to make it fail, to prove that the feature needs to be built.

I better get on with writing some real test code. But nevertheless, the pseudocode got me moving in the right direction.

Step 2.2: Make the Simplest Possible acceptance test Red

To make this test real, I make up a class called AcceptanceTestClient, and I pretend it has magically solved all the questions and gives me a beautiful, high-level API for running my acceptance test. Using it is as simple as this:

client.openWhiteboard("xyz");
assertFalse(client.hasWhiteboard());

As I write that code, I’m essentially inventing an API that suits the exact needs of this test case. It should be about as many lines of code as the pseudocode.

Next, I use shortcut keys in Eclipse to auto-generate an empty version of AcceptanceTestClient and the methods I need.
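As a sketch, the auto-generated stub might look something like this – the method names come from the test code above, but the exact bodies are my assumption about what Eclipse typically produces:

```java
// Hypothetical sketch of the Eclipse-generated AcceptanceTestClient stub.
// The method names come from the acceptance test; the bodies are placeholders.
public class AcceptanceTestClient {

    public void openWhiteboard(String whiteboardId) {
        // TODO Auto-generated method stub
    }

    public boolean hasWhiteboard() {
        // TODO Auto-generated method stub
        return false;
    }
}
```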

In the acceptance test setup, I “short circuit” the system and cut out all frameworks, 3rd party services, and network communication.


So I create an AcceptanceTestClient that talks to the web whiteboard service in the same way that the real client does. The difference is behind the curtains.

The real client talks to the web whiteboard service interface, and this is running in a GWT environment which automatically turns requests into RPC calls and relays to the server.

The acceptance test client also talks to the web whiteboard service interface, but this one is directly connected to a local service implementation, no need for RPC and, hence, no need for GWT while running the tests.

Also, the acceptance test configuration replaces the MongoDB database (a cloud-based NoSQL database) with a fake in-memory database.

The reason for all this faking is to simplify the environment, make the tests run faster, and make sure the tests are testing the business logic isolated from all the framework and network stuff.

This may sound like a complicated setup, but in fact it’s pretty much just an init method with 3 lines.
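Here is a self-contained sketch of that wiring, under the assumption that the service implementation takes its database in the constructor. The class names follow the article, but all the bodies and signatures below are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for the real MongoDB database (invented for this sketch).
class FakeDatabase {
    private final Map<String, String> boards = new HashMap<>();
    String load(String id) { return boards.get(id); }
    void save(String id, String content) { boards.put(id, content); }
}

// The service interface that both the real client and the test client talk to.
interface WhiteboardService {
    String getWhiteboard(String id);
}

// Stand-in for the existing server-side implementation, backed by the fake database.
class WhiteboardServiceImpl implements WhiteboardService {
    private final FakeDatabase database;
    WhiteboardServiceImpl(FakeDatabase database) { this.database = database; }
    public String getWhiteboard(String id) { return database.load(id); }
}

// High-level API for the acceptance tests; the service is injected via the
// constructor, so the same class could run against a live-configured service.
class AcceptanceTestClient {
    private final WhiteboardService service;
    private String currentWhiteboard;
    AcceptanceTestClient(WhiteboardService service) { this.service = service; }
    void openWhiteboard(String id) { currentWhiteboard = service.getWhiteboard(id); }
    boolean hasWhiteboard() { return currentWhiteboard != null; }
}

public class AcceptanceTestSetupSketch {
    public static void main(String[] args) {
        // The "init method with 3 lines": fake database, local service, test client.
        FakeDatabase database = new FakeDatabase();
        WhiteboardService service = new WhiteboardServiceImpl(database);
        AcceptanceTestClient client = new AcceptanceTestClient(service);

        client.openWhiteboard("xyz");
        System.out.println("has whiteboard: " + client.hasWhiteboard());
    }
}
```

The point of the sketch is the three middle lines of main: no RPC, no GWT, no real database – just plain objects wired together.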

WhiteboardServiceImpl is the existing server-side implementation of the web whiteboard system.

Note that the AcceptanceTestClient now accepts a WhiteboardService instance in its constructor (a pattern known as “dependency injection”). This gives us a bonus side effect: it doesn’t care about the configuration. The same unmodified AcceptanceTestClient class can be used to test against the live environment, simply by passing in a live-configured instance of WhiteboardService.

So in summary, the AcceptanceTestClient mimics what the real web whiteboard client does, while providing a high-level API towards the acceptance tests.

You might be wondering: “why do we need an AcceptanceTestClient when we already have a WhiteboardService we could talk to directly?” There are two reasons:

The WhiteboardService API is more low-level. AcceptanceTestClient implements exactly the methods needed by the acceptance tests, and in exactly the way that will make them as easy-to-read as possible.

AcceptanceTestClient hides stuff that the test code doesn’t need, for example the concept of WhiteboardEnvelope, the createIfMissing boolean, and other lower-level details. In reality there are more services involved too, such as a UserService and WhiteboardSyncService.

I’m not going to bore you with more details on the AcceptanceTestClient code, since this article isn’t about the internal plumbing of web whiteboard. Suffice to say, AcceptanceTestClient maps the needs of the acceptance tests to the lower level details of interacting with the whiteboard service interfaces. This was easy to implement, since the real client code effectively serves as a how-do-I-interact-with-the-service tutorial.

Actually, I didn’t write any production code for this (since the feature already exists and works) – it was just test framework code. But I nevertheless spent a few minutes cleaning it up: removing duplication, making method names clearer, and so on.

Finally I add one more test, just for the sake of completeness, and because it’s so easy :o)

@Test
public void passwordProtect() {
    // 1. I create a new whiteboard
    // 2. I set a password on it.
    // 3. Joe tries to open my whiteboard, is asked to enter a password.
    // 4. Joe enters the wrong password and is denied access
    // 5. Joe tries again, enters the right password, and gets access.
}

And now, again, I write the test code while pretending that AcceptanceTestClient has everything I need, in exactly the way I need it. I find this technique immensely useful.

This code took just a few minutes to write, because I could just make up things as I went along. Almost none of these methods actually exist in AcceptanceTestClient (yet).

As I wrote this code, I had to make a number of design decisions. No need to think too hard, just do the first thing that comes to mind. Perfect is the enemy of good enough, and right now I want just good enough, which means a runnable test that fails. Later, when the test runs and is green, I will refactor and think harder about the design.
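To give a feel for it, here is a self-contained sketch of roughly how the filled-in scenario might read. Every class and method name below (PasswordClient, AccessDeniedException, and so on) is invented for illustration, since the real webwhiteboard code isn't shown:

```java
import java.util.HashMap;
import java.util.Map;

// Invented exception type for the sketch.
class AccessDeniedException extends RuntimeException {}

// Minimal stand-in for AcceptanceTestClient with password support.
class PasswordClient {
    private final Map<String, String> passwords;
    private boolean hasWhiteboard;
    PasswordClient(Map<String, String> passwords) { this.passwords = passwords; }

    void createWhiteboard(String id) { /* kept trivial in this sketch */ }
    void setPassword(String id, String password) { passwords.put(id, password); }
    boolean isAskedForPassword(String id) { return passwords.containsKey(id); }
    void openWhiteboard(String id, String password) {
        if (!password.equals(passwords.get(id))) throw new AccessDeniedException();
        hasWhiteboard = true;
    }
    boolean hasWhiteboard() { return hasWhiteboard; }
}

public class PasswordProtectSketch {
    public static void main(String[] args) {
        Map<String, String> sharedState = new HashMap<>();
        PasswordClient me = new PasswordClient(sharedState);
        PasswordClient joe = new PasswordClient(sharedState);

        // 1. I create a new whiteboard, 2. I set a password on it.
        me.createWhiteboard("board1");
        me.setPassword("board1", "secret");

        // 3. Joe tries to open my whiteboard, is asked to enter a password.
        System.out.println("asked for password: " + joe.isAskedForPassword("board1"));

        // 4. Joe enters the wrong password and is denied access.
        try {
            joe.openWhiteboard("board1", "wrong");
        } catch (AccessDeniedException expected) {
            System.out.println("wrong password denied: true");
        }

        // 5. Joe tries again, enters the right password, and gets access.
        joe.openWhiteboard("board1", "secret");
        System.out.println("has whiteboard: " + joe.hasWhiteboard());
    }
}
```

Note the try/catch around the wrong-password step: that kind of plumbing is exactly what tends to clutter a first draft of the test.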

It’s very tempting to start cleaning up the test code now, especially refactoring out those ugly try/catch statements. But part of the discipline of TDD is to get to green before you start refactoring – the tests will protect you as you refactor. So I decided to postpone the cleanup.

Step 3: Make the acceptance test run, but fail

Following the testing triangle, the next step is to make the test run but fail.

Again, I use Eclipse shortcuts to have it create empty versions of all the missing methods. Very nice. Run the test and, voila, we have Red!

Step 4: Make the acceptance test green

Now I’ve got a bunch of production code to write. I’m adding several new concepts to the system. As I do this, some of the code I add is non-trivial, so it needs to be unit-tested. I do that using TDD. Same as ATDD, but on a smaller scale.

Here’s how ATDD and TDD fit together. Think of ATDD as an outer cycle:

For each loop around the acceptance test cycle (at a feature level), we do multiple loops of the unit test cycle (at class & method levels).

So although my high-level focus is getting the acceptance test to Green (which can take a few hours), my low-level focus is for example getting the next unit test to Red (which usually takes just a few minutes).

This isn’t hardcore “Leather & Whip TDD”. It’s more like “at least make sure that unit tests & production code are in the same commit”, and that commits happen several times per hour. You might call that TDD-ish :o)

Step 5: Clean up the code

As usual, once the acceptance test is green, it’s cleanup time. Never skimp on that step! It’s like doing the dishes after having a meal – it’s quickest to do it right away.

I clean up not only the production code, but the test code as well. For example, I extract the messy try/catch stuff into a helper method and end up with a nice and clean test method.
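As an illustration of that kind of extraction, here is a self-contained sketch – the helper name and the client behavior are assumptions, not the actual webwhiteboard code:

```java
// Hypothetical sketch of extracting the try/catch into a helper, so the
// test body reads like the original pseudocode. All names are invented.
public class CleanupSketch {

    static class AccessDeniedException extends RuntimeException {}

    // Stand-in for the test client: rejects anything but the right password.
    static void openWhiteboard(String id, String password) {
        if (!"secret".equals(password)) throw new AccessDeniedException();
    }

    // The extracted helper: hides the messy try/catch from the test body.
    static boolean accessDenied(String id, String password) {
        try {
            openWhiteboard(id, password);
            return false;
        } catch (AccessDeniedException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("wrong password denied: " + accessDenied("board1", "wrong"));
        System.out.println("right password denied: " + accessDenied("board1", "secret"));
    }
}
```

With the helper in place, the test itself shrinks to one readable line per step of the scenario.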

My goal is to make the acceptance test so short, clean, and easy-to-read that comments are redundant. The original pseudocode comments act as a template – “here’s how clear I want this code to be!”. Removing the comments gives a sense of victory and, as a positive side effect, makes the method even shorter!

What next?

Rinse and repeat. Once I had the first test case working, I thought about what’s missing. For example, I said password protection should require user authentication. So I added a test for that, made it red, made it green, and cleaned up. And so on.

Here is a full list of tests that I’ve created for this feature (so far):

passwordProtectionRequiresAuthentication()

protectWhiteboard()

passwordOwnerDoesntHaveToKnowThePassword()

changePassword()

removePassword()

whiteboardPasswordCanOnlyBeChangedByThePersonWhoSetIt()

I’ll most certainly add more tests later, as I discover bugs or add new features.

All in all, this was about two days of effective coding. Much of it was spent going back and iterating on the code and design; it was not as linear as it might seem in this article.

What about manual testing?

I did lots of manual testing as well, after the automated tests were green. But since the automated tests cover the basic functionality and many of the edge cases, I could focus the manual testing on more subjective and exploratory stuff. How is the high level user experience? Does the flow make sense? Is it understandable? Where do I need to add help texts? Is the design aesthetically acceptable? I’m not trying to win any design awards, but I don’t want something monumentally ugly either.

A strong suite of automated acceptance tests removes the need for boring, repetitive manual testing (aka “monkey testing”), and frees up time for the more interesting and valuable type of manual testing.

Ideally I should have built automated acceptance tests from the beginning, so part of this is really just me paying off some technical debt.

Key take-away points

There, I hope this example was useful to you! It demonstrates a pretty typical situation – “I’m about to build a new feature, and it would be nice to write an automated acceptance test, but I don’t have any so far and I don’t know what frameworks to use or even how to get started”.

I really like this pattern, it’s gotten me unstuck so many times. In summary:

Write a very simple acceptance test for something that already works today (like just opening your application). Use it to drive your implementation of AcceptanceTestClient and the associated test configuration (such as faking the connection to databases and other external services).

Write the acceptance test for your new feature. Make it run but fail.

Make it green. While coding, write unit tests for any non-trivial stuff.

Refactor. And maybe write some more unit tests for good measure, or remove redundant ones. Keep the code squeaky clean!

Once you’ve done this, you’ve crossed the most difficult threshold. You’re up and running with ATDD!

Third party frameworks


Thanks for this fantastic article. I very much appreciate your detailed step-by-step approach.

Could you elaborate a bit on "I will stay with just Java and JUnit, no fancy third-party testing frameworks (which tend to be overused)"? I see a lot of value in sticking with the same language from product source code to automated test source code, but I find it very useful to use a third-party framework (Robot Framework in my case) nonetheless. It allows me to move from one project/product to another (in Java, C#, or whatever) and keep my "neutral" framework all the time. It is also useful for engaging non-technical testers to write/understand test cases written in the DSL of the framework. Your experience/advice on that matter would be appreciated.

The quick answer: it's all about quick feedback and maintainability. UI tests tend to be slow and brittle. You want fewer automated UI tests and more tests at the service layer, as explained in the example above.

Re: Third party frameworks


Thanks Michel for your answer, but using a third-party testing framework does not mean creating UI-driven tests. I am using Robot Framework to run tests against the REST API of my product, and it works fine. I am testing the same level of the pyramid that I would test with a Java unit/integration testing framework, but I find it more convenient. So I was curious about the motivations behind the "which tend to be overused".

Re: Third party frameworks


Here's what I meant about third-party frameworks being overused (things like Selenium, Mockito, Robot, etc.):

Your code will probably be around long after you've moved on. Every extra framework becomes a barrier to entry for new people, and a source of potentially confusing bugs (since frameworks evolve and you get weird version incompatibility and dependency issues).

Of course, if a new developer happens to know all the frameworks you use, and they are all up-to-date, then it becomes an advantage. But that's unlikely if you have more than a few frameworks in use. A framework is only as good as the developer using it, and if the developer doesn't know the framework well and is in a hurry to get done... well... corners will be cut and technical debt will pile up.

So I tend to stick with the core language as far as possible, with the exception of lightweight, super-common frameworks like JUnit. Use frameworks and third-party libraries for specific purposes, but avoid full-fledged frameworks until you really, really need them – when the pain of NOT having the framework becomes unbearable.