We are building an application using a completely test-driven approach. As developers, we are very familiar with unit tests but haven't been exposed to integration / functional / acceptance tests. Hence this post.

The application exposes a web UI (HTML resources), which invokes a secured REST API (with JSON serialization), which then delegates to a core domain model (transactional application services and business entities) independent of any delivery mechanism. The REST API is to be made public eventually. Persistence is achieved using a relational database and an ORM. A standard Java / Spring / Hibernate technology stack is used.
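
To make the layering concrete, here is a minimal Spring sketch of what we mean; the class names, endpoint, and payload are invented for illustration, not our actual code:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// REST API layer: JSON in, HTTP status out, no business logic of its own.
@RestController
class UserController {
    private final UserService userService;

    UserController(UserService userService) {
        this.userService = userService;
    }

    @PostMapping("/users")
    ResponseEntity<Void> register(@RequestBody RegistrationRequest request) {
        userService.register(request.email(), request.password());
        return new ResponseEntity<>(HttpStatus.CREATED);
    }
}

record RegistrationRequest(String email, String password) {}

// Application service: transactional, independent of any delivery mechanism.
@Service
class UserService {
    @Transactional
    public void register(String email, String password) {
        // validate, create the business entity, hand it to the repository...
    }
}
```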

If we wanted to do pure TDD, we would be writing the following tests:

Selenium / WebDriver tests for the web UI

integration tests for the REST API

integration tests for the application service

integration tests for the repository (persistence)

unit tests for the web controllers (REST API)

unit tests for the application service

unit tests for the validator

By integration tests, I mean tests that target a fully functional application deployed on a production-like environment. By unit tests, I mean tests that target individual classes for which collaborators / dependencies have been replaced by test doubles (mocks, stubs, whatever).
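
As an illustration of the unit-test end of that spectrum, here is a sketch of an application-service test with the repository collaborator replaced by a Mockito mock; the types are simplified stand-ins, inlined so the example is self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class UserServiceUnitTest {

    interface UserRepository {                 // the collaborator to be doubled
        boolean existsByEmail(String email);
    }

    static class DuplicateEmailException extends RuntimeException {}

    static class UserService {                 // the class under test, simplified
        private final UserRepository repository;

        UserService(UserRepository repository) {
            this.repository = repository;
        }

        void register(String email, String password) {
            if (repository.existsByEmail(email)) {
                throw new DuplicateEmailException();
            }
            // ...otherwise create and persist the new user...
        }
    }

    @Test
    void rejectsRegistrationWhenEmailIsAlreadyTaken() {
        // The repository is a test double, so no database is involved:
        // the test exercises only the service's own logic.
        UserRepository repository = mock(UserRepository.class);
        when(repository.existsByEmail("bob@example.com")).thenReturn(true);

        UserService service = new UserService(repository);

        assertThrows(DuplicateEmailException.class,
                () -> service.register("bob@example.com", "secret"));
    }
}
```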

Clearly, every test has a different perspective (user-centric for web tests, API consumer-centric for API tests, developer-centric for the others) and validates different assertions (HTML element contents for web tests, HTTP responses and status codes for API tests, database contents and exceptions for application tests, edge cases for unit tests, etc.). But there's also a lot of overlap between all these tests, particularly when it comes to fixtures (e.g. a user cannot register if another user with the same email is already registered). As such, we could avoid writing some of these tests by focusing on the higher layers (e.g. only write the web test for the user registration with duplicate email case), but the lower layers would then be left uncovered, and exposed if they were ever reused by a different client (e.g. application services used by a batch client).
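
For instance, the duplicate-email rule above could also be exercised from the user's perspective with WebDriver; the URL, element IDs, and error message here are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class RegistrationWebTest {

    @Test
    void showsErrorWhenEmailIsAlreadyRegistered() {
        WebDriver driver = new ChromeDriver();
        try {
            // Assumes a fixture has already registered bob@example.com.
            driver.get("http://localhost:8080/register");
            driver.findElement(By.id("email")).sendKeys("bob@example.com");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // User-centric assertion: the rendered HTML, not an exception
            // or an HTTP status code.
            assertEquals("This email address is already registered.",
                    driver.findElement(By.id("registration-error")).getText());
        } finally {
            driver.quit();
        }
    }
}
```

Same fixture, same business rule, but a different perspective and a different assertion.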

So we end up wondering how useful writing all these tests will be and how we can be more efficient in our approach. What is your take on this?

I realize this is more a discussion-type question than one that has a clear and verifiable answer, so if this is not the appropriate forum, would you please direct me to the best forum for these discussions?

Where you can string together sequences of unit-level actions into functional tests, do so. If you code each functional action as a function, for example one that clicks a link, and feed it the page, the link information, and the expected result from a text file, you'll need precisely one unit of code to handle clicking a link, and you can reuse it in multiple locations for process-flow testing, as sketched below. Similar routines for combo selection, button clicking and so forth mean that you can minimize the duplicated effort.
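
A minimal sketch of that idea with WebDriver, driving one reusable link-clicking routine from a plain-text file; the file name and pipe-separated format are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class LinkFlowRunner {

    // Precisely one unit of code that knows how to click a link and
    // check where it leads; every flow in the data file reuses it.
    static void clickAndVerify(WebDriver driver, String page,
                               String linkText, String expectedTitle) {
        driver.get(page);
        driver.findElement(By.linkText(linkText)).click();
        if (!expectedTitle.equals(driver.getTitle())) {
            throw new AssertionError("On " + page + ", link '" + linkText
                    + "' led to '" + driver.getTitle()
                    + "' instead of '" + expectedTitle + "'");
        }
    }

    public static void main(String[] args) throws IOException {
        WebDriver driver = new ChromeDriver();
        try {
            // Each line of link-flows.txt: page|link text|expected title
            for (String line : Files.readAllLines(Path.of("link-flows.txt"))) {
                String[] parts = line.split("\\|");
                clickAndVerify(driver, parts[0], parts[1], parts[2]);
            }
        } finally {
            driver.quit();
        }
    }
}
```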

If multiple test runs are exercising the same part of the application, make the data validation part of exactly one run.

Document your testing as you go. I'm working with a script codebase that has over half a million lines of code, almost as many lines of data, and performs about as many test cases, most of it poorly documented if there's any documentation at all. Working out which bit does what is such a pain that there's a lot of duplication in there, and the legacy effect is nightmarish.

Writing and maintaining automated tests is a big investment, and it is OK to start slowly

It is expensive to write and maintain automated tests. If the tests are written in an ineffective way, the investment may not pay off. Similarly, if the tests require a great deal of maintenance, either because of how the tests are written or because the interfaces under test are changing quickly, your investment may not pay off. As with any big investment, it is worthwhile to ease into the process so that you have time to experiment and to recover from mistakes.

Automated UI tests are especially fragile, and you still need to test manually

In my experience, UI tests tend to be more fragile than API-level tests. Browsers continue to evolve, and it seems to me that every new browser release requires changes to Selenium, and sometimes changes to my tests as well. If the HTML does not use element IDs in the appropriate places, your tests must identify elements by less permanent means.
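
For example, a locator tied to an element ID tends to survive markup changes that a position-based locator will not; the selectors here are illustrative:

```java
import org.openqa.selenium.By;

class RegistrationPageLocators {

    // Stable: survives restyling and reordering as long as the ID is kept.
    static final By EMAIL_FIELD = By.id("email");

    // Fragile: breaks the moment the surrounding markup changes.
    static final By EMAIL_FIELD_BY_POSITION =
            By.xpath("/html/body/div[2]/form/table/tbody/tr[1]/td[2]/input");
}
```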

On top of that, some aspects of the UI will still require human scrutiny: layout, wording, use of color, fonts, graphics, and usability.

Testing is a means to an end, not an end in itself

Our jobs are difficult and complex, and it is tempting to look for absolute answers that will solve problems unconditionally. However, each developer and each project is different. Some developers like to code fast and use automated tests to find mistakes. Others may think for a long time before writing a single line of code, and consequently make fewer mistakes. As long as the quality of their work is high, it does not matter how they go about their jobs.

Moreover, even for the same developer, there may be variation in the quality of their work depending on what they are working on.

I once believed that everyone should write automated tests for everything, but after working with a variety of teams of different sizes, skills and experiences, I believe the right answer is more complicated and messy.

Consider writing automated tests when any other kind of testing is time-consuming, error-prone, and/or repetitive

Some problems lend themselves more readily to automated testing than others. Focusing on areas that are clearly difficult to test by any other means will help you make good investments in your valuable time.

Talk to each other

Some duplication of effort is inevitable if you are ambitious about testing, but you can reduce it by talking to each other about what you test.

I wouldn't worry about redundant tests at first. It's much better to test a piece of code twice than not at all. The only way to accurately determine whether your tests are redundant is with a code coverage tool (or extensive logging). However, as Kate Paulk states, there are no magic/silver bullets in testing, or in software development in general (of course!). And adding a code coverage step adds time and complexity.

However, I completely agree that soup-to-nuts testing is useful. I wouldn't trust a test/dev team that believed modular/unit/integration testing was sufficient to validate the stability of a system. The big question is how much of that effort you should automate, and when. I know it sounds completely primitive, but you might get better results with targeted manual testing plus more of the automated testing you already have in place. System testing is a development project in and of itself.

Anyway, if you start out with a general plan for your test harness, and you add in some coverage analysis, the refactoring should become obvious somewhere down the line. Also, another caveat, beware of testers/developers/contractors who want to build system level tests without having a general idea of how the system should work. You will most likely end up with a pile of scripts that consume resources and reveal zero regressions.