Tuesday, 10 April 2012

Writing REST services with JAX-RS (and its reference implementation Jersey) is easy. A class annotated with @Path and some methods with @GET, @POST, ... annotations is enough for a fully functional REST service. Real-world applications, however, are more complex. There are request-filters for authorization and access control, context-providers for injecting data-access-objects, mappers that convert exceptions to appropriate HTTP responses, MessageBodyReaders and -Writers to convert JSON and XML to and from Java objects, and so on. All these components can (and should) be tested using unit-tests. But this is not enough. To be sure that these components work together correctly, integration tests are needed. These can be costly to run: they always need the full environment to be configured and running, and the more complex an application, the more complex it is to set up this environment (web-server, database, search-engine, message-queue, ...).
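To illustrate how little is needed, here is a minimal resource; the class name and path are my own for illustration, not taken from the example application:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A minimal, fully functional JAX-RS resource.
@Path("/hello")
public class HelloResource {

    // Answers GET /hello with a plain-text greeting.
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello, REST!";
    }
}
```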

The Jersey Test Framework offers the possibility to write lightweight integration-tests that do not need any external resources to be available. The web-container in which all components (resources, filters, mappers, ...) run is configured and started on-the-fly. Moreover, it is possible to provide mocks for data-access-objects and thus eliminate the need for external services.

The most interesting part of this example (and the part that demonstrates the need for integration tests on top of unit-tests) is the exception thrown in the removeTodo method. This exception is not caught in the TodoResource. It will be propagated and finally transformed into a 400 response by the following exception-mapper:
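The original listing is not reproduced here; a sketch of such a mapper could look like the following. The exception type and the message format are assumptions on my part:

```java
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Hypothetical exception type; in the example application it is thrown
// by the service layer when the requested todo does not exist.
class TodoNotFoundException extends RuntimeException {
    TodoNotFoundException(String todoId) {
        super(todoId);
    }
}

// Turns every uncaught TodoNotFoundException into a 400 response
// whose body names the missing todo.
@Provider
public class TodoNotFoundExceptionMapper
        implements ExceptionMapper<TodoNotFoundException> {

    @Override
    public Response toResponse(TodoNotFoundException e) {
        return Response.status(Response.Status.BAD_REQUEST)
                       .entity("todo not found: " + e.getMessage())
                       .build();
    }
}
```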

Testing the REST-service

Now we want to write tests for the todo service. Testing the get- and add-todo methods with the Jersey Test Framework wouldn't be much different from simple unit-tests. The power of the Jersey Test Framework becomes clear when testing the remove-todo method. When a user wants to delete a non-existent todo, we expect the service to return a 400 response. Ensuring this with standard unit-tests would be hard. The test case using the Jersey Test Framework is quite simple.
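The original listing is lost here; with Jersey 1.x, JUnit 4 and Mockito, such a test could be sketched roughly as follows. The package name, the todos/{id} path and the TodoService interface are assumptions:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import javax.ws.rs.core.Context;
import javax.ws.rs.ext.Provider;

import org.junit.Test;

import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.spi.inject.SingletonTypeInjectableProvider;
import com.sun.jersey.test.framework.AppDescriptor;
import com.sun.jersey.test.framework.JerseyTest;
import com.sun.jersey.test.framework.WebAppDescriptor;
import com.sun.jersey.test.framework.spi.container.TestContainerFactory;
import com.sun.jersey.test.framework.spi.container.grizzly2.web.GrizzlyWebTestContainerFactory;

public class TodoResourceTest extends JerseyTest {

    // Static, so that the inner provider class (instantiated by Jersey)
    // and the test methods see the same mock instance.
    static TodoService todoService = mock(TodoService.class);

    // Injects the mock wherever a TodoService is @Context-injected.
    @Provider
    public static class MockTodoServiceProvider
            extends SingletonTypeInjectableProvider<Context, TodoService> {
        public MockTodoServiceProvider() {
            super(TodoService.class, todoService);
        }
    }

    @Override
    protected AppDescriptor configure() {
        // Scan the package containing TodoResource, the exception-mapper
        // and the MockTodoServiceProvider (package name is hypothetical).
        return new WebAppDescriptor.Builder("com.example.todo").build();
    }

    @Override
    protected TestContainerFactory getTestContainerFactory() {
        return new GrizzlyWebTestContainerFactory();
    }

    @Test
    public void deletingAMissingTodoReturns400() {
        doThrow(new TodoNotFoundException("42")).when(todoService).removeTodo("42");

        ClientResponse response = resource().path("todos/42")
                                            .delete(ClientResponse.class);

        assertEquals(400, response.getStatus());
        assertEquals("todo not found: 42", response.getEntity(String.class));
    }
}
```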

Some explanations:
Because we do not want to connect to an external database, the TodoService has to be mocked. This is done by defining a provider that injects a mocked TodoService. Because we also want to configure the mock-object inside our test, the MockTodoServiceProvider is defined as an inner class of the test, and the mock-object is stored in a class variable of our test class.

The test is configured to use a GrizzlyWebTestContainer. See the last part of this blog-post for advantages and disadvantages of using other containers. The configuration of the test-container is done in the configure() method.

In the test method itself, the TodoService mock is instructed to throw a TodoNotFoundException when the removeTodo() method is called. A WebResource pointing to our test-container is created and a DELETE request is fired. If everything works fine, the result of this request must be a 400 error, and the response-body must contain the reason for the error.

In the same way, you can also test other components, like authorization-filters, access-control and response-mappers (Jackson or JAXB), without the need for an external environment to be present. Of course, there is also a downside to using this kind of test: it is rather slow. The on-the-fly setting up and tearing down of the web container is very expensive. Another disadvantage is that most test-containers use real system ports for their communication (the only exception is the InMemoryContainer, which has other shortcomings). These ports may be blocked by other applications, which causes the tests to fail. This is a problem when using helpers like Infinitest, where it can happen that multiple tests are run at the same time.

Integrated Client-Server-Tests

If there is also a Java-based client-implementation for the REST-service, this client can be used in Jersey tests, too. Our example TODO-service comes with such a client-implementation:
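The client listing is not reproduced here; a sketch of what it might look like follows. The class name, the paths and the error-body format are assumptions and must match what the server side produces:

```java
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

// Hypothetical Jersey 1.x client for the TODO service.
public class TodoServiceClient {

    private final WebResource base;

    public TodoServiceClient(String baseUrl) {
        this.base = Client.create().resource(baseUrl);
    }

    // Fires the DELETE request and translates a 400 response that was
    // caused by a missing todo back into a TodoNotFoundException.
    public void removeTodo(String todoId) {
        ClientResponse response = base.path("todos").path(todoId)
                                      .delete(ClientResponse.class);
        if (response.getStatus() == 400
                && response.getEntity(String.class).startsWith("todo not found")) {
            throw new TodoNotFoundException(todoId);
        }
    }
}
```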

The most interesting part of this client is again the removeTodo() method. It not only executes the HTTP request, but also checks whether the request failed because the todo to delete did not exist. This is done by checking the response-code and the response-body. This can be used to simplify the Jersey test:
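Sketched as a test method living inside the Jersey test class from the previous section (so the todoService mock and getBaseURI() are available there; the method and client names are my own):

```java
// Inside the JerseyTest subclass: the mock is configured as before,
// and the client turns the 400 response back into the exception.
@Test(expected = TodoNotFoundException.class)
public void removeTodoThrowsIfTodoDoesNotExist() {
    doThrow(new TodoNotFoundException("42")).when(todoService).removeTodo("42");

    // The client hides all HTTP details; the exception round-trips.
    new TodoServiceClient(getBaseURI().toString()).removeTodo("42");
}
```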

Now this test really cannot be called a unit-test anymore. In these few lines, we check that the TodoNotFoundException thrown by the TodoService is correctly converted into an HTTP response that our client understands and converts back into the appropriate TodoNotFoundException. If any of the involved components is changed without adapting the components that depend on it, the test will fail.

Tips and Tricks

Decide what type of container to use before writing tests

There are two kinds of containers available for the Jersey Test Framework: high-level servlet containers and low-level containers. Both have advantages and disadvantages.

The high-level servlet containers offer the full functionality of a servlet container, automatically injecting instances of HttpServletRequest, ... . If your application relies heavily on servlet-specific classes, these containers will be your first (and probably only) choice. The servlet functionality comes at a price: all implementations need to open system ports, which makes the tests more fragile and also a little bit slower. Another drawback of using real servlet containers in tests is that you don't have direct access to the instances of your resources and (context-)providers. To allow the use of mock-objects, you must work around this problem, for example by assigning context-objects to static fields, as we did with the mocked TodoService.

Low-level containers, on the other hand, allow you to directly modify the ResourceConfig used. This way, you have access to all instances (resources, providers, filters) used for the REST service, which greatly simplifies mocking. So if you don't rely on the servlet-api, you'll probably go for a low-level container.
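A configure() override for a low-level container might be sketched like this; that TodoResource takes its TodoService as a constructor argument is an assumption of mine:

```java
// Inside a JerseyTest subclass: register ready-made (possibly mocked)
// instances directly on the ResourceConfig -- no static-field tricks.
@Override
protected AppDescriptor configure() {
    ResourceConfig config = new DefaultResourceConfig();
    config.getSingletons().add(new TodoResource(mock(TodoService.class)));
    config.getSingletons().add(new TodoNotFoundExceptionMapper());
    return new LowLevelAppDescriptor.Builder(config).build();
}
```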

Do not use WebAppDescriptor for low-level containers

Although possible, I do not recommend using WebAppDescriptors for low-level containers. The reason lies in the method LowLevelAppDescriptor::transform(), which is used to transform a WebAppDescriptor into a LowLevelAppDescriptor when a low-level container is used. The method simply ignores all non-boolean init-params. Moreover, there is a bug when using the property com.sun.jersey.config.property.packages with multiple (colon-separated) package-names. Even if these shortcomings get fixed, you should not rely on the transform() method. The power of low-level containers lies in the possibility to directly modify the used ResourceConfig, which is only possible when using a LowLevelAppDescriptor.

Speedup jersey tests

Because the JerseyTest base class starts a new web-container before each test, the tests are rather slow. One possibility to speed them up would be to start a web-container only once per test-suite. An implementation of a base class doing this is included in the example-application.
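The actual implementation ships with the example application; one way to sketch the idea (admittedly a hack: the container is simply kept alive until the JVM exits) could be:

```java
import org.junit.After;
import org.junit.Before;

import com.sun.jersey.test.framework.JerseyTest;

// Sketch, not the example application's code: start the container for
// the first test only and never tear it down between tests.
public abstract class PerSuiteJerseyTest extends JerseyTest {

    private static boolean containerStarted;

    @Before
    @Override
    public void setUp() throws Exception {
        if (!containerStarted) {
            super.setUp();          // boots the test container once
            containerStarted = true;
        }
    }

    @After
    @Override
    public void tearDown() {
        // intentionally empty: keep the container running for the next
        // test; the JVM shutdown cleans it up
    }
}
```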

Extended InMemoryTestContainer

The InMemoryTestContainer is the only container that does not open any real ports on the system. Of course, being a low-level container, no servlet-specific functionality is available with it. But if you do not rely on the servlet-api too much, this container is the perfect choice for writing really fast and lightweight integration tests.
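Selecting it in a test is a one-liner (sketched for Jersey 1.x):

```java
import com.sun.jersey.test.framework.spi.container.TestContainerFactory;
import com.sun.jersey.test.framework.spi.container.inmemory.InMemoryTestContainerFactory;

// Inside a JerseyTest subclass: dispatch requests in-process, no ports.
@Override
protected TestContainerFactory getTestContainerFactory() {
    return new InMemoryTestContainerFactory();
}
```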

However, the InMemoryTestContainer that comes with the Jersey Test Framework has another drawback: you cannot declare any request- or response-filters, because they are overridden by logging filters. To work around this problem, I implemented my own in-memory test-container (basically just copying the original code and removing the logging filters). The code is also included in the example application.