Tag: testing

Every time I have to use EasyMock, I get annoyed. For those who don’t know EasyMock: it is a mocking framework that allows you to mock certain pieces of code when doing unit testing. In my humble opinion there are better ways to create mocks in unit tests, but at my current project I’m using EasyMock.
There are two reasons for this post. The first is the lack of more advanced mocking tutorials on the interwebs; when I go looking for a specific solution, I never really find what I need.
The second reason is more justified, but still personal: I always forget how the EasyMock API works. It is not self-explanatory — a former colleague of mine even once called it DifficultMock…
I’m not going to rant about my lack of love for this Java mocking framework; instead I’m going to offer a solution.
So basically, what I was trying to do was capture an argument passed to a method and return that same object. Take the JPA method EntityManager.merge(T entity) as an example: we pass in a detached object and get the attached version back. That seems reasonable to include in a test.
So I want to capture my argument and return it. After playing a bit with the API, this is what I came up with.
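A minimal sketch of that first attempt — MyEntity and the test method name are illustrative, and the Capture style assumes the EasyMock 2.x API:

```java
import static org.easymock.EasyMock.capture;
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;

import javax.persistence.EntityManager;

import org.easymock.Capture;
import org.junit.Test;

public class MergeTest {

    @Test
    public void mergeReturnsTheAttachedEntity() {
        Capture<MyEntity> captured = new Capture<MyEntity>();
        EntityManager em = createMock(EntityManager.class);

        // This fails: andReturn() needs its value while the expectation
        // is being *recorded*, but at that point nothing has been
        // captured yet, so captured.getValue() blows up immediately.
        expect(em.merge(capture(captured)))
                .andReturn(captured.getValue());
        replay(em);
    }
}
```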

This doesn’t work: andReturn immediately tries to get the value of my captured object, while the expectation is still being recorded. Instead of giving andReturn a value, I have to give it a function, a closure. Since Java doesn’t have closures (yet), I’m forced to use a poor man’s closure.
EasyMock supplies an interface and a method we can use here: the IAnswer interface, and we replace the andReturn call with andAnswer.
By creating an anonymous implementation of this interface, we create a one-method class that behaves like a normal closure.
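A sketch of the working version — again MyEntity is illustrative. The IAnswer.answer() method only runs when merge() is actually called, so by then the capture has been filled in:

```java
import static org.easymock.EasyMock.capture;
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertSame;

import javax.persistence.EntityManager;

import org.easymock.Capture;
import org.easymock.IAnswer;
import org.junit.Test;

public class MergeTest {

    @Test
    public void mergeReturnsTheAttachedEntity() {
        // final, so the anonymous class below may reference it
        final Capture<MyEntity> captured = new Capture<MyEntity>();
        EntityManager em = createMock(EntityManager.class);

        expect(em.merge(capture(captured)))
                .andAnswer(new IAnswer<MyEntity>() {
                    public MyEntity answer() throws Throwable {
                        // Evaluated at call time, not at record time,
                        // so the capture already holds the argument.
                        return captured.getValue();
                    }
                });
        replay(em);

        MyEntity detached = new MyEntity();
        assertSame(detached, em.merge(detached));
        verify(em);
    }
}
```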

When you are writing unit tests, you want to test all the edge cases. You can create a method for every edge case, but sometimes this is a very repetitive job.

JUnit has a more pragmatic approach for these sorts of tests, called parameterized tests. The idea is simple: you create your test case with your test methods, you define a list of parameters that is passed to the constructor, and all the tests inside the test case are executed once for every parameter set.

The first thing that is different about such a test is the @RunWith annotation at the top. Instead of the default JUnit4 runner, we use the Parameterized runner. This runner looks for a static method annotated with @Parameters.
When we look closer at the getParameters method, we see that every item in the list maps to the constructor of the test case, so every item represents one run of the tests. With four parameter sets and a single test method, this results in four unit tests.
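An illustrative example (not the one from the project) with four parameter sets; the leap-year check is inlined to keep it self-contained:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class LeapYearTest {

    @Parameters
    public static Collection<Object[]> getParameters() {
        // Each Object[] is fed to the constructor, so each one is a
        // separate run of every test method -- four runs here.
        return Arrays.asList(new Object[][] {
                { 1999, false },
                { 2000, true },
                { 2004, true },
                { 2100, false },
        });
    }

    private final int year;
    private final boolean expected;

    public LeapYearTest(int year, boolean expected) {
        this.year = year;
        this.expected = expected;
    }

    @Test
    public void detectsLeapYears() {
        assertEquals(expected, isLeapYear(year));
    }

    // The code under test, inlined to keep the example self-contained.
    private static boolean isLeapYear(int y) {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }
}
```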

Now you may be wondering what the added value of this runner is over a list that you pass to your test yourself. If you give a single test method a list of parameters and one parameter causes a failure, you will have to search for the parameter that is failing. When you execute it this way, any modern IDE will point out exactly which parameter set is incorrect.

When working on a very large project, quality is of the essence. One key part of adding quality to a software project is testing.

You have several ways of testing an application. We used unit tests, integration tests and Selenium tests. But due to a big difference between our local testing environment (a Tomcat server) and our test team’s environment (WebSphere), we have to check every release once it is deployed on the test team’s environment.
We used the principle of smoke testing: performing some basic actions and checking that the application is still stable. Only big failures are fixed in the release itself; when we discover a bug, we register it in our bug tracker and it is fixed in the next release.

I think we could also have used Arquillian, a framework that deploys your integration tests into the container you run them in. Unfortunately Arquillian wasn’t an option, so we had to do the tests manually.

We started with creating a checklist that is hosted on our local wiki.
This way, we make sure that everything is checked before we migrate our release to the testers. Simple, but very efficient.

We are using this checklist, but every release everyone tests the same features. This leads to people always testing in the same way, and that way some essential things could be forgotten.

We wanted to create a system where we cycle over the tests, so each group of tests is executed by someone else every release, with enough rotation to make sure no testing habits develop.

We wanted to keep this a bit fun, because no one really likes doing smoke tests.

The smoke test dogs came to life.
The idea is simple: the tests on the checklist each have a certain color, and there are as many colors as there are characters in Quentin Tarantino’s movie Reservoir Dogs.
On the day of the release, everyone picks a card during the scrum stand-up meeting. The card indicates which tests you should execute.
We have a place in our office where the cards are hung once all their tests have passed. When all the cards are back, we are ready to release to our testers.
This is a very simple way to keep testing fun for everyone. The only disclaimer: this could lead to some role playing 😉
Big thanks to my colleagues @erwinravau, @jordanvermeir, @jaspervdm, Cindy and Maarten

For our integration tests we had to use the development database. That database stores some batch parameters, and one of them is a cutoffTime. As the name suggests, our business logic cannot be processed after that time, so we had to create a mechanism to make sure the tests are not executed after the cutoff, otherwise the build would fail.
The first thing we did was a quick fix: we added an if-statement to each test to check the current time and decide whether the test may run. As some of you already know, an empty method annotated with @Test will execute and be marked successful. That is not correct: when someone breaks the code covered by those tests, they should not silently pass. What we really want is the equivalent of the @Ignore annotation on the tests once the cutOffTime has passed.
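The quick fix looked roughly like this — CUTOFF_HOUR is an illustrative stand-in for the cutoffTime read from the database:

```java
import java.util.Calendar;

import org.junit.Test;

public class BatchIntegrationTest {

    // Illustrative stand-in for the cutoffTime batch parameter.
    private static final int CUTOFF_HOUR = 18;

    @Test
    public void processesBatch() {
        // Quick fix: bail out silently after the cutoff. The test still
        // shows up as green, even though nothing was verified.
        if (Calendar.getInstance().get(Calendar.HOUR_OF_DAY) >= CUTOFF_HOUR) {
            return;
        }
        // ... the real assertions would go here ...
    }
}
```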

I looked into this problem, and the first thing that came to mind was to use an annotation, because this is metadata rather than test functionality. JUnit4 has a very easy way to handle annotations, so I decided to go that way.

I want this annotation to cancel all the tests in a test case, not just a single method; that is why I set the ElementType to TYPE.
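The annotation itself is only a marker; the name IgnoreAfterCutoffTime is illustrative, the original does not mention the exact name:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Class-level marker: the custom runner looks for this annotation and
// decides at runtime whether the tests in the class should be ignored.
@Retention(RetentionPolicy.RUNTIME) // must be visible to the runner at runtime
@Target(ElementType.TYPE)           // on the test case, not on single methods
@interface IgnoreAfterCutoffTime {
}
```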

Our integration tests run with the SpringJUnit4ClassRunner, so I extended it. I overrode the invokeTestMethod method, because that is the one executing each test. Here is the source code.
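A sketch of that runner, assuming the Spring 2.5 / JUnit 4.4-era API in which SpringJUnit4ClassRunner still inherits invokeTestMethod, classAnnotations() and methodDescription(Method) from JUnit4ClassRunner; the annotation name and the fixed cutoff hour are illustrative:

```java
import java.lang.annotation.Annotation;
import java.lang.reflect.Method;
import java.util.Calendar;

import org.junit.internal.runners.InitializationError;
import org.junit.runner.notification.RunNotifier;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

public class IntegrationTestClassRunner extends SpringJUnit4ClassRunner {

    public IntegrationTestClassRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
    }

    @Override
    protected void invokeTestMethod(Method method, RunNotifier notifier) {
        if (isAnnotationPresent() && afterCutoffTime()) {
            // Don't run the test; report it as ignored so the statistics
            // show "ignored" instead of a misleading "passed".
            notifier.fireTestIgnored(methodDescription(method));
        } else {
            super.invokeTestMethod(method, notifier);
        }
    }

    private boolean isAnnotationPresent() {
        // Only annotations retained at runtime show up here.
        for (Annotation annotation : classAnnotations()) {
            if (annotation instanceof IgnoreAfterCutoffTime) {
                return true;
            }
        }
        return false;
    }

    private boolean afterCutoffTime() {
        // In the real code the cutoffTime comes from the database; a
        // fixed hour keeps this sketch self-contained.
        return Calendar.getInstance().get(Calendar.HOUR_OF_DAY) >= 18;
    }
}
```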

As you can see, the classAnnotations() method gives you all the annotations that are on your class *and that are specified with @Retention(RetentionPolicy.RUNTIME)*. I simply check whether the annotation is present on the class. Once I know whether the annotation is found and whether the test should be ignored, I can either call fireTestIgnored on the notifier or execute the test. We just annotate our tests with @RunWith(IntegrationTestClassRunner.class).
This way the tests neither fail nor succeed; you can simply see that they are ignored. The statistics stay clean :-).

I know that the whole purpose of this extension is odd, but we were forced into this position. With this post I’m trying to show you that extending a test framework like JUnit isn’t hard at all, and you can use it in any way you want to.