Without getting into the code itself and what it is really doing, I would just say that it is quite “simple”: it has no loops, no complicated if-else statements, and no sophisticated algorithm. It simply creates a RealEstateField object, while the more “sophisticated” code resides in its dependency, _mapService, which already has tests of its own.
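For context, here is a minimal sketch of what such a method might look like. The article does not show the full code, so the class shapes, the attribute names, and the MapService interface are assumptions:

```java
// Hypothetical sketch; the real createField and its attributes are not
// shown in the article, so everything here is illustrative.
class Coordinates {
    final double latitude, longitude;
    Coordinates(double latitude, double longitude) {
        this.latitude = latitude;
        this.longitude = longitude;
    }
}

interface MapService {
    String findAddress(Coordinates coordinates);
}

class RealEstateField {
    private String address;
    private double price;
    void setAddress(String address) { this.address = address; }
    String getAddress() { return address; }
    void setPrice(double price) { this.price = price; }
    double getPrice() { return price; }
}

class RealEstateService {
    private final MapService _mapService;
    RealEstateService(MapService mapService) { this._mapService = mapService; }

    // No loops, no branches: the method only assembles an object,
    // delegating the "sophisticated" work to _mapService.
    RealEstateField createField(Coordinates coordinates, double price) {
        RealEstateField field = new RealEstateField();
        field.setAddress(_mapService.findAddress(coordinates));
        field.setPrice(price);
        return field;
    }
}
```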

Two questions arise: why should we test the createField method if it does nothing special anyway? And even if we decide to test it, how should we assert its outcome, a RealEstateField object, which is quite a big object?

Let’s try to answer these two questions.

Should I test such methods?

Many people tend to think that only code that consists of complicated if-else statements, loops and algorithms is worth testing. To them, it doesn’t make any sense to write a test that lets _mapService.findAddress() return the address ’11 Wall Street’ and then assert that field.getAddress() really holds the value ’11 Wall Street’.

Here is a reminder of how field.address is filled:

field.setAddress(_mapService.findAddress(coordinates));

Well, I tend to disagree, mainly for the following reasons:

Simple code can break too.

When developers work on a piece of code like the one above, many things can go wrong: they can mistakenly swap lines, remove lines completely, or put a value in the wrong attribute; lines can also disappear while merging different developers’ commits, and so forth.

Let’s remember that the real purpose of automated tests in general, and unit tests in particular, is to preserve the current behavior of the code: if someone breaks that behavior by mistake, some tests should fail.

Simple code doesn’t mean “not critical”.

If a piece of code is “simple” and unsophisticated, that doesn’t mean it isn’t critical. Consider the example above: what if someone messes up the price calculation for the real estate fields? That could be a fatal bug!

Moreover, IMO, the worst kind of bugs are the ones that throw no exceptions: they corrupt the data silently, and when this corrupted data eventually triggers an exception elsewhere, it is hard to trace the origin of the problem, which is the code that corrupted the data in the first place. My conclusion: there is no escape from verifying that the object was created correctly, hence, asserting the whole object with all of its attributes.
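The article’s original listing is not reproduced here; a test in that style, with one assertion per attribute, would look roughly like the sketch below. The attribute names and expected values are made up, and plain AssertionErrors stand in for JUnit’s assertEquals to keep the snippet self-contained:

```java
// Illustrative only: asserting the whole object attribute by attribute.
// The attribute names and expected values are assumptions.
class RealEstateField {
    String address;
    double price;
    double size;
}

class CreateFieldTest {
    static void assertFieldCreatedCorrectly(RealEstateField field) {
        if (!"11 Wall Street".equals(field.address))
            throw new AssertionError("wrong address: " + field.address);
        if (field.price != 1_000_000.0)
            throw new AssertionError("wrong price: " + field.price);
        if (field.size != 42.5)
            throw new AssertionError("wrong size: " + field.size);
        // ...one more assertion for every additional attribute, and the
        // list silently becomes incomplete the moment we forget one.
    }
}
```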

The problems with this kind of test are quite obvious. First, it is a code smell called Obscure Test. Second, writing such a long list of assertions is quite exhausting. Third, objects tend to change over time, with attributes being added or removed, so the list of assertions becomes incomplete the first time we forget to update it accordingly. For example, suppose we are asked to add a new attribute called ‘tax’ to the RealEstateField object and fill it in the createField method. There is a good chance we will forget to add an assertion for this new attribute, and the test will still pass, obviously.

Deep comparison libraries.

A better option is to use a library like AssertJ that performs deep comparison: it takes two objects and, using reflection, compares their attributes one by one. This is how you use such a library:
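The original snippet is missing. With AssertJ the call is assertThat(returned).usingRecursiveComparison().isEqualTo(expected); to keep the sketch below dependency-free, it hand-rolls the same reflection-based, attribute-by-attribute comparison that such libraries perform (the class shape and attribute names are assumptions):

```java
import java.lang.reflect.Field;
import java.util.Objects;

class RealEstateField {
    String address;
    String city;   // imagine this attribute was added later
    double price;
}

// Toy stand-in for a deep-comparison assertion such as AssertJ's
// usingRecursiveComparison(): walk the declared fields via reflection
// and compare the two objects attribute by attribute.
class DeepCompare {
    static void assertDeepEquals(Object expected, Object actual) {
        try {
            for (Field f : expected.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                Object e = f.get(expected);
                Object a = f.get(actual);
                if (!Objects.equals(e, a))
                    throw new AssertionError(
                        f.getName() + ": expected " + e + " but was " + a);
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that an attribute which is null on both sides, like city above when neither the test nor the expected object knows about it, still compares as equal. That is exactly the weakness discussed next.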

Now suppose the code starts filling a new attribute, city, whose value is taken from _mapService.getCity(). The tests are unaware of this new method of _mapService, so it returns null (that’s the default behavior of most mocking frameworks: return the default value for every method unless stated otherwise), hence the city attribute of the returned RealEstateField is also null. And since the city attribute of the expected RealEstateField is null as well (the tests are unaware of it too and never put a value in it), the expected object is equal to the returned object and the tests pass. This is a bad thing, of course: the tests should have failed.

Another problem with this option is that it does not save us the exhausting work: we still need to set a value for each and every attribute of the expected object.

This leads me to the third option, and the one I like the most.

Serialize and compare.

The idea behind this method is to serialize the returned object into a JSON/XML string and then compare it with a pre-made JSON/XML string that represents the expected object. It should look like this:
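The original listing is missing. In real code you would serialize with Jackson (objectMapper.writeValueAsString(field)) or Gson and compare with JUnit’s assertEquals; the hand-rolled serializer below exists only to keep the sketch self-contained, and the attribute names are illustrative:

```java
import java.lang.reflect.Field;

class RealEstateField {
    String address = "11 Wall Street";
    String city;                      // newly added attribute, never asserted
    double price = 1_000_000.0;
}

class JsonCompare {
    // Toy serializer standing in for Jackson's writeValueAsString():
    // every declared field ends up in the JSON, even when it is null.
    static String toJson(Object o) {
        try {
            StringBuilder sb = new StringBuilder("{");
            for (Field f : o.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                if (sb.length() > 1) sb.append(",");
                Object v = f.get(o);
                sb.append("\"").append(f.getName()).append("\":")
                  .append(v instanceof String ? "\"" + v + "\"" : v);
            }
            return sb.append("}").toString();
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    static void assertJsonEquals(String expectedJson, Object returned) {
        String actualJson = toJson(returned);
        if (!expectedJson.equals(actualJson))
            throw new AssertionError(
                "expected: <" + expectedJson + "> but was: <" + actualJson + ">");
    }
}
```

A pre-made expected string that was written before city existed will no longer match, because the serialized object now contains "city":null.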

Unlike with the other two methods, if a new attribute is added to the object as time goes by and you forget to change your tests to assert it, the test will fail, because the JSON of the returned object now contains the new attribute (even though it is null) while the JSON of the expected object doesn’t contain it. This is the desired behavior: the tests are now incomplete, and they had better fail.

It’s extremely easy to build the expected JSON string: you simply run the test for the first time with an empty string as the expected JSON, and let JUnit tell you where you got it “wrong” and what the real JSON string actually looks like. You do it like this:
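With JUnit this is simply assertEquals("", actualJson) on the first run: the failure message prints the full actual JSON, which you then paste into the expected string. A dependency-free sketch of the same trick:

```java
// First-run trick: leave the expected string empty, run the test once,
// and copy the real JSON out of the failure message.
class ExpectedJsonBootstrap {
    static void assertJsonEquals(String expectedJson, String actualJson) {
        if (!expectedJson.equals(actualJson))
            throw new AssertionError(
                "expected: <" + expectedJson + "> but was: <" + actualJson + ">");
    }
}
```

On the first run you call assertJsonEquals("", actualJson); the AssertionError message contains the actual JSON, ready to be copied into the test as the expected value.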

But this method has a flaw: you can’t easily build the expected JSON string before you have working code (as I described a few lines above), and that violates a fundamental principle of TDD: you are not allowed to write any production code unless it is to make a failing unit test pass.

When needed, I choose to use this method, even at the cost of violating this TDD rule every once in a while. After all, most of the time we are not required to use it anyway: most of the time, the outcome of the CUT (class under test) is much simpler to verify.
