Thoughts about software development

Archive for March, 2005

I must be one of the last developers on this planet not to understand
Inversion of Control. Every once in a while, a new tutorial appears and I
think to myself "ok, this is it, this time I’m really going to understand how I
can use it in all my projects", but every time, I finish the article shaking my
head in utter confusion.

In this article, the author is using the example of a flight/cab reservation,
which seems reasonable to me. He contrasts the traditional Java approach
to the one used in, say, Spring, where the classes to instantiate are declared
in an XML file.

I have several problems with this approach:

If you rename one of your classes, you need to remember to update the
XML file as well (as far as I know, no IDE will refactor across non-Java
files as of today).

If you used interfaces to start with, the odds that you will have to
rename signatures in your API are minimal, so you are better off using
straight Java for this in the first place.

What is the big gain in Spring invoking my setters instead of me doing
it in plain Java?

If several instances of the same class need to exist at runtime, I need
to add some sort of XML primary key to my object declarations so that the
container knows exactly which instance I need. Tell me again: why am I
writing this in XML instead of Java?

Basically, what you are doing here is spreading your business logic into both
Java and XML for no apparent reason and some quite obvious drawbacks.
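The contrast is easy to show in code. Here is a minimal sketch (all class and method names are hypothetical, loosely following the article's reservation example) of the "plain Java" wiring being argued for: the same setter a container would call, invoked directly.

```java
// A sketch (hypothetical names) of wiring dependencies in plain Java
// instead of declaring the classes in an XML file.
public class PlainJavaWiring {
    interface ReservationService {
        String reserve(String from, String to);
    }

    static class CabReservationService implements ReservationService {
        public String reserve(String from, String to) {
            return "cab:" + from + "->" + to;
        }
    }

    static class TripPlanner {
        private ReservationService service;

        // The setter a container such as Spring would invoke for you...
        public void setReservationService(ReservationService service) {
            this.service = service;
        }

        public String plan(String from, String to) {
            return service.reserve(from, to);
        }
    }

    public static void main(String[] args) {
        // ...invoked here in two lines of plain Java: rename a class and
        // the IDE refactors everything, with no XML file to keep in sync.
        TripPlanner planner = new TripPlanner();
        planner.setReservationService(new CabReservationService());
        System.out.println(planner.plan("SFO", "OAK"));  // prints cab:SFO->OAK
    }
}
```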

Having said that, while I obviously dislike the "setter injection" part of
IoC containers, I do see some value in the factory aspect.

I have found myself reinventing abstract factories way too often and, I bet,
writing bogus implementations many times. It’s not always easy to create a
clean and thread-safe abstract factory implementation, so I definitely see the
point in a framework helping me to specify these in a declarative way and that
would take care of managing the pool of objects, their cardinalities, etc…
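To illustrate why this is easy to get wrong by hand, here is a sketch (hypothetical names, written in modern Java) of the kind of thread-safe, lazily-populated factory a container could manage declaratively: one shared instance per key, safe under concurrent first lookups.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// A sketch of a thread-safe lazy factory: the part of an IoC container
// that actually earns its keep.
public class ServiceFactory {
    private static final ConcurrentMap<String, Object> INSTANCES =
            new ConcurrentHashMap<>();

    // computeIfAbsent guarantees a single instance per key, even when
    // several threads race on the very first lookup.
    public static Object lookup(String key) {
        return INSTANCES.computeIfAbsent(key, ServiceFactory::createService);
    }

    private static Object createService(String key) {
        // In a real container, this would consult the declarative
        // configuration to decide what to instantiate and how to pool it.
        return "service:" + key;
    }

    public static void main(String[] args) {
        Object a = lookup("mailer");
        Object b = lookup("mailer");
        System.out.println(a == b);  // prints true: same cached instance
    }
}
```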

PicoContainer does away with XML and writes everything in Java, which makes
more sense to me, and except for the incomprehensible requirement that all your
dependencies be declared in the constructor (a constraint the creators
eventually relaxed after years of pressure from the community), it seems
to be doing a reasonable job. It’s amazing that it took the creators so long
to understand that this inane and bigoted requirement made PicoContainer
inapplicable to any kind of pooling.

One of the basic tenets of unit testing is that test methods should be independent of each other. JUnit goes to extremes to guarantee this principle by reinstantiating your test class before every method. I personally think that as soon as you do more than unit testing, your test methods will invariably become dependent on each other (and as a matter of fact, what are setUp() and tearDown() if not dependent methods?), and that a testing framework needs to account for that. But that’s not the point of this post.
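The reinstantiation behavior can be demonstrated in a few lines of reflection. This is a sketch of what a JUnit-style runner does (the test class and its methods are made up), showing why instance state can never leak between test methods:

```java
import java.lang.reflect.Method;

public class FreshInstanceDemo {
    // A hypothetical test class with instance state.
    public static class CounterTest {
        private int count = 0;

        public int bump() { return ++count; }
        public void testOne() { System.out.println("one:" + bump()); }
        public void testTwo() { System.out.println("two:" + bump()); }
    }

    public static void main(String[] args) throws Exception {
        // What a JUnit-style runner does: a brand new instance for every
        // method, so count is always 1 and no method can see another's state.
        for (String name : new String[] { "testOne", "testTwo" }) {
            CounterTest fresh = CounterTest.class.newInstance();
            Method m = CounterTest.class.getMethod(name);
            m.invoke(fresh);  // prints one:1 then two:1, never 2
        }
    }
}
```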

Recently, I started wondering why this principle seemed to be so important and I asked the following question to several developers: "Why is it important to have test methods that are independent of each other?".

I was quite surprised to receive pretty much only one type of answer: "So you can rerun your tests easily".

My surprise comes from the fact that this is a tool concern and not a core design principle.

What was even more surprising is that JUnit doesn’t actually give you any way to achieve this task. Let me give you an example.

Every night, thousands of tests are run and when I come into the office in the morning, I check the reports and investigate the failures. Being able to quickly rerun the failures is of critical importance to developers but there is no easy way to achieve this with JUnit. Actually, there is not even an easy way to run a specific test method without the assistance of a third-party ant task, and at any rate, such add-ons are not really helping you with the problem at heart: you should be able to rerun the failed tests the same way your nightly tests ran them.

Now, imagine a testing framework that would give you a very easy way to rerun only the tests that failed in the previous run. Would independence of test methods from each other be so important any more?

With that in mind, I spent a couple of hours on the plane yesterday coding this feature for TestNG (you saw that coming, didn’t you?), and it turned out to be quite trivial.

TestNG runs tests based on a definition file called testng.xml. Whenever tests fail in a test run, TestNG will create a corresponding testng-failed.xml that will contain only the tests that failed. Therefore, a typical session will look like this:

testng -d output testng.xml
testng output\testng-failed.xml
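To make the session concrete, here is a sketch of what the two files might look like. The class and method names are made up, and the exact elements may differ depending on the TestNG version:

```xml
<!-- testng.xml: the full nightly suite -->
<suite name="Nightly">
  <test name="Regression">
    <classes>
      <class name="com.example.AccountTest"/>
      <class name="com.example.BillingTest"/>
    </classes>
  </test>
</suite>

<!-- testng-failed.xml: generated after the run, containing only the
     failed methods plus the methods they depend on -->
<suite name="Failed suite [Nightly]">
  <test name="Regression(failed)">
    <classes>
      <class name="com.example.BillingTest">
        <methods>
          <include name="setUpAccount"/>  <!-- dependency of the failure -->
          <include name="testRefund"/>    <!-- the failed method -->
        </methods>
      </class>
    </classes>
  </test>
</suite>
```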

This task is made slightly more complicated by the fact that TestNG supports dependent methods, so if a failed method depended on the successful run of previous test methods, these test methods must also be included in testng-failed.xml. This was trivial to achieve, though, since TestNG already knows the order in which the methods must run: it was just a matter of filtering out all the test methods that succeeded and/or were not necessary to rerun the failed tests.

I already started using this feature to debug TestNG itself and it has saved me a lot of precious minutes already.

I have just discovered that Eclipse supports Hippie Completion. It’s a
strange name that dates back to emacs (where it first appeared) and it’s also
supported in IDEA. The default key binding in Eclipse is ALT-/ and it’s
become my new best friend.

Hippie Completion tries to complete immediately (as opposed to offering you
suggestions, which is what happens when you type Ctrl-Space) based on what you typed recently and the surrounding context.
It’s a bit hard to describe and actually, there doesn’t seem to be any
documentation about it except for the initial request for
enhancement, which was implemented by an external contributor and promptly
integrated into Eclipse.

What matters to me is that 90% of the time, it inserts the right symbol.

Try it, you’ll get hooked.

Update: Alexandru asked me how I found out, it’s simple: Ctrl-Shift-l
(that’s an "l" as in "little"), which displays the little window you can see at the
top of this article, and which lists all the key bindings available in the
current context. Great way to make discoveries.

Mike Keith posted a very good summary of annotations and also covers some
limitations of the specification.
Ted commented on some of them and I’d like to take this opportunity to share
my viewpoint.

Mike deplores the absence of inheritance and the fact that you can’t use
null as a default value. I can sympathize with his disappointment
since I was one of the strong advocates of these two features in the JSR-175
committee, both of which were eventually voted down.

Inheritance was deemed too hard to specify, which is a fair point since we
are talking about entities which are "almost" interfaces but not quite.
There was also some concern about resolving annotations when multiple
inheritance is involved and also what happens when overriding of annotation
occurs and, worse, when the two overriding annotations are defining different
attributes.

The impossibility of using null as a default value still
bothers me to no end as of today, but I have to confess that, just like Ted, I
don’t recall the specifics of forbidding it. I just remember that they
sounded convincing (to be fair, Neal spent a great deal of time
explaining on the alias why it was a big issue for the compiler and that it was
close to impossible to achieve).
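The restriction is easy to see in code: javac rejects null as an annotation member default, so tool authors have to fall back on sentinel values and decide for themselves what "not set" means. A sketch (the annotation and class names are made up):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class NullDefaultDemo {
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Table {
        // String name() default null;  // does not compile: JSR-175
        // forbids null defaults, so a sentinel stands in for "not set"
        String name() default "";
    }

    @Table
    public static class Account {}

    public static void main(String[] args) {
        Table t = Account.class.getAnnotation(Table.class);
        // The tool, not the language, decides that "" means "no value".
        boolean nameSet = !t.name().isEmpty();
        System.out.println(nameSet);  // prints false
    }
}
```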

So if we want annotation inheritance, tools will have to define their own.

With my work on EJBGen and TestNG these past years, I have been
exposed to a lot of feedback on the use of inheritance in annotations, and I
believe the rules defined in TestNG have reached reasonable semantics that
provide a great deal of flexibility and power. I have covered these rules in
some of my past postings, but I will probably post another entry soon
describing how inheritance, overriding and partial overriding work in TestNG.

When you have finished reading an article filed with this tag, just click on
the bookmarklet and it will get automatically removed from your read_later
category. Note that you don’t need to make any modification to this
bookmarklet since it will use the cookie stored by del.icio.us to authenticate
you (it might ask you for your password the first time you invoke it), so you
can just drag it onto your toolbar and you’re ready to go.

A potential improvement would be to add a JavaScript redirect at the end
since right now, it will simply display the XML success packet, but I’ll leave
this as an exercise for the reader.

I was chatting with Hani last night and two interesting ideas came up:

Repeating failed tests.

When a test run shows several failed tests, the first thing that
developers want to do is rerun only those tests that failed, and not the
entire suite. An easy way to implement this with TestNG is to generate
a testng-failed.xml file that contains all the methods that failed.
If you need to rerun these failed tests, just invoke TestNG with this XML
file and you are done.

The only challenge here is that this file also needs to include all the
methods that the failed tests depend upon, but that’s pretty easy since
TestNG already supports this.

This feature also illustrates the importance of separating your test static
model (business logic) from the runtime model (which tests are run).
If you wanted to provide a similar feature with JUnit, you would need to
generate some Java code creating a TestSuite containing all the failed tests
(making sure you get the imports right, etc…) and compile this suite, or
generate an ant file with the correct decorator.
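The filtering step described above amounts to a transitive closure over the dependency graph. Here is a sketch of the idea (the method names are made up, and this is not TestNG's actual implementation): collect the failed methods plus everything they transitively depend on.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FailedTestFilter {
    // Given the dependency graph and the failures, compute every method
    // that must go into testng-failed.xml: the failed methods plus all
    // the methods they transitively depend on.
    public static Set<String> methodsToRerun(Map<String, List<String>> dependsOn,
                                             Set<String> failed) {
        Set<String> result = new HashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(failed);
        while (!toVisit.isEmpty()) {
            String m = toVisit.pop();
            if (result.add(m)) {
                toVisit.addAll(dependsOn.getOrDefault(m, List.of()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("testWithdraw", List.of("testOpenAccount"));
        deps.put("testOpenAccount", List.of("testConnect"));
        Set<String> rerun = methodsToRerun(deps, Set.of("testWithdraw"));
        System.out.println(rerun.size());  // prints 3: the failure plus two dependencies
    }
}
```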

Concurrent testing.

The original idea was to have the testing framework invoke a test method
several times with concurrent threads, but I’m thinking that this should be
extended to entire groups. You put all your methods in a certain group and
you specify that this group should be run by a thread pool a certain number
of times.

I think the methods being invoked concurrently would not necessarily
contain the asserts and that the real testing would be done by a later
method (which, obviously, will depend on that group).

Another thing is that this specification is a runtime concern, so I think it
belongs in testng.xml, not in the annotations.
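A sketch of the mechanics (all names hypothetical, using a plain ExecutorService rather than anything TestNG would actually ship): the group's methods are hammered by a thread pool, and the verification runs afterwards in a method that depends on the group.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentGroupRunner {
    private static final AtomicInteger invocations = new AtomicInteger();

    static int count() { return invocations.get(); }

    // Stand-in for a test method in the group; it only exercises the code
    // under load, the real asserts would live in a later, dependent method.
    static void testMethod() {
        invocations.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        int threadPoolSize = 4;     // runtime concerns: these would come
        int invocationCount = 20;   // from testng.xml, not annotations
        ExecutorService pool = Executors.newFixedThreadPool(threadPoolSize);
        for (int i = 0; i < invocationCount; i++) {
            pool.execute(ConcurrentGroupRunner::testMethod);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // The "verification" method that depends on the group runs last.
        System.out.println(count());  // prints 20
    }
}
```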

Very often, I find an interesting article that I don’t have the time to read
just now and I want to make sure I won’t forget to read it some time later.
To achieve this, I have created a tag read_later in my del.icio.us bookmarks.
Whenever I find such an article, I now file it with this tag in my bookmarks,
and when I find myself with a few minutes on my hands, I can just click on my
read_later tag and pick an article I haven’t read yet (and remove it from
there when I’m done).

To speed up this process, I have created
this bookmarklet which allows me to file the current Web page in the
read_later category in one click (well, two, you still need to save it, but
I like this because it gives me a chance to add tags before I file the page).

Just drag and drop the link in your toolbar, edit it to replace
YOUR_LOGIN_HERE with your del.icio.us login and you’re ready to
go.