Found this piece by Allen Holub that makes the same point as the one I made in this post: design is all about trade-offs.

Design, by nature, is a series of trade-offs. Every choice has a good and bad side, and you make your choice in the context of overall criteria defined by necessity. Good and bad are not absolutes, however. A good decision in one context might be bad in another. If you don't understand both sides of an issue, you cannot make an intelligent choice; in fact, if you don't understand all the ramifications of your actions, you're not designing at all. You're stumbling in the dark.

In this post I don't want to debate whether the latter version is indeed better than the former. My claim is quite different, and can be summarized in two words: Who Cares?!

Let me put it this way: even if we assume, for the sake of the argument, that version 2 is better than version 1, there's still no need to refactor version 1. This is because version 1 is designed/implemented in such a way that its quality does not affect the quality of the program as a whole.

Here are a few key observations about version 1:

It relies (solely) on classes from Java's standard library. It does not depend on application code.

It does not change the program's internal state. Specifically, the method does not change its input object (the jarFile variable); it is a static method, so it has no instance fields; and the method does not touch any static fields.

The method's code does not impose any restrictions of its own on the input object. The only restrictions (such as: jarFile must specify the path to an existing, readable, legal jar file) come from Java's standard library classes to which the input is fed.

These observations point out that this method is actually quite close to a library method. A library method never depends on application code; library code has no access to the program's internal data structures (except for what it receives as inputs); library code will not realize your program's business logic, simply because it is not aware of your program.
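These library-like traits can be sketched with a small hypothetical utility (my example, not the post's actual findClasses() code) that shares all three of them:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical utility with the same library-like traits: it depends only on
// java.* classes, it mutates neither its input nor any static/instance state,
// and it imposes no domain-specific restrictions of its own on the input.
final class TextUtil {
    private TextUtil() {}

    // Returns the non-blank lines of the input; the input list is left untouched.
    static List<String> nonBlank(List<String> lines) {
        List<String> result = new ArrayList<>();
        for (String line : lines) {
            if (!line.trim().isEmpty()) {
                result.add(line);
            }
        }
        return result;
    }
}
```

A method like this could be lifted into a separate jar tomorrow without dragging any application code along with it.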

If version 1 resembles a library method, why don't we turn it into an actual library? After all, this would make it reusable in different contexts. So we move the method into a dedicated class, which we then compile into a jar, and add the jar to our program's classpath.

We now notice a very strange thing: we no longer care how version 1 is written. This is because we never care about the inner workings of library methods or classes - we are interested only in their external behavior. The key issue is what the library does, not how it does it. This is a direct result of the fact that a library hides implementation details and exposes only interfaces.

At this point a light-bulb goes off over your head. You realize that you don't need to bother yourself with creating artificial libraries. You just need to keep in mind the familiar principle of thinking in terms of API design. This means, among other things, that you force yourself to program against the API of your own modules.
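Here is a minimal sketch of what "programming against the API of your own modules" could look like. The names (ClassFinder, the File-based signature) are illustrative assumptions modeled on the method discussed in the post, not its actual code:

```java
import java.io.File;
import java.util.List;

// The published API: callers compile against this interface alone and never
// see how the method behind it is written.
interface ClassFinder {
    List<String> findClasses(File jarFile);
}

// A caller programmed against the API: either "version 1" or "version 2"
// can sit behind the interface without this code changing at all.
class ClassCounter {
    static int count(ClassFinder finder, File jar) {
        return finder.findClasses(jar).size();
    }
}
```

The caller's indifference to the implementation is exactly the library effect described above, achieved without creating an artificial jar.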

(I am not saying that the API of this findClasses() method is perfect. The point is that version 1 and version 2 offer the exact same API, so they are virtually identical, API-wise.)

One may err and think that we have found a magic cure for all ill-designed programs: "Hey, I just need to treat a crappy piece of my code as a module with an API and my program's quality will rise".

Clearly, this is a misunderstanding of the process: not every piece of code can be treated as a module. In this example, the key issue was that our method presented library-like characteristics from the very beginning. The right conclusion is therefore this one:

if you have a well-isolated piece of code -- no dependencies; minimal, predictable side effects; no domain knowledge; a crisp interface -- then you can write it any way you want.

Actually, there is an even deeper theme that runs throughout this post. It goes like this:

if you have a well-isolated piece of code, then you have a well-written piece of code. Stop fiddling with it.

Is there a criterion for distinguishing well-written (or well-designed) programs from ill-written programs?

Here is what may very well be the best definition for quality in software:

A well-written program is a program where the cost of implementing a feature is constant throughout the program's lifetime.

The rationale is simple: if it gradually takes more and more time to add a feature to a program, you'll ultimately reach a point where it is just too expensive to change the code. This is plain economics: past that point it won't be worthwhile to add even the tiniest feature, and the project will practically stall.

This was the intuition. To make things more precise, let me explain the basic terminology:

Cost: time needed for programming. In this post I will speak about the total cost of the program and about the cost per feature.

Feature: A change in the external behavior of the program whose completion can be determined objectively. For the purposes of this discussion there is no need to distinguish between adding something new, changing something old, or fixing something that is broken. We will collectively refer to these activities as "implementing a feature". This definition coincides with the definition of a story in the Extreme Programming world.

The last term that needs to be precisely defined is "constant". Providing a good definition for this term is hard because development costs tend to fluctuate, so we need a definition that can tolerate fluctuations. In short, the challenge boils down to this: if it takes 10 hours to implement feature #1 and 12 hours to implement feature #2, does this mean the cost is not constant?

The best way to abstract away these natural fluctuations in cost is to use the big-O notation that is typically used for expressing the performance of algorithms. Using this notation, we obtain the following precise definition of software quality:

A program is considered to be well-written if its total cost function behaves like O(n), where n is the number of features in the program.

Note that a total cost of O(n) reflects an O(1) cost per feature (amortized), so we can rewrite this definition in terms of a per-feature cost.
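A toy calculation makes the two growth regimes concrete (the numbers here are made up for illustration). If every feature costs roughly the same, the total cost is linear in the number of features; if each feature costs a little more than the previous one, the total is quadratic, which is the stalling project described above:

```java
// Toy cost model contrasting O(n) and O(n^2) total cost.
class CostModel {
    // Constant cost per feature: total = c * n, i.e. O(n).
    static int totalConstant(int n, int c) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += c;
        return total;
    }

    // Per-feature cost that grows with each feature (feature i costs i hours):
    // total = n * (n + 1) / 2, i.e. O(n^2).
    static int totalGrowing(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }
}
```

At 100 features the constant-cost project has spent 100*c hours, while the growing-cost project has spent 5050 hours and each new feature is now ten times as expensive as the first one was.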

Why do I like this approach for quality? Four reasons:

First, it can be objectively measured.

Second, it is normalized: this approach establishes a standard scale by which programs can be measured and compared. For example: a total cost of O(n*n*n) is certainly worse than O(n*n); or: a total cost of O(n*log n) is usually a close enough approximation of O(n). In short, we can quantify how far we are from the "good" end of the scale.

Third, this measure of quality captures the bottom line: value delivered to clients. Many software metrics capture artifacts of the development process (hours worked, lines of code, classes written, methods per class, etc.). Such metrics only testify to the quality of the process. They do not testify to the quality of the outcome. In order for these metrics to be useful, one should show that the measured artifact is correlated with properties of the outcome. This rarely happens. The definition presented here largely ignores the process. It cuts to the chase and measures the outcome.

Finally, this approach acknowledges the fact that a program is a complex system. It does so by measuring the program as a whole. It is difficult (or even impossible) to estimate the quality of a program from the quality of its individual sub-parts. Here are a few examples to illustrate the pitfalls of measuring software quality in pieces:

Example #1: Suppose I have a piece of code that - when examined in isolation - turns out to be really crappy. Somewhere in the program there is an excellent suite of tests that cover this piece from every possible angle. This suite makes the piece much more manageable than a similar well-written piece that has no tests.

Example #2: Sometimes bad code is located in unimportant modules. Imagine a program that performs sophisticated analysis of data coming from CSV (comma-separated values) files. The input module of this program reads the input file and populates an in-memory matrix. Even if this input module is badly written, its effect on the overall quality of the program is minimal. At any point I can replace the implementation of the input module, because it has a crisp interface (the said matrix) that isolates it from the rest of the code.
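A hedged sketch of that crisp interface (the method name and types here are my assumptions): the contract is simply "lines in, matrix out", so however crude the parsing inside may be, the module can be rewritten without touching the analysis code that consumes the matrix.

```java
import java.util.List;

// Input module for the CSV example. Only the signature of toMatrix() is
// visible to the rest of the program; the parsing inside is replaceable.
class CsvInput {
    static double[][] toMatrix(List<String> lines) {
        double[][] matrix = new double[lines.size()][];
        for (int row = 0; row < lines.size(); row++) {
            String[] cells = lines.get(row).split(",");
            matrix[row] = new double[cells.length];
            for (int col = 0; col < cells.length; col++) {
                matrix[row][col] = Double.parseDouble(cells[col].trim());
            }
        }
        return matrix;
    }
}
```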

Example #3: The problem that my program needs to solve can be nicely expressed as a Domain Specific Language (DSL). Thus, I define a DSL (an external DSL, to be precise), write an interpreter for this DSL, and then implement the solution to the problem by writing a program in that DSL. If the DSL is a good DSL (that is: solutions to domain problems can be easily expressed in it), the quality of the whole program is high, regardless of the quality of the interpreter.
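To make the DSL point concrete, here is a minimal sketch; this toy postfix-calculator language is my own assumption, not the post's domain. Anyone writing programs in the DSL (strings like "2 3 +") sees only the language, never the interpreter's internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A tiny interpreter for a postfix-expression DSL. Its inner quality is
// invisible to DSL users, who only judge how well the language expresses
// their problems.
class TinyCalc {
    static int eval(String program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : program.trim().split("\\s+")) {
            switch (token) {
                case "+": stack.push(stack.pop() + stack.pop()); break;
                case "*": stack.push(stack.pop() * stack.pop()); break;
                default:  stack.push(Integer.parseInt(token));
            }
        }
        return stack.pop();
    }
}
```

The interpreter could be rewritten from scratch tomorrow; as long as it still evaluates the same language, no DSL program changes.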

The definition of quality presented here is quite close to the notion of RTF (Running Tested Features) as coined by Ron Jeffries. However, the definition here is a bit more relaxed: it does not require tested features. This is because constant delivery of features requires constant refactoring (you can't plan everything in advance). And constant refactoring requires testing - otherwise you will not be able to detect incorrect refactoring steps, and you'll end up introducing bugs into the program which, sooner or later, will slow you down.