Is quality tradable?

I follow Kent Beck in making a distinction between internal and external quality. The pleasantness and effectiveness of a user interface is external quality, as it’s something that can be perceived by the users of a system. That is something that can sensibly be involved in a trade-off – do I want extra work on making feature A easier to use, or should I add feature B?

The internal structure of the software, however, is not something that’s directly perceivable by the user. I can’t tell from using a program whether its internals are constructed well or not. Internal quality is thus a more hidden attribute. When someone says we should do things that reduce the design quality of a system in order to build more features, that person is applying the tradable quality hypothesis to internal quality.

Hidden in this reasoning is the assumption that by lowering quality you can go faster. As Martin says, this is true only in a very short time frame, and it’s not even the worst effect:

But the tragedy is that as soon as you frame internal quality as tradable, you’ve lost. […] Instead it’s vital to focus on the true value of internal quality – that it’s the enabler to speed. The purpose of internal quality is to go faster.

What’s your experience on this topic? How is the “tradable quality hypothesis” seen in your team? Under which circumstances does high quality mean going faster?


8 comments

Thanks for the post, interesting questions you raised. They remind me of past teams and projects where, interestingly, a system’s internal quality was a function of the team and the characteristics that brought its members closer together. The result was a self-feeding loop: quality engineering decisions led to increased levels of trust and motivation, which in turn led to even better engineering decisions, directly impacting the system’s internal quality. With a lot of trust, motivation and a highly factored, low-technical-debt system, it’s easy to see how productivity and quality start to grow faster and faster.

There are two more subtle effects, often overlooked. One is that “I can’t do something with a lower quality than I’m used to. I keep using the musician’s example: once you’ve reached a given level, you can’t play worse than a given average. It’s not even conscious”.
The second is that you need to feel what quality is. One might temporarily compromise in a given context, but juniors in the team would perceive THAT as the “maximum reachable quality”. This lie costs a lot.

I agree with ziobrando when he says “I can’t do something with a lower quality than I’m used to”. But what if you find yourself in a team where internal quality is less important than other factors, for both the team and management? (I have been part of such teams.)

First of all, you will find yourself struggling. You are used to one level and find yourself forced to work at another. You are constantly trying to convince others that higher levels of quality are not performance hits.

And from that position you quickly find yourself trying to prove that higher internal quality means a performance gain. But how do you prove that? You cannot say: let’s develop something at two levels of quality, and after a year or so, let’s evaluate and choose the best option.

To conclude: I fully agree with Fowler, the post and the comments, but how do you convince others of this?

I think that the most difficult thing here – I think @nikdeclerq was close to the point – is defining WHEN the break-even point is. We all agree that a shortcut could make us go faster in the short term, and that quality wins in the long term.

Where is the border between the two areas? Two years from project start? Two weeks? Two minutes?

If I count THIS project only, I’d say weeks. But if I consider my company or development team, on a longer time-scale, I’d suspect that this short-term boost isn’t worth the price, due to negative effects.

Of course, Quality Zealots can also be counterproductive. If I have 3 days to show a prototype, I probably won’t spend day one on Maven… Basically it’s risk-driven planning: if the prototype goes well, Maven is day 4. But these 3 days of code are too important to be crap anyway.

@PierG: My experience says that timing is always the factor in the equation. You often get into arguments like “Why can’t you do things right the first time?” or “Isn’t all that refactoring a waste of time?”. As @ZioBrando says: “Where is the border between the two areas?”. And that is what I’m looking for. I have my personal opinion on where that border lies, based on feeling and experience, but so has everybody else. And we can all be wrong. What I would like to see more of is an objective opinion based on facts. Where does the border lie? Are there any guidelines, good practices, …? The fact that people like Martin Fowler and Bob Martin seem to have a difference of opinion on the whole “software craftsmanship” debate doesn’t really give me a lot of hope of finding these objective opinions.