Archive for the ‘Opinion’ Category

There’s a lot of talk at the moment about the infamous Section 3.3.1. Rather than wasting more time on that, I’d like to share a pleasant afternoon made possible by my own phone: an Android-based G1.

I purchased this phone in the UK, but have since returned to Australia. This did require me to SIM unlock the phone, but my original provider (T-Mobile) allowed this for a minimal fee. This is better than being stuck, but not quite as good as having an unlocked phone to begin with. A neutral result on the freedom front.

Where having an Android phone really shines, though, is that I can install whatever I like on it. I’m not at the mercy of the manufacturer, or my service provider. I’m taking advantage of this right now by using a tethering application to give my laptop mobile internet access. I love being able to roam, settle somewhere comfy and work for a while. This is something certain carriers are resisting, purely to protect their own interests. But because I use a free platform, I’m in control.

This kind of freedom works particularly well for me, because I work from home, with all the flexibility that entails. It’s especially good considering that home is within walking distance of this:

Not a bad office for the afternoon :).

What’s my point in all of this? Not to gloat — well, not entirely! The real point is that freedom enables all sorts of fun and creativity. If you value that, don’t waste time complaining about the alternatives. Just get yourself an open phone, and start supporting it with your imagination.

I’ve been planning for some time to package up a couple of my earlier series of blog posts into a more digestible form. Finally, I’ve taken the time to combine, edit and publish the Testing Is … series as a single article:

Arguments in favour of automated software testing are too often presented in a patronising fashion. Developers that don’t write their own tests are labelled as “unprofessional” and the code they write as “worthless”. These arguments conflate two very different groups of developers: those who don’t care enough to test, and those that care deeply but are unconvinced of the benefits of testing. The former group aren’t worth convincing, and the latter can’t be expected to be swayed by an argument that insults them in this fashion.

Following up on my previous post about CITCON Paris, I thought I’d post a few points about each of the other sessions I attended.

Mock Objects

I went along to this session as a chance to hear about mock objects from the perspective of someone involved in their development, Steve Freeman. If you’ve read my Four Simple Rules for Mocking, you’ll know I’m not too keen on setting expectations, or even on verification. I mainly use mocking libraries for stubbing. Martin Fowler’s article Mocks Aren’t Stubs had made me think that Steve would hold the opposite view:

The classical TDD style is to use real objects if possible and a double if it’s awkward to use the real thing. So a classical TDDer would use a real warehouse and a double for the mail service. The kind of double doesn’t really matter that much.

A mockist TDD practitioner, however, will always use a mock for any object with interesting behavior. In this case for both the warehouse and the mail service.

So my biggest takeaway from this topic was that Steve’s view was more balanced and pragmatic than Fowler’s quote suggests. At a high level he explained well how his approach to design and implementation leads to the use of expectations in his tests. I still have my reservations, but was convinced that I should at least take a look at Steve’s new book (which is free online, so I can try a chapter or two before opting for a dead tree version).

A few more concrete pointers can be found in the session notes. A key one for me is to not mock what you don’t own, but to define your own interfaces for interacting with external systems (and then mock those interfaces).
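A minimal Java sketch of the “don’t mock what you don’t own” idea (all class and method names here are my own invention, not from the session): rather than mocking a third-party mail API directly, define a small interface owned by your project, depend only on that, and stub it in tests.

```java
import java.util.ArrayList;
import java.util.List;

// An interface we own, expressing only what our code needs from the
// external mail system. The production implementation would wrap the
// third-party API; tests can use a simple hand-rolled stub instead.
interface MailService {
    void send(String recipient, String message);
}

// Production code depends on our interface, never on the external API.
class OrderNotifier {
    private final MailService mail;

    OrderNotifier(MailService mail) {
        this.mail = mail;
    }

    void orderShipped(String customerEmail) {
        mail.send(customerEmail, "Your order has shipped");
    }
}

// A trivial test stub: records recipients so a test can inspect them.
class RecordingMailService implements MailService {
    final List<String> recipients = new ArrayList<String>();

    public void send(String recipient, String message) {
        recipients.add(recipient);
    }
}

public class MockWhatYouOwn {
    public static void main(String[] args) {
        RecordingMailService mail = new RecordingMailService();
        new OrderNotifier(mail).orderShipped("jo@example.com");
        System.out.println(mail.recipients); // prints [jo@example.com]
    }
}
```

The external system’s API can then change (or be swapped entirely) without rippling through every test in the project.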

The Future of CI Servers

I wasn’t too keen on this topic, but since it is my business, I felt compelled. I actually proposed a similar topic at my first CITCON back in Sydney and found it a disappointing session then, so my expectations were low. After some less interesting probing of features already on the market, the conversation did wander onto the more interesting challenge of scaling development teams.

The agile movement recognises the two main challenges (and opportunities) in software development are people and change. So it was interesting to hear this recast as wanting to return to our “hacker roots” — where we could code away in a room without the challenges of communication, integration and so on. Ideas such as using information radiators to bring a “small team” feel to large and/or distributed teams were mentioned. A less tangible thought was some kind of frequent but subtle feedback of potential integration issues. Most of the time you could code away happily, but in the background your tools would be constantly keeping an eye out for potential problems. What I like about this is the subtlety angle: given the benefits it’s easy to think that more feedback is always better, without thinking of the cost (e.g. interruption of flow).

Acceptance Testing

This year it seemed like every other session involved acceptance testing somehow. Not terribly surprising I guess since it is a very challenging area both technically and culturally. As I missed most of these sessions, they are probably better captured by other posts:

One idea I would call attention to is growing a custom, targeted solution for your project. I believe it was Steve Freeman who drew attention to an example in the Eclipse MyFoundation Portal project. If you drill down you can see use cases represented in a custom swim lane layout.

Water Cooler Discussions

Of course a great aspect of the conference is the random discussions you fall into with other attendees. One particular discussion (with JtF) has given me a much-needed kick up the backside. We were talking about the problems with trying to use acceptance tests to make up for a lack of unit testing. This is a tempting approach on projects that don’t have a testable design and infrastructure in place — it’s just easier to start throwing tests on top of your external UI.

Even though I knew all the drawbacks of this approach, I had to confess that this is essentially what has happened with the JavaScript code in Pulse. We started adding AJAX to the Pulse UI in bits and pieces without putting the infrastructure in place to test this code in isolation. Fast forward to today and we have a considerable amount of JavaScript code which is primarily tested via Selenium. So we’re now going to get serious about unit testing this code, which will simultaneously improve our coverage and reduce our build times.

Bertrand Meyer (of Eiffel fame) posted recently on The one sure way to advance software engineering. Meyer’s thesis is that careful examination of past software failures would let us learn from those mistakes. This is true enough, but then Meyer goes on to suggest that we:

… pass a law that requires extensive professional analysis of any large software failure. … The law would have to define what constitutes a “large” failure; for example it could be any failure that may be software-related and has resulted in loss either of human life or of property beyond a certain threshold, say $50 million.

Leaving the issue of making this law aside, the fact that this would only apply to such “large” failures necessarily marginalises the utility of the analysis. The truth is that such projects are a small minority of overall software development. Lessons learned from projects where such a large failure is a possibility – particularly where safety is involved – may not apply to the vast majority of software development. This doesn’t make the suggestion useless, but it makes it unlikely to have a significant impact on software engineering as a whole.

Meyer’s post also triggered a resurgence of the age-old debate of whether software development is worthy of the “engineering” label. With bugs so prevalent in mainstream software, people (mostly developers themselves) question the professionalism of our industry. This reminds me of the EclipseCon 2007 keynote by Robert Lefkowitz, which draws attention to the EULA used by Microsoft. In particular, Lefkowitz asks why Microsoft1 should be allowed to disclaim all warranty and liability for damages incurred by the use of their software. Why doesn’t the law protect the market from buggy software?

The answer is simple: we aren’t all Larry Ellison, and cost does matter to the rest of us. The market might wish for higher-quality software, but to make this law would lead directly to a huge increase in the cost of software development. Consider the canonical example of software that is made to the highest standard: the code that controls space shuttles. The linked article quotes a figure of $35 million per year to develop and maintain 420,000 lines of code2. Consider that Microsoft Windows is about two orders of magnitude larger than this, and that the complexity of a code base grows faster than linearly with respect to the number of lines of code. The cost of bringing Windows up to shuttle-quality is unbearable, even for an 800-pound gorilla.

Does the fact that we need to trade off quality versus cost mean that development should not be considered as engineering? On the contrary, I believe that making sensible trade-offs is one of the very things that defines engineering. Engineering operates in the real world of scarce resources and budget constraints, and great engineers know how to strike the right balance. They can also do more with less, by caring about their work and always seeking efficiency. I’ve met and worked with many developers that exhibit these traits, so I believe that there is hope for us yet.

1. I presume as a proxy for the whole industry.
2. Lines of code may be a weak metric, but I don’t think it detracts from the main point.

It had to happen sooner or later, but when a Stack Overflow user posted tips to gain reputation fast, the proverbial hit the fan. Why? Go ahead and read the 6 Simple Tips, which describe how to change your contribution behaviour to optimise for reputation. Notice how none of these tips are concerned with improving the quality of your contributions. In fact, some of them actively encourage sacrificing quality in favour of gaining points:

1. Be the First to Answer. Even at the cost of quality.

In Stackoverflow answers are arranged first by vote number and submission/edit time. Being the first answer that people see is a huge advantage.

This is a perfect illustration of metric abuse in action. By using the reputation “magic number” as an incentive, Stack Overflow encourages users to optimise for reputation rather than quality. Amusingly, Joel himself posted on this very topic some years ago:

“Thank you for calling Amazon.com, may I help you?” Then — Click! You’re cut off. That’s annoying. You just waited 10 minutes to get through to a human and you mysteriously got disconnected right away.

Or is it mysterious? According to Mike Daisey, Amazon rated their customer service representatives based on the number of calls taken per hour. The best way to get your performance rating up was to hang up on customers, thus increasing the number of calls you can take every hour.

Perhaps Joel should have a word with Jeff. I suspect that the response will be to tune the reputation system to plug these holes. If that is indeed the response, I’d like to ask Jeff: Have You Met Your Dog, Patches?

Do not use loops for list operations. Learning from functional languages, looping isn’t the best way to work on collections. Suppose we want to filter a list of persons to those who can drink beer. The loop version looks like:
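Stephan’s original code sample hasn’t survived the import here, but the loop version he describes would look something like the following (the class and field names are my guesses, not his):

```java
import java.util.ArrayList;
import java.util.List;

class Person {
    final String name;
    final int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

public class LoopFilter {
    // The explicit-loop style: the mechanical details of iteration and
    // accumulation are repeated at every call site like this one.
    static List<Person> beerDrinkers(List<Person> people) {
        List<Person> result = new ArrayList<Person>();
        for (Person p : people) {
            if (p.age >= 18) {
                result.add(p);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<Person>();
        people.add(new Person("Alice", 30));
        people.add(new Person("Bob", 12));
        System.out.println(beerDrinkers(people).size()); // prints 1
    }
}
```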

No loops

Java is not very well suited to a functional style, so I think the first example is more readable than the second one that uses Predicates. I’m guessing most Java programmers would agree, even those that are comfortable with comprehensions and closures and especially those that aren’t.

I guess this means I’m not “most Java programmers”. Although I find the verbosity of the anonymous Predicate in Stephan’s code lamentable, aside from that I think the functional style is far superior. And I don’t believe it should be seen as in any way at odds with “normal” Java style. There’s no magic here: it’s just a straightforward anonymous class and filtering is a very simple concept to understand. Conceptually it is no harder than the explicit loop: in fact it is easier as you know the higher purpose as soon as you see “filter”.
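For comparison, here is a sketch of the functional style under discussion, using a hand-rolled Predicate and filter since the exact library in Stephan’s post isn’t reproduced here (the names are my assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal predicate abstraction: just an object wrapping a boolean test.
interface Predicate<T> {
    boolean apply(T input);
}

class Person {
    final int age;

    Person(int age) {
        this.age = age;
    }
}

public class FunctionalFilter {
    // Written once, in a library; the iteration details never need to
    // appear at a call site again.
    static <T> List<T> filter(List<T> items, Predicate<T> predicate) {
        List<T> result = new ArrayList<T>();
        for (T item : items) {
            if (predicate.apply(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<Person>();
        people.add(new Person(30));
        people.add(new Person(12));

        // The anonymous class is verbose, but the intent ("filter") is
        // visible immediately at the call site.
        List<Person> drinkers = filter(people, new Predicate<Person>() {
            public boolean apply(Person p) {
                return p.age >= 18;
            }
        });
        System.out.println(drinkers.size()); // prints 1
    }
}
```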

The higher level of abstraction is the key to its superiority. Every non-trivial Java project will include many instances of searching, filtering, folding and so on of collections. Repeating this most basic of logic time after time fills the code with mechanical details that hide its true purpose. Why not get those details right once (in a library) and then never have to see them again?

In fact, once you embrace this style, things get even better. Not only can you reuse basic operations like filtering, you can also:

Compose these operations to form even higher-level manipulations.

Build up a reusable set of functors (Predicates, Transforms, etc) instead of using anonymous classes.

The latter point is important, as it allows you to remove an even more damaging sort of repetition — that of your project’s domain-specific logic. Reduce Stephan’s example to:
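A sketch of what that reduction might look like (the predicate name and the filter helper are my own assumptions, not Stephan’s code): once the domain rule lives alongside the domain class, every call site shrinks to a single declarative line.

```java
import java.util.ArrayList;
import java.util.List;

interface Predicate<T> {
    boolean apply(T input);
}

class Person {
    final int age;

    Person(int age) {
        this.age = age;
    }

    // The domain rule is written once, given a name, and reused everywhere.
    static final Predicate<Person> CAN_DRINK_BEER = new Predicate<Person>() {
        public boolean apply(Person p) {
            return p.age >= 18;
        }
    };
}

public class NamedPredicates {
    static <T> List<T> filter(List<T> items, Predicate<T> predicate) {
        List<T> result = new ArrayList<T>();
        for (T item : items) {
            if (predicate.apply(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<Person>();
        people.add(new Person(30));
        people.add(new Person(12));

        // The one-liner: all mechanical detail is gone from the call site.
        List<Person> drinkers = filter(people, Person.CAN_DRINK_BEER);
        System.out.println(drinkers.size()); // prints 1
    }
}
```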

Really, I think it is a shame to see Java programmers hesitant to use this style. The ideas may come from the functional world, but there is nothing but basic Java code at work here. The abstractions are both simple and powerful, which to me is the very definition of elegance.

I know what you’re thinking: yet another blog about the current App Store controversy. Perhaps you’re wondering which angle I’ll take: disenchanted iPhone developer, or one-eyed fanboy? Well, for a change, neither. I’d actually like to start with a story…

Some years ago, when my wife and I were buying our first apartment, I found myself making small talk with the present owner. Following the usual script, I inquired how the chap made his living — and discovered that he was a futurist. I took this to mean he was some sort of Dr Who-like figure, navigating his way through time while fighting a variety of evil (yet comic) adversaries. Perhaps sensing my confusion, he went on to explain that this involved working for government think-tanks, speculating on the future and advising on policy.

This started me on a rant about politicians, in particular the shallowness of policy and the tendency to optimise for media sound bites and short election cycles. His response is where it gets interesting. He made the point that although what I said was true, it was wrong to attribute all the blame to the politicians. After all, the reason politicians act this way is because that is what gets them votes: both directly and indirectly via media exposure. So both the voters and media are driving a lot of this nonsense by rewarding shallow policies with votes and air time. His conclusion was that the cycle would never be broken by the politicians themselves — change would have to start from the bottom up.

You can probably see where this is going. Just as I had railed against the politicians, developers are now complaining about Apple’s locked-down App Store. The system is opaque, unfair and at times downright stupid. Blatantly anti-competitive policies are thinly veiled as protection of the user experience. But as wrong as this may feel, should we really expect Apple to change anything? Frankly, they own the platform and are entitled to set the policies they have. And so far, consumers and developers have been voting with their dollars and applications in droves. Even if consumers aren’t fully informed, developers can’t claim to be unaware of these policies, yet still they flock to the platform. With this kind of real, positive feedback, why would Apple feel compelled to rethink their policy?

If you believe the App Store policies are wrong, then cast your vote. Don’t buy an iPhone, and (more importantly) don’t develop iPhone applications. Realistically, in the near term consumers will keep buying the iPhone — it’s still a great product. But if enough developers vote with their applications, over time other platforms will surpass it. Apple’s marketing campaign doesn’t quite have the same ring to it when there isn’t an “app for that”. And if Apple see this shift they will hopefully rethink their policies, and the iPhone will thrive once again.

Recent experience has shown a wee problem with the way investment banks incentivise their workforce. The so-called bonus culture encourages excessive risk-taking optimised for short-term gain. This suits the employee, who pockets a succession of massive bonuses before the bottom eventually falls out (and spectacularly). When the inevitable happens, said employee can take a year or two off, collecting rare South American reptiles (or whatever else takes their fancy), before returning at the beginning of the next bubble.

What has this got to do with software? Allow me to stretch to not one, but three analogies…

Lack of Feedback

The shifting of pain caused by the credit crunch is self-evident: around the world governments are propping up failing banks, and workers from unrelated industries are losing their jobs. The root problem is allowing people to create pain without them feeling the effects themselves. This lack of feedback allows people to optimise selfishly, consciously or unconsciously. Analogous situations arise in software projects when developers are isolated from their downstream users. For example:

A setup which places responsibility for quality on a dedicated QA team, rather than across the whole project, allows developers to disclaim responsibility for quality issues. This in turn allows them to churn out high volumes of poor quality code without paying a toll for it. That is, until the day when the bubble bursts, and they realise they are encumbered with an unmaintainable pile of spaghetti which defies improvement.

Every layer placed between developers and their end users muffles critical feedback. Opportunities to make simple fixes that could vastly improve the user experience are missed, as the message never gets through all the layers. The users are stuck with the pain, and the developers remain blissfully ignorant.

Metric Abuse

Metrics are a wonderful and useful tool. Indeed, they can be an important type of the feedback called for above. But they also have a dark side: they can easily be abused to create flawed incentive systems.

Probably the worst abuse of metrics is trying to measure “productivity”. In investment banking, taking profit as productivity leads to long term issues due to unsustainable risk. This is analogous to measuring developer productivity by the quantity of code (or stories, or whatever) produced. This doesn’t take into account the quality of the resultant software, not just in terms of code quality, but other intangibles like the user experience. You might ship a lot of features, but eventually your software will become so buggy and complicated that people will stop using it. “No problem”, I hear you say, “just use quality metrics too!”. But these exhibit the same issues: any indirect measurement is flawed, and ultimately leads to people optimising the metric rather than their productivity.

The solution: don’t pretend productivity is a number. Life just isn’t that simple: you need to pay attention to the bigger picture.

Technical Debt

Apologies for mixing my financial metaphors, but my final analogy revolves around Technical Debt. Although we might recognise and acknowledge that short term wins can cost more in the long run, sometimes practicality dictates taking a short cut. In the real world we do need to make compromises, whether we like it or not. This gets out of hand, though, when we don’t acknowledge the debt we are building up. You can try, like the banks, to push your debt to the side – hiding it away in obscurity. But the debt is still there, and it’s not getting any easier to pay off.

The solution: if you take a short cut, acknowledge the debt you are creating. After the real-world crisis has passed, start paying it back immediately.

Despite the natural desire to find the “perfect” software development methodology, the simple fact is there is no one-size-fits-all solution. This led me to wonder: are so-called “agile” methodologies just a passing trend? Knowing that there is no silver bullet, can agile really be considered different to trends that have come before?

In fact, I think there is something different about the agile way. The problem of finding the perfect method has been solved in the same way we solve everything in programming: by adding another level of indirection.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

I read this as a recognition of the fact that the two toughest parts of software development are:

Dealing with people (communication, collaboration); and

Coping with change (rather than pretending you can avoid it).

These challenges have been and always will be the key to successful projects. By defining agile in terms of these fundamental challenges, rather than specific practices, it can adapt to different circumstances. In fact, thanks to this abstraction, it is not really accurate to think of agile as a methodology — it is much more about culture than specific tools or practices.

The problem with this indirection, however, is it doesn’t give people an easy answer. If you want to try agile, where do you start? There is no secret here: the answer is simply to try things out and see what works. Just remember:

If something works, that’s great, but don’t become complacent. No practice is perfect, always aim for improvement.

If something doesn’t work, think about why. Can you tweak it? Has it run its course? There are no “best” practices: if you give something a proper chance and it doesn’t fit your team, cut it loose.

Put another way, the “secret” to agile is that it is constant refactoring — of the process itself.

I always groan inwardly when I see another job ad calling for a “Rock Star Programmer”. Obviously the idea is that the hiring company is after the top talent. In that sense alone it’s kind of stating the obvious. But is this a good way to attract real talent?

I’m not the only person to think asking for “Rock Stars” is a bad idea. However, several of the other posts I’ve read on the issue, while making some good points, dwell a lot on their own definition of a “Rock Star”. They ascribe several traits to such programmers, none of which directly follow from the vague term. That’s not to say that there isn’t a real point underlying these definitions, though.

To me, the big question is: Do you want to hire someone who would describe themselves as a “Rock Star”? I’ve been fortunate enough to work with a bunch of bright people, and none of them got that way by having overactive egos. On the contrary, even if they knew they had talent, they also knew their limitations. I’m certain, in fact, that this trait was key in allowing them to continue learning and improving.

Naturally job ads always ask for talent — but specifically asking for a “Rock Star” seems to go beyond that. Be careful what you ask for!