Sunday, September 30, 2007

Andy's main point seemed to be that technical debt (like real debt) is a drag on the project. By taking shortcuts today (in documentation, or coding, or skipping tests - or cutting and pasting when we should be generalising) we create the appearance of progress, but slow down future progress.

Eventually, even small maintenance tasks take a tremendous amount of effort, not because the work is complex, but simply to pay off the interest. The customer "just" asks for one change, but it has to be implemented in fifteen places, and the code is hard to understand and wacky, and no documentation exists.

Andy even presents a "five step plan" to pay off the debt, much like any "real" debt reduction plan.

But I'm disappointed in just one way: I didn't see any talk of the root cause of technical debt. There must be one; pressure to meet deadlines (and the corner-cutting that tends to entail) seems to be universal.

Until you address the root cause, I suspect that any "technical debt reduction" plan will fail.

To understand how "technical compromises" happen, let's take one example.

I (Matt) am under pressure to hit a deadline.

If I cut a corner and do a "bad job", I will still hit the deadline - a positive, immediate, certain result. If there is any negative result, it is uncertain and off in the future, if it comes at all.

If I do it "right", I will miss the deadline. I could get a lecture from my boss, the customer, or both. I may be written up for not being a "team player" on my annual eval. That is negative, certain ... and immediate. Behavioral psychology tells us that positive rewards are more powerful than negative ones, and that immediate rewards are more powerful than delayed ones. Finally, it just makes sense that certain rewards are more powerful than uncertain ones.

Which may explain my (slight) weight problem. A Mountain Dew will TASTE really good *right now*. It's positive, certain, and immediate. Not only that, one single drink won't make me fat. Yet the combination of those choices, over time, will certainly make me fat - and keep me fat, to boot.

In software, the bad choice is the clever hack, done without improving the design. The extra if () { } block thrown around the code. Cruft. Files hanging around that should have been deleted last month. One or two of these won't kill you - in fact, they may even be a short-term gain. But a dozen? A hundred? A thousand?

It doesn't take a genius to figure out why shortcuts happen in code: the incentives are misaligned, just like with weight gain. Unless you do something to change those incentives, exhorting the team to "Do The Right Thing" will be just more cheerleading, like "Zero Defects" was in the 1990s and "TQM" in the 1980s.

Thursday, September 27, 2007

Now, folks, don't get me wrong. I am a big fan of Extreme Programming, but I do not think that XP is the "one true way" or the "one right way" to do software development. I do think that it pushed back against the "traditional" school of software development in the right direction, at the right time.

For its time, XP was the contrarian consultant, arriving when, where, and how it was badly needed.

If you want the elevator speech to explain Extreme Programming, one place to start is the XP In One Page Poster(*).

The poster tries to cram a lot of ideas into a little space. If I had to pick the one single thing that offers the most value - the section I would tell any commercial or business software team to take a long, hard look at - it would be the "design philosophy" section at the bottom left.

Seriously. When I saw this today, I printed it out, ran over that section with a yellow highlighter, wrote "READ THIS FIRST!" with an arrow at the top, and put it on my wall o' attention grabbing stuff. (Mostly cartoons with an occasional big, visible chart ...)

This feeds off the idea in my earlier blog post that if your framework makes it hard to test, people won't use it.

Two of the common elements I see in test frameworks are:

(A) Lots of XML
(B) Tough-to-type Syntax

I've explained (A) before, but let me talk for a moment about (B).

Often, people have a web application. To test it, they may use a framework that drives the browser. The tester then writes test 'code' in one of a number of possible languages, often Java (WebDriver), Ruby (Watir), or Visual Basic (QuickTest Pro or WinRunner).

The problem comes when these frameworks see everything as an object. Yes, that's a problem, because instead of writing a simple command, the tester has to spell out the full DOM mapping for every element. When that happens, one of four things tends to follow:

(1) Someone tries to use the framework, and goes through so much pain that they give up.

(2) Someone puts an extreme amount of effort into learning the framework and is actually successful. We'll call him Joe. After that, Joe becomes the in-house expert on the tool. If Joe is assigned to test the software, the software will be tested with that framework. Otherwise, it will probably be tested manually.

(3) You write a custom piece of code that sits on top of the framework that eliminates all the fiddly-bit DOM mappings, so you can just call:

browser.getthetag('qty'); browser.setthetag('foo', 5);

(4) (Hopefully) You find a better framework.

I have quite a few colleagues who have had success with option three. Ruby/Watir, however, looks a lot like #4 - a non-goofy framework that allows you to express complex test cases relatively easily, in a language that looks a lot more like English than the other options, so it is nearly self-documenting.
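To make option three concrete, here is a sketch in Ruby. Everything in it is made up for illustration: RawDriver stands in for whatever browser object a real framework exposes (in real use it might be a Watir::Browser), and getthetag / setthetag are the hypothetical wrapper calls from above, not a real API.

```ruby
# Simulates the verbose, object-heavy driver a test framework exposes.
# In real life this would be the framework's browser object.
class RawDriver
  def initialize
    @dom = {} # pretend DOM: element name => attribute hash
  end

  # The fiddly-bit call sites normally have to go through this.
  def document_element(name)
    @dom[name] ||= { 'value' => nil }
  end
end

# The thin custom layer: two friendly calls that hide the DOM mapping,
# so test code never spells it out.
class Browser
  def initialize(driver = RawDriver.new)
    @driver = driver
  end

  def getthetag(name)
    @driver.document_element(name)['value']
  end

  def setthetag(name, value)
    @driver.document_element(name)['value'] = value
  end
end

browser = Browser.new
browser.setthetag('qty', 5)
puts browser.getthetag('qty')   # prints 5
```

The point is not the ten lines of wrapper; it's that every test after this reads like intent ("set qty to 5") instead of DOM plumbing.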

My prediction is that the big, slow, dumb tools will continue to dominate the mediocre "no one ever got fired for buying IBM" space, and smart people will continue to patch them to make them work.

Wednesday, September 19, 2007

"Writing applications that work in all different browsers is a friggin’ nightmare. There is simply no alternative but to test exhaustively on Firefox, IE6, IE7, Safari, and Opera, and guess what? I don’t have time to test on Opera. Sucks to be Opera. Startup web browsers don’t stand a chance.

What’s going to happen? Well, you can try begging Microsoft and Firefox to be more compatible. Good luck with that. You can follow the p-code/Java model and build a little sandbox on top of the underlying system. But sandboxes are penalty boxes; they’re slow and they suck, which is why Java Applets are dead, dead, dead ..."

Tuesday, September 18, 2007

Shrini Kulkarni has been after me to define my terms; after all, I keep writing about "Test Frameworks" but I've never defined the term.

Wikipedia defines a framework as "a basic conceptual structure used to solve a complex issue." It also warns that "this very broad definition has allowed the term to be used as a buzzword."

When I use the term, I mean any support, infrastructure, tool or "scaffolding" designed to make testing easier, and (often) automated.

For example: Let's say you have a simple program that converts distance from miles to kilometers. The application is a Windows application. Every time we make a change, we have a bunch of tests we want to run, yet we can only enter one value at a time, manually. Bummer.

Yet we could think of the software as two systems - the GUI, which "just" passes the data from the keyboard into a low-level function, and "just" prints the answer, and the conversion formula function.

Suppose we could somehow separate these two and get at the formula function programmatically. Imagine that the formula function is tucked into a code library, which can be shared among many different programs.

Then we could write a "tester" program, which takes an input file full of input values and expected results. The "tester" program simply calls the library, compares the actual result to the expected, and prints out a success or failure message.
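Here is what that "tester" program might look like, sketched in Ruby. The function name miles_to_km stands in for the real library call, and the inline table stands in for the input file of values and expected results:

```ruby
# The "library" function: the one piece of business logic under test.
def miles_to_km(miles)
  miles * 1.609344
end

# The "tester" program: each row holds an input value and an expected
# result. We call the library, compare actual to expected, and report.
test_data = [
  [1.0,   1.609344],
  [5.0,   8.04672],
  [100.0, 160.9344],
]

failures = 0
test_data.each do |input, expected|
  actual = miles_to_km(input)
  if (actual - expected).abs < 0.0001   # tolerance for float comparison
    puts "ok     #{input} miles -> #{actual} km"
  else
    puts "NOT ok #{input} miles -> got #{actual}, expected #{expected}"
    failures += 1
  end
end
puts failures.zero? ? 'All tests passed' : "#{failures} test(s) failed"
```

Now "running the tests" after every change is one command instead of a manual typing session, and adding a test is adding a row of data.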

This is basically how I test a lot of my code, using a program called Test::More. You could call Test::More and its friends (Test::Harness, and so on) a "framework."

We can dig into the details and test at the level of developer understanding, or bubble up and test only things that have meaning to the customer. One popular framework for these higher-level, business-logic tests is FIT/FitNesse.

Of course, there is more to the application than just the business logic. The GUI could accept the wrong characters (like letters), format the decimals incorrectly, fail to report errors, handle resizing badly, or do a half dozen other things wrong. Even with one "framework", we still have the problem of testing the GUI (not to be forgotten) and testing the two pieces put back together again - "Acceptance" testing, or, perhaps, "System" testing.
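Some of those GUI-level concerns can be pulled out into plain functions and tested the same way as the formula. A sketch in Ruby, where parse_distance and format_km are hypothetical helpers, not code from any real converter:

```ruby
# Reject input that is not a plain number (e.g. letters) - one of the
# checks the GUI would otherwise have to get right on its own.
def parse_distance(text)
  raise ArgumentError, "not a number: #{text}" unless text =~ /\A\d+(\.\d+)?\z/
  text.to_f
end

# Format the result to two decimal places, consistently.
def format_km(km)
  format('%.2f km', km)
end

puts format_km(parse_distance('26.2') * 1.609344)

begin
  parse_distance('abc')
rescue ArgumentError => e
  puts "rejected bad input: #{e.message}"
end
```

The resizing and error-reporting behaviors still need the "outer shell" tools below, but every check you move into a plain function is one less thing the slow, expensive GUI tests have to cover.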

This "outer shell" testing can also be slow, painful, and expensive, so there are dozens of free/open or commercial testing frameworks that allow you to 'drive' the user interface of windows or a web browser. With the big commercial tools, people often find that they are writing the same repetitive code, over and over again, so they write libraries on top of the tool like Sifl.

Years ago (back when he was at Microsoft), Harry Robinson once told me that MS typically had two types of testers: Manual Testers, and Developers who like to write frameworks. The problem was that the Developers would write frameworks that no one was interested in using. His assertion (and mine as well) is that people who straddle the middle - who like to test, and like to write software to help them test faster - can be much more effective than people entrenched on either side.

Thus, you don't set out to write a framework - instead, you write a little code to help make testing easier, then you write it again, then you generalize. Over time, slowly, the framework emerges, like Gold refined through fire.

But that's just me talking. What do you think? (And, Shrini - did I answer your questions?)

- I'm speaking at the Grand Rapids Java User's Group on Tuesday the 18th of September. They literally sent out a call for speakers last week, and I responded that I had a couple of lightning talks that I thought might string together well for ten or fifteen minutes. So they asked me for an abstract for a full talk ... it should be interesting.

- I'm coaching a team of 4 and 5 year old children from Allegan AYSO Soccer. GO ALLEGAN TEAM FIVE SILLY FROGS!