Hi. I'm Jon Jagger.
I help software teams improve their effectiveness.
I built cyber-dojo, the place teams practice programming.
I'm based in the UK.
I've worked in 22 countries.
If you don't like my work, I won't invoice you.

This is the barchart from The Average Time To Green Game I
blogged about earlier. It repays careful study.

The first green bar on the left is the highest. At this point the students hadn't realized what the aim of the game was - to control the average time to green. Everyone was busy beavering away on what they incorrectly thought they were being judged on - the trivial exercise of stripping backslash-newline characters from a buffer. A couple of laptops would probably have carried on into the night without some gentle prompting. Remember, the height of each bar represents the average time to green. There were four very high individual times and the standard deviation was a hefty 174 seconds.

The second green bar is marginally lower, indicating the average time to green had reduced slightly. Remember that before each iteration everyone swapped pairs (and moved to a new laptop). The pair swapping was the primary mechanism by which strategies for controlling the average time to green were passed on. At this point only one swap had taken place so not much strategy-passing had occurred. There were three very high individual times and an even heftier standard deviation of 196.

The third green bar shows a bigger drop. Since there had now been two pair swaps, the effect of passing on strategies was more marked. Only now was the group really starting to realize what the true aim of the game was. They were starting to experience first-hand how effective simple strategies like frequent compilation can be. There were two high individual times and the standard deviation was now down to 142.

By the fourth iteration three pair swaps had taken place and all the groups now genuinely understood the aim of the game (which was not to produce the world's greatest unsplice function!). Every single group got to green within 8 seconds. It was noticeable that on this iteration, as soon as the bell was rung, each group chose to get to and stay at green at their first possible opportunity. The standard deviation was only 22.

The fifth iteration bar is perhaps the most interesting of all. By this time everyone understood the aim of the game and everyone was able to get their laptop to green within a very short time of the bell ringing. However, with some gentle prodding we suggested that they didn't have to stay at green at their first opportunity. Consequently, they were starting to decide whether to stay at green or continue for a little longer. They were starting to think about all the pairs not just their pair. This was the first real point when they were in control of the development process and not vice-versa. The standard deviation was 71.

For the sixth iteration we suggested the group aim for an average time to green of 60 seconds (they were at 130 seconds for the fifth iteration). On the seventh iteration the average was 85 seconds with a standard deviation of 74. Pretty impressive.

At the end of the game people were really starting to act with more team awareness. On the first iteration none of the green pairs offered to help a red pair - they just sat watching. In contrast by the end (with a little prompting) some green pairs were offering help to a red pair.

Something else was noticeable too: at the first iteration everyone looked quite tense, but by the end they all realized it really was a game and they all looked a lot more relaxed. Their manager Lars commented on how marked this difference was.

In a review the next day one of the developers commented "suddenly baby steps were being encouraged and large steps were being frowned upon." One developer recounted instinctively firing up the debugger, only to be persuaded by their partner that there were other more effective strategies.

When I'm buying something at a shop and I pay with cash I rarely have the right money so I usually get some loose change back. When this happens I look for a charity tin to drop the small coins into. I'd like to claim this is an act of generosity and altruism but I can't. For one it's not very generous since it's only small change, and for another it's not very altruistic since I do it mostly to reduce the hassle in my life. I go to great lengths to try to keep my life hassle-free and I don't like lots of loose change in my pocket.

Of course, giving away my small loose change means I'm unlikely to ever have the correct change, which creates a pleasant reinforcing feedback loop - broken only when there is no charity tin :-(

Doing this on my own doesn't make a lot of difference, but if everyone did it... Are you willing to give away your small change each time you shop? Would you sign up to a "campaign for small change"?

When you read "there is always a problem and it's always a people problem" it's easy to get the wrong idea. Technical problems and people problems are almost always deeply intertwined.

For example, suppose you're part of a team that has let warnings accumulate one on top of the other over many months until they now number 10,000+. Aside from the obvious technical problem of having 10,000+ warnings, the team has a much deeper people problem.

Ask yourself the question - why does the team continue to live with the pain of 10,000+ warnings? Their answer is "that's how it's always been". Sure the number creeps ever upward, but what does that matter when they're up to 10,000+? The team have had so many warnings for so long that they no longer even think about them! That's abstraction!

And given that the team don't even see 10,000+ warnings as a problem, will they be motivated to get rid of them? Unlikely. They've lived with 10,000+ warnings for so long they've become comfortable with the discomfort they cause! Of course they claim they're not in discomfort. Abstraction again!

They are caught in a vicious circle of blindness and numbness. To solve this people problem the first step is to somehow get the team to see and feel the pain 10,000+ warnings are causing. That will be difficult. Then you'll have to get individuals to change their behaviour. That will be difficult too.

An old couple go to the doctor's. The doctor says they are fine for their age but he's noticed they are starting to become more forgetful, and suggests they write things down so they don't forget them. Back at home the husband asks the wife if she'd like anything to eat. "A bit of ice cream would be nice" she replies. "Anything with it?" he asks. "Some chocolate sauce. And you'd better write it down or you'll forget." The husband says he doesn't need to write it down. "And a sprinkling of nuts too" adds the wife. "And you'd better write it down." Again the husband is sure he doesn't need to write it down. "Oh and a cup of tea" adds the wife again. "You'd better write it down." Still he doesn't write it down. Off goes the husband to prepare the food. Fifteen minutes later he walks into the lounge with a lovely English breakfast on a plate: fried eggs, bacon, sausage, mushrooms, tomato. The wife looks at the plate and says "where's the toast?"

Several years ago I was bitten by a bug where my C# unit tests weren't being run because I had accidentally left the public specifier off my test class definition (and therefore NUnit could not see the class). It was easy to fix - I simply added the public keyword. Wondering how many other times I had accidentally done the same thing I decided to write another NUnit test whose job was to use reflection on its own assembly looking for test classes that weren't public.
I was pondering this episode again recently while doing some C training. In C (and C++) you don't have reflection. It occurred to me that you might be able to use code coverage to avoid this potential problem. If the coverage could separate out the test code from the tested code, you could use less-than-100% coverage of the test code to help indicate one or more accidentally uncalled test functions. This highlights that test code and tested code are not the same. Test code typically has lots of detail complexity but very little dynamic complexity.

De Luca's Law says that managing software developers is 20% technology and 80% psychology. Dale Carnegie believed it was 15/85. On my bookshelf are half a dozen Agile books. All are about Agile Development, Agile Management, etc. None are focused on Agile Developers - on how to improve your agility as an individual. Conway's Law says "An organisation is constrained to produce designs which are copies of the communication structures of the organisation." Can this law be extended to consider agility?

A design cannot be more agile than the developers who designed it.

Intuitively it seems a reasonable idea. If XP aims to improve technical practices, and Scrum to improve management practices then perhaps it's time for a movement aimed at improving personal practices?

Here's a law I invented which I use sometimes when training or consulting.

Never believe any assertion containing the words never or always.

It can be applied to itself of course. And to many other laws. If you believe it then you don't and if you don't then you do. It essentially says you should think for yourself.
Bill Bailey is a UK comedian. On UK TV he once recounted being shown into a big cat enclosure at a zoo in Chile. The keeper, whose English was very broken, said "Never face the cats". Bill said OK and they went in. After they had been inside for a while the keeper turned to Bill and said, "No no, sorry, always face the cats". Bill joked that the keeper added "we lose a lot that way".

Something special happened last week. I was in Bangalore doing some training at the request of my good friend Olve Maudal of Tandberg (now part of Cisco). A day on Test Driven Development was scheduled for Monday and I fell asleep Saturday night thinking about how to really get across the idea and nature of TDD to a group of developers. I woke up at 2am Sunday morning with The Average Time To Green Game pre-formed and named, ready in my head!

Game setup

Each computer is given a label (eg Alligators, Bears, Cheetahs, etc).

Minimum of two people per computer.

Each computer must have a TDD framework installed (better still, use CyberDojo).

Game play

Every 10-15 minutes ring the Average Time To Green Bell (we found a small brass bell in a local shop).

When you ring the bell you also start a timer and project it so everyone can see it. This timer starts at zero and increments second by second.

The aim at each computer is then to get to green (all tests passing).

When a computer has got to green two things have to be recorded for that computer: the iteration number and the time it took to get to green since the bell rang (simply look at the projected timer).

When your computer is green you have to cover your laptop (we provided sheets of green paper) and wait till all the computers have got to green.

The data for all computers is recorded in a spreadsheet.

When all computers are at green everyone briefly looks at the graph made from the spreadsheet.

Then you have to swap partners and computers and a new iteration starts.

Game Goal

The goal of the game, which was clearly and explicitly printed on the instruction sheets, was simply to control the average time to green across the whole group. The group naturally didn't understand that at first - they focused instead on the problem. The problem was completely trivial - Olve and I picked stripping backslash-newline pairs off a character buffer, as in C/C++ preprocessor logical lines. It was utterly fascinating to watch how things progressed, and we feel it worked really well (and more to the point I think the participants did too), both in the TDD sense and in the team-building sense.

Game photos

Graph of the average time to green over several iterations.

Helping to solve one computer that was holding the team up.

Total number of tests passing split by computer.

Relaxing at the end.

Game retrospective

The group were all well above average ability so we could have perhaps run fewer iterations, or used a less trivial problem.

We could have used staged goals. First measure the average time to green, then lower it, then control it.

Once the group felt they had control of the average time to green we could have let them choose their own goals.

We could have encouraged participants to write down their choice of strategies and their experience of pair programming.