Wednesday, December 31, 2008

Anticipatory design, as coined by Ron Jeffries, describes the extra effort a software developer might expend to anticipate new or changing requirements. It’s interesting to attempt to model the cost of doing such work, particularly in light of XP or Agile methodologies. The goal is to come up with a rigorous basis for deciding when to do “extra” design up front.

A key variable in Jeffries’ model is R, the cost of designing a feature only when it is explicitly needed. Jeffries explores cases where R ranges from 1 (implying no extra cost) to 5 (i.e. doing the design later is 5 times more expensive).

We extend the treatment of R in two directions: scale and variability. We believe that 5 is by no means the upper bound on the cost of late design. We also believe that R comes in at least three variations.

We believe that there are actually three distinct classes of R: R-normal, R-catastrophic, and R-XP.

R-normal is the R described by Jeffries in which doing the design later incurs a small but noticeable cost.

R-catastrophic is the case in which doing the design later incurs a huge cost.

R-XP is a class, surprisingly not described by Jeffries, in which doing the design later actually decreases the cost. This occurs when it turns out that a needed component has already been created by someone else (within the group, or as a third-party purchase/download).

The question becomes how to quantify the relative frequency of the three classes of R and to see how that would change the overall cost equation. We began by creating a model in which R-normal is set to 5 (as per Jeffries), R-XP is set to -5 (i.e. a saving), and R-catastrophic is set to a huge value such as 1000 (that is the definition of catastrophe, after all).

Given these conditions, the cost of catastrophic changes almost always overwhelms the effect of the R-normal and R-XP components. Only when the frequency of catastrophic change becomes extremely low, or the definition of ‘catastrophe’ is made extremely mild, do the other components have a real influence on the overall cost.
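The model reduces to a weighted sum, which a minimal sketch makes concrete. The R values below follow the post; the frequencies are purely illustrative assumptions, not figures from the original paper:

```python
# Toy expected-cost model for delayed design.
# R values follow the post: R-normal = 5, R-XP = -5, R-catastrophic = 1000.
# The probabilities passed in are illustrative assumptions.

def expected_cost(p_normal, p_xp, p_catastrophic,
                  r_normal=5, r_xp=-5, r_catastrophic=1000):
    """Expected relative cost of deferring design, as a weighted sum."""
    return (p_normal * r_normal
            + p_xp * r_xp
            + p_catastrophic * r_catastrophic)

# Even a 1% chance of catastrophe dominates the other terms:
print(expected_cost(0.80, 0.19, 0.01))    # 4.0 - 0.95 + 10.0 = 13.05
# Only when catastrophe is vanishingly rare do the other terms matter:
print(expected_cost(0.80, 0.199, 0.001))  # 4.0 - 0.995 + 1.0 = 4.005
```

Playing with the probabilities shows how sensitive the total is to the catastrophic term: it takes a rate well below one in a thousand before R-normal and R-XP drive the result.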

The question then becomes: does the practice of delayed design, or XP as a whole, itself influence the occurrence rate of catastrophic design failure? It would seem it must, in order to explain the successes being reported for projects using XP techniques. It also seems to be implied by Martin Fowler’s seminal work on refactoring. When discussing design, Fowler advocates doing just enough design so that there is confidence that the resulting code will be refactorable. In other words, do enough design to leave the code resilient to change and thus resistant to catastrophe.

The conclusion to draw from this analysis goes somewhat against the conventional wisdom. Our analysis indicates that the benefit of XP is not so much that it is faster but rather that it is safer! XP is often considered a high-risk strategy, and yet our analysis shows the exact opposite. Done correctly, XP leads to just enough design to avoid the catastrophes that delay or destroy so many projects.

(portions taken from an unpublished paper written in collaboration with Morgan Creighton, 2001)

The biggest problem with vibrate mode is that people don't use it. The second biggest problem, for me, is setting it and then forgetting to unset it, which can result in missed calls.

Currently setting your phone to vibrate mode is binary, you do it or you don't. Wouldn't it make more sense to tell your phone to go into vibrate mode for the next hour?

Imagine a configurable go-to-vibrate-mode-for-n-hours option. Set it for 1, 2, or 3 hours, and from then on whenever you put the phone into vibrate mode it would return to normal mode when the time expired.

Fewer missed calls...and maybe fewer lost phones. Ever misplace your phone and then try to call it so you can find it by listening for the ring? This doesn't work if your phone is in vibrate mode (which Murphy tells us will always be the mode it's in when you lose it!). With auto-vibrate-reset this problem goes away.

Ok, I'll get back to discussing software, cognition and philosophy now. :-)

Hopefully the title of this post sounds odd. Asking if Ruby is the new Java would be more expected, but I want to look at a different aspect of language evolution than just how Ruby compares to Java.

Way back when I started playing with computers we had lots of languages to choose from: Pascal, Fortran, Lisp, Prolog (my personal favorite for some tasks). At that time it was standard and expected that an engineer would be fluent in multiple languages.

One of the striking aspects of C is how terse the language can be. Complex statements such as z = (++x > y) ? m : n; can be written with what Tufte might call a high data-ink ratio (http://www.infovis-wiki.net/index.php?title=Tufte%2C_Edward). To put this in the modern vernacular: C has low ceremony; most everything you write matters, and few extra tokens are needed just to keep the compiler happy.

As we moved to a Java-based world where many (most?) engineers seemed to think they only needed to know one language, we found ourselves with a language with a bit more ceremony. To create a new object I need to say: MyClass myObject = new MyClass(); That's a lot of tokens just to get a new object.

One of the stated advantages of some of the newer languages such as Ruby is that they are low on ceremony. Sometimes writing fewer tokens makes the code simpler, but sometimes it simply makes it terse. I wonder if somehow we're coming full circle, back to the aesthetic of C.
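For comparison, here is the same ternary logic and the same object creation in another low-ceremony language (Python here, purely as an illustration; the names x, y, m, n, and MyClass are just the ones from the examples above):

```python
# The C one-liner  z = (++x > y) ? m : n;  where the pre-increment
# must be spelled out as a separate statement:
x, y, m, n = 1, 1, "m", "n"
x += 1
z = m if x > y else n
print(z)  # -> m

# Java's "MyClass myObject = new MyClass();" needs only this:
class MyClass:
    pass

my_object = MyClass()
```

Note that neither version wins on every axis: the C form packs the increment and the test into one expression, which is exactly the kind of density that reads as either "high data-ink ratio" or "too terse" depending on your taste.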

One approach that some people have proposed to eliminate intrusive cell phone ringing is to jam or shield an area from the cell phone signal. It's actually surprisingly easy, but possibly illegal, to block the signal. A group in Japan has experimented with adding metal oxides to paint; a wall painted in this way appears to block cell phone reception.

I think this is the wrong approach, and not just for the standard "what about the doctor who must always be reachable" argument.

A better approach would be to standardise on a signal / text message / whatever that could be broadcast to a cell phone to request that the phone turn itself into vibrate mode. Movie theatres, restaurants, etc. could simply broadcast this signal, and compliant phones would become well behaved. Considering the venom we have for people who interrupt us with inappropriate cell phone ringing, non-compliant phones might soon find themselves at a competitive disadvantage.

Alternatively, location-aware phones like the iPhone could take a proactive approach and notice their own location. Imagine being able to configure your iPhone so that whenever you were "near" a location on your Be-Polite-List it automatically went into silent mode.

Though it's hard to remember life without a cell phone, it's also hard to imagine not having to complain about bad cell phone etiquette. As fun as it is to complain about people forgetting to set their phones to vibrate in the movies, at dinner, etc., I propose that the solution to this problem is to fix the phone, not the user. The cell phone is a small computer, and we should offload standard tasks to that computer; setting your phone to vibrate in certain well known conditions is a standard task.

My iPhone, for example, has a built-in Calendar, and is well set up to know about my Google or Microsoft calendar as well. I think my phone should notice when I'm in a calendar event and automatically set itself to vibrate mode. How hard is that? Imagine your phone being smart enough to know when you're in a situation where a silent ring is more appropriate than an audible ring.

This approach would have the additional benefit of automating the return to audible ringing. I suspect I'm not the only one who sets their phone to vibrate before going into the movies and then forgets to turn the ringer back on at the end of the movie. I've missed a lot of calls that way. Noticing the end of an event and taking appropriate action, turning the ringer back on, is another task we should hand off to the phone.
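How hard is that? The core decision really is tiny. A minimal sketch, assuming events arrive as (start, end) pairs from some calendar API (the event format and the mode names here are my own invention, not any real phone's interface):

```python
from datetime import datetime

# Calendar-aware ringer sketch. Events are hypothetical (start, end)
# datetime pairs; a real phone would pull these from its calendar API.

def in_event(now, events):
    """Return True if 'now' falls inside any (start, end) event."""
    return any(start <= now < end for start, end in events)

def desired_ringer_mode(now, events):
    """Vibrate during events, audible ring otherwise."""
    return "vibrate" if in_event(now, events) else "normal"

movie = (datetime(2008, 12, 31, 19, 0), datetime(2008, 12, 31, 21, 30))
print(desired_ringer_mode(datetime(2008, 12, 31, 20, 0), [movie]))  # vibrate
print(desired_ringer_mode(datetime(2008, 12, 31, 22, 0), [movie]))  # normal
```

Because the mode is recomputed from the current time rather than toggled, the return to audible ringing at the end of the event falls out for free, which is exactly the "forgot to unset it" fix argued for above.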