In this paper the authors present the results of 128 role plays conducted with software practitioners. These role plays analysed the influence of checklists on risk perception and decision-making. The authors also controlled for the participant's role, i.e., whether he/she acted as an insider (project manager) or an outsider (consultant). They found that the role had no effect on the risks identified.

Keil et al. created a risk checklist based on the software risk model first conceptualised by Wallace et al. This model distinguishes six different risk dimensions – (1) Team, (2) Organisational environment, (3) Requirements, (4) Planning and control, (5) User, and (6) Complexity. In their role plays the authors found that checklists have a significant influence on the number of risks identified. However, the number of risks identified does not influence decision-making. Decision-making is instead influenced by whether or not the participants identified certain key risks. Thus risk checklists can influence the salience of risks, i.e., whether they are perceived or not, but they do not influence decision-making.

It’s never too late to start reading a classic, and this is one for sure: the original paper proposing the waterfall software development model. The model is now extremely commonplace – but, and this is what struck me as odd, the original shows a huge number of feedback loops which are typically omitted.

The steps of the original waterfall are as follows:

System Requirements

Software Requirements

Preliminary Program Design which includes the preliminary software review

Analysis

Program Design which includes several critical software reviews

Coding

Testing which includes the final software review

Operations

Among the interesting loops in this model are the big feedback loops from testing back into program design and from program design back into software requirements. By no means is this model what we commonly assume a waterfall process to be – there are no frozen requirements, no clear-cut steps without any looking back. This is much more RUP or Agile or whatever you want to call it than the waterfall model I have in my head.

This is a gem. Craig Brown from the 'Better Projects' blog (here) created a presentation on Jurgen Appelo's Definitive List of Project Management Methodologies. Jurgen first published his list on his blog over at noop.nl and has now moved it into a Google Knol here. Craig turned it into a great tongue-in-cheek presentation. I very much enjoyed it, so here it is:

Wallace et al. conducted a survey among 507 software project managers worldwide. They tested a vast set of risks and grouped the projects into three clusters: high-, medium-, and low-risk projects.

Wallace et al. present two interesting findings. Firstly, overall project risk is directly linked to project performance – the higher the risk, the lower the performance! Secondly, they found that even low-risk projects carry a high complexity risk.

Kavis, Mike: 10 Mistakes that Cause SOA to Fail; in: CIO Magazine, 1 October 2008.
I usually don’t care much about these industry journals. But since they arrive for free in my mail every other week, I couldn’t help but notice this article, which gives a brief overview of two SOA cases – United’s new ticketing system and Synovus’ financial banking system replacement.

However, the ten mistakes cited are all too familiar:

Failing to explain SOA’s business value – put BPM first, then the IT implementation

In this article Miranda & Abran argue "that project contingencies should be based on the amount it will take to recover from the underestimation, and not on the amount that would have been required had the project been adequately planned from the beginning, and that these funds should be administered at the portfolio level."

Thus they propose delay funds instead of contingencies. The size of this fund depends on the magnitude of the recovery needed (u) and the time of the recovery (t). Both u and t are described using a PERT-like model with a triangular probability distribution, based on best-case, most-likely, and worst-case estimates.
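To make the triangular model concrete, here is a minimal Monte Carlo sketch using Python's standard library. All figures are hypothetical; the paper itself works with the distributions analytically rather than by sampling.

```python
import random

random.seed(42)  # reproducible sketch

def triangular_samples(best, likely, worst, n=100_000):
    """Draw Monte Carlo samples from a triangular distribution
    defined by best / most-likely / worst case estimates."""
    return [random.triangular(best, worst, likely) for _ in range(n)]

# Hypothetical figures: recovery magnitude u (person-months)
# and recovery time t (months).
u = triangular_samples(best=2.0, likely=5.0, worst=12.0)
t = triangular_samples(best=1.0, likely=3.0, worst=6.0)

# Analytic mean of a triangular distribution is (a + m + b) / 3,
# so mean_u should come out near (2 + 5 + 12) / 3.
mean_u = sum(u) / len(u)
mean_t = sum(t) / len(t)
```

Sampling like this also yields percentiles of the fund size, which is what one would actually budget against at the portfolio level.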

The authors argue that three effects typically occur in software development projects that lead to an underestimation of contingencies: (1) MAIMS behaviour, (2) the use of contingencies, (3) delay.
MAIMS stands for 'money allocated is money spent' – which means that cost overruns usually cannot be offset by cost under-runs somewhere else in the project. The second effect is that contingency is mostly used to add resources to the project in order to keep the schedule. Thus contingencies are not used to correct underestimations of the project, i.e., most of the time the plan remains unchanged until all hope is lost. The third effect is that delay is an important cost driver, but delay is acknowledged as late as possible. This is mostly due to wishful thinking and inaction inertia on the project management side.

Tom DeMarco proposed a simple square root formula to express that staff added to a late project makes it even later. In this paper Miranda & Abran break this idea down into several categories to better estimate these effects.

In their model the project runs through three phases after a delay has occurred:

Time between the actual occurrence of the delay and the decision to act upon it

Additional resources are ramped up

Additional resources are fully productive

During these phases, the total contingency needed can be broken down into five categories:

Budgeted effort, which would occur anyway, delay or not = FTE * recovery time as originally planned

Overtime effort, which is the overtime worked by the original staff after the delay is decided upon
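The two categories above boil down to simple arithmetic. A back-of-the-envelope sketch with entirely hypothetical figures (the remaining categories would add further terms in the same fashion):

```python
# All figures are made up for illustration.
fte = 8                      # full-time staff originally on the project
recovery_time = 3.0          # months of recovery, as originally planned
overtime_fraction = 0.2      # share of extra hours worked by original staff
months_after_decision = 2.0  # time worked after the delay is decided upon

budgeted_effort = fte * recovery_time                          # person-months
overtime_effort = fte * overtime_fraction * months_after_decision

print(budgeted_effort, overtime_effort)  # 24.0 3.2
```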

McBride covers three aspects of tools and techniques for the management of software development: firstly the monitoring, secondly the control, and thirdly the coordination mechanisms used in software development projects.

The author distinguishes four categories of monitoring tools: automatic, formal, ad hoc, and informal. The most commonly used tools are schedule tracking, team meetings, status reports, management reviews, drill-downs, and conversations with the team and the customers.

The control mechanisms are categorised by their organisational form of control as either output, behaviour, input, or clan control. The most often used control mechanisms are Budget/schedule/functionality control, formal processes, project plan, team co-location, and informal communities.

Lastly, the coordination mechanisms are grouped by the way they try to coordinate the teams: standards, plans, and formal and informal coordination mechanisms. The most common are specifications, schedules, test plans, team meetings, ad hoc meetings, co-location, and personal conversations.

Stewart outlines a framework of management tasks spanning the whole life cycle of a project. The life cycle consists of three phases – selection (called "SelectIT"), implementation (called "ImplementIT"), and close-out (called "EvaluateIT").

The first phase’s main goal is to single out the projects worth doing. To that end the project manager evaluates costs & benefits (tangible monetary factors) and value & risks (intangible factors). In order to evaluate these, the project manager needs to define a probability function for each factor of the project. Then these distribution functions are aggregated. Stewart also suggests using the Analytic Hierarchy Process (AHP) and the Vertex method [which I am not familiar with, and neither is Wikipedia nor the general internet] in this step. Afterwards the rankings for each project are calculated and the projects are ranked accordingly.
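For the AHP step, a minimal sketch of the common row-geometric-mean approximation to the priority vector; the pairwise comparison values below are made up, and the paper may well use the exact eigenvector method instead.

```python
import math

def ahp_priorities(pairwise):
    """Approximate an AHP priority vector via the row-geometric-mean
    method: geometric mean of each row, then normalise to sum to 1."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparisons of three candidate projects:
# A is rated 3x as preferable as B and 5x as preferable as C; B is 2x C.
matrix = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 0.5, 1.0],
]
weights = ahp_priorities(matrix)  # highest weight goes to project A
```

The resulting weights can then be multiplied into the aggregated cost/benefit and value/risk scores to produce the project ranking.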

The second phase is merely a controlling view on the IT project implementation. According to Stewart you should conduct SWOT analyses, come up with an IT diffusion strategy, design the operational strategy, draw up action plans on how to implement IT, and finally create a monitoring plan.

The third stage ("EvaluateIT") advocates the use of an IT Balanced Scorecard with five different perspectives – (1) Operations, (2) Benefits, (3) User, (4) Strategic competitiveness, and (5) Technology/System. In order to establish the Balanced Scorecard, measures for each category need to be defined first, then weighted, then applied and measured. The next step is to develop a utility function; finally, overall IT performance can be monitored and improvements can be tracked.
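A simple linear utility function over the five perspectives might look like this; the scores and weights are entirely hypothetical, and Stewart's actual utility function may be more elaborate.

```python
# Hypothetical scorecard: (score 0-100, weight) per perspective;
# weights sum to 1, and overall utility is the weighted sum.
perspectives = {
    "Operations":                (80, 0.25),
    "Benefits":                  (65, 0.25),
    "User":                      (70, 0.20),
    "Strategic competitiveness": (55, 0.15),
    "Technology/System":         (90, 0.15),
}

overall_utility = sum(score * weight for score, weight in perspectives.values())
print(overall_utility)  # ≈ 72.0
```

Tracking this single number over time is what makes the monitoring and improvement step in the last stage workable.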

Well, it is a bit aged, but given the projects I have seen it is far from outdated. So what is his answer? It’s Peopleware, not software: people have to function in their roles, and sometimes they don’t.

DeMarco lists as root causes: scheduling errors ("The schedule is crap, when even high performers have no slack"), missing accountability by management ("I don’t ask for an estimate, I ask for a promise!"), missing prioritization ("All these recommendations for improving ourselves are great. But what if only one thing succeeds? What would it be?"), and the general tendency to ‘fuck up’ the end-game (i.e., value capturing after implementation).

And of course DeMarco’s specialty – software development metrics. He adds the nice insight that measuring something without a clear idea of how to improve on that metric is a waste of time and money. It might be worthwhile to sample business-case points etc. for a while, but in the long run only defect counts should be institutionalized.