Getting to On-Time Software Projects

I am a big advocate of getting the project management schedule right for a project. While this seems obvious, what is often less obvious is that many of the problems seen in a project can be traced back to an inadequate schedule estimate.

In the article A Successful Manager But Never A Successful Project? I talk about how some successful senior managers may never have experienced a successful project, and how understanding this may help us be more effective with these managers. Capers Jones replied to the article on LinkedIn with his insights into the areas that caused the most problems in unsuccessful projects. He relates his experience as an expert witness in breach-of-contract cases:

1. Poor estimates before the projects start

2. Poor quality control and a lack of pre-test inspections and static analysis

3. Poor change control (requirements creep at more than 1% per month)

4. Poor or even fraudulent status tracking that conceals problems instead of dealing with them.

What struck me about this list, while it matched my own experience, was that it might also be a bit misleading: items 2 through 4 can often be traced back to the first item, poor estimates. This does not detract from the list; it only adds insight into how we can avoid these same problems.

I replied to Capers’ comments with the following details from my own practice and experience (edited and expanded from my original comments on LinkedIn):

Poor Estimates Before The Project Starts

Poor schedule estimates are, in my experience, the number one cause of unsuccessful projects. Once we fixed this and got good schedule estimates, organizations that had never in their history delivered a project when promised found themselves delivering on time, and always with a noticeable boost in quality.

Poor Quality Control

This, in my experience, is often a red herring hiding the real root cause: an inadequate (i.e., way too short) schedule. Once we had a realistic schedule, software programmers (for example) could actually do a good job the first time. The bump in quality we’ve always seen is a testament to this. Quality control activities (that supposedly failed) are often a patch to try to fix problems whose root cause is the original poor schedule estimate. The quality control activities were doomed to fail as much as the schedule was. The best quality control was a good schedule.

Poor Change Control

Again, this is often another red herring hiding the fundamental root cause of a late project: too short a schedule. In one humorous case, we delivered a huge mass-market commercial electronics product on time, the first in recent corporate memory, in large part because we had a good schedule estimate.

A team analyzed the requirements as part of the post-project activities. Their conclusion? Massive requirements creep! They withdrew the report when asked: since we delivered on time, did that mean massive requirements creep was not necessarily always a problem? There was apparently no massive problem; the “creep” was the same as on other projects, and once we had an appropriate schedule, that level of requirements change was manageable and even a competitive advantage.

Changes in requirements are often only considered problems when we are looking for a reason to explain why we missed our schedule. As much as it seems intuitive that changing what we plan to do must impact the schedule, many projects have regular and consistent requirements changes yet still achieve their schedules. In a sense, if we plan for some level of requirements change (and most projects should probably plan on this), then it can be included in the schedule. If we consistently do similar kinds of projects (e.g., consumer electronics, software-intensive projects, etc.), we should have a history that shows the normal amount of requirements change, and we should plan for that normal amount in the schedule.
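The idea of folding a normal amount of requirements change into the schedule can be sketched in a few lines. This is a hypothetical illustration, assuming we track monthly requirements growth on past projects; the numbers below are invented, not data from the article.

```python
# Hypothetical sketch: sizing a requirements-change allowance from history.
# The creep rates and nominal duration below are illustrative assumptions.
from statistics import median

# Observed requirements growth per month on similar past projects (fraction).
historical_monthly_creep = [0.008, 0.012, 0.010, 0.009, 0.011]

nominal_months = 12  # schedule before any change allowance
creep_rate = median(historical_monthly_creep)

# Simple linear allowance: extra work scales with expected total growth.
expected_growth = creep_rate * nominal_months
planned_months = nominal_months * (1 + expected_growth)

print(f"median creep: {creep_rate:.1%}/month")
print(f"planned schedule: {planned_months:.1f} months")
```

A median is used rather than a mean so one unusually chaotic project does not inflate the allowance; with a planned buffer like this, the "normal" creep simply consumes budget that was set aside for it.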

Poor Or Even Fraudulent Status Tracking

While I’d like to say the same here, that this is the result of a very bad schedule, fraudulent status tracking is clearly never acceptable. At best I will say that poor schedule selection encourages an environment where this kind of behavior can grow, as management looks the other way hoping they are hearing the truth. In fact, it is brutally honest status tracking that should provide the first objective indication that our schedule (or other project parameters) is way off the mark.

In one major project, one whose schedule was arbitrarily pulled in by two months from our original performance-based estimate, I watched in utter fascination as the schedule milestones slipped out. It looked like a tsunami: all the milestones slid out by a week with each week’s senior management status update. The humorous part was that the end date never moved while all this was happening. The status reporting was accurate; we just refused to extrapolate the trend and continued to show the same project completion date. The project was late by two months, with the added damage that we delayed our most profitable version of the product, which would normally have launched first, until after we shipped our economy version. With the lower-priced product on the market first, the higher-priced and more profitable product never sold well.
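The refusal to extrapolate is easy to avoid mechanically. Here is a hypothetical sketch of reading the trend out of weekly status reports; the dates are invented for illustration, not the project's actual numbers.

```python
# Hypothetical sketch: extrapolating a milestone-slip trend instead of
# ignoring it. The weekly report data below is illustrative only.
from datetime import date

# (status-report date, reported completion date) at each weekly update.
# Each week the milestone slid out by roughly a week: the "tsunami" pattern.
reports = [
    (date(2023, 1, 6),  date(2023, 6, 2)),
    (date(2023, 1, 13), date(2023, 6, 9)),
    (date(2023, 1, 20), date(2023, 6, 16)),
    (date(2023, 1, 27), date(2023, 6, 23)),
]

elapsed = (reports[-1][0] - reports[0][0]).days  # calendar days passed
slip = (reports[-1][1] - reports[0][1]).days     # days the estimate moved

slip_per_day = slip / elapsed
print(f"slip rate: {slip_per_day:.2f} days of slip per elapsed day")
if slip_per_day >= 0.5:
    print("trend says the promised end date is not credible; re-plan")
```

A slip rate near 1.0, as here, means the finish line is receding as fast as the team advances, which is exactly the signal the weekly updates in the story contained but nobody acted on.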

On later projects, where we were able to use realistic schedules, we would scare the heck out of the account team because we would tell the customer exactly where we were and what the problems were. Previously, because we always delivered late, the account team would constantly tell the customer that everything was going fine and that we would deliver on time, and the customer would never believe us. Because our account team was so used to telling the customer only positive news, the change to giving the customer objective data was a shock to them. The account team was always amazed when the customer thanked us for finally giving them accurate information so that they, in turn, could adjust their internal business cases and schedules as needed.

Get The Schedule Right!

Getting the schedule right is a core competency of any project manager. While I stress using past performance (i.e., go look at how long things took on similar previous projects), we might not always have the needed information. This is where we talk with our peers in the industry and look into the work of people like Capers Jones and Dan Galorath. Their experience, research, data, and tools provide insight into how long similar projects in similar industries have taken. Since our objective with any schedule is to get it in the right ballpark, and then work like crazy to achieve it, this kind of data can help minimize the chances of making the classic underestimate that is often the root cause of many of the other problems that haunt the unsuccessful project.
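A past-performance ballpark can be as simple as an analogy rate drawn from completed projects. This sketch is hypothetical: the size metric (feature count) and the history are invented stand-ins for whatever in-house or industry data (such as the datasets Jones and Galorath publish) is actually available.

```python
# Hypothetical sketch: a ballpark schedule estimate from past performance.
# The project history and size metric below are illustrative assumptions.
from statistics import median

# (size in features, actual duration in months) for similar past projects.
history = [(40, 8.0), (55, 11.5), (30, 6.5), (70, 14.0)]

# Months per feature observed historically; the median damps outliers.
rates = [months / size for size, months in history]
rate = median(rates)

new_size = 50  # estimated size of the new project, in features
estimate = new_size * rate
print(f"ballpark: {estimate:.1f} months")
```

The point is not precision but the ballpark: an estimate anchored to what similar projects actually took is far harder to argue down to an arbitrary, and ultimately fatal, shorter number.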
