
Month: May 2013

Years ago, in university, I spent a couple of weeks backpacking through Sichuan and Yunnan provinces. At the time, I was on a Chinese student stipend, and hence, often didn’t have very much money to spend – even if I could spend it, as many of those villages hadn’t quite gotten the memo that foreigners could actually use cash instead of those weird foreign exchange certificate things they used to foist on us.

I developed this game that I would play every time I walked into one of these villages. Basically, I would go to the first villager I saw and show him this stone that I kept in my backpack. This stone, I said, had magical powers. You could throw it in boiling water and make the most delicious soup imaginable.

Invariably, we’d do just that, and throw a pot of water on the burner, and then start boiling the stone. After a while, I’d make a show of tasting the ‘soup’ and making a face. “Hmmm,” I’d say, “it needs chicken.” And the guy would go out and butcher a chicken and throw it in the soup.

By this time, his neighbors usually had come by, and as the soup boiled and I kept tasting it, I would urge them to bring a selection of vegetables from their garden, maybe some celery or cabbage or onions…some salt….pepper…and all of that wonderful Sichuan spicy goodness that’s part of everything out there.

At the end of the day, we had a wonderful soup….all because of the magic stone.

Ok, right…..I never tried the magic stone trick in those days. Instead, I do recall scamming a couple of free meals posing as a “foreign investor” in a southern Gobi salt mine and maybe hamming up my ethnic background to share a meal with my “minority brother” in some small town in Gansu. I admit it….I stole the magic stone story from a kid’s book we got in a garage sale a couple of years ago.

But make no mistake, for the last couple of years, I’ve been plying the magic stone trick to great personal and professional benefit. I can’t tell you how many times I walk into an organization and have been asked to implement a tool, only to respond, “Well, the tool is pretty good, but we’ll need a project intake process to make it work,” or “if only we had a schedule update methodology, we could get to where you want to go as an organization.”

At the end of the day, we have a wonderful soup….and it only took a little old tool like Microsoft Project to provide the catalyst. That’s not to say that tools do not provide any benefit, because they do. However, without the process and yes, governance, in place, the tools do not do very much. Without process, they’re effectively the same value as a boiled stone.

For those of you who attended the Gartner PPM conference last week, one of the presentations discussed the role of tools in PMOs of varying maturity. The guidance was to focus on people when the maturity is low. Then focus on the process. Only after the PMO is ready should you focus on the tool.

I would add a nuance to that by pointing out that no project management organization is a single homogeneous monolith. There are always pockets of advanced project managers interspersed with less mature project managers. Hence, even in “low maturity” PMOs, there are always individuals who stand to benefit from the use of an enterprise project management tool.

How do we get to the point where we have the right people and the right process? Often, it takes a visible catalyst to drive it. Often it takes the magic stone.

*Note that this metaphor isn’t even all that creative in the IT world. Ian Clayton used it years ago to describe ITIL implementations. Feel free to borrow it to illustrate every other silver bullet solution to the world’s problems: CMDB, Service Management, ePMOs, OPM3, etc.

Bob is managing five projects. Each of those projects supports a different aspect of a specific enterprise application. Several of those projects are pretty advanced in execution and have entered a deployment stage. Other projects have just barely gotten off the runway and are still onboarding their staff. One of Bob’s projects is still in the early proposal stage and hasn’t had any serious funding authorized. Every week, Bob’s management asks him to fill out a single status report for this program. Every week, Bob looks at the “Stage” drop-down option in his status report and has no clue how to fill it out. He eventually closes his eyes, spins the mouse wheel not unlike the arm of a slot machine, and randomly selects whatever stage sounds good at the time.

Anyone who’s ever tried to shoehorn a collection of projects into a single workflow has gone through the realization that programs require their own lifecycle, effectively serving as a macro version of the lifecycle for each project. If I am building a pipeline, I might split the pipeline into multiple sections. Each section would have a planning, permitting and construction stage. Some sections might require horizontal drilling. Other sections may include relatively even terrain. Each of those sections will be managed by a different project manager and may be built in sequence or in parallel.

The question then comes up….what stage is the overall program in? Is it in planning? Construction? Permitting? None of the answers make sense because the question doesn’t make sense.

A Simple Program Lifecycle

The solution is to step back and look at the program lifecycle. In the PMI standards, that’s depicted as a simple three-stage model – a starter workflow, if you will.

Planning – where we define the program goals and how those will be tied to the authorized projects.

Execution – where we identify, authorize and execute the work. Note that this is a catch-all stage that rolls up many of the standard project lifecycle stages.

Closeout – where we identify if our program met its goals and whether or not we achieved the anticipated benefits.
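That three-stage starter workflow can be sketched as a simple state machine. This is a minimal illustration, not anything from the PMI standards themselves; the stage names come from the list above, and the transition rule (strictly forward, no skipping) is my assumption:

```python
from enum import Enum

class ProgramStage(Enum):
    PLANNING = "Planning"
    EXECUTION = "Execution"
    CLOSEOUT = "Closeout"

# Allowed forward moves in the three-stage starter workflow (assumed strictly linear).
TRANSITIONS = {
    ProgramStage.PLANNING: {ProgramStage.EXECUTION},
    ProgramStage.EXECUTION: {ProgramStage.CLOSEOUT},
    ProgramStage.CLOSEOUT: set(),
}

def advance(current: ProgramStage, target: ProgramStage) -> ProgramStage:
    """Move the program to the next stage, rejecting skips and reversals."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Even a toy model like this makes the point of the Bob story concrete: a program has exactly one stage at a time, while its projects each run their own lifecycles underneath.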

A More Nuanced Lifecycle Model

Clearly, this lifecycle wouldn’t work for all programs. What about open-ended programs such as Health, Safety and Environment (HSE) initiatives? An organization will charter an HSE program and staff it with resources looking for ways to continually improve and maintain HSE performance by limiting the number of reportable incidents. These are the sort of programs that never really end; they just keep going on and on, spawning an endless series of projects.

So how does one come up with an appropriate program lifecycle? I don’t have all of the answers, but it seems to me that the correct approach would be to start at the end. What is the end goal of the program? Once that’s identified, we can work backwards from there to create a management model.

In my world, that pretty much means we have the following potential program lifecycle models:

Asset Lifecycles

Continual Improvement Lifecycles

Contracted or Defined Lifecycles

Product Development

An asset lifecycle is pretty much our standard lifecycle in the IT domain where an asset is almost always defined as a service, which is then supported by applications, which are then supported by infrastructure. Projects either add or remove elements from this asset, and operations ensure the asset performs its functions. In this model, the end of the asset dictates the end of the program, hence the projects and priorities spawned by the program will vary based on where the asset is in its lifecycle and investment analysis on when to improve the asset. As near as I can tell, wells and drilling platforms follow roughly the same model.

I see continual improvement lifecycles more in the business domain, where we might have continuing initiatives to increase efficiency, or reduce HSE incidents. These lifecycles will almost never end and often have (or should have) clearly defined metrics of success that were identified during the planning stages.

Contracted or defined lifecycles have definite ends. Our goal is to decommission this site, or to develop a complicated solution to a pressing problem. These programs may also exist as subprograms within an overall program framework.

Finally, we have the product development lifecycle, which takes a product into the market and then sustains it while it’s there. This lifecycle might be considered a subset of the asset management lifecycle insofar as there are a number of similarities, but there are also significant differences related to the relative newness of the product’s underpinning elements and the challenge of supporting a market of external consumers as opposed to a more controlled group of internal consumers.

That’s not intended to be an exhaustive list of program management lifecycles – more an off the cuff analysis of the ones I come into contact with most frequently.

The Context Provides the Purpose

What do we do with this knowledge? The next time you’re forced to answer the question of “which single stage are all five of these projects in?” take a step back. Look at whether or not these projects are all related. If someone’s asking you that question, chances are that they are, in fact, related. Then look at what program goal these projects are supporting. Finally, develop an appropriate lifecycle for the program.

Only after you’ve done that, can you then begin to assess the reporting requirements of your individual projects, and provide that information in a meaningful and relevant manner.

We were talking about this at work the other day and I figured it was worth a post. If you’re not familiar with the retro encabulator, or its predecessor, the turbo encabulator, then this is well worth a watch. If you are familiar and haven’t watched in a while, it’s probably worth a look as well.

“I am struggling to see the overall value, or benefit for the investment, especially considering the effort required to build, manage and maintain the overall system,” he writes. “Having used these systems for many years, I don’t believe they are the right fit or the right answer for any enterprise. Project, program, and portfolio reporting should be handled more simply. Proper governance is the right answer, which doesn’t require sophisticated aggregation of data across the enterprise using EPM, or PPM technologies. In a nut shell, it’s overkill.”

Couldn’t agree more. In fact, as I was just pointing out to someone the other day, one of the main failure modes in an EPM tool implementation is that the initial implementation is overengineered and way too complex. After initial implementation, a tool like that invariably falls apart….only to be replaced a couple of years later by a kinder, gentler tool implementation.

That’s why I actually prefer to work with organizations that have failed to do this a couple of times. They’re the organizations that know what they don’t know. They’re the ones that realize complexity doesn’t work for them, and simplicity is the order of the day.

The Work Taxonomy, or ‘Waxonomy’

The comment also hit right on another theme that I’ve spent some time noodling about as of late, i.e. the work lifecycle. More on that in a bit. First let’s unpack this statement:

Proper governance is the right answer, which doesn’t require sophisticated aggregation of data across the enterprise using EPM, or PPM technologies. In a nut shell, it’s overkill.

From a portfolio level, I will agree with this statement provided we have gone through the effort of creating a standard data taxonomy throughout the portfolio. This is a topic that will undoubtedly be on my soapbox going forward. What I see as a requirement for successful end-to-end work and financial management is a common taxonomy of work, i.e. all work within the organization can be tagged with the following metadata:

Resource performing the work.

Business unit/cost center to which the resource belongs.

The asset that the work is supporting (this is the hard one).

Whether the work is building new stuff or maintaining existing stuff.

Whether the work is planned or unplanned.

Other really important metadata….

As long as all work (and investment if the organization is mature enough for it) is mapped to a common taxonomy, all of the data can be rolled up to the portfolio level so that I can get an overview of what my assets cost – and where that cost is coming from within the organization. That’s the same story whether I’m managing drilling platforms or IT applications.
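As a sketch of how that rollup works, consider tagging every work record with the taxonomy fields above and aggregating to the asset level. The field names, the flat hourly rate, and the record shape are all my own illustrative assumptions, not any particular tool’s schema:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WorkRecord:
    resource: str      # resource performing the work
    cost_center: str   # business unit/cost center the resource belongs to
    asset: str         # the asset the work supports (the hard one)
    work_type: str     # "build" (new stuff) or "maintain" (existing stuff)
    planned: bool      # planned vs. unplanned work
    hours: float

def cost_by_asset(records, hourly_rate=100.0):
    """Roll tagged work up to the portfolio level: total cost per asset,
    broken down by the cost center the work came from."""
    totals = defaultdict(lambda: defaultdict(float))
    for r in records:
        totals[r.asset][r.cost_center] += r.hours * hourly_rate
    return {asset: dict(by_cc) for asset, by_cc in totals.items()}
```

The point is that once every record carries the same tags, the “what do my assets cost, and where is that cost coming from?” question becomes a one-pass aggregation – whether the records live in an EPM tool or a data warehouse.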

Since the likelihood of all of that source data being stored in a single location is effectively nil, I would say that any inquiry into the TCO of an asset would almost certainly require at least a high level aggregation of data across the enterprise – whether it be in an EPM tool or in an overall data warehouse.

Authorization vs. Allocation

But let’s focus downward at the project level, which is what I was discussing in that post – which was originally prompted by the new Microsoft Project Server – TFS demo image. Do we require sophisticated aggregation of data across the enterprise at the project level? The answer here, I would posit, is it depends. It depends on what the organization is (rightly or wrongly) looking for. It depends on the organizational viewpoint on where in the lifecycle of work authorizations happen.

There’s an old sales story that gets bandied about. It’s a bit cliché at this point, but in the interest of being thorough, I will repeat it here:

When you have breakfast (assuming you have no religious or ethical issues with eating pork or other animal products), you have bacon and eggs. The chicken was involved in the making of the breakfast. The pig was committed.

The general gist is that the sales prospect is just thinking about committing, but they need to make that mental leap and commit entirely. Let’s think about that in the context of work. Let’s assume that work can be identified and traced through a lifecycle:

Work begins its life as capacity. That capacity appears on paper the minute a functional manager commits next year’s annual staffing plan to the staffing system.

The work gets closer to realization when a resource is hired into the organization. Now we don’t have planned capacity; we have actual capacity.

Along comes a project business case. The business case may not specify the exact resource, but once it is authorized and becomes a project, that work is now converted to demand. The work begins to take shape and be associated with the organizational work taxonomy.

The project is authorized.

The work is assigned to an individual. The proposed work has become an allocated task. In a traditional project, this might occur when the project is authorized to enter execution. In an agile project, this allocation would occur long after the project was authorized and only after the appropriate user story has been prioritized into the next iteration. This is putting us pretty close to the chicken/pig tipping point.

The individual then performs the task. This is what I would call the “work conversion event,” i.e. the planned work has now been unalterably converted to historical work. The work has become the pig. (This also begs the question of whether or not some sort of conversion rate metric could be applied to identify how much planned work actually becomes historical work – which would be a post for a different time.)

So essentially, we have had that work unit progress from planned capacity to actual executed work. That is the work lifecycle. That is what drives PPM system complexity in organizations intent on managing resources within the system. The farther down the work lifecycle that the organization tries to push before authorizing the project, the more complex the tool becomes.
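The conversion-rate metric mused about above could be sketched against that lifecycle. The stage names follow the steps listed earlier; the dictionary shape and the choice to measure conversion from the demand stage onward are my own hypothetical assumptions:

```python
# Ordered stages of the work lifecycle described above.
STAGES = [
    "planned capacity",   # next year's staffing plan is committed
    "actual capacity",    # the resource is hired
    "demand",             # a business case converts capacity to demand
    "authorized",         # the project is authorized
    "allocated",          # the work is assigned to an individual
    "historical",         # the work conversion event: the task is performed
]

def conversion_rate(work_units):
    """Hypothetical metric: the fraction of work units that entered the
    pipeline as demand and were ultimately converted to historical work."""
    entered = [w for w in work_units
               if STAGES.index(w["reached"]) >= STAGES.index("demand")]
    done = [w for w in entered if w["reached"] == "historical"]
    return len(done) / len(entered) if entered else 0.0
```

A low rate would suggest a lot of planned work dying somewhere between authorization and execution – which, per the argument above, is exactly the planning effort that makes the tool complex.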

The Road to Kanban

Let me try a different tack…..how about these two statements?

In an agile planning methodology, availability drives the work.

In a traditional planning methodology, work drives the availability.

There are at least two things about agile that significantly simplify the planning process:

We allocate dedicated resources.

We don’t define the detailed tasks until they’re ready to be performed.

The inevitable result is that planning an agile project indeed requires a lot less detail. I won’t get into the debate of whether or not it is less rigorous, or where the quality focus is, but at the point when the project is authorized, or even still in the planning stages, there is a lot less detail developed. As a result, we can employ a kanban approach at the project level and pull tasks into the next iteration as availability allows. Availability drives the work.

Now let’s take a look at a traditional waterfall planning process:

We generally assume multithreaded resources working on multiple projects.

We define the detailed tasks long before they’re ready to be performed.

Those statements are both different aspects of the same issue. If we assume that our resources will be working on multiple projects, then we must plan their tasks out to the nth degree to ensure the work can be performed on schedule. This pushes our planning farther down into the work lifecycle. The further down the lifecycle we get from a planning perspective, the more complicated our system ends up being. Work drives the availability.

The inverse statement is also true. If we assume that our resources are only working on a single project, then we only have to plan their commitment to the project at a high level. As a result, systems embracing this planning methodology can be much simpler in structure.

It’s our old forecasting bogeyman, multitasking, that drives the complexity.

Now take those two statements and apply them to the portfolio level of planning. Then, truly, almost any conventional portfolio planning methodology would essentially follow the agile approach:

We’re not planning named resources and therefore effectively are using a dedicated resource model.

We’re only allowing work into the pipeline when it corresponds to available (or forecast) capacity.

Portfolio optimization essentially becomes an exercise in constraint identification and optimization. It’s based on estimates of capacity and demand, and then building a backlog of work to optimize our resource constraints. The complexity lies in the data that supports the constraint analysis, and the fact that this data collection has been pushed far down the work lifecycle.
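A stripped-down sketch of that portfolio-level kanban admission might look like the following. The greedy prioritized fill, the item fields, and the single hours-based capacity constraint are all simplifying assumptions on my part – real constraint analysis would span multiple resource types:

```python
def admit_from_backlog(backlog, capacity_hours):
    """Greedy kanban-style admission: pull prioritized backlog items into the
    pipeline only while forecast capacity remains; skip items that don't fit."""
    admitted, remaining = [], capacity_hours
    for item in sorted(backlog, key=lambda i: i["priority"]):
        if item["estimate"] <= remaining:
            admitted.append(item["name"])
            remaining -= item["estimate"]
    return admitted, remaining
```

The complexity of the real exercise lies not in this loop but in the data feeding it: the capacity and demand estimates, and how far down the work lifecycle they were collected.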

At this point, I’m willing to concede that I probably totally misread Mike K.’s original comment, but it did launch me on some interesting tangents, specifically around kanban at the PPM level and why EPM systems become so complex. So definitely, thanks Mike. It’s the comments that make blogging less like work and more like an exercise in long form improv.

I’ll also point out that, ironically, the TFS integration that launched the original blog post is actually designed to remove complexity from an EPM system and push it into a more appropriate tool.

In Texas and looking for great content and an opportunity to interact with the largest Microsoft PPM partner in North America? Look no further than the PMI Houston blowout where UMT will be the Project Server torchbearer.

We’re planning on bringing our best topics to the table, with a special emphasis on what’s next in industry-recognized project financial governance. Attend any of our presentations to learn how to architect project portfolio management structures with the global leader in the field.

In that last post, I talked briefly about how to perform a functional analysis as a prelude to integrating agile project methodologies into a mixed PPM environment. In this post, I’d like to take a closer look at some of the functions and identify examples of where their processes would have to be tailored to interface with an agile governance methodology.

Schedule and Financial Controls

Clearly, one of the first steps in integrating agile project management is flagging your agile projects and ensuring they are excluded from many of your current reports. If you’re tracking stage gate compliance on your other projects, you’ll need a mechanism to exempt the agile projects from that structure – and then develop another report that meets organizational needs for tracking schedule progress on agile projects. This, in turn, may confuse your report readers, who now need to look at multiple report types to understand the different project types.

One potential solution to this would be to roll your reporting up a level. Instead of creating different executive reports for each project methodology, identify what question you’re really asking and whether or not a common report could encompass both methodologies. Often the solution is to aggregate the reporting up a level.

In IT, aggregating your reporting might typically mean reporting not at the project level, but at the application level. For example, I may have multiple projects supporting a specific application. I could then track an aggregated budget for project work that supports an application at the application level to show cost overruns. When I drill into that, I can go to the project level – which would then be tailored by methodology.
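A minimal sketch of that application-level rollup, assuming each project record simply carries an application tag plus budget and actuals (the field names are illustrative, not any reporting tool’s schema):

```python
def application_overruns(projects):
    """Aggregate project budgets and actuals to the application level and
    return only the applications running over their combined budget."""
    apps = {}
    for p in projects:
        a = apps.setdefault(p["application"], {"budget": 0.0, "actuals": 0.0})
        a["budget"] += p["budget"]
        a["actuals"] += p["actuals"]
    return {app: v for app, v in apps.items() if v["actuals"] > v["budget"]}
```

Because the rollup ignores how each underlying project is managed, the executive report stays methodology-agnostic; the drill-down to the project level is where the agile/waterfall tailoring happens.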

Resource Management

Chances are that if you’re the kind of organization that worries about PPM processes, you’re probably tracking resource allocation. I won’t go too far on this topic as I discussed it last week in The Increasingly Misnamed Project Server…..but suffice it to say that your standard resource management tool may not be suited to managing agile projects.

In this case, rather than shoehorning the agile methodology into a tool not suited for it, consider using something more appropriate, provided that the task and work tracking mechanism feeds into your centralized resource management tool. In this case, when paired with TFS for task management, something like Project Server simply serves as a work aggregation tool, and not a work tracking tool. Taking it one step further, the agile delivery can be managed in Excel, then passed via TFS back into the centralized Project Server resource capacity.

Those are but some of the functions that would need to be considered when integrating agile project management into a PPM portfolio. The list goes on and on.

…And Then…

Finally, let’s finish this post where we started…back at the macro level. We’ve now knit a new methodology into our PPM processes. What’s the inevitable end result of this? Well, a more agile planning process. The inevitable result of PPM maturity and functional maturity is to get more granular in the planning process. What you’ll find is that your portfolio of work will increasingly look like an agile project writ large, with a backlog of work and a predicted burndown. Each quarter, work items will be swapped out and reprioritized.

Why is that? Take a look at this presentation from Mike Cottmeyer for his take. If I interpret his slides correctly, once we get to the point of mature PPM, we can estimate our available resource capacity well enough that we can begin to apply a kanban system to our portfolio, i.e. we only allow work into the pipeline when we know we have enough capacity. The corollary is that we start to break our work into smaller chunks in order to generate the most value possible with the resources available at the time.

As the PPM process becomes more nimble, the operational cost of planning goes down. And as the operational cost of planning goes down, the relative value of the PPM process overall goes up.

Enjoying one of those rare confluences where my blogging, professional and speaking lives all converge at the same moment into the same topic. As it looks like I’ll be presenting this at the upcoming Houston PMI blowout, I figured I’d try to capture it on virtual paper as part of my preparation.

The general question is how to integrate agile project management into a mixed portfolio of agile and non-agile projects. This really ties into the discussion I posted a couple of weeks ago on EPM Bingo, or how to perform a structured analysis of your project portfolio management system.

Start from the Top

The basic gist is to back away from the daily processes, and look at the overall project lifecycle. Regardless of execution methodology, most project lifecycles follow the same standard process, i.e. registering the idea, building the business case, authorizing the project, doing stuff, reporting on stuff, etc. The nuance when we start looking at different management methodologies is to determine where the SDLC is determined, and then assessing the downstream impact of that decision.

The macro view is indeed quite simple – just a couple of boxes. 🙂 I didn’t even need to resort to fancy arrows or circles.

The pre-authorization stages of an agile project are pretty similar to a traditional waterfall approach. We build the business case and do the estimates and maybe a high level risk assessment. It’s at this point in a mixed methodology PPM organization that we might first consider assigning an SDLC model. I won’t digress too far into SDLC determination as that was the topic of this rant.

Suffice it to say, I am strongly of the opinion that the SDLC should be tailored to the scope and risk of the project. In a nutshell, the projected volatility of the scope should really dictate the SDLC model chosen – and that scope might not be accurately defined at this early level in the budgeting/approval process.

Functional Analysis

Once the SDLC determination is completed, things get more fun. That’s where we need to perform an analysis of the functions we’ll be supporting. By “functions,” I mean the various other offices and roles within the organization that require information from the project team:

Resource Management

Release Management

Architectural Strategy

Financial Controls and Chargebacks

Other

How do you know which functions you support? Take a look at the existing project lifecycle and catalog the various reports and data that are passed back and forth from the project team to those outside of it. You’ll end up with something like this:

Chances are you went through this exercise informally when PPM processes were first implemented. Now we’re leveraging that to add another project management methodology to the mix. Each of the boxes in the grid above should have defined inputs and outputs to the functional boxes hovering around the perimeter of the system.
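The cataloging exercise itself could be captured in something as simple as a lookup of inputs and outputs per function. The entries below are hypothetical examples of what such a catalog might contain, not a prescription; the point is just that every function around the perimeter gets a documented interface, and gaps are easy to flag:

```python
# Hypothetical catalog of data flows between the project team and the
# surrounding functions; entries are illustrative, not prescriptive.
FUNCTIONAL_INTERFACES = {
    "Resource Management": {
        "to_function": ["resource requests", "timesheet actuals"],
        "from_function": ["named resource commitments"],
    },
    "Release Management": {
        "to_function": ["deployment readiness status"],
        "from_function": ["release calendar"],
    },
    "Financial Controls and Chargebacks": {
        "to_function": ["monthly actuals by cost center"],
        "from_function": ["approved budget", "chargeback rates"],
    },
}

def uncatalogued(functions):
    """Flag functions that have no documented inputs/outputs yet."""
    return [f for f in functions if f not in FUNCTIONAL_INTERFACES]
```

When a new methodology such as agile is added to the mix, each documented interface becomes a checklist item: can this function still get its data from an agile project, and if not, what needs tailoring?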