Month: November 2011

A couple of weeks ago, I spent some time figuring out how actual costs are calculated within Microsoft Project. That ended up being a series of three or four blog posts, which I felt captured the meat of the discussion.

Well, over the weekend, I started playing with Microsoft Project again, and I realized that there’s a key bit of functionality in there which definitely adds to the discussion. I’m not sure exactly where it fits into that discussion, but it definitely provides another building block in our common understanding of some behind-the-scenes calculations.

In this case, I have a new blank project. I add a single 10 day task to this project.

I set the Fixed Cost for the activity to $10,000.

Now this is where it gets interesting. Watch what happens when I enter a “1” in the Actual Work column.

First, we see that Work is now populated with 80 hours – which makes sense. That’s 10 days X 8 hours/day = 80 hours. Ok.

Next, we see that Actual Cost has now been calculated as $100, or 1% of the total Fixed Cost. For the record, our Actual Work comprises .0125 (1.25%) of the Work.

Now, I change the Actual Work to 2 hours. I would expect Actual Cost to change to $200.

Nope – I end up with $300.

What’s going on here? As far as I can tell, here’s the calculation being performed behind the scenes:

(Round(100*[Actual Work]/[Work])/100)*[Fixed Cost]

…which, if broken down, yields the following steps:

Divide Actual Work by Work. In this case, with 1 hour of Actual Work against 80 hours of Work, that yields .0125.

Multiply by 100 to express that as a percentage, which yields 1.25.

Round to the nearest whole percentage, which yields 1. (This is the step that explains the $300 above: with 2 hours of Actual Work, 2.5 rounds up to 3.)

Divide by 100 again and multiply by the Fixed Cost, which yields an Actual Cost of $100.
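That calculation can be sketched in Python. To be clear, this is my reconstruction of the apparent behavior, not Microsoft’s documented implementation; in particular, Project appears to round half-way cases up (2.5% becomes 3%), so the sketch uses floor(x + 0.5) rather than Python’s round(), which rounds halves to even.

```python
import math

def actual_cost(actual_work, work, fixed_cost):
    # Percent complete, rounded to a whole percentage.
    # floor(x + 0.5) rounds half-way cases up, matching the
    # observed $300 result for 2 hours of Actual Work.
    pct = math.floor(100 * actual_work / work + 0.5)
    # Apply the rounded percentage to the Fixed Cost.
    return pct * fixed_cost / 100

print(actual_cost(1, 80, 10000))  # 1.25% rounds to 1% -> 100.0
print(actual_cost(2, 80, 10000))  # 2.50% rounds to 3% -> 300.0
```

Interestingly, swapping in Python’s built-in round() would return $200 for the second case – exactly the result I expected but didn’t get.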

The recent PMO Symposium was a fantastic source of ideas and concepts that I am still mentally digesting. As part of that digestion process, I am turning some of my thoughts into blog posts – and probably, at some point, into presentations.

This topic was born as I attended a session on the second day. I realized that I was suffering some mild form of cognitive dissonance. You see, each of the speakers had emphasized the importance of benefits realization – and discussed how true benefits realization really measures the portfolio impact on the strategic goals of the organization.

A couple of the speakers emphasized that merely making money, or “sell more, spend less” doesn’t represent a strategy, but really an “outcome” – perhaps an outcome that helps to enable a strategic goal.

So invariably, in presentation after presentation, the articulation of the PMO goals was assumed to happen after some sort of amorphous “strategy definition” stage. A PMO without a strategic framework to operate in was akin to a rudderless boat, adrift.

The cognitive dissonance came in when we started thinking about those organizations where, for one reason or another, strategy has not been articulated. Should we tell PMOs in those organizations that they have no right to exist? Clearly there are organizations that haven’t figured out their own strategy. Equally clearly, PMOs exist within those organizations and provide value.

So what is the role of the PMO in a company without articulated strategy? In essence, what is the role of the nonstrategic PMO? How does a PMO measure benefits realization when we don’t know what context the benefits should be measured in?

The first answer is that PMOs may provide specific tactical benefits to the organization. The paper that kept getting mentioned was Dr. Brian Hobbs’s discussion of the current state of the PMO world. Take a look at page 22 to see the exhaustive list of identified functions of PMOs that generate value for the organization…

The second answer is that the PMO may provide the input required to encourage the organization to articulate the strategy. As I’ve often maintained, governance may not be required in the absence of resource constraints. It’s only when resource (or process) constraints are identified that the organization has to begin making concrete decisions about values and project/portfolio prioritization – hence governance may be born (or at least parented) of constraint identification.

Finally, there is something to be said about measuring how well the organization is meeting its desired outcomes. In the absence of true benefits realization, we function at the next lower level of abstraction, or the outcome level.

This prompted me to think about the V Model, a key part of the ITIL training curriculum that’s been around for years.

The V Model is properly read from the left arm, down to the base, and then back up on the right side. The general concept is that first we develop the Concept of Operations, then Requirements, then Detailed Design, then we build the system. After building the system, we move into validation mode, validating and testing each of the elements along the right side – corresponding vertically with the equivalent item on the left. So Integration, Test and Verification is a validation of Detailed Design. System Verification and Validation validates Requirements and Architecture, and so forth.

So to paraphrase the Wikipedia article, if System Verification and Validation is equivalent to asking “Are we building the thing right?” then Operation and Maintenance would answer the question “Are we building the right thing?”

How would a PMO of either the strategic or nonstrategic type fare against the V Model? As I see it, a strategic PMO starts at the very top left of our model and allows us to validate the benefits of the PMO against the defined organizational strategy. A nonstrategic PMO, on the other hand, will start at the next level down, and allows us to validate the results of the PMO against our desired outcomes, i.e. whether or not we are executing things correctly…..but not whether we are executing the correct things.

Well, it’s been a while since I fired up my virtual image and played around in the Portfolio Analysis module – specifically since March or so when I released that white paper on the topic. Last week, one of my colleagues was asking me about the mechanics of generic resources within Portfolio Analysis – which reminded me of this post which I’d half written but never gotten around to finishing.

In this post, I’ll talk about how projects may be manually prioritized within the context of the Portfolio Analysis module. This would allow organizations with their own prioritization mechanisms to bypass the entire strategic driver definition process that is in Project Server 2010 and proceed directly to the constraint optimization functionality.

What’s potentially confusing is that the manual prioritization mechanism allows for the use of multiple fields in prioritizing the projects – and it’s somewhat unclear how it all works under the hood…..hence, this post.

Thanks to fellow UMTer Catalin Olteanu for explaining the topic to me and providing the calculations.

The Setup

To test out the calculations, I have created three projects. I have then created three custom number fields (Prioritization1 through Prioritization3), and more or less randomly assigned values to each project. In theory, if you have an external system providing these numbers, you could use custom code against the PSI or even the Bulk Import solution starter to import these values against existing projects.

I then go create a new portfolio in Portfolio Analysis with these three projects. When creating the portfolio, I select the option to prioritize projects using custom fields.

In the next screen, I select the Modify button from the ribbon and add my three fields.

The Basic Calculations

So, off the bat, we see three parameters for each of the custom fields. (You will note that each of these parameters is editable.)

Weight – the relative weight of one field against the other fields. This starts out at 100%, but will be recalculated to 33.33% in the above example when I click the option to Normalize Weights. With three custom fields, essentially, I total the weight column to 300%, then divide the value for each field by the total. In the scenario above, that yields a calculation of 100/(100 + 100 + 100).

Minimum Value – the minimum value allowed for the specific field.

Maximum Value – the maximum value allowed for the specific field.

Here’s what the scenario looks like after I normalize weights.

Single Value Calculation

Under the hood, here’s what happens….for each field.

Values are adjusted to the minimum and/or maximum values. That means if you set a minimum for a specific field of “2”, and someone has entered a “1”, then the project value will automatically be converted to a “2” for that specific field. Similarly, the same check is performed against the maximum column. This yields what I call the adjusted value.

The adjusted value is then measured against the interval for each field. For example, if a project is set to “5” on an interval stretching from a minimum of “2” to a maximum of “10”, the value for the field will be (5-2)/(10-2). The formula is (Adjusted Value – Minimum)/(Maximum – Minimum).

Why subtract the minimum value from the adjusted value? If the minimum value is set to “2” and the adjusted value is set to “3”, then we want to adjust the value to account for the fact that the value entered is only one above the minimum, in effect a “1” and not a “3.” This yields our absolute priority.

The value is then normalized to yield the normalized priority for each field. In the example above, we do that by calculating .375 / (0 + .375 + 1) to yield 27.27%.
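Here’s a minimal sketch of the single-field calculation in Python. The field values 2, 5 and 10 are my assumption – three inputs that reproduce the 0, .375 and 1 absolute priorities implied by the example above:

```python
def absolute_priority(value, min_val, max_val):
    # Clamp the entered value into the [min, max] interval
    # (a 1 against a minimum of 2 becomes a 2)...
    adjusted = min(max(value, min_val), max_val)
    # ...then measure it against the interval:
    # (Adjusted Value - Minimum) / (Maximum - Minimum).
    return (adjusted - min_val) / (max_val - min_val)

def normalized_priority(priorities):
    # Each project's share of the total absolute priority.
    total = sum(priorities)
    return [p / total for p in priorities]

values = [2, 5, 10]  # assumed field values for the three projects
absolute = [absolute_priority(v, 2, 10) for v in values]
print(absolute)  # [0.0, 0.375, 1.0]
print(normalized_priority(absolute))  # middle project ~0.2727, i.e. 27.27%
```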

Multiple Value Calculation

So if you have a single custom field, that’s pretty much it. If you have multiple custom fields, then you need to combine the scores for each project for each field – which is pretty simple once you identify the order in which the calculations are performed.

First off….take the absolute priority (not the normalized priority) for each of the fields for each of the projects.

Multiply the absolute priority times the weighting for each field. In this case, I am multiplying the values by 33.33, 33.33, and 33.34 respectively.

Sum the results of the weighted prioritization calculation. I sum them into the Weighted Sum column.

Normalize the results to generate the project priority. For example, in Test Project 2, I calculate 43.19 / (47.48 + 43.19 + 92.67) to generate a normalized score of 23.56%.
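Those four steps can be sketched in Python. The per-field absolute priorities for the three test projects aren’t listed in this post, so the weighted sums below (47.48, 43.19 and 92.67) are taken directly from the example, and weighted_sum() simply illustrates the combination step:

```python
def weighted_sum(field_priorities, weights):
    # Combine one project's per-field absolute priorities using
    # the normalized field weights, e.g. [33.33, 33.33, 33.34].
    return sum(p * w for p, w in zip(field_priorities, weights))

def normalize(scores):
    # Each project's share of the total weighted sum.
    total = sum(scores)
    return [s / total for s in scores]

# Weighted sums for the three test projects, from the example above.
weighted = [47.48, 43.19, 92.67]
priorities = normalize(weighted)
print(round(priorities[1] * 100, 2))  # Test Project 2: 23.56 (%)
```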

In previous posts in this series, I talked about how change management is rightly stratified throughout the organization and how each change management process shares certain, very specific structures.

In this post, I’d like to wrap that up by looking at the ramifications of this phenomenon.

To review, we identified that the change management organization at each tier shares the same functions:

Problem Sensing

Supply/Demand Modeling

Decision Making

Then doesn’t it make sense to embrace that structure and actually create a framework around it? To essentially catalog and develop a community of practice around problem sensing, and to make that knowledge available to each identified change management office within the organization?

Similarly, if each of the change management offices has to do some form of supply/demand modeling, doesn’t it make sense to catalog the different mechanisms and tools in use, and perhaps even to provide standardized resources and practices around how to model supply and demand? For instance, if one part of the organization has embraced the Theory of Constraints, wouldn’t it make sense to educate the other parts of the organization in the same model?

What’s an organization to do? Why implement a Meta PMO, of course.

The Meta PMO would serve the role of process repository, advisor, and toolmaster for each of the identified structures common to all change management offices, whether they be authorized “PMOs” or change management structures in the ITIL definition of the word.

Tomorrow….back to the technical with a review of specific functionality in the Project Server 2010 portfolio analysis module.

In this, (almost) the denouement of my little series that grew out of the PMO Symposium presentation I gave this week, I’d like to talk about what happens if change management isn’t managed at the appropriate level.

Going back to the example in yesterday’s post, specifically, the discussion of the different decision levels involved in our technical resource driving from Columbus to Cleveland, let’s examine what would happen if the decisions were made at different levels – or at the inappropriate levels.

Let’s say that the night before the drive, the account manager sat down with the technical resource and pulled up a satellite overview of the route, then planned each and every lane change for the entire 2.5 hour drive. The technical resource agrees to follow the plan exactly.

The result? Most likely the technical resource ends up plowing into another car or a construction site, and never makes it to Cleveland.

The conclusion from that example? When making decisions at a level too far removed from reality, we suffer from overly rigid planning that cannot be adapted to the needs of the moment. The second conclusion we can draw is that when decisions are made at that level of detail, we end up spending a lot of time focusing on the detail – to the detriment of the larger picture, i.e. planning the meeting narrative and defining a strategy to work with that potential client.

Now let’s flip that example. Let’s leave the strategery to the technical resource, who, if you follow the Taylorist approach to organizational management, is at the bottom of the organization….which is perhaps a fairly judgmental term. Maybe we could say he’s at the more action oriented end of the organizational spectrum.

At this point, we’re tasking the technical resource with maintaining a laser like focus on the technical delivery of projects, while still expecting him to immerse himself in the meta-details of the organization’s operation – and to be able to identify where the gaps will happen.

The inevitable result? The technical resource focuses on the immediate present and neglects planning for the future. The organization becomes reactive and not proactive. That’s no different than pushing portfolio planning to the siloed functional level – where you inevitably end up with localized optimization.

Moral of the (somewhat long winded) story?

Change management is stratified at various levels of the organization as a natural response to the requirement to manage complexity….yet, within each stratum, the change management mechanism follows the same inherent structure:

Problem Sensing

Supply/Demand Modeling

Decision Making

Let’s stop right now and start looking at a concrete (albeit contrived) example of change management…

All of the top managers of an organization gather together and identify that their sales are too geographically limited. Perhaps most of their sales are in the Columbus market, and this opens the organization up to too much risk – since if the Columbus market, highly dependent on public sector spending, dries up, the organization’s livelihood would be in jeopardy.

So they decide that they need to expand into other markets. To develop this plan, they then invite their sales and product managers to the next round of meetings. Perhaps at one of these meetings, one of the sales managers identifies Cleveland as a new target market.

The word goes out to all of the account managers…and maybe a new incentive is published. The account managers are tasked with finding new opportunities in the Cleveland market. One account manager in particular picks up a phone and sets up a meeting with a prospect in Cleveland. Since the meeting is technical in nature, he assigns one of his technical resources to participate in the meeting.

Now the technical resource gets a simple directive….get to Cleveland by 8:30 AM on Tuesday to demonstrate this product. Maybe the account manager will sit down with the technical resource to review a map of the route (which is unlikely in this day and age – especially as they would probably just drive up 71 for a couple of hours.)

So on the appointed day, the technical resource hops in the car and starts driving. The plan is set, and there shall be no changes. He needs to get to Cleveland by 8:30 AM.

That’s the normal process. Each layer within the hierarchy makes the decisions and modifications to the plan appropriate to the level on which they operate. Would it make sense for the account manager to sit down with the technical resource the night before the meeting and identify exactly which lanes to drive in?

No, that would be silly. The account manager would never be able to effectively plan at that level. This is the nature of complexity – which many organizations deal with naturally by pushing it down in the organization.

Next up….the implications of mismanaging the level of complexity at which we operate….