
Month: July 2013

I’m gradually working my way through Charles Betz’s excellent book, Architecture and Patterns for IT Service Management, and parts of it are resonating with me. I’ve only gotten to the section on Demand Management, but it’s definitely starting to correlate with what I’m observing in our clients. Let me see if I can put my own spin on it…

The basic gist is that demand comes in many different forms. In IT, demand shows up as formal project requests, service requests (i.e. minor changes that don’t rise to the level of projects), and the output of various systems monitoring services. The latter category in an IT setting would include all of the outputs of availability and capacity monitoring. In a non-IT setting such as capital equipment, I would extend that to include the output of our maintenance scheduling systems, which spit out tickets for required maintenance. As ITIL is really the extension of capital equipment management best practices to the relatively immature field of IT, that logic seems to make sense.

So that’s the work intake process, or, if you will, the sensing mechanism that determines what work the organization could do. Let’s go to the other side of the intake process, the work queuing mechanism. This is the viewpoint of the technician in the field, the person who must actually perform the work that’s coming through the intake funnel. In a perfect world, this is all funneled into the same assignment queue. That way, I can look at my queue and see all of the work assigned to me, whether it originated as a project task, a service request, or the output of a maintenance system.

In a perfect world, all of that work is prioritized – either at the level of the individual work item or through some sort of prearranged prioritization mechanism – and every time I finish a task, I can go back into my queue and pick the next task in order of priority. I might also throw other characteristics into the task selection, such as how long it would take to perform the task. If I only have an hour, I’ll pick the next task that can be completed within a single hour.
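To make that selection rule concrete, here is a minimal sketch of a unified queue. The task names, priority scheme, and estimates are all illustrative, not drawn from any particular system:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    source: str        # "project", "service_request", or "maintenance"
    priority: int      # lower number = higher priority
    est_hours: float   # estimated effort in hours

def next_task(queue, hours_available):
    """Pick the highest-priority task that fits in the time available."""
    candidates = [t for t in queue if t.est_hours <= hours_available]
    return min(candidates, key=lambda t: t.priority, default=None)

# One queue, regardless of where the work originated.
queue = [
    Task("Deploy report pack", "project", 1, 6.0),
    Task("Patch server", "maintenance", 2, 0.5),
    Task("Reset password", "service_request", 3, 0.25),
]
```

With only an hour free, `next_task(queue, 1.0)` skips the six-hour project task and returns the highest-priority item that fits the window.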

The reality is that this rarely happens. In the organizations we see, there are fragmented intake and queuing mechanisms. As a result, we have different work streams that end up in different queues, tracked with different metrics, but assigned to the same individual. As a member of IT, I will have tasks assigned in the ticketing queue, the release queue, and the project task queue. Each of those tasks is competing for my bandwidth.

This takes us to the holy grail of enterprise work management. In essence, the goal of an organization, IT or otherwise, whether it realizes it or not, is to centralize all of that demand in a single location, prioritize it, and then spit it out the other end into an appropriate work management system. I won’t say a consolidated work management system, as that may not make sense – especially when I have resources that are dedicated to performing preventive maintenance and can easily live within a single queue. When I have resources that are pulled across multiple queues, however, that requires a more rationally designed work management system. (More on that in another post.)

Which leads us to the logical fallacy that has shaped the world of portfolio management for years. There’s been this underlying assumption that we can take all of the work that comes through the demand intake and prioritize it against other work. That’s the fundamental premise of many PPM systems, i.e. that I can throw all of my projects into a single bucket, define their priorities, and then map the priorities against the cost to optimize my portfolio. As long as I am prioritizing similar work streams, this more or less works.

The problem comes when I try to compare apples and oranges – when I try to compare a project to support Application A to a ticket to support Application B. At that point, the portfolio optimization breaks down. The inevitable result is either a logjam of work that kills the adoption of the PPM system, or a regression to multiple siloed work intake and management systems, each with its own onboard prioritization and fragmented queuing.

Enter the world of portfolio analytics. In portfolio analytics, we’re not looking to prioritize individual work; instead, we’re looking to tie that work to a specific organizational asset. In IT, each project, ticket, or release supports an application, which in turn supports a service. In a non-IT scenario – in oil and gas, for instance – each project, ticket, or release supports an asset such as a rig or a well, which can then be quantified in terms of production and profit. If I can identify the business criticality of the service, then I can assess the priority of each element of work in supporting the service, and therefore derive a cohesive, comprehensive framework for work prioritization. I don’t look at the individual work items, but instead at the work in aggregate, in terms of the assets it supports.
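As a sketch of that idea – assuming a made-up criticality scale where 1 is most critical – each work item simply inherits its priority from the asset it supports, regardless of whether it arrived as a project, ticket, or release:

```python
# Hypothetical criticality ratings for the assets (1 = most critical).
asset_criticality = {"App A": 1, "App B": 3}

# Heterogeneous demand, each item coded back to the asset it supports.
work_items = [
    {"id": "PRJ-12", "type": "project", "asset": "App A"},
    {"id": "TKT-98", "type": "ticket",  "asset": "App B"},
    {"id": "REL-07", "type": "release", "asset": "App A"},
]

# Prioritization through association: the work item's priority is
# derived from its asset, not negotiated item by item.
for item in work_items:
    item["priority"] = asset_criticality[item["asset"]]

ranked = sorted(work_items, key=lambda i: i["priority"])
```

The ticket against the less critical App B falls to the bottom of the list even though, viewed in isolation, a ticket and a project were never directly comparable.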

The first step in performing this analysis is to map the costs of the work to the assets. While that sounds simple, it gets complicated when you throw in the fact that we have to model shared services, work that supports multiple assets, outsourced and insourced models, etc. By mapping the relationship between our logical work entities and our logical assets, we can identify the total cost of ownership of the asset. That’s the first step in portfolio analysis.
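A minimal sketch of that cost mapping, assuming each work item declares how its cost is split across the assets it supports (the asset names and figures are invented, and real shared-service models would of course be messier):

```python
from collections import defaultdict

# Each work item carries a cost and an allocation across assets;
# the middle item is a shared service split across two assets.
work_costs = [
    {"cost": 10000, "allocation": {"Rig 1": 1.0}},
    {"cost": 6000,  "allocation": {"Rig 1": 0.5, "Rig 2": 0.5}},
    {"cost": 4000,  "allocation": {"Rig 2": 1.0}},
]

def total_cost_of_ownership(items):
    """Roll work costs up to the assets they support."""
    tco = defaultdict(float)
    for item in items:
        for asset, share in item["allocation"].items():
            tco[asset] += item["cost"] * share
    return dict(tco)
```

Running this over the sample data attributes the shared 6,000 evenly, yielding a total cost of ownership of 13,000 for Rig 1 and 7,000 for Rig 2.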

The next step is defining the value of the asset, whether in quantitative profitability terms or in qualitative benefits. Once the benefit-cost ratio can be determined, that prioritization can be fed back into our demand intake structure – provided each of the demand entities can be coded appropriately back to a prioritized asset – either in financial or material terms. This gets us that much closer to being able to prioritize all work that comes into the system… prioritization through association.
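A hedged sketch of that feedback loop, using invented asset values alongside the TCO figures a cost-mapping exercise might produce:

```python
# Hypothetical asset values and total-cost-of-ownership figures.
assets = {
    "Rig 1": {"value": 52000, "tco": 13000},
    "Rig 2": {"value": 35000, "tco": 7000},
}

def benefit_cost_ratio(asset):
    """Value delivered per unit of cost consumed."""
    return asset["value"] / asset["tco"]

# Rank the assets by benefit-cost ratio; any demand entity coded to
# an asset then inherits that asset's position in the ranking.
ranking = sorted(assets, key=lambda name: benefit_cost_ratio(assets[name]),
                 reverse=True)
```

Here Rig 2 (ratio 5.0) outranks Rig 1 (ratio 4.0) despite being the smaller asset, which is exactly the kind of result that item-by-item prioritization tends to miss.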

Disclaimer: What I am about to discuss is bad, and wrong, and should never, ever be done. I would never encourage anyone to do this, nor would I ever do this myself. Hence, this post is entirely hypothetical in nature.

(Weird mornings tend to beget weird ideas.)

Let’s say that, hypothetically, you’re working in an organization that has had a less than smooth transition to the world of centralized scheduling. Perhaps the initial implementation didn’t go very well, or the network infrastructure wasn’t quite up to the task, or you were just plagued with bad luck. In these kinds of environments, it becomes quite common to blame the system for any data issues rather than pinpointing specific user behavior.

I’m not saying the system isn’t at fault. I am saying, however, that people always tend to blame the system first, and as a result, our hypothetical system administrator has to spend a fair amount of time reviewing older versions of existing schedules to verify that “Yes, that task has never been marked as completed… it probably didn’t get completed 3 months ago and then spontaneously revert to 67% complete last Friday,” and other queries of that nature.

Performing these sorts of forensic explorations can be time-consuming and also risky, as they often involve restoring archived copies of the project over the live version. (Another option would be to periodically do a bulk export of your project schedules to an offline repository using the macro I documented here.)

One more option would be to directly query the Archive database (on-premises installations only). I wouldn’t recommend doing so to support serious reporting, but if an administrator were to, say, spill a cup of coffee on their desk, and in the process of mopping it up, accidentally trigger a query against the Archive database, why, that wouldn’t be so bad, would it?

Hence, assuming you actually have scheduled backups running, as most organizations do, the following query (or a variation thereof) will net you the status of specific tasks in each of the backed-up project versions:
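The exact SQL will depend on your Archive schema, so purely as an illustration, here is the check such a query performs, sketched in Python over exported snapshot data (the dates and field names are hypothetical):

```python
# Snapshots of one task across archived versions of the schedule,
# oldest first. Dates and field names are illustrative only.
snapshots = [
    {"archived": "2013-04-05", "pct_complete": 100},
    {"archived": "2013-05-03", "pct_complete": 100},
    {"archived": "2013-07-12", "pct_complete": 67},
]

def regressions(history):
    """Return the snapshots where % complete dropped from the prior version."""
    return [
        later for earlier, later in zip(history, history[1:])
        if later["pct_complete"] < earlier["pct_complete"]
    ]
```

A non-empty result means the task really did “spontaneously revert” between backups, and the conversation can move on from blaming the system.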

How does one track resource allocation? What does resource “allocation” even mean? These are the sorts of questions that project managers often struggle with when mandated to account for resources within an enterprise project management tool. On the surface, this would seem like a simple problem: I simply define the tasks in sufficient detail to support resource estimates, slap a couple of resources on said tasks, and call it a day.

The question arises, however: what about consultants? What if I have consultants who are dedicated to my project and charge a specific daily or hourly rate? In that case, I can pretty much assume that I will be paying these consultants for 40 hours/week over the lifetime of the project. The question these PMs typically ask me is: how do we track these consultants? Do we track them at a set 40 hours/week – because that’s what we’re budgeting them at? Or do we track them at the actual number of hours we’ve got them assigned to tasks in any given week?

The correct answer is that you should be tracking both. This is not an either-or situation. Instead, we’re looking at two different dimensions of the resource puzzle. One dimension is the budget for the resources – either in dollars or in hours. The other dimension is the allocation of those resources – which is typically measured in hours.

Think of the question more in these terms: Given that I’ve budgeted for 100% of this consultant, why do I only have them allocated for the next four weeks at 65% of their availability? Does this mean that they’re really working on unplanned work for the remaining 35% of their time, or does it mean that they’ll be sitting around waiting for the work to proceed through the queue? And why would an expensive consultant be working on unplanned work in the first place?

For employees who are not dedicated to a project, this may or may not be a problem, as it is typically assumed the employee is returning to their day job to support the organization. Hence, their resource allocation really does represent the true cost of the project. If the employee’s costs are carried in whole or in part by the project, though, this could become an issue.

The Resource Plan as the Budgeted Dimension

Project Server gives us the ability to track budgeted work through the much-maligned resource plan. To use the resource plan, I simply have to navigate to the project within PWA and add resources. (Microsoft has posted a lot of guidance on the use of resource plans, so I won’t rehash it here.)

In this example, I am assigning resources by FTE by quarter. That’s the cost that will hit my project budget.

The Project Plan as the Allocated Dimension

I then allocate the resources to specific tasks within the project plan. This allocation may or may not target 100% allocation. In fact, I would generally recommend an 80% target, as that allows for some schedule buffer.

Comparing the Two

Unfortunately, there’s nothing native within Project Server that compares the resource plan and project plan values. There are, however, some quick reports that can be generated with relatively simple SQL, such as the query below:
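Conceptually, the comparison such a query performs boils down to dividing assigned hours by budgeted hours per resource. A hedged sketch, with invented resource names and figures:

```python
# Budgeted hours from the resource plan vs. hours actually assigned
# to tasks in the project plan, per resource (figures illustrative).
budgeted  = {"Consultant A": 160, "Consultant B": 160, "Analyst C": 80}
allocated = {"Consultant A": 104, "Consultant B": 152, "Analyst C": 80}

def utilization(budget, assigned):
    """Allocated hours as a share of budgeted hours, per resource."""
    return {name: assigned.get(name, 0) / hours
            for name, hours in budget.items()}
```

Consultant A comes out at 65% of budget – exactly the kind of gap between the budgeted and allocated dimensions that the report is meant to surface.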

Throw that into an ODC and open up Excel to generate something like the following report:

…which, if I were managing that PMO, would indicate that we’re either underutilizing expensive external labor, or that my project managers aren’t adequately planning the tasks the resources will be performing.

Throwing Financials into the Mix

That accounts for the hours assigned to a resource. How do we convert all of that back into dollars for incorporation into the larger financial picture? Check out this feature from UMT’s flagship product, UMT 360. I can import my resource plan back into my budget to map my financial estimates that much closer to my resource estimates.