A good start would be to read through the thread "Problem fix timescales".

I'm not sure how you can have implemented the process without it having already identified the metrics, since the identification, gathering and interpretation of metrics should be integral to it.

I think it is very difficult to measure the effectiveness of Problem Management, because its proactive role avoids problems before they reach a state where you can measure what they would have cost.

Reduced numbers of incidents can also be misleading, because each level of improvement allows people to use services more aggressively and stumble across new problems. And of course you earn black marks if incidents occur that are caused by problems thought to be resolved.

Numbers of problems resolved and numbers of problems outstanding tell you something, but that gets complex when you consider whether they are waiting for external input, a future event, or the availability of resources, or are simply stuck because nobody can fathom out what the cause is.

In many organizations the technical resources are the same people who fill development and/or support roles, and their availability has to be prioritized; you have to be able to measure just what resource is actually available.

You can measure the stages - evaluation, root cause analysis, design of resolution, development, implementation - in terms of staff time, elapsed time, and cost (against benefit).
_________________
"Method goes far to prevent trouble in business: for it makes the task easy, hinders confusion, saves abundance of time, and instructs those that have business depending, both what to do and what to hope."
William Penn 1644-1718

If we were to look at root cause analysis, for example, would a metric such as the average number of days to identify the cause be a good measure, or is that too simple?

Ask yourself what that is measuring. What is an acceptable figure? Do you have enough problems to average out the easy ones and the difficult ones? Can you set a target - or can you only measure retrospectively?

It would be of little use if you had only a couple of problems a year; but if you have twenty a day, then you could look for averages and patterns, and you could explain the anomalous problems, by their complexity for instance.
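As a rough sketch of that idea, the snippet below computes the average days-to-root-cause over a batch of problems and flags the anomalous ones for individual explanation. The records, dates and the "twice the average" threshold are all hypothetical, purely for illustration.

```python
from datetime import date
from statistics import mean

# Hypothetical problem records: (id, date opened, date root cause found).
problems = [
    ("P-101", date(2024, 1, 3), date(2024, 1, 5)),
    ("P-102", date(2024, 1, 4), date(2024, 1, 18)),
    ("P-103", date(2024, 1, 6), date(2024, 1, 9)),
    ("P-104", date(2024, 1, 8), date(2024, 3, 1)),   # a complex one
]

days = [(found - opened).days for _, opened, found in problems]
avg = mean(days)

# Flag anomalies (here: anything taking more than twice the average)
# so they can be explained individually, e.g. by their complexity.
anomalies = [pid for (pid, _, _), d in zip(problems, days) if d > 2 * avg]

print(f"average days to root cause: {avg:.1f}")   # 18.0
print("anomalies:", anomalies)                     # ['P-104']
```

With only a couple of records a year the average would be dominated by individual cases; with a larger sample, the outlier list is where the explaining happens.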

I think my answer is "yes, perhaps".

Not sure I would use time as a metric for RCA. Problem Management shouldn't have any kind of OLA or SLA associated with it. The goal is to fix the problem and fix it right the first time, not the fastest; leave that to Incident Management. Some of the metrics we're using for Problem Management are: problems that have generated changes (or a potential official RFC), problems that have spawned a project (which could in turn create an RFC), and the number of known errors discovered, recorded and communicated to the Service Desk.

I'm not sure I understand you correctly, but I have never heard of metrics being used for progress tracking.
Metrics are usually used as a basis to measure the effectiveness of a process.

I'm a Change Manager. To help me track progress, I divide the activities in the Change Management process into stages and control them as milestones, such as:
- Registered
- Analyzed
- Approved
- Developed
- Tested
- etc
That way I can track which stage an RFC is in.
I hope this serves as an analogy for Problem Management.
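The milestone idea above can be sketched in a few lines: keep an ordered list of stages and report how far along each record is. The stage names and record IDs here are hypothetical, just to show the shape of the tracking.

```python
# Ordered stages, as in the Change Management milestone list above.
STAGES = ["Registered", "Analyzed", "Approved", "Developed", "Tested"]

# Hypothetical records and the stage each currently sits in.
records = {
    "PRB-001": "Analyzed",
    "PRB-002": "Registered",
    "PRB-003": "Tested",
}

def progress(record_id: str) -> str:
    """Report which stage a record is in, as 'stage (n/total)'."""
    stage = records[record_id]
    return f"{stage} ({STAGES.index(stage) + 1}/{len(STAGES)})"

for rid in sorted(records):
    print(rid, "->", progress(rid))   # e.g. PRB-001 -> Analyzed (2/5)
```

The same structure works whether the records are RFCs or Problem records; only the stage names change.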

"the goal is to fix..."

That is absolutely correct. ... However, it does leave the question of how you measure the quality of your Problem Management. You can't just say "okay folks, take as long as you need" without any controls.

OLA and SLA are a bit of a red herring. Problem Management has to deliver value to the business - or you might be asked to halve the resource applied to it, just to see what difference it makes.

Measuring generated changes, projects, and even numbers of known errors is useful, but does not conclusively demonstrate how well the process is working. Could they all have been achieved at half the cost with a less tortuous process or better-trained staff? Was the prioritization managed effectively? Could a better focus on the proactive role have prevented some incidents? Has the incident process been impeding problem resolution?

My feeling is that there is a lot of judgement involved in evaluating Problem Management, and because of this it is important to have as many good, strong measures as possible just to inform that judgement.

In the end you want to be able to say "well it took three months, but it was complex and we really got to the bottom of it; and the resolution will see us much stronger in many areas"

...and "I know they were minor irritants, but we resolved the lot in a week, and that helps morale as well as reducing the load on the service desk."

Reduction in incident volumes for the affected configuration items is one way.
As mentioned, throughput of problem records is another, but that is more a means of continuous improvement of the process itself, e.g. trying to remove waste or rework.

Measuring time to a workaround would be a good one, as at least that restores service; the permanent fix is the one that can take considerable time to achieve.

Big M, your question is kinda vague because you did not tell us what the metrics will be used for and who your audience is. One metric you can use is a "pipeline" report. Using phases or stages to show progress through your process, you can graphically report how many problems you are working on and how many are at each stage. If this is a monthly report, you can show the changes (plus or minus) from the previous month to demonstrate progress. Linking time to this, you can report the average number of minutes/hours/days that a problem spends at each stage.
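A minimal sketch of such a pipeline report, assuming you can snapshot how many problems sit in each stage at month end (the stage names and counts below are made up):

```python
from collections import Counter

STAGES = ["Evaluation", "Root cause analysis", "Design", "Implementation"]

# Hypothetical month-end snapshots of problems per stage.
last_month = Counter({"Evaluation": 8, "Root cause analysis": 5,
                      "Design": 3, "Implementation": 2})
this_month = Counter({"Evaluation": 6, "Root cause analysis": 7,
                      "Design": 2, "Implementation": 4})

# Month-over-month change per stage (plus or minus).
deltas = {s: this_month[s] - last_month[s] for s in STAGES}

print(f"{'Stage':<22}{'Now':>4}{'Change':>8}")
for s in STAGES:
    print(f"{s:<22}{this_month[s]:>4}{deltas[s]:>+8}")
```

Recording a timestamp each time a problem changes stage would additionally let you compute the average time spent in each stage.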

A more time-consuming report to compile is a "Call Reduction Report". Basically, for each problem you work on, you generate an incident report to build historical data. Once you have implemented a solution, you continue to track incident volume for up to a year. This way you can show, on a monthly or annual basis, how many calls you saved the Service Desk, and through some financial calculations report to senior management the number of hours of productivity you put back into the organization. If you can, link this into your organization to show each business unit what you have saved them. This is information your CIO craves but doesn't know it yet.
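The arithmetic behind a Call Reduction Report can be sketched as below: compare incident volume before and after the fix, then convert the saved calls into productivity hours. All figures, including the per-call handling time, are assumptions for illustration.

```python
AVG_MINUTES_PER_CALL = 12   # assumed Service Desk handling time per call

before_fix = [40, 38, 42]   # monthly incident counts before the fix
after_fix = [9, 7, 5]       # monthly incident counts after the fix

baseline = sum(before_fix) / len(before_fix)          # expected monthly volume
saved_calls = sum(baseline - n for n in after_fix)    # calls avoided so far
saved_hours = saved_calls * AVG_MINUTES_PER_CALL / 60

print(f"calls saved over {len(after_fix)} months: {saved_calls:.0f}")  # 99
print(f"productivity returned: {saved_hours:.1f} hours")               # 19.8
```

Multiplying the saved hours by a loaded staff rate, per business unit, is what turns this into the report the CIO actually wants.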

As Brian1 already pointed out, it all depends who's asking the question, and also what they're asking.

IS THE PROBLEM MANAGEMENT PROCESS BEING ADOPTED?
- Show the # of Problem records over time.
- Show Problem records non-conforming to the process (I'll leave you to determine how that could be measured).

IS PROBLEM MANAGEMENT PROVIDING VALUE?
- Show the # of KEDB articles used, by service (provided you capture the service on both the Problem and the Service Request/Incident; this also requires a means of determining whether a KEDB article was used).
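Assuming incidents capture both the affected service and the KEDB article applied, a count of article usage by service is a simple aggregation. The records and field names below are hypothetical.

```python
from collections import Counter

# Hypothetical incident records: affected service plus the KEDB article
# applied (None when no known error was used).
incidents = [
    {"service": "Email",   "kedb": "KE-012"},
    {"service": "Email",   "kedb": "KE-012"},
    {"service": "Payroll", "kedb": "KE-007"},
    {"service": "Email",   "kedb": None},
]

# Count (service, article) pairs, ignoring incidents with no KEDB match.
usage = Counter((i["service"], i["kedb"])
                for i in incidents if i["kedb"] is not None)

for (service, article), n in usage.most_common():
    print(f"{service}: {article} used {n} time(s)")
```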

There's a report you can build for every question out there... the key is: "What are the questions?"