Program Activities

Although the success of a measurement program cannot be guaranteed, IS managers can improve the odds of successful implementation by paying attention to the individual activities that compose the program. Exhibit 3 shows the activities necessary to implement and maintain an IS measurement program. Each activity is described in the sections that follow.

Exhibit 3. Activities of an IS Measurement Program

Assessment

The three primary functions of assessment are:

1. Evaluating the current position of the organization

2. Identifying the goals of a measurement program

3. Establishing specific measurement goals

Since the mid-1980s, formal software process assessments such as those from the SEI and Software Productivity Research (SPR) have been available to evaluate the software development processes of an organization. Assessment provides a clear picture of the current organizational environment and serves as a starting point from which to gauge future improvements. For example, it would be unreasonable to state that a new development methodology provided increased programmer productivity unless the level of productivity before its implementation was known and documented.

During the assessment phase, it is also important to define the goals of the measurement program. Another activity performed during assessment is selling the measurement program to management and IS staff. All participants in the program must understand the relationship between measurement and improvement so that they will support the resulting program.

Formulation

A measurement program requires the formulation of specific, quantifiable questions and metrics to satisfy the program goals. The previously discussed suggestions for choosing appropriate metrics and sample goals/questions/metrics provide a good starting point.
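As an illustration, the goal/question/metric hierarchy that formulation produces can be modeled as a simple data structure. This is a hypothetical sketch in Python; the particular goal, question, and metric names are invented examples, not prescriptions from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    questions: list[Question] = field(default_factory=list)

# Hypothetical example: one goal refined into a question and two metrics.
goal = Goal(
    statement="Improve user satisfaction with IS support",
    questions=[
        Question(
            text="What is the current level of user satisfaction with IS support?",
            metrics=[
                Metric("mean satisfaction score", "1-5 scale"),
                Metric("support requests reopened", "count per month"),
            ],
        )
    ],
)

for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```

Modeling the hierarchy explicitly makes it easy to check that every metric traces back through a question to a stated goal, which is the discipline formulation is meant to enforce.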

Collection

The collection of specific metrics requires a cost-accounting system aimed at gathering and storing specified attributes that act as input data for the metrics. This process should be automated so that collection takes as little time as possible and the danger of becoming mired in amassing huge amounts of data is avoided. Careful planning in assessment and formulation helps avoid the gathering of too much data.
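A minimal sketch of automated collection, assuming measurement records are appended to a flat file; the attribute names, project name, and values are hypothetical, and a real program would pull these attributes automatically from time-tracking and build tools.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical attributes recorded per completed work item.
FIELDS = ["date", "project", "activity", "person_hours", "function_points"]

def record_metric(row: dict, store: Path = Path("metrics.csv")) -> None:
    """Append one measurement record, creating the file with a header if needed."""
    new_file = not store.exists()
    with store.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

record_metric({
    "date": date.today().isoformat(),
    "project": "billing-rewrite",  # hypothetical project name
    "activity": "design",
    "person_hours": 12.5,
    "function_points": 8,
})
```

Keeping collection down to a single automated call per work item is one way to avoid the data-amassing trap the text warns about.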

Analysis

The physical collection of a metric does not, by itself, provide much information that helps in the decision-making process. Just as gross sales do not reveal the financial condition of an organization, the number of function points for a project does not explain how many person-months it will take to produce that project.

A metric must be statistically analyzed so that patterns are uncovered, historical baselines are established, and anomalies are identified.
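For example, a historical baseline and a simple anomaly screen can be computed from collected values. The productivity figures below are invented for illustration, and the two-standard-deviation threshold is one common convention, not one mandated by the text.

```python
import statistics

# Invented monthly productivity history (function points per person-month).
history = [10.2, 9.8, 10.5, 11.0, 10.1, 9.9, 10.4, 10.3, 16.7, 10.0]

baseline = statistics.mean(history)   # historical baseline
spread = statistics.stdev(history)    # sample standard deviation

# Flag observations more than two standard deviations from the baseline.
anomalies = [x for x in history if abs(x - baseline) > 2 * spread]
print(f"baseline={baseline:.2f}, stdev={spread:.2f}, anomalies={anomalies}")
```

Here the 16.7 reading stands out against the baseline; whether it represents a data-entry error or a genuinely exceptional month is a question for interpretation, not analysis.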

Interpretation

The function of interpretation is to attach meaning to the analysis, in other words, to determine the cause of the patterns that have been identified during analysis and then to prescribe appropriate corrective action. For example, if analysis shows that users are consistently dissatisfied with systems that require an ever-increasing number of user and analyst meetings, then it may not be a good idea to schedule more meetings. A more effective approach is to look for other reasons behind the dissatisfaction. Perhaps the meetings are unproductive, communication skills are ineffective, or business problems are being incorrectly identified. The interpretation of metric analyses furnishes a direction in which to start looking for different problems and solutions.

Validation

As shown in Exhibit 3, validation occurs throughout each phase of the measurement program. It involves asking a set of questions to ensure that the goals of the measurement program are being addressed. For example, the results of the formulation phase should be validated with two key questions:

1. Are we measuring the right attribute?

2. Are we measuring that attribute correctly?

The following scenario illustrates how validation questions are applied. Assume that one of the overall goals of an IS organization is to improve the performance of user support. Using the goal/question/metric (G/Q/M) approach, the IS organization establishes a goal of improving user satisfaction with IS support. A question that supports this goal is, "What is the current level of user satisfaction with IS support?" IS personnel then formulate a questionnaire they believe measures the level of user satisfaction with IS support. The questionnaire is used to collect data, which is analyzed and interpreted.

Analysis shows that there is no relationship between the type, amount, or level of IS support and user satisfaction. Why not? It could be because there is no relationship between user satisfaction and IS support, or because the questionnaire was not measuring user satisfaction with IS support. Validating the questionnaire in the formulation phase helps ensure that it is measuring what it is intended to measure.
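A sketch of how such a "no relationship" finding might surface during analysis: computing a Pearson correlation between support effort and satisfaction scores. Both data sets are invented for illustration. A coefficient near zero raises exactly the validation question above: is the relationship genuinely absent, or is the instrument not measuring what we think it measures?

```python
import statistics

# Invented paired observations: monthly IS support hours per user group and
# that group's mean questionnaire satisfaction score (1-5 scale).
support_hours = [5, 12, 8, 20, 15, 3, 10, 18]
satisfaction = [3.1, 3.0, 3.3, 2.9, 3.2, 3.1, 3.0, 3.2]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(support_hours, satisfaction)
print(f"r = {r:.2f}")  # a value near zero suggests no linear relationship
```

Validating the questionnaire before collection, rather than after a puzzling analysis, is what keeps this ambiguity from arising in the first place.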
