Attributes of Sound Performance Measurement System

Multinational enterprises need to measure the performance of all of their organizational units and subsidiaries. The efficacy of organizational control depends on sound measurement of performance, which in turn requires the right selection of a range of performance measures appropriate to the particular company and context. This selection ought to be made in the light of the company's strategic intentions, which will have been formed to suit the competitive environment in which it operates and the kind of business it is.

There are at least three major attributes expected of any good performance measurement system: Reliability, Validity and Objectivity. Each characteristic is examined in detail below.

1. Reliability

Reliability of a measurement mechanism refers to the dependability or consistency of the measures it provides. It refers to "the accuracy of the data in the sense of their stability, repeatability, or precision". There are two ways of looking at dependability. One is the comparability of measures provided by different parts of the same test. The second is the comparability of measures provided by the test on different occasions. In both procedures, we produce two sets of measures which can be correlated to provide an estimate of reliability.

Comparability of measures provided by different parts of a measurement system: This procedure is based on the rationale that different parts of the measurement mechanism (different items) should make comparable estimates of the performance of an entity. Let us illustrate this with an example. Suppose 'Item 1', say the cash flow of a subsidiary, shows that the firm is a very superior achiever. Normally 'Item 4', say market value addition, should make the same assessment of superior performance, if the measurement mechanism is a satisfactory measure of achievement. If for some reason 'Item 4' makes a poor estimate of the firm's ability (suppose 'Item 4' assesses the firm as an inferior achiever), then both items, 1 and 4, will be looked upon with disbelief. The method used for estimating reliability based on this argument is called the 'split-half method'; a common way of operationalizing the split is the 'odd-even method'. This form of reliability is called internal consistency.
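The split-half procedure described above can be sketched in code. The following is a minimal illustration with hypothetical subsidiary scores (the item values and the Spearman-Brown step for estimating full-length reliability are assumptions for illustration, not data from the text): the odd-numbered and even-numbered items are summed separately for each subsidiary, and the two resulting score sets are correlated.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Odd-even split-half reliability with Spearman-Brown correction.

    scores: one row per entity, one column per item.
    """
    odd = [sum(row[0::2]) for row in scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in scores]  # items 2, 4, 6, ...
    r = pearson(odd, even)
    # Spearman-Brown correction projects the half-test correlation
    # onto the full-length test.
    return 2 * r / (1 + r)

# Hypothetical item scores (e.g. on a 1-10 scale) for four subsidiaries
scores = [
    [7, 8, 6, 7],  # subsidiary A
    [4, 5, 5, 4],  # subsidiary B
    [9, 9, 8, 9],  # subsidiary C
    [3, 2, 4, 3],  # subsidiary D
]
rel = split_half_reliability(scores)
```

A reliability estimate close to 1 indicates that the two halves of the instrument rank the subsidiaries in almost the same way, which is the internal consistency the text describes.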

Comparability of measures provided by a measurement system on different occasions: This method of assessing reliability is based on the rationale that a good measurement system must give almost the same measurement when applied to the same entity on different occasions. Suppose a test of the emotional intelligence of the chief of an overseas subsidiary shows his EIQ (emotional intelligence quotient) as 8 on a 1-10 scale, which stands for superior EI. But suppose we use the test on the same person after one month and find that his EIQ as revealed by the test is 6 (which indicates only average performance); then the measures provided by the test are not dependable. Ideally, the two scores should be the same. But when we take into consideration the inaccuracies which enter into mental measurement (factors like maturation, forgetting, varying test conditions, etc.), we are willing to admit small differences. In any case, we will be satisfied with the test only if it provides comparable measures from time to time. This form of reliability is also called temporal stability.
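Test-retest (temporal) stability reduces to correlating the scores obtained on the two occasions across the people tested. A minimal sketch, using hypothetical EIQ scores for five managers (the data are assumptions for illustration):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical EIQ scores (1-10 scale) for five managers,
# tested one month apart
first_occasion = [8, 6, 9, 5, 7]
second_occasion = [7, 6, 9, 4, 7]

stability = pearson(first_occasion, second_occasion)
```

Small shifts in individual scores (8 to 7, 5 to 4) are tolerated, as the text notes; the test-retest correlation stays high as long as the ordering of the people measured is preserved across occasions.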

2. Validity

Validity refers to the ability of a measurement tool to measure what it is supposed to measure. Validity is defined as 'the extent to which the procedure actually accomplishes what it seeks to accomplish or measures what it seeks to measure'. Validity has been classified mainly into three forms, namely, content, construct and criterion-related.

Content Validity: The contents of the measurement tool must adequately and comprehensively cover the major elements of the performance dimension that is measured. For instance, measurement of the 'intangibles' of a firm must cover its goodwill, brand equity, corporate governance, corporate social responsibility, environmental concerns, cherished values, and so on. A very simple example would be: a question paper, as a measurement tool of student learning, must cover the whole syllabus and not just a few areas.

Construct Validity: Construct validity concerns whether the measurement captures the logical or underlying factors that explain performance. Performance is driven by performance boosters such as ability, behavior and commitment; from these, the performance level can be constructed. Alternatively, we can establish this type of validity by logically analyzing the contents of the measurement yardstick.

Criterion-related Validity: Criterion validity makes use of a statistical comparison of the performance scores of a firm with some independent criterion. Reasonable agreement between the two measures is interpreted as evidence of this type of validity. The external measure used for comparison is termed the criterion for validation, and it is justified on the basis of some logical connection which should exist between the test and the criterion. For example, firms with higher profitability should also score well on the external criterion, namely, higher market valuation.
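The profitability-versus-market-valuation example above can be sketched as a correlation between the internal measure and the external criterion. The figures below (return on equity and market-to-book ratios for five firms) are hypothetical, chosen only to illustrate the computation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Internal performance measure: return on equity (%) for five firms
profitability = [12.0, 8.5, 15.2, 5.1, 10.3]
# External criterion: market-to-book ratio for the same firms
market_valuation = [2.1, 1.4, 2.8, 0.9, 1.8]

criterion_validity = pearson(profitability, market_valuation)
```

A high positive correlation between the internal score and the external criterion is the "reasonable agreement" the text treats as evidence of criterion-related validity; a weak or negative correlation would cast doubt on the internal measure.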

3. Objectivity

The extent to which a measure is a function of the trait measured, and of nothing else, is referred to as objectivity. It is the exact opposite of subjectivity. A subjective measure is one in which the person making the measurement allows his own values, judgments and prejudices to enter into the measurement. Examples of subjective judgments are many.

Some techniques of measurement are more prone to subjectivity than others. Interviews, for example, can yield results that are not completely objective unless adequate precautions are taken. The behavior, appearance, or voice of a person may lead the evaluator to rate the person high or low, even though the person is in fact average.

Objectivity in measurement is ensured by providing a standard measurement tool, veiling the identity of the person or entity assessed, standardizing the conditions for scoring and interpreting measurement scores, and giving specific directions for using the scores. Many of the precautions taken to attain reliability of measurement will often also help in ensuring objectivity.