
Our panel of experts answers your questions on audit

Dr Mark Charny explains the purpose of standards and how to set them, and how case review relates to clinical audit

Q How do standards relate to guidelines?

A Guidelines are systematically developed statements to help clinicians (and patients) make decisions in specific clinical circumstances. They normally define best practice and are often expressed in qualitative as well as – or instead of – quantitative terms. For example, if a blood pressure reading is raised on one occasion, the patient should have further readings to establish whether his or her blood pressure is generally raised. If it is, anti-hypertensive treatment should be started.

An indicator is a measurable element of performance for which there is evidence (or at least consensus) that it can be used to assess the quality of care that is given. For example, patients with blood pressure above a certain threshold should have a further reading within one month.

A criterion is a definition of the indicator in terms that are so explicit that care can be unambiguously labelled as good or bad. For example, patients with a blood pressure of >160/90 mmHg should have a further reading within one month.

A standard is the degree to which the care given should meet [target standard] or did meet [standard achieved] the criterion. For example, the target standard might be that 100% of patients with a blood pressure of >160/90 mmHg should have a second reading within one month. The audit may show that 90% of patients with readings of >160/90 mmHg had a second reading within one month.
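The step from criterion to standard achieved is a simple calculation over the audited records. A minimal sketch, using the blood pressure criterion above and entirely hypothetical patient data:

```python
# Sketch: measuring the standard achieved against an explicit criterion.
# Criterion (from the example): patients with BP > 160/90 mmHg should
# have a second reading within one month. Records are hypothetical.

patients = [
    {"bp": (170, 95),  "second_reading_days": 20},    # meets criterion
    {"bp": (180, 100), "second_reading_days": 45},    # second reading too late
    {"bp": (150, 85),  "second_reading_days": None},  # criterion does not apply
    {"bp": (165, 92),  "second_reading_days": 10},    # meets criterion
]

def criterion_applies(p):
    """The criterion covers patients with a raised reading (>160/90 mmHg)."""
    systolic, diastolic = p["bp"]
    return systolic > 160 or diastolic > 90

eligible = [p for p in patients if criterion_applies(p)]
met = [p for p in eligible
       if p["second_reading_days"] is not None
       and p["second_reading_days"] <= 30]

target_standard = 100.0  # percent
standard_achieved = 100.0 * len(met) / len(eligible)
print(f"Standard achieved: {standard_achieved:.0f}% "
      f"(target {target_standard:.0f}%)")
```

The gap between `target_standard` and `standard_achieved` is the discrepancy the audit then acts on.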

This demonstrates that there is a close relationship between guidelines and standards. Indicators and criteria provide the link between the two: they define more general statements in operational terms, allowing data to be collected and the gap between desired and actual care to be measured.

The larger the discrepancy between desired care and actual care, the greater the need to take remedial action.

Q We recently audited the care of patients with dyspepsia suggestive of a peptic ulcer, and one of the standards we set was that 70% of patients should be tested for H. pylori. Some of us thought that every patient should be tested and that a 70% standard implies that it is acceptable for 30% of patients to have poor care. Should we have set the standard at 100%?

A Setting standards involves making fine judgements about reconciling ideal care with the realities of an imperfect world. Idealists like to audit against standards of 100% on the basis that everyone deserves care that conforms to best practice. On the other hand, pragmatists argue that this is unrealistic, and that a standard should take account of the contingencies that affect practice.

I feel sympathetic to both points of view. It is true that care will usually fall short of the ideal, and aiming for perfection is likely to act as a disincentive if those providing care feel that they can never reach the target. On the other hand, setting a standard at a lower level (say, 70%) may institutionalise substandard care, and leaves open the question of which 30% receive 'second-class' care – and why.

Fortunately, by using exceptions, it is possible to reconcile these two apparently conflicting points of view. Exceptions exclude from the standard certain defined groups of patients or patients being treated under certain conditions. The standard that applies to the remaining patients is an 'all or nothing' (i.e. a 0% or 100%) standard.

This approach requires an explicit statement about the circumstances in which the 'friction' of the pragmatist's real world applies and brooks no excuse where this is not true. For example, instead of suggesting that 90% of eligible patients should have a cervical smear test every 3 years (which may allow a certain mental sloppiness about what happens to the other 10% of patients), we might generate exceptions. These might be women who have not had sexual intercourse, women who refuse a smear test, and women who have had a hysterectomy, for example. In the remainder, we could aim for 100% uptake.

Framing exceptions in this way imposes a useful mental discipline. First, explicit statements mean that you are more likely to examine the circumstances in which the standard does not apply and satisfy yourself that this is, in fact, reasonable.

Second, it may suggest setting another standard for the groups covered by exceptions. In the example of cervical smears, you might decide that women who refuse a smear test should have a visit from a health visitor and be given detailed information about smear tests. You could then set a 100% standard for this aspect of care in these women.
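The exceptions approach amounts to filtering out the defined groups before applying the all-or-nothing standard, and then auditing the excepted groups against their own standards. A minimal sketch, using the cervical smear example with hypothetical records and hypothetical exception labels:

```python
# Sketch: an 'all or nothing' standard with explicit exceptions.
# Exception categories follow the cervical smear example in the text;
# the records themselves are hypothetical.

women = [
    {"id": 1, "smear_in_last_3_years": True,  "exception": None},
    {"id": 2, "smear_in_last_3_years": False, "exception": "refused test"},
    {"id": 3, "smear_in_last_3_years": False, "exception": "hysterectomy"},
    {"id": 4, "smear_in_last_3_years": False, "exception": None},  # shortfall
]

# Step 1: exclude the explicitly defined exception groups.
included = [w for w in women if w["exception"] is None]

# Step 2: the remaining group is audited against a 100% target.
met = [w for w in included if w["smear_in_last_3_years"]]
standard_achieved = 100.0 * len(met) / len(included)
print(f"{standard_achieved:.0f}% of eligible women screened (target 100%)")

# Step 3: excepted groups can get their own standard, e.g. every woman
# who refused should have been offered detailed information on smear tests.
refused = [w for w in women if w["exception"] == "refused test"]
```

Any woman in `included` without a smear is an unambiguous audit finding, rather than being lost inside an acceptable 10% shortfall.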

Q Is a review of records a type of clinical audit?

A Clinical audit has a nomenclature of its own. Unfortunately, there are no universally agreed definitions, so the answer to your question might be either yes or no!

The answer depends on the purpose of the case review. For example, if the purpose were to validate data quality or to support an invoice, it would not be clinical audit.

If the purpose were to look at a random sample of cases to establish the current pattern of care, perhaps to ascertain whether certain standards appear to be achievable or to see whether there was a cause for concern, it would not be a clinical audit – although it might be a precursor to such an audit and be part of the 'audit process'.

If, however, a case review of the last five cases of termination of pregnancy were undertaken to establish whether there had been deficiencies in the contraceptive care offered, it would be a way of carrying out an audit.

As I have argued previously (Guidelines in Practice November 2002), a clinical audit of care should be carried out against pre-determined standards. Case review often implies a rather more wide-ranging enquiry with less explicitly defined criteria and standards than a formal clinical audit, but this is a matter of degree rather than a qualitative difference.

Of course, case record review is often undertaken as part of the peer review process when standards of care have been measured through a more classical audit approach and found to reveal substandard care. Here, the review is intended to clarify the reasons for the findings, so that action can be taken to improve future care.