[[Evidence-based practice]] describes a healthcare system in which evidence from published studies, often mediated by [[systematic reviews]] or processed into [[medical guideline]]s, is incorporated into clinical practice. The flow of information is one way: from research to practice. However, many interventions by health systems and treatments by their staff have never been, or cannot easily be, subjected to research study. Of the rest, much rests on research that is graded as low quality.<ref>{{cite journal |author=Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schünemann HJ |title=What is "quality of evidence" and why is it important to clinicians? |journal=BMJ |volume=336 |issue=7651 |pages=995–8 |year=2008 |month=May |pmid=18456631 |pmc=2364804 |doi=10.1136/bmj.39490.551019.BE}}</ref> All health staff treat their patients on the basis both of information from research evidence and of their own experience. The latter is personal, subjective and strongly influenced by stark instances that may not be representative.<ref>{{cite journal |author=Malterud K |title=The art and science of clinical knowledge: evidence beyond measures and numbers |journal=Lancet |volume=358 |issue=9279 |pages=397–400 |year=2001 |month=August |pmid=11502338 |doi=10.1016/S0140-6736(01)05548-9}}</ref> However, when information on these interventions and their outcomes is collected systematically it becomes "practice-based evidence"<ref>{{cite journal |author=Horn SD, Gassaway J |title=Practice-based evidence study design for comparative effectiveness research |journal=Medical Care |volume=45 |issue=10 Supl 2 |pages=S50–7 |year=2007 |month=October |pmid=17909384 |doi=10.1097/MLR.0b013e318070c07b}}</ref> and can complement that from academic research.
To date, such initiatives have been largely confined to primary care<ref>{{cite journal |author=Ryan JG |title=Practice-Based Research Networking for Growing the Evidence to Substantiate Primary Care Medicine |journal=Annals of Family Medicine |volume=2 |issue=2 |pages=180–1 |date=1 March 2004|pmid=15083861 |pmc=1466650 |url=http://www.annfammed.org/cgi/pmidlookup?view=long&pmid=15083861 }}</ref> and rheumatology.<ref>{{cite journal |author=Pincus T, Sokka T |title=Evidence-based practice and practice-based evidence |journal=Nature Clinical Practice. Rheumatology |volume=2 |issue=3 |pages=114–5 |year=2006 |month=March |pmid=16932666 |doi=10.1038/ncprheum0131}}</ref> An example of practice-based evidence is found in the evaluation of a simple intervention such as a medication. [[Efficacy]] is the degree to which it can improve patients in randomised controlled trials, the epitome of evidence-based practice. [[Effectiveness]] is the degree to which the same drug improves patients in the uncontrolled hurly-burly of everyday practice; data which are much more difficult to come by. Routine health outcomes measurement has the potential to provide such evidence.




The information required for practice-based evidence is of three sorts: context (e.g. case mix), intervention (treatment) and outcomes (change).[6] Some mental health services are developing a practice-based evidence culture with the routine measurement of clinical outcomes[7][8] and are creating behavioral health outcomes management programs.


There are many similar, overlapping definitions of health outcomes. All involve change in health status; some stipulate that the population or group has to be defined (different outcomes are expected for different people and conditions), while others also specify that health outcomes are the result of interventions, or their lack, rather than simply change over time. A strong example is that of Australia’s New South Wales Health Department: a health outcome is

"change in the health of an individual, group of people or population which is attributable to an intervention or series of interventions"[9]

In its purest form, measurement of health outcomes implies identifying the context (diagnosis, demographics etc.), measuring health status before an intervention is carried out, measuring the intervention, measuring health status again and then plausibly relating the change to the intervention.
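In code, a single episode under this scheme might look like the following minimal Python sketch. The class, its field names and the 0–100 health scale are all illustrative assumptions for exposition, not part of any real outcomes system:

```python
from dataclasses import dataclass

@dataclass
class OutcomeEpisode:
    """One episode of care, capturing the three data dimensions:
    context, intervention and health status before/after."""
    diagnosis: str        # context
    age: int              # context
    intervention: str     # intervention (treatment given)
    score_before: float   # health status at baseline (0-100, higher = better)
    score_after: float    # health status at follow-up

    def change(self) -> float:
        """Raw outcome: change in health status attributable,
        plausibly, to the intervention."""
        return self.score_after - self.score_before

# A hypothetical episode on an invented 0-100 scale.
episode = OutcomeEpisode(diagnosis="depression", age=42,
                         intervention="CBT",
                         score_before=35.0, score_after=55.0)
print(episode.change())  # 20.0
```

The point of the structure is that the change score alone is uninterpretable; the context and intervention fields are what allow the change to be plausibly related to care.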

An early example of a routine clinical outcomes system was set up by Florence Nightingale in the Crimean War. The outcome under study was death. The context was the season and the cause of death: wounds, infections or any other cause. The interventions were nursing and administrative. She arrived just before the barracks at Scutari received the first soldiers wounded at the Battle of Inkerman in November 1854, and mortality was already high. Appalled at the disorganisation and the standards of hygiene, she set about cleaning and reorganisation. However, mortality continued to rise, and it was only after the sewers were cleared and ventilation improved in March 1855 that it fell. On her return to the UK she reflected on these data and produced new sorts of chart (she had trained in mathematics rather than "worsted work and practising quadrilles") to show that the excess deaths were most likely caused by living conditions rather than, as she had initially believed, by poor nutrition. She also showed that soldiers in peacetime had an excess mortality over other young men, presumably from the same causes. Her reputation was damaged, however, when she and William Farr of the General Register Office collaborated on a table that appeared to show a mortality in London hospitals of over 90%, compared with less than 13% in Margate. They had made an elementary error in the denominator; the true rate for London hospitals was actually 9% of admitted patients.[10] She was in any case never keen on hospital mortality figures as outcome measures:

"If the function of a hospital were to kill the sick, statistical comparisons of this nature would be admissible. As, however, its proper function is to restore the sick to health as speedily as possible, the elements which really give information as to whether this is done or not, are those which show the proportion of sick restored to health, and the average time which has been required for this object…"[11]
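The denominator error in the Nightingale–Farr table can be illustrated numerically. All figures below are invented, chosen only to show the mechanism: dividing deaths by a snapshot-style count such as average occupied beds, rather than by all patients treated, grossly inflates the apparent rate, because long-stay survivors shrink the denominator without affecting the numerator.

```python
# Invented figures for a hypothetical hospital over one year.
deaths = 90
patients_treated = 1000    # every admission during the year
avg_occupied_beds = 100    # average daily bed occupancy (a snapshot count)

rate_per_admission = deaths / patients_treated   # the defensible rate
rate_per_bed = deaths / avg_occupied_beds        # the misleading rate

print(f"{rate_per_admission:.0%} vs {rate_per_bed:.0%}")  # 9% vs 90%
```

The same deaths yield a rate ten times higher simply because of the denominator chosen, which is the shape of the error that produced the apparent >90% London figure against the true 9%.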

Here she presaged the next key figure in the development of routine outcomes measurement.

Ernest Amory Codman was a Boston orthopaedic surgeon who developed the "end result idea". At its core was

"The common sense notion that every hospital should follow every patient it treats, long enough to determine whether or not the treatment has been successful, and then to inquire 'if not, why not?' with a view of preventing similar failures in the future."[12]

He is said to have first articulated this idea to his colleague Franklin H. Martin, a Chicago gynaecologist who later founded the American College of Surgeons, during a hansom cab journey from Frimley Park, Surrey, UK in the summer of 1910. He put the idea into practice at Massachusetts General Hospital.

"Each patient who entered the operating room was provided with a 5-inch by 8-inch card on which the operating surgeon filled out the details of the case before and after surgery. This card was brought up 1 year later, the patient was examined, and the previous years' treatment was then evaluated based on the patient's condition. This system enabled the hospital and the public to evaluate the results of treatments and to provide comparisons among individual surgeons and different hospitals"[13]

He was able to demonstrate his own patients’ outcomes and those of some of his colleagues, but this system was not otherwise embraced at the hospital. Frustrated by the resistance, he provoked an uproar at a public meeting and fell dramatically from favour at the hospital and at Harvard, where he held a teaching post; he was only able to realize the idea fully in his own small, struggling private hospital,[14] although some colleagues continued with it at the larger hospitals. He died in 1940, disappointed that his dream of publicly available outcomes data was not even on the horizon, but hopeful that posterity would vindicate him.

In a classic 1966 paper, Avedis Donabedian, the renowned public health pioneer, described three distinct aspects of quality in health care: outcome, process and structure (in that order in the original paper).[15] He had misgivings about solely using outcomes as a measure of quality, but concluded that

"Outcomes, by and large, remain the ultimate validation of the effectiveness and quality of medical care."[15]

He may have muddied the waters somewhat by discussing patient satisfaction with treatment (usually regarded as a measure of process) as an outcome, but more importantly his three-aspect model has since been subverted into the so-called "structure-process-outcomes" model, a directional, putatively causal chain that he never described. This subversion has been the justification for repeated attempts to improve process, and thus outcomes, by reorganizing the structure of health care, wittily described by Oxman et al.[16] Donabedian himself cautioned that outcomes measurement cannot distinguish efficacy from effectiveness (outcomes may be poor because the right treatment is badly applied or because the wrong treatment is carried out well); that outcomes measurement must always take context into account (factors other than the intervention may be very important in determining outcomes); and that the most important outcomes may be the least easy to measure, so that easily measured but irrelevant outcomes are chosen instead (e.g. mortality instead of disability).

Perhaps because of instances of scandalously poor care (for example at the Bristol Royal Infirmary, 1984–1995[17]), mortality data have become more and more openly available as a proxy for other health outcomes in hospitals,[18] and even for individual surgeons.[19] However, Florence Nightingale’s astringent judgement and Donabedian’s reservations retain their full force for most health services, where routine measurement of non-mortal health outcomes remains the most appropriate method.

All three dimensions (context, intervention and outcomes) must be measured; outcomes data cannot be understood without all three.

Different perspectives on outcomes need to be acknowledged. For instance, patients, carers and clinical staff may have different views of which outcomes are important, how they should be measured, and even which are desirable.[20]

Prospective and repeated measurement of health status is superior to retrospective measurement of change such as Clinical Global Impressions.[21] Retrospective measurement relies on memory and may not be possible if the rater changes.

The reliability and validity of any measure of health status must be known, so that their impact on the assessment of health outcomes can be taken into account. In mental health services these values may be quite low, especially when measurement is carried out routinely by staff rather than by trained researchers, and when short measures that are feasible in everyday practice are used.
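One common way to allow for imperfect reliability when judging an individual change score is the Jacobson–Truax reliable change index (RCI), which divides the observed change by the standard error of a difference of two scores. The sketch below, with illustrative numbers, shows how the same 10-point improvement is convincing on a reliable measure but ambiguous on a low-reliability routine measure:

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson-Truax RCI: change divided by the standard error of the
    difference. |RCI| > 1.96 suggests the change exceeds what
    measurement error alone would plausibly produce."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                     # SE of the difference of two scores
    return (post - pre) / s_diff

# With a highly reliable measure (r = 0.9) a 10-point gain is clearly real:
print(round(reliable_change_index(40, 50, sd_baseline=10, reliability=0.9), 2))  # 2.24
# With a low-reliability routine measure (r = 0.5) the same gain is ambiguous:
print(round(reliable_change_index(40, 50, sd_baseline=10, reliability=0.5), 2))  # 1.0
```

The second result (RCI = 1.0, well under 1.96) is why low reliability in routinely collected measures must be taken into account before individual changes are interpreted as real outcomes.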

Data collected must be fed back to the clinicians who collect them in order to maximize data quality, reliability and validity.[22] Feedback should cover both content (e.g. the relationship of outcomes to context and interventions) and process (the data quality of all three dimensions).

One can find reports of routine health outcomes measurement in many medical specialties and in many countries. However, the vast majority of these reports are by or about enthusiasts who have set up essentially local systems, with little connection with other similar systems elsewhere, even down the street. In order to realise the full benefits of an outcomes measurement system we need large-scale implementation using standardised methods, with data captured from a high proportion of suitable healthcare episodes. In order to analyse change in health status (health outcomes) we also need data on context, as recommended by Donabedian[15] and others, and data on the interventions being used, all in a standardised manner. Such large-scale systems are at present evident only in the field of mental health services, and well developed in only two locations: Ohio[7] and Australia,[8] even though in both of these data on context and interventions are much less prominent than data on outcomes. The major challenge for health outcomes measurement is now the development of usable and discriminatory categories of interventions and treatments, especially in the field of mental health.

Data about groups of patients

Can form the basis of effectiveness data that complement efficacy data. This could show the actual benefits in everyday clinical practice of interventions previously tested by randomised controlled trials, or the benefits of interventions that have not been, or cannot be, tested in randomised controlled trials and systematic reviews.

Can identify hazardous interventions whose effects are only apparent in large datasets.

Can be used to show differences between clinical services with similar case mix, and thus stimulate the search for testable hypotheses that might explain these differences and lead to improvements in treatment or management.

Can be used to compare the outcomes of treatment and care from different perspectives, e.g. those of clinical staff and patients.

Data about individual patients

Can be used to track changes during treatment over periods too long for an individual patient or clinician to recall accurately, especially when more than one clinician or team is involved.

Can, especially when different perspectives are available, be used in discussions between patients, clinicians and carers about progress.[23]

If attempts are made to purchase or commission health services using outcomes data, bias may be introduced that negates the benefits.

Inadequate attention may be paid to the analysis of context data, such as case mix, leading to dubious conclusions.

If data are not fed back to the participating clinicians, data quality (and quantity) will fall below the thresholds necessary for reasonable interpretation.

If only a small proportion of episodes of health care have complete outcomes data, those data may not be representative of all episodes, although the threshold for this effect will vary from service to service and from measure to measure.

Some widely foretold risks of bias[25] are proving to be insubstantial, but still need guarding against.

Experience suggests that the following factors are necessary for routine health outcomes measurement:

an electronic patient record system with easy extraction from a data warehouse. Entry of outcomes data can then become part of the everyday entry of clinical data; without this, aggregate data analysis and feedback are very difficult indeed.

resources and staff time set aside for training and for receiving feedback.

Routine health outcomes measurement is in its infancy, but technological advances in clinical information systems mean that it is now feasible. It is still possible that it will be implemented badly (e.g. without feedback) or for the wrong purposes (e.g. purchasing or commissioning services) or both, but as long as these are avoided it promises to be of great benefit to individual patients, clinicians and, ultimately, health care overall. It is strongly supported by patients.[26]

Codman EA. A study in hospital efficiency: as demonstrated by the case report of the first five years of a private hospital. Privately published, 1917. Reprinted 1996: Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, IL, USA.

Long A, Jefferson J. The significance of outcomes within European health sector reforms: towards the development of an outcomes culture. International Journal of Public Administration 1999;22(3):385–424.