This is an important document: it describes the challenges of existing evaluation methodologies and proposes a new approach to measuring the impact of new, complex models of care. It is aimed at anyone involved in service redesign in the NHS and is a comprehensive guide to this innovative methodology. Throughout the guide, the authors demonstrate how retrospective matched control analysis can be used to evaluate the clinical and cost effectiveness of one way of working compared with another.

Overview of potential evaluation methods

The Nuffield Trust defines retrospective matched control analysis (RMCA) as an approach in which “routinely collected data is used to construct a matched control group, and the impact of the intervention (or service redesign) is measured in terms of differences in the outcomes relative to the matched control group.” In other words, in RMCA, outcomes such as unexpected hospital admissions are compared between patients who received the new service and similar patients who were treated differently. Patients are matched on demographic and clinical characteristics, such as age, sex, socio-economic factors, and existing conditions. This helps service designers see which care model was most effective, although, as an observational method, RMCA carries a risk of bias that must be taken into account when planning services.

Retrospective matched control analysis is a useful evaluation tool for service redesign

The authors of this guide have compared this method with other evaluation methods, such as:

Randomised controlled trials, where there are issues with recruiting patients.

Longitudinal studies, comparing populations over time, where some outcomes may occur regardless of the care model applied.

Ecological studies, looking at a local geographical area, which might have different outcomes from a wider population.

They list the benefits and limitations of using RMCA, and conclude that it is a more robust method for evaluation purposes. RMCA uses existing data, routinely collected within hospitals, which means that patients are not actively recruited and the outcomes have already occurred. The evaluation can therefore take place behind the scenes, with no need for direct patient involvement.


Ten steps towards retrospective matched control analysis

To facilitate the application of this method of evaluation, the guide outlines ten key steps that evaluators need to follow:

Clarify the aims of the service and the evaluation – it is important that there is a purpose to the evaluation and that everyone understands what that purpose is.

Decide on the number of people needed to demonstrate an effect – the guide goes into great detail about power calculations, which estimate how many people must be included for a real effect to show up as statistically significant.
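To give a flavour of what a power calculation involves, the sketch below uses a standard two-proportion sample-size formula. This is a generic illustration, not the guide's own worked example, and the admission rates shown are invented:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p_control: float, p_intervention: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of patients needed in each group to detect
    a difference between two proportions (e.g. emergency admission
    rates), at two-sided significance `alpha` and the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = z.inv_cdf(power)            # critical value for power
    variance = (p_control * (1 - p_control)
                + p_intervention * (1 - p_intervention))
    return ceil((z_alpha + z_beta) ** 2 * variance
                / (p_control - p_intervention) ** 2)

# Invented figures: detecting a fall in admission rate from 20% to 15%
print(n_per_group(0.20, 0.15))  # → 903 patients per group
```

Note how sensitive the result is to the size of the expected effect: detecting a larger fall (20% to 10%) needs far fewer patients, which is why this step must be done before the control population is chosen.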

Ensure permission is granted to access person-level datasets – information governance is extremely important in the NHS, to ensure the privacy of service users. Either permission must be granted or anonymised data should be used.

Ensure there are data on who received the new service, and some information about the service received – if this data is not available, the evaluation process will not glean useful results.

Identify the potential control population – in this step, the authors provide useful guidance describing the choices for controls available.

Create longitudinal patient-level histories of service use – this step describes how best to do this, to ensure that a useful amount of data is collected to present a clear picture of the history and care activity over time.
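In practice, building these histories means grouping routinely collected activity records by patient and ordering them in time. A minimal sketch, with invented field names and example data:

```python
from collections import defaultdict
from datetime import date

# Invented example of routinely collected activity records
events = [
    {"patient_id": "A1", "date": date(2023, 3, 1), "event": "A&E attendance"},
    {"patient_id": "B2", "date": date(2023, 1, 9), "event": "outpatient visit"},
    {"patient_id": "A1", "date": date(2022, 11, 5), "event": "emergency admission"},
]

def build_histories(records):
    """Group activity records by patient, sorted chronologically,
    so each patient has a timeline of service use."""
    histories = defaultdict(list)
    for rec in records:
        histories[rec["patient_id"]].append(rec)
    for patient_records in histories.values():
        patient_records.sort(key=lambda r: r["date"])
    return dict(histories)

histories = build_histories(events)
print(histories["A1"][0]["event"])  # earliest event comes first
```

Real datasets would of course be far larger and messier, which is why the guide stresses collecting enough history to present a clear picture of care activity over time.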

Identify matched controls – while the authors go into great detail about how to do this, they do recommend that this task should be undertaken by a specialist analyst, someone with expertise in carrying out this sort of research methodology.
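One simple matching technique (not necessarily the one the authors describe, and no substitute for the specialist expertise they recommend) is greedy nearest-neighbour matching on numeric covariates, with each control used at most once:

```python
def match_controls(treated, candidates):
    """Greedy 1:1 nearest-neighbour matching without replacement,
    using squared Euclidean distance over numeric covariates
    (e.g. age, number of long-term conditions). In real analyses
    covariates should be standardised so no single one dominates."""
    pool = list(candidates)
    pairs = []
    for t in treated:
        if not pool:
            break
        best = min(pool, key=lambda c: sum(
            (tv - cv) ** 2 for tv, cv in zip(t["covariates"], c["covariates"])))
        pool.remove(best)  # each control matched at most once
        pairs.append((t["id"], best["id"]))
    return pairs

# Invented illustrative data: covariates = (age, long-term conditions)
treated = [{"id": "T1", "covariates": (72, 3)}, {"id": "T2", "covariates": (55, 1)}]
controls = [{"id": "C1", "covariates": (54, 1)},
            {"id": "C2", "covariates": (70, 3)},
            {"id": "C3", "covariates": (40, 0)}]
print(match_controls(treated, controls))  # → [('T1', 'C2'), ('T2', 'C1')]
```

Greedy matching is easy to understand but order-dependent, which is one reason the guide recommends a specialist analyst for this step.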

Monitor outcome variables for those receiving the new service and matched controls – this means referring back to step 1, and seeing what the initial outcomes are. The guide provides an example of how the Nuffield Trust has carried out this step.

Undertake summative analysis – again, the authors have provided an example on how to do this, but specialist expertise would make the process easier, and make the results more robust.
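As one illustration of what a summative comparison might look like (not the Nuffield Trust's own analysis, and the figures below are invented), a two-proportion z-test can compare an outcome rate between the intervention group and its matched controls:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_test(events_a, n_a, events_b, n_b):
    """Compare outcome rates (e.g. emergency admissions) between
    two groups with a pooled two-proportion z-test; returns the
    z statistic and the two-sided p-value."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented figures: 135/900 admissions (new service) vs 180/900 (controls)
z, p = two_proportion_test(135, 900, 180, 900)
print(round(z, 2), round(p, 4))
```

In a real evaluation this would be only one part of the analysis, and matched-pair structure, confidence intervals, and residual confounding would all need specialist handling.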

Continuously monitor – here the authors explain why it is so important to continuously monitor the situation. Health systems are constantly changing, and therefore continuous evaluation is necessary to ensure that current methods remain the most appropriate.


Commentary

Evaluation methodology is vital to service redesign, as it provides the rationale for making changes to existing care models. Published international research evidence can inform policymakers and commissioners about potential new care models, but this evaluation methodology, carried out in the relevant populations, will ensure that the service is designed to suit local, regional, and national needs. The authors of this guide provide the theory, alongside practical steps and examples, to help you implement this evaluation methodology. They make the argument for using RMCA over other study methods and give a clear description of the process.

When implementing this methodology, step 7 should perhaps be thought of nearer the start of the process, because without specialist expertise in this area, it will be difficult to carry out this research in a robust manner. Within your areas of responsibility, do you have access to experts in the areas of research methodology and data analysis? Perhaps this is something to think about when undertaking an evaluation project using RMCA methodology.

Particularly with so many stakeholders involved in the commissioning of new services, it is essential that rigorous, unbiased data are available to all decision-makers, so that the right decisions can be made about service redesign.


Caroline has been a medical librarian in a variety of NHS and academic roles since 1999, working in academic, primary and secondary care settings, service improvement, knowledge management, and on several high profile national projects.
She has a PhD in Computing and currently develops resources to support evidence-based cost and quality, including QIPP @lert, a blog highlighting key reports from health care and other sectors related to service improvement and QIPP (Quality, Innovation, Productivity, Prevention). She also delivers training and resources to support evidence identification and appraisal for cost, quality, service improvement, and leadership.
She is co-author of the Searching Skills Toolkit, which aims to support health professionals' searching for best quality clinical and non-clinical evidence. Her research interests are health management, commissioning, public health, consumer health information literacy, and knowledge management.
She currently works as a Knowledge and Evidence Specialist for Public Health England, and works on the Commissioning Elf in her spare time.