Telecare Soapbox: Evaluation of telehealth services: How good is ‘good enough’?

David Barrett, Lecturer in Telehealth at the University of Hull takes a hard look at how local trials are often evaluated and looks forward to a time when more rigorous approaches will provide solid evidence for the benefits of telehealth.

Regular readers of TelecareAware cannot fail to notice the frequency with which new evaluations of telehealth services are published. In recent months, we’ve seen documents from, amongst others, Kent, SE Essex and Argyll & Bute. These evaluations are almost always positive, suggesting that further deployments of technology can supply huge savings for health and social care organisations, whilst proving immensely popular with users, carers and practitioners.

This growing evidence base in favour of telehealth services strengthens the argument that technology can deliver substantial benefits for individuals and organisations. However… if the evidence is as unequivocal as many evaluations suggest, then why do we continue to see a relatively slow rate of adoption? If the benefits are proven, then why do localities feel the need to repeat the types of evaluation carried out elsewhere? If – as local evaluations suggest – telehealth can substantially reduce the number of hospital admissions for patients with long-term conditions (LTCs), then why is it not advocated in national clinical guidelines?

My view – and it is only my view – is that while these evaluation reports provide evidence that is good enough to convince local commissioners of services, they are generally not good enough to convince others outside that locality or the wider clinical community.

This is largely because local evaluations demonstrate a number of methodological weaknesses. I realise that as soon as phrases like ‘methodological weaknesses’ rear their heads, this whole article is open to accusations of academic snobbery. However, I think that if an evaluation report claims great results from an intervention such as telehealth, then readers need to be convinced that those findings come from a study that was rigorous, robust and reliable.

Sadly, this is not always the case. Many evaluations demonstrate problems that leave the scale of positive benefits open to question. A particular issue is that most evaluations use a ‘before-and-after’ approach to identifying benefits. This is a simple approach to use: for example, let’s assume that Mr B has a telehealth system installed because he has Chronic Obstructive Pulmonary Disease (COPD). We can record how many emergency hospital admissions he had in the six months before the equipment was installed, then look at how many times he was admitted during the six months the telehealth service was deployed, and describe any changes. By calculating the cost of an admission, this method then allows cost savings to be identified, which can be extrapolated to a wider population. Here’s a made-up example of how that might look in a report with a few more patients:

“In the sample of 25 COPD patients, there were 47 emergency hospital admissions in the six-month period before the telehealth equipment was deployed. During the six-month period of deployment, there were only 31 emergency hospital admissions amongst the cohort of 25. This is a 34% reduction in hospital admissions. Assuming a mean cost per emergency admission of £3,000, this demonstrates cost savings of £48,000 in the pilot group over six months, or £3,840 per patient, per annum. Given that there are 1,568 COPD patients in the local area, this demonstrates that telehealth has the potential to save the local NHS over £6M per year.”

Sounds great! All the maths adds up, and the analysis seems to follow a logical progression. But… this type of analysis has a number of flaws, often seen in real evaluation reports. Firstly, there is the issue of seasonality. What if the ‘before’ period was October-March and the ‘after’ period was April-September? Emergency admissions are likely to be lower during the spring and summer anyway, so how much of the effect is due to telehealth? Seasonality can be ‘ironed out’ by comparing like-for-like periods of the year, but not every evaluation does this.

Even if we correct for seasonal changes, before-and-after studies still present problems. The condition of people with LTCs will change – for better or worse – over time, regardless of whether they have telehealth installed or not. Local evaluations have no way of ‘filtering out’ other variables that may have affected healthcare utilisation.

Another problem often encountered – and demonstrated in the example above – is the tendency to extrapolate from small samples. Even if we accept that there was a 34% reduction in hospital admissions, and that this was just because of the telehealth deployment (and not seasonal effects, or changes in other areas of care, or just chance), then we can’t assume that the same thing will necessarily happen with a wider population. Finally, the example above doesn’t take into account any of the costs of the telehealth service itself, thereby providing an over-estimate of economic benefits.
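To make the arithmetic concrete, here is a minimal sketch of the naive calculation behind the made-up report above, and of how including the cost of the service itself shrinks the headline figure. The £1,000-per-patient service cost is a purely hypothetical assumption for illustration; it is not a figure from any real evaluation.

```python
# Naive before-and-after savings calculation, as in the made-up report,
# plus the effect of subtracting a (hypothetical) telehealth service cost.

def naive_savings(before, after, cost_per_admission, patients, population):
    """Return (gross six-month savings, per-patient annual savings,
    extrapolated annual savings for the whole population)."""
    avoided = before - after                        # admissions 'avoided'
    gross = avoided * cost_per_admission            # savings over six months
    per_patient_pa = gross / patients * 2           # annualised, per patient
    extrapolated = per_patient_pa * population      # naive extrapolation
    return gross, per_patient_pa, extrapolated

gross, per_patient, extrapolated = naive_savings(
    before=47, after=31, cost_per_admission=3000, patients=25, population=1568)
print(f"Gross six-month savings: £{gross:,.0f}")        # £48,000
print(f"Per patient, per annum:  £{per_patient:,.0f}")  # £3,840
print(f"Extrapolated, per year:  £{extrapolated:,.0f}") # £6,021,120

# The report ignores the cost of the telehealth service itself. With an
# illustrative (assumed) £1,000 per patient per annum for equipment and
# monitoring, the net figure is considerably smaller:
service_cost_pa = 1000  # hypothetical assumption
net = extrapolated - service_cost_pa * 1568
print(f"Net of service costs:    £{net:,.0f}")          # £4,453,120
```

The sketch reproduces the report’s figures exactly, which is the point: the arithmetic is internally consistent, yet it says nothing about seasonality, chance variation, or whether the pilot sample resembles the wider population.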

The example above exaggerates the types of claims made in local reports, but it is indicative of some of the problems encountered. So, am I saying that local evaluations are of little value? Not at all: the better reports – which solve methodological problems when they can and acknowledge limitations when they can’t – make an important contribution to the telehealth evidence base with regards to the effect on local services. In other words, local evaluations are often ‘good enough’ when we consider telehealth to simply be a tool for service redesign.

But telehealth needs to be more than just a way of working differently. To see large-scale adoption of telehealth services in patients with LTCs there needs to be evidence of clinical benefits, and that is where local evaluations aren’t good enough. To demonstrate clinical benefits – such as reductions in bed days or increased life expectancy – evidence is required where the benefits are attributable to telehealth alone (and not other variables), and where we can say with confidence that improvements were not just down to chance.

Bluntly, this is where academic snobbery has a place. We need randomised controlled trials, qualitative studies of patient experience, complex economic evaluations, systematic reviews and meta-analyses to demonstrate the clinical, financial and quality of life benefits of telehealth in patients with LTCs. It is this type of high-quality evidence that will convince GPs, hospital consultants, community matrons and other healthcare professionals that this technology can enhance their care delivery and improve the lives of patients. It is this type of evidence that will persuade the writers of clinical guidelines and allow telehealth to become embedded in care pathways.

At the moment, the evidence is not quite there. Though a recent Cochrane review of telemonitoring for chronic heart failure (CHF) patients was extremely positive, the NICE guidelines for CHF say that more research is required. The academic evidence bases for telehealth in COPD, diabetes and hypertension are all under-developed. The bad news is that high-quality research evidence is expensive and slow to develop. The evidence base for telehealth is not going to appear overnight, though the results of the Whole System Demonstrator in 2011 will hopefully go a long way towards convincing others of the benefits.

What we therefore need is a mixed economy of telehealth evidence. Local evaluations – if carried out to a high quality – can give us an indication of the potential benefits of service redesign in terms of resource savings and user experience. However, to fully convince clinicians, we also need a robust evidence base that stems from large-scale research studies, providing confirmation of clinical and economic benefits.

I believe strongly that telehealth offers huge possibilities to individuals and organisations. It allows patients to play a greater role in their own care, it gives carers the reassurance that they are not solely responsible for their loved one, and it helps practitioners to better organise their workloads. In addition, I believe that technology – if properly used – can help to improve clinical outcomes whilst providing cost savings.

Comments

I cannot disagree with your arguments, but have a view that the most significant patient benefits are qualitative in nature. To provide ‘robust’ i.e. quantitative data would require a huge investment in technology and manpower. As I am about to evaluate my own local telehealth pilot, based on 60 installations over a six month period, have you any suggestions as to data sets that would be considered more robust?

Thanks for your posting. You are quite right to say that many of the benefits of telehealth come from improved quality of life for users and carers. The only problem is that these benefits alone are not usually enough to convince commissioners to invest.

I’ll drop you an email to discuss your project and share some ideas around evaluation.

This is excellent advice for anyone thinking of a pilot – rather than thinking of how to evaluate something that’s already happened. It seems pretty clear that to achieve the intended outcomes a telehealth service must involve more than simply supplying boxes of technology to patients with a particular condition; it must address many other issues concerning installation, training and clinical service redesign.

Fundamentally, it must be delivered as an end-to-end solution – and this must involve a partnership between commissioners and providers both at a national/regional and at a local level. This is easier said than done – and has rarely been achieved in local pilots up until now. Perhaps Telehealth Service Providers have yet to offer the delivery of robust and affordable services that meet a well-defined specification. A Telehealth Code of Practice will solve many of these problems – so the moves by the Telecare Services Association to develop such a code are to be welcomed.

The procurement exercise in Northern Ireland took a long time to complete – and it will be a further few months before a service is rolled out. However, this time was not wasted and has resulted in a service specification that meets the needs of the commissioners from a business perspective. It also includes a financial tariff for the service provider with the prospect of penalties for poor performance. Put it all together – with the expected clinical outcomes – and the financial case is made.

Future pilots and services could be evaluated by comparing outcomes and costs with the Northern Ireland service.

[quote name=”Paul Larvin”]Where is there any evidence published that compares outcomes against costs? The reason is I am looking into moving our service forward as currently we are in Telecare stage one[/quote]

David has indicated a number of recent areas from which there are published evaluations, maybe try an internet search using those areas combined with “telehealth evaluations”?

However, the key message of David’s article is that the evidence is not quite there for a number of reasons – pilots have been small scale and not run for a long enough period; test sites have selected patients to exclude certain ‘complicating’ factors, but the outcomes are surely skewed because most of our patients will have other factors to contend with, in addition to their LTC, when using the telehealth equipment; and the evaluations have not been robust enough. In addition, I think it is now becoming accepted that the weakness of pilots overall is they rarely scale, and so the extrapolations are unproven.

Having said that, my approach would be from the other direction … what do your service users/patients and clinicians need to gain from telehealth? Do you have senior level buy-in to the concept? Do you have solid partnerships that will develop and support this change? If you do not know the answers or they are not affirmative, you have some way to go … by which time there will undoubtedly be better, more robust evaluations available to support the strategy your area has defined.

Our definitions

Telehealth and Telecare Aware posts pointers to a broad range of news items. Authors of those items often use the terms 'telecare' and 'telehealth' in inventive and idiosyncratic ways. Telecare Aware's editors can generally live with that variation. However, when we use these terms we usually mean:

• Telecare: from simple personal alarms (AKA pendant/panic/medical/social alarms, PERS, and so on) through to smart homes that focus on alerts for risk including, for example: falls; smoke; changes in daily activity patterns and 'wandering'. Telecare may also be used to confirm that someone is safe and to prompt them to take medication. The alert generates an appropriate response to the situation allowing someone to live more independently and confidently in their own home for longer.

• Telehealth: as in remote vital signs monitoring. Vital signs of patients with long term conditions are measured daily by devices at home and the data sent to a monitoring centre for response by a nurse or doctor if they fall outside predetermined norms. Telehealth has been shown to replace routine trips for check-ups; to speed interventions when health deteriorates, and to reduce stress by educating patients about their condition.

Telecare Aware's editors concentrate on what we perceive to be significant events and technological and other developments in telecare and telehealth. We make no apology for being independent and opinionated or for trying to be interesting rather than comprehensive.