Infrequent, but hopefully not inaccurate, postings about anything that captures my attention or pushes my buttons. Consideration currently being given to adding kittens as the background picture to try to increase number of readers above 0.

Wednesday, 9 November 2011

Don't have a cow, man

Above is a particularly rubbish attempt to link in to a post about Barts and The London NHS Trust. And here's a totally unfair, knee-jerk generalisation for you... what is it with journalists? I'm not on a crusade and I haven't got an axe to grind, but I keep finding myself in conflict with them.

And my crime? Well, reading what they write... and, um, querying what looks odd or surprising to me... and, well, that's about it really. Naively, I would think that's exactly what they'd want. When I'm a sports journalist (and surely it can only be a matter of time before a major newspaper comes knocking at my door, begging me to watch every Everton, Bath and Essex game on their time and money), I would want the self-affirmation provided by knowing that someone reads what I write, and engages with the topics that I write about. But hey, as I say, what do I know?

I've got two sagas rumbling on at the moment. The first is looking like being interminable (the Independent have now amended the offending article, but they've made it worse and more inaccurate than the original, which is pretty special of them). And the second is the topic of this blog posting. Here goes...

On the 26th of August 2011, this article was published by The Guardian - "One in 13 A&E patients return within a week - despite being seen". Now, various things about this article irritated me - for example, it muddies the picture between different data sources without any acknowledgement or proper explanation, and it presents one of those data sources, which is clearly published as provisional and experimental, as if it is robust and established. Beyond all that, there was an absolutely huge clanger in it.

It's not entirely dissimilar to the BBC clanger about A&E waiting times, with The Guardian article reporting that:

"Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E"

Now, because I know that the vast majority of people attending A&E nationally wait less than four hours, this claim leapt out at me. So in the interest of accuracy, I queried it:

And obviously Twitter isn't always the best medium to get your point across, so I simplified my point as well (I also employed the patented Ben Goldacre tactic of being polite and humble... partly to be nice, and partly because it may well be that I've made the stuff up!):

Now, I was going to give you a blow by blow, tweet by tweet account of what happened over the next few days and weeks, but ultimately it gets very dull and repetitive. And at least a little bit odd. What I will do is give you a brief summary.

It took one day for the major mistake in the article to be pointed out, but it took over a month for it to be corrected... I say corrected, it would be more accurate to say made a lot less wrong.

During those five weeks, I was repeatedly told that I was wrong and the article was right, that I should take it up with the Department of Health, and that I should declare my interests! I in turn repeatedly explained the massive difference between '95%' and the '95th percentile', and why it's important to recognise data quality caveats.

When numerous tweets failed to get the message across, I also spent the time to write two long emails explaining absolutely everything. The email exchange is below (skim read or skip past it, see if I care! It's fascinating, honest... ahem):

Good afternoon

This should hopefully be easier, freed from the 140 character tyranny.

As you know, DH published new provisional A&E indicators on 26 August 2011:

They've been open and honest about the provisional and experimental nature of the data:

These A&E HES data are published as experimental statistics to note the shortfalls in the quality and coverage of records submitted via the A&E commissioning data set. The data used in these reports are sourced from Provisional A&E HES data, and as such these data may differ to information extracted directly from Secondary Uses Service (SUS) data, or data extracted directly from local patient administration systems.

Provisional HES data may be revised throughout the year (for example, activity data for April 2011 may differ depending on whether they are extracted in August 2011, or later in the year). Indicator data published for earlier months have not been revised using updated HES data extracted in subsequent months.

One of the key facts listed is:

Several organizations reported data that did not meet the data quality checks required by the A&E indicators. The 95th percentile and longest single wait information are particularly sensitive to poor data quality, outliers and data definitional issues, which contributes to why some unusually high values may be observed for these measures

On to the data itself, and Barts in particular.

Barts - row 22.

Time to departure - columns AT to BA.

Because the data is provisional and experimental, DH and the IC have gone heavy on data quality measures, which is a good thing. They want to drive up the quality before deeming the new data set to be properly robust. They've also included helpful footnotes, one of which states:

"The 95th percentile is particularly sensitive to poor data quality and definitional issues, which is why some unusually high values may be observed"

Sorry to bang on about the data quality caveats, but it is important. Your article makes it sound like it is an established data source, which it really isn't - and one of your twitter responses mentions data revisions, which are standard. But the caveats published along with the new A&E data aren't standard.

Of all the trusts listed, Barts has by far the highest proportion of departure times recorded at exactly midnight - 5.1%, compared to a national average of just 0.2%. That means they've almost certainly got a data quality problem.

Their median waiting time is 129 minutes (roughly 2 hours), shorter than the national average of 131 minutes (roughly 2 hours).

Their 95th percentile wait is 521 minutes (roughly 9 hours), much longer than the national average of 258 minutes (roughly 4 hours).

So we know that there's a data quality problem with 5% of Barts' data, and therefore it most likely follows that there will be a data quality problem with looking at Barts' 95th percentile performance.
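To see why a small slice of dodgy records wrecks the 95th percentile while barely touching the median, here's a quick simulation (the numbers are invented to mirror the Barts figures - this is not the actual HES extract):

```python
import random
import statistics

random.seed(42)

# Simulate 11,541 A&E waits (minutes) where most patients leave well
# within four hours - a plausible, made-up distribution.
n = 11541
waits = [max(5, random.gauss(131, 60)) for _ in range(n)]

# Now corrupt ~5.1% of records with a "departure at midnight" default,
# which shows up as an implausibly long apparent wait.
bad = int(n * 0.051)
for i in range(bad):
    waits[i] = random.uniform(600, 1400)  # 10-23 hours: clearly dodgy

waits.sort()

def percentile(data, p):
    """Nearest-rank percentile on already-sorted data."""
    k = max(0, min(len(data) - 1, round(p / 100 * len(data)) - 1))
    return data[k]

print(f"median:          {statistics.median(waits):.0f} min")
print(f"95th percentile: {percentile(waits, 95):.0f} min")
```

The median stays close to two hours, but the 95th percentile lands squarely inside the corrupted 5% and balloons to nine-plus hours - exactly the pattern in the published Barts numbers.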

Also, if Barts had seen 100 patients in A&E (just to keep it simple - they actually saw 11,541), and we listed them all out in order of shortest to longest wait, then the indicators are saying that the 95th patient in that line waited 521 minutes - roughly nine hours. The article, however, claimed:

"Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E"

Even if we ignore the data quality concerns (which we shouldn't), all that could actually be said is "Barts and The London NHS Trust saw 5% of people waiting more than eight hours in A&E". We really shouldn't say even that though, as we know that 5% of Barts' data looks dodgy (5.1% of records set to a departure time of exactly midnight, 00:00 - I'm willing to wager a decent sum that that's not a departure time, it's a default setting).

Given that Barts' median (middle) waiting time is better than the national average, they really shouldn't be singled out. And it would be good if the provisional and experimental nature of the data was mentioned in your article.

Hope the above is a better, more comprehensive explanation than my poor twitter based attempts.

Cheers

Chris

Thanks for this Chris

Sorry have been busy with another set of things here.

I completely see your point. The 95 percentile is actually the data measure used by DoH so apologies for not taking the time to comb through the data. DoH pointed out that Barts was the first trust on that list not to have the 24-hour error that systematically wrecks the data. That being said it must contain a fair few of these as the average time to departure is so high. So really the story is

Dodgy data set used by DoH shows Barts' top 5% longest wait was 9 hours.

I will correct when next back in the office. (Tuesday I think). Why do you think they did not use the median figure?

Thanks

Surprised that DH highlighted Barts - they should really understand the data.

I personally think that you get the most value and accuracy when you report multiple measures in conjunction.

A&E is high volume, so I'd go mean (the average gives you your headline, instantly understandable figure), and then I'd keep an eye on the median, 25th and 75th percentiles (to give you an understanding of the distribution). And I'd throw in the 95th percentile if (!) I was confident that there weren't data quality issues that undermined the measure. The 95th percentile gives you a good idea about the 'tail' of your distribution.
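As a quick illustration of that reporting approach, here's how those measures sit together (the waiting times below are made up for the example, not real trust data):

```python
import statistics

# Hypothetical A&E waits in minutes - illustrative only.
waits = sorted([12, 45, 60, 75, 90, 110, 129, 150, 180, 210,
                240, 250, 260, 300, 360, 400, 440, 480, 510, 540])

def percentile(data, p):
    # Nearest-rank percentile on already-sorted data.
    k = max(0, min(len(data) - 1, round(p / 100 * len(data)) - 1))
    return data[k]

report = {
    "mean":   statistics.mean(waits),    # the headline, instantly understandable figure
    "median": statistics.median(waits),  # the typical patient
    "p25":    percentile(waits, 25),     # with p75, shows the spread of the bulk
    "p75":    percentile(waits, 75),
    "p95":    percentile(waits, 95),     # the tail - only if data quality allows
}
for name, value in report.items():
    print(f"{name}: {value:.0f} min")
```

Reporting these side by side makes it much harder for a single suspect measure - like a 95th percentile inflated by default departure times - to drive the whole story.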

To be honest, as the data set is provisional, experimental and in its infancy, I'd go heavy on the data quality measures and working with the trusts to understand any oddities.

Barts being a case in point!

Thanks again for taking the time to go through it all. Good to get a correction.

Cheers

Chris

So what, I don't hear you ask, was the end result of my patient, comprehensive explanation of why Barts had been unfairly and erroneously singled out? On the 30th of September 2011, one line of the article was amended to read:

"Barts and London NHS Trust saw 5% of people waiting more than eight hours in A&E"

What a load of wasted effort on my part. And how arrogant to deny, deny, fob off, fob off, deflect, deflect, and ultimately only partially correct the original major mistake after a ridiculous length of time. And not take on board any of the other points.

P.S. You may have guessed by the name of the post and its content that I've struggled to give it a coherent structure. I've also wrestled with whether to post it at all (hence the time elapsed), as there's obviously a clear argument that a journalist taking the time to reply is better than one that just ignores you completely. I absolutely concede that, but ultimately I guess my point is that in my admittedly very limited experience there seems to be a real resistance from journalists to being questioned. Read my stuff - great. Agree with me - fantastic. Question me - what, what?!? Who are you? Don't you know who I am? Go away. I'm very busy etc. That's a real shame. Helping correct articles is surely a good thing, and the number one concern should always be the wronged party - in this case the hard working staff at Barts and The London's A&E department. Particularly as they are so open and transparent about how they are performing... if you don't believe me, look here!

P.P.S. It would also be wrong of me not to mention the good - George Monbiot, in my humble opinion, is an absolute star. Engaging, provocative, and above all - well researched, and approachable. Look at the glorious selection of heavily referenced articles on his website:

Nothing special, you might think. But compared to others, it is a revelation. And he goes further... a lot further. A comprehensive biography, as well as helpful career advice, AND a full register of interests: