Words on the value of data

I’m not known for my immaculate office or my attention to detail – I’m the big-picture or conceptual person on a team. I care about values and finding a way to achieve an end result, but I’m less fascinated by detail. There is, however, a time when record-keeping and data come into their own, and I am very grateful to those people who do look after the nuts and bolts. Today’s post is about why.

I work in a tertiary pain management centre in a teaching hospital in a large health organisation with multiple layers of administration and management all wanting a piece of the fiscal pie allocated to us from central government. Throughout my years in health care, I can’t ever recall being told ‘we have lots of money, let’s go shopping’ – in fact, the cry has inevitably been ‘health care needs to economise, minimise waste, maximise outcomes, see more people, get the same outcomes and just make do with what you’ve got.’

And one of the things managers often like to ask is whether it’s really necessary to do ‘all that stuff’ that isn’t face-to-face with patients. You know the things: team meetings, writing reports, reviewing assessment results, tracking down GPs or case managers or employers, professional development… yes, all that stuff. The desire from on high is to see as many people as possible within the allocated funding – but sometimes with scant attention to whether seeing those people has actually made much difference to their lot!

Over the past couple of months, I have had occasion to be very grateful for some of the systems we have in place to track who is seen, what they’re like, and what happens to them after they’ve seen us – as well as a couple of instances when I wish that systems to collect data when a project was initiated had been considered. Here are a couple of examples.

‘In the olden days…’ every person seen at the pain management centre I work in was sent a set of questionnaires before they were seen for their interdisciplinary assessment. For each person, a broader set of information was also collected – on diagnosis, pain site, referrer, duration of pain, employment status and so on. The questionnaire that really ‘did the work’ in terms of allocation to either a three-week full-time programme or a part-time six-week programme was the Multidimensional Pain Inventory, which classifies patients into three profiles: dysfunctional, suggesting the person would probably become increasingly distressed and disabled if they continued coping the way they were; interpersonally distressed, suggesting the person’s main concerns were around their relationships with others; and adaptive coping, suggesting that if they continued to cope the way they were, they would generally be managing reasonably well. Along with a set of other questionnaires on mood, catastrophising, self-efficacy, pain anxiety and disability, these measures helped clinicians make important allocations – those who were not coping well would generally attend the three-week programme, those managing better would attend the six-week part-time programme, and only a few would receive individual input.
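The streaming decision described above boils down to a simple mapping from MPI profile to programme. As a purely illustrative sketch (the profile names follow the Multidimensional Pain Inventory as described in the post, but the mapping function and its handling of edge cases are my simplification, not the centre’s actual decision rule – clinicians weighed the other questionnaires too):

```python
# Hypothetical sketch of the MPI-profile-to-programme streaming rule.
# Profile names are the three MPI profiles mentioned above; the mapping
# is a simplification of clinical judgement, not a real clinical algorithm.

def suggest_programme(mpi_profile: str) -> str:
    """Map an MPI coping profile to a suggested programme stream."""
    profile = mpi_profile.strip().lower()
    if profile == "dysfunctional":
        # Not coping well: the intensive three-week full-time programme
        return "three-week full-time programme"
    if profile == "interpersonally distressed":
        # Main concerns are relational; assumed here to go to the intensive
        # stream pending clinical review (an assumption for this sketch)
        return "three-week full-time programme (pending clinical review)"
    if profile == "adaptive coping":
        # Managing reasonably well: the part-time six-week programme
        return "six-week part-time programme"
    # Anything else falls back to individual assessment in this sketch
    return "individual assessment"

print(suggest_programme("Adaptive coping"))
```

The point isn’t the code itself – it’s that a rule like this only works when the same questionnaire is collected, the same way, for every patient.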

Now for one reason or another, this data set stopped being used. After a gap of several years, a new data set was introduced – another whole new lot of questionnaires, with some cross-over with the previous set, but with much of the ‘biographical’ data no longer available for analysis. What this means is that it’s difficult to work out how many people are referred with a specific diagnosis, it’s hard to tell how many people are referred from specialists versus GPs, and it’s not as easy to stream patients to various interventions on the basis of questionnaire results. In fact, in trying to establish a new group-based intervention, it’s nigh on impossible to find three questionnaire ‘cut-off’ points that will reliably identify enough patients over a specific time period to fill the group quota!

Another example of the value of collecting information prospectively is our three-week group programme. In this programme, we ask participants to complete questionnaires before the programme, at the end of the programme, and again at 4-6 weeks, 6 months and 12 months after the programme. We are in the process of analysing this information to produce an ‘annual report’ detailing the characteristics and outcomes of people who have been (1) referred to the programme; (2) screened but not accepted; (3) screened and accepted but did not complete; and (4) completed the programme. What we are able to tell from all this information is something about the kind of person who seems to benefit from the programme, those who don’t get accepted and why, and some information about the processes that we use.
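To make the idea concrete, the kind of summary an annual report like this draws on can be sketched in a few lines. The field names, scores and numbers below are invented for illustration; the timepoints (pre-programme, post-programme, follow-up) follow the schedule described above:

```python
# A minimal sketch of a pre/post/follow-up outcome summary.
# All records and scores here are invented illustrative data.

from statistics import mean

participants = [
    {"id": 1, "status": "completed", "pre": 38, "post": 24, "followup_6m": 26},
    {"id": 2, "status": "completed", "pre": 41, "post": 30, "followup_6m": 28},
    {"id": 3, "status": "not accepted", "pre": 22, "post": None, "followup_6m": None},
]

# Only people who completed the programme contribute outcome data
completed = [p for p in participants if p["status"] == "completed"]

# Mean improvement from pre to post, and how much is held at 6 months
pre_post_change = mean(p["pre"] - p["post"] for p in completed)
maintained = mean(p["pre"] - p["followup_6m"] for p in completed)

print(f"Completed: {len(completed)} of {len(participants)} referred")
print(f"Mean pre-to-post improvement: {pre_post_change:.1f}")
print(f"Mean improvement maintained at 6 months: {maintained:.1f}")
```

None of this is sophisticated statistics – the hard part, as the post argues, is collecting the same measures at the same timepoints, consistently, year after year.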

And the final example concerns a new approach we are reviewing for assessing selected patients who, it is thought, may not need a medical review. After 12 people had been through this process, we were able to review the data we had collected. We identified that more people than anticipated had been referred from outside the centre; that those referred this way were then being referred on for medical input anyway – so we weren’t necessarily saving anything in terms of efficiency; and that perhaps the referrals being screened didn’t contain the kind of information needed to judge whether someone needed a comprehensive medical, psychosocial and functional assessment.

Information. Information is power, it is said, and when it comes to making effective decisions about how to deliver health care, or whether an intervention is useful or otherwise, you just can’t beat it. And even on a small scale, as a solo practitioner (and yes, I have had my time working this way), it makes sense to set up a database and start collecting as you go. Asking things like – who refers to me? What are they asking for? What are the main diagnoses I work with? Are the people I’m seeing employed, or are they on a benefit? Where is their pain? How long have they had it? What do they see as their main problems and what are their goals for treatment?

Not to mention adding in some really helpful information from questionnaires before treatment, after treatment – and most importantly, at two points of follow-up. It’s only by doing this that it’s possible for you, me or anyone else who is interested to tell who you’re seeing, and whether what you offer them is useful.
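For a solo practitioner, ‘set up a database’ can be as modest as a single table. Here is a minimal sketch using Python’s built-in sqlite3 module; the table and column names are invented for illustration, but they mirror the questions above (referrer, diagnosis, pain site, duration, work status, problems and goals):

```python
# A minimal sketch of a solo-practitioner referral database using the
# standard-library sqlite3 module. Schema and data are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path instead to persist
conn.execute("""
    CREATE TABLE referrals (
        id INTEGER PRIMARY KEY,
        referred_on TEXT,
        referrer TEXT,              -- who refers to me?
        request TEXT,               -- what are they asking for?
        diagnosis TEXT,
        pain_site TEXT,
        pain_duration_months INTEGER,
        work_status TEXT,           -- employed, on a benefit, etc.
        main_problems TEXT,
        treatment_goals TEXT
    )
""")
conn.execute(
    "INSERT INTO referrals "
    "(referrer, diagnosis, pain_site, pain_duration_months, work_status) "
    "VALUES (?, ?, ?, ?, ?)",
    ("GP", "low back pain", "lumbar", 18, "on a benefit"),
)

# The pay-off: questions like 'who refers to me?' become one-line queries
for referrer, n in conn.execute(
    "SELECT referrer, COUNT(*) FROM referrals GROUP BY referrer"
):
    print(referrer, n)
```

The specific tool matters far less than starting early and entering every referral the same way – which is exactly the consistency point the comments below make.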

Down with ‘I think…’ or ‘it looks like’! Long live data and analysis. Yes, even for people who firmly believe in individualising treatments and goals – there is most certainly a place for collecting standardised measures.

2 comments

Ah, you’re singing my song. What is hardest to get through people’s heads is that most of the value of data accrues over time — if, and only if, it’s collected consistently. The minute you change your information collecting protocols, you render all the data you’ve collected up to that point nearly useless.

I’m a database administrator in my day job, and much of my job boils down to enforcing consistency. Yes, it would be lovely to capture that information too. No, we’re not going to do it, not unless 1) I can get a promise from every single person who enters data that they’re going to do it the same way every time according to the same criteria, and train their successors to do the same, and 2) it won’t compromise our current data-collection processes.

YES, yes yes YES YES!!! consistency, accuracy and longitudinal collection! I wept when we lost our previous database that had been maintained for about 6 years or more! Unbelievable incompetence from the then manager who didn’t see the value of maintaining it… And once a system is set up, it’s not really that hard to keep it going – it’s a bit like telling a kid to go and clean up, if you do it at the time, if you do it consistently, it’s not really such a big job!
Thanks for making my day!