Friday, 31 July 2015

So, I recently finished a 100 day challenge, where I gave up chocolate, cake, biscuits, sweets, etc., tried to eat more healthily, and exercised as often as I could. This was to see if I could keep off the sugar for 100 days, and also in the hope that I'd lose some weight.

At the end of my 100 days, I stood on the bathroom scales, and I'd lost a grand total of... wait for it... 0 lb. Bum.

And my brain being what it is, I instantly thought "well, that was a waste of time, wasn't it? Why did I even bother?"

Then my inner physicist kicked in with: "I like not this metric! Bring me another!" (So I found more metrics about how many km I'd run in the hundred days, and how many personal bests had been achieved, and I felt better.)

But that all got me thinking about metrics, and about how easy it is to doom good work simply because it doesn't meet expectations with regard to one number. Currently, research stands or falls by its citation count - and we're trying to apply this single metric to even more things.

And that got me thinking. What we want to know is: "how useful is our research?" But an awful lot of metrics come at it from another angle: "what can we measure and what does that mean?"

So, citations. We are counting the number of times a paper (which is a proxy for a large amount of research work) is mentioned in other papers. That is all. We are assuming that those mentions actually mean something (and to be fair, they often do), but what that meaning is isn't necessarily clear. Is the paper being cited because it's good, or because it's rubbish? Does the citer agree with the paper, or do they refute it? This is the sort of information we don't get when we count how many times a paper has been cited, though there are movements towards quantifying a bit better what a citation actually means. See CiTO, the Citation Typing Ontology, for example.

Similarly for Twitter, we can count the number of tweets that something gets, but figuring out what that number actually means is the hard part. I've been told that tweets don't correlate with citations, but that raises the question: is that what we want to use tweet counts for? I'm not sure we do.

We can count citations, tweets, mentions in social media, bookmarks in reference managers, downloads, etc., etc., etc. But are they actually helping us figure out the fundamental question: "how useful is our research?" I don't think they are.

If we take it back to that question, "how useful is my research?", then that makes us rethink things. The question then becomes: "how useful is my research to my scientific community?", or "to industry?", or "to education?". And once we start asking those questions, we can then think of metrics to answer them.

It might be the case that for the research community, citation counts are a good indicator of how useful a piece of research is. It's definitely not going to work like that for education or industry! But if those sectors of society are important consumers of research, then we need to figure out how to quantify that usefulness.

This being just a blog post, I don't have any answers. But maybe, looking at metrics from the point of view of "what we want to measure" rather than simply "what can we measure and what does it mean?" could get us thinking in a different way.

(Now, if you'll excuse me, I have an important meeting with a piece of chocolate!)