Liberating the literature

July 2015

For many years I’ve admired PubMed’s related articles feature. If I was searching for an answer to a clinical question and found a useful article, related articles was a great way to see similar articles, and these had a good chance of being useful precisely because they were so similar. PubMed has now renamed the feature Similar Articles, and this is how it describes it:

The Similar Articles link is as straightforward as it sounds. PubMed uses a powerful word-weighted algorithm to compare words from the Title and Abstract of each citation, as well as the MeSH headings assigned. The best matches for each citation are pre-calculated and stored as a set.
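PubMed doesn’t publish its exact implementation, but a minimal sketch of a word-weighted, pre-calculated similarity might look something like the following (the citation texts are hypothetical and the real algorithm is considerably more sophisticated):

```python
# A minimal sketch of word-weighted similarity in the spirit of PubMed's
# description: title, abstract and MeSH terms are combined into one text
# per citation, weighted with TF-IDF, and the best matches pre-calculated.
# The citation texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

citations = [
    "Metformin versus insulin for gestational diabetes mellitus",
    "DPP4 inhibitors versus sulfonylureas as add-on therapy to metformin",
    "Insulin therapy and glycaemic control in type 2 diabetes",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(citations)    # word-weighted vectors
similarity = cosine_similarity(tfidf)          # pairwise similarity matrix

# Pre-calculate and store the best matches for each citation.
for i, row in enumerate(similarity):
    best = [j for j in row.argsort()[::-1] if j != i]
    print(f"citation {i}: similar articles, best first -> {best}")
```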

Trip’s related articles use a completely different approach – clickstream data. Does it matter? Does it work better, worse or just as well?

Below are three comparisons, though these are not necessarily fair. For instance, Trip’s approach relies on users clicking on the articles, so it won’t work on brand-new articles. Also, as you’ll see below, a couple of the examples have only four related articles. This is down to a paucity of data.
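To make the clickstream approach concrete, here is a minimal sketch of the underlying idea – counting how often two articles are clicked in the same search session. The session data is invented for illustration:

```python
# A minimal sketch of clickstream-based related articles: two articles are
# treated as related if they are clicked in the same search session.
# The session data is invented for illustration.
from collections import Counter
from itertools import combinations

sessions = [
    ["doc_A", "doc_B", "doc_C"],   # articles clicked in one session
    ["doc_A", "doc_B"],
    ["doc_B", "doc_C"],
]

co_clicks = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_clicks[(a, b)] += 1

# Related articles for doc_A, most co-clicked first. Note that a brand-new
# article has no clicks yet, hence no related articles -- the limitation
# mentioned above.
related = sorted(
    ((pair, n) for pair, n in co_clicks.items() if "doc_A" in pair),
    key=lambda item: -item[1],
)
print(related)
```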

In the first of the examples below I believe Trip’s approach is superior; with the other two I’m not so sure – I’d call them close! But I’d value any input from others – those less biased than me!

Effect comparison of metformin with insulin treatment for gestational diabetes: a meta-analysis based on RCTs. Archives of Gynecology and Obstetrics, 2014.

The efficacy and safety of DPP4 inhibitors compared to sulfonylureas as add-on therapy to metformin in patients with Type 2 diabetes: a systematic review and meta-analysis. Diabetes Research and Clinical Practice, 2015.

Evaluation of the potential for pharmacokinetic and pharmacodynamic interactions between dutogliptin, a novel DPP4 inhibitor, and metformin, in type 2 diabetic patients. Current Medical Research and Opinion, 2010.

Metformin vs insulin in the management of gestational diabetes: a meta-analysis. PLoS ONE, 2013.

Earlier today in the post Article analytics I said “This latest feature will be released soon.” Little did I realise it would be live by the end of the day!

In the above image I’ve highlighted four key areas:

Analytics – appears under every link (for Premium users only); clicking it generates the data below.

Related by viewer – articles that were clicked on during the same search sessions as the main article (Canadian clinical practice guidelines for the management of anxiety, posttraumatic stress and obsessive-compulsive disorders).

Viewers by country – this highlights where the users who did the clicking originate from!

Viewers by profession – as above, but broken down by profession.

NOTE: the above example is very rich as it’s clearly a very popular article. Others will have considerably less data – another reason why we’re keen to get users to log in!

This latest feature will be released soon. For a given article, Premium users will be able to see related articles (based on clickstream data) as well as information on total views, views by country and views by profession…

I believe the main justification given for conducting systematic reviews is to obtain a really accurate assessment of the effectiveness (or ‘worth’) of an intervention. So, the thinking goes, spending 12-24 months is worth the cost (financial, opportunity, etc.) because of the accuracy of the estimate it then gives.

My immediate response is that this is demonstrably false. In my article ‘Some additional thoughts on systematic reviews’ (just under 5,000 views) the evidence is clear that if you rely on published journal articles to ‘inform’ your systematic reviews (which is the case in the vast majority of systematic reviews), there is approximately a 50% chance that the effect size is out by over 10%.

But, even if we suspend being evidence-based and believe that systematic reviews can be relied upon to give us an accurate estimate of an effect size, is everything fine? I don’t think so and the image below illustrates my thinking.

It’s an hourglass! At the top are all the unsynthesised trials, all floating around, and the uncertainty is moderate. Someone then spends 12-24 months pulling these together in a systematic review (likely of published trials and therefore ‘a bit dodgy’) and the uncertainty is reduced at the aperture of the hourglass. But then, when you apply it to the real world of patient care, the uncertainty flares out again. In the above example the intervention has an NNT of 6, so it needs to be given to 6 people to obtain the desired outcome in 1 of them. Which is the 1 person? Where’s the certainty?
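For anyone unfamiliar with the arithmetic behind NNT, it is simply the reciprocal of the absolute risk reduction; the event rates below are invented purely to show how an NNT of 6 might arise:

```latex
% NNT is the reciprocal of the absolute risk reduction (ARR).
% CER = control event rate, EER = experimental event rate.
% The numbers are illustrative, not taken from any review.
\[
\mathrm{ARR} = \mathrm{CER} - \mathrm{EER}, \qquad
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}
\]
\[
\text{e.g. } \mathrm{CER} = 0.50,\ \mathrm{EER} = 0.33
\ \Rightarrow\ \mathrm{ARR} \approx 0.17
\ \Rightarrow\ \mathrm{NNT} \approx 6
\]
```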

If we were to spend significantly less time doing a review it might mean a wider hourglass aperture (perhaps an NNT of 5-7 instead). In what situations does that matter? I don’t think we’ve even started to explore these issues. In other words, when is it appropriate to spend 12-24 months on a systematic review and when is a significantly less resource-intensive approach ‘ok’?

Is it ironic that, in reality, the type of review (systematic versus ‘rapid’) doesn’t alter the effectiveness of an intervention? After all, the compound remains the same, untroubled by the efforts of trialists. Sorry, getting sociological there – must be time to sign off for now.

At the start of the year I posted Ok, I admit it, I’m stuck, which was a cry for help to the Trip community to help me make sense of all our lovely clickstream data. We had a few responses, and one was from QSPectral, an Australian research and management consultancy specialising in providing strategic insights and predictions through advanced data science and analytics. They have since been working with us on our clickstream data.

Article Association

QSPectral used their data science expertise to investigate the connections between articles, based on the user access data contained within the Trip Database.

In the above image the Y-axis represents individual search sessions and the X-axis the document ID (each article in Trip has a unique document ID). So, we can see which professions are looking at which articles. We can actually see what articles individuals are looking at, but the above image shows it on a profession basis.

Figure 2: A more focused snapshot of the previous image.

As a user, do you want to see what other articles are similar to the one you are reading? Do you want to know what others like you thought were similar?

To answer these questions, QSPectral developed an algorithm based on association rules to explore the relationships between articles on a per-session basis. The aim was to identify links between articles based on different criteria of interest.

The strength of the links was measured by statistical measures such as confidence and support. These led to association rules of the form: if article x is accessed, then articles y and z are also accessed. The rules were further enhanced by including additional user characteristics – information such as profession (nurse, doctor…) and country of origin was used to moderate the previously established article relationships.
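As a rough illustration (with invented session data), support and confidence for a rule can be computed directly from session-level co-occurrence counts; in practice the sessions would first be filtered by user characteristics such as profession or country:

```python
# A rough sketch of per-session association rules: support is the share of
# sessions containing both articles; confidence is the share of sessions
# containing x that also contain y. Session data is invented.
from collections import Counter
from itertools import combinations

sessions = [{"x", "y", "z"}, {"x", "y"}, {"x", "z"}, {"y", "z"}]
n_sessions = len(sessions)

item_counts = Counter()
pair_counts = Counter()
for s in sessions:
    item_counts.update(s)
    for a, b in combinations(sorted(s), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.items():
    support = n / n_sessions            # P(a and b)
    confidence = n / item_counts[a]     # P(b | a)
    print(f"{{{a}}} -> {{{b}}}: support={support:.2f}, confidence={confidence:.2f}")
```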

Figure 3: Snapshot of related article numbers – if the articles on the y-axis are accessed, it implies those on the x-axis would also be of interest.

The data can be further augmented with clickstream data that includes a user’s area of speciality (such as cardiology). For example, if you are a doctor from Spain, only the relationships between articles that doctors from Spain accessed could be isolated and uncovered. It was also possible to group the related articles into clusters based on this multi-dimensional relationship – defined by colour in the figure.

Figure 4: Clusters of articles based on relationships.

The purpose of this initial investigation was to set the stage for providing users with recommendations based on their initial article of interest and their particular user characteristics – a slightly different approach to PubMed’s ‘related articles’ feature.

As well as finding closely related articles, QSPectral have helped us explore recommendations of new articles. So, if we know a user’s activity on Trip we can start to understand them and then – with QSPectral’s help – recommend new articles that should be of interest.

Article Recommendation

How will TRIP recommend articles for you?

Machine learning methods based on clustering and classification are being investigated for providing reliable recommendations.

We believe that initial article clusters should be identified using an algorithm known as k-means clustering. Each user will then be classified as being interested in articles within a cluster, based on attributes such as their first choice of article and their user attributes (profession, country, etc.), using a decision tree – a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs and utility.

Figure 5: Example of a decision tree, where the top node could represent you and the other nodes represent related articles based on branch criteria.

QSPectral determined that decision trees are the most appropriate approach for meeting the requirements. Decision tree methods can accommodate more data inputs over time, allow various transformations of the inputs, are robust to the inclusion of irrelevant fields in the data, and produce transparent models for ongoing analysis.
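As a hedged sketch of the two-stage idea described above (all features, labels and parameters here are hypothetical placeholders, not QSPectral’s actual pipeline):

```python
# A sketch of the two-stage approach: k-means to form article clusters,
# then a decision tree to classify a user into a cluster of interest.
# All features, labels and parameters are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stage 1: cluster articles (rows = articles, columns = usage features).
article_features = rng.random((200, 8))
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
article_clusters = kmeans.fit_predict(article_features)

# Stage 2: classify users into a cluster from attributes such as encoded
# profession, country and first article choice (invented here).
user_attributes = rng.random((500, 4))
cluster_of_interest = rng.integers(0, 5, size=500)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(user_attributes, cluster_of_interest)

# Recommend articles from the cluster predicted for a new user.
new_user = rng.random((1, 4))
predicted = tree.predict(new_user)[0]
print("recommend:", np.flatnonzero(article_clusters == predicted)[:5])
```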

Further, we will use methods that take a number of simple decision trees and combine them to yield a final overall picture. We propose techniques for averaging multiple decision trees, each trained on different parts of the collected data, with the goal of reducing variance. Each iteration creates a simple decision tree on a randomly selected subset of input variables and input data. The final recommendations will be formed by classifying a user through the aggregation of all such trees.
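This combination of many trees, each trained on random subsets of rows and variables, with aggregated votes, is essentially a random forest; a minimal sketch with invented data:

```python
# A minimal sketch of the ensemble idea: many trees, each built on a random
# subset of the data and a random subset of the input variables, with the
# final classification formed by aggregating their votes (in effect, a
# random forest). Data is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
user_attributes = rng.random((500, 6))
cluster_labels = rng.integers(0, 5, size=500)    # hypothetical labels

forest = RandomForestClassifier(
    n_estimators=100,      # number of trees to aggregate
    max_features="sqrt",   # random subset of input variables per split
    bootstrap=True,        # random subset of input data per tree
    random_state=0,
)
forest.fit(user_attributes, cluster_labels)

new_user = rng.random((1, 6))
print("predicted cluster:", forest.predict(new_user)[0])
```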

One change we introduced recently is the increased user ‘pressure’ to log in. A few people have contacted me to raise this as an issue and it made me realise we’ve added a barrier to use of Trip but we’ve not communicated why. So, here goes…

Ultimately it’s part of a longer-term strategy to improve Trip and this requires us to better understand our users (which requires the user to be logged in).

Some background: my partner’s dad is an eminent (now retired) Professor of Anaesthetics. I showed him Trip and he said he’d use it for a bit. He came back unimpressed! His interest was in awareness, and a search for awareness on Trip (click here) returns no articles on awareness under anaesthesia, which was his interest – see for yourself.

While this is an extreme example, it does highlight the question: without knowing the user, how can we optimise the search results? Our system should have realised that the user was an anaesthetist and adjusted the results accordingly. We’re doing lots of work in this area and are making real strides. I blogged about it in March in the article The important breakthrough, which contained the following image:

As you can see from the results (in this experimental test system), we have detected the example user as a dentist and adjusted the results accordingly. For an information retrieval ‘nerd’ (like myself) this is amazing. I can think of no other innovation Trip has introduced that comes close to improving the search results as much as this.
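To illustrate the kind of adjustment involved (this is a toy sketch, not Trip’s actual ranking code; the field names, scores and boost factor are all invented), a detected profession could simply boost the scores of matching documents:

```python
# A toy sketch of profession-aware re-ranking: documents tagged with the
# detected profession get their base relevance score boosted. Field names,
# scores and the boost factor are invented -- not Trip's actual code.
def rerank(results, profession, boost=2.0):
    def adjusted(doc):
        score = doc["score"]
        if profession in doc.get("specialties", []):
            score *= boost
        return score
    return sorted(results, key=adjusted, reverse=True)

results = [
    {"title": "Situational awareness in nursing", "score": 0.6,
     "specialties": ["nursing"]},
    {"title": "Awareness under anaesthesia", "score": 0.4,
     "specialties": ["anaesthetics"]},
]

# For a user detected as an anaesthetist, the anaesthesia article rises
# to the top despite its lower base score.
print(rerank(results, "anaesthetics")[0]["title"])
```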

And there are loads more things we can do if we know the user. For instance, improved email alerts – better linking users with evidence that is likely to be interesting and useful, as opposed to our current crude efforts!

But for it to work we need to know the user, which requires logging in.