Liberating the literature

October 2014

Barb, one of our volunteers on our Twitter accounts, commented a while ago about seeing some strange results on Trip, so I asked her to send me any new ones she found to help me understand what was going on. She had been looking for new articles in Trip returned by the search ‘immunisations’. Many were fine, but a few weren’t, for instance:

Economic Evaluation of Complex Health System Interventions: A Discussion Paper

British Guideline on the management of asthma

Developing and Evaluating Methods for Record Linkage and Reducing Bias in Patient Registries

Now, these are not specifically about immunisations, but they all make reference to it. For instance, the top result contains the following:

“Vaccinations 1.4.2 Be aware that live vaccinations may be contraindicated in people with MS who are being treated with disease-modifying therapies.”

We return all the results that match the search terms (and/or synonyms). However, our algorithm is designed to emphasise the results that are most relevant. So, ordinarily, if you do a search with lots of results, the relatively irrelevant ones don’t appear (well, they do, but not until far down the list). However, if you search for something with few results (perhaps an unusual condition, or because you heavily restrict the results) you are more likely to see ‘strange’ results.

So, what can we do? I see three options:

Leave it ‘as is’ and hope people don’t get put off by the occasional result they find strange.

We allow users to set a relevancy cut-off themselves. Each search result gets a score from 0 to 1 (with 1 being very relevant), and every result that matches the search terms scores at least 0.0001 and can therefore appear in the results. We could give users a ‘slider’ to let them choose their own cut-off, so some might choose 0.1 while others might choose 0.3.

We effectively borrow a concept from PubMed’s Clinical Queries, which offers a narrow and a broad search. The narrow search returns fewer, more relevant results, but you may miss a few (it’s a specific search), while the broad search returns more results, including more irrelevant ones (it’s a sensitive search). So, in effect, Trip currently does a highly sensitive search. You can see the difference in PubMed between a broad and a narrow search for prostate cancer screening:

My ‘gut’ instinct is the third option. We, at Trip, would experiment to arrive at a reasonable relevancy cut-off, applied by default on all searches. On the results page we would highlight that the search is narrow and that, to make it broad, you simply press a button.
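To make the idea concrete, here is a minimal sketch in Python of how a default cut-off with a ‘broad’ override might behave. The scores, titles, and the 0.2 default cut-off are all made up for illustration (the cut-off is exactly what we’d have to tune by experiment); only the 0.0001 floor comes from the description above, and this is not Trip’s actual algorithm.

```python
# Hypothetical illustration: every matching result carries a relevancy
# score in (0, 1]; a "narrow" search applies a default cut-off, while
# "broad" shows everything that matched (score >= 0.0001).

DEFAULT_CUTOFF = 0.2  # assumed value - the real one would be tuned by experiment


def filter_results(results, broad=False, cutoff=DEFAULT_CUTOFF):
    """Return results sorted by score, dropping low scorers unless broad."""
    threshold = 0.0001 if broad else cutoff
    kept = [r for r in results if r["score"] >= threshold]
    return sorted(kept, key=lambda r: r["score"], reverse=True)


results = [
    {"title": "MMR vaccine safety review", "score": 0.91},
    {"title": "Flu immunisation in pregnancy", "score": 0.64},
    {"title": "British Guideline on the management of asthma", "score": 0.003},
]

narrow = filter_results(results)              # drops the asthma guideline
broad = filter_results(results, broad=True)   # keeps all three matches
```

A slider (option two) is the same mechanism with the user, rather than Trip, supplying the `cutoff` value.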

With the move to the next upgrade, and a freemium Trip, the notion of the Answer Engine appears again. I’ve talked about the Answer Engine for at least four years, but previously I’ve never had the conviction that it would work. The idea is great: infer a question from the search terms and show ‘the’ answer.

It’s because I like the idea so much that I keep coming back to it. I’ve done a mock-up of how it might look.

I’m waiting to hear from one publisher about using their content. If they agree that I can re-use their content it’ll be thousands of Q&As ready to go and I’ll be ready to commit to getting it off the ground.

I’m also talking to other publishers about their willingness to participate. We get Q&As and they get their content in a prime position on Trip, a win:win in my book! Other than that, I’ll be undertaking another user survey and will ask whether people want to volunteer to add a few Q&As. If everything falls into place we’ll have a reasonable chance of making it work!

Trip users are amazing – within 48 hours of releasing the survey we had 1,000 responses, at which stage SurveyMonkey closed the survey saying we’d reached the limit! Apologies if you feel your voice hasn’t been heard; if that is the case email me (jon.brassey@tripdatabase.com), I’d love to hear from you. Given your generosity of time, I thought I’d share the highlights of the initial results…

The top 5 professions represented in the survey

Doctor – secondary or tertiary care

Doctor – primary care

Librarian

Other

Researcher/scientist

75% of respondents have been using Trip for more than a year, with 35% using it for longer than 3 years.

I asked about the most important features relating to our content; these are the top 6 responses (those highlighted by more than 30% of respondents):

Largest single searchable collection of ‘evidence-based’ content

Largest global collection of clinical guidelines

Many more systematic reviews than Cochrane

Content is from around the globe, for example USA, UK, Canada, Australia, New Zealand, France, Germany, Japan, Singapore, South Africa

Selected collection of PubMed’s leading clinical journals

Database of over 500,000 clinical trials

I also asked if there were any surprises – and there were lots of responses. The main one was a lack of awareness of our image and video collections. We clearly need to work harder on getting that message out.

I asked about the most important features of Trip; the following are all those that polled over 30%:

We asked about a Trip Evidence Service and most thought it was a good idea. However, only 11% thought they would be able to find the money within their organisation. But I’m encouraged as 11% is still high, given our large user base.

Most people appeared to be broadly supportive/understanding of our need to move to a freemium business model.

I listed a number of possible new premium features; below are those that polled above 20% (only the top three polled above 30%):

Add in additional full-text articles

Creation of an ‘Answer Engine’ giving you instant answers to your clinical questions

PICO+: based on the popular PICO search, but more user-friendly and powerful

A ‘Help’ feature so if you can’t find what you need you can ask the wider Trip community

Providing education points based on your time using Trip

Improved emails highlighting evidence that is more likely to be useful to you

Introduce a ‘People who looked at this article also looked at these articles’ feature to highlight related articles

Improved export of records

Because we use colour, we asked about colour blindness, and 3.2% said they were colour blind. I’ve no idea how that compares with the wider population. Nearly 30% of users reported “I am not colour blind and I was not aware that you used colour to help highlight the quality of the results”. So, another communication challenge for us.

Finally, in looking through the ‘Any other comments’ section I was completely overwhelmed by the messages of love and support. Knowing that makes my work so much easier.

“Researchers in dominant paradigms tend to be very keen on procedure. They set up committees to define and police the rules of their paradigm, awarding grants and accolades to those who follow those rules. This entirely circular exercise works very well just after the establishment of a new paradigm, since building systematically on what has gone before is an efficient and effective route to scientific progress. But once new discoveries have stretched the paradigm to its limits, these same rules and procedures become counterproductive and constraining. That’s what I mean by conceptual cul-de-sacs.”

I quoted Trish as I felt that Cochrane had come to dominate and lead the systematic review paradigm. But one thing I didn’t write up at the time, linked with Trish’s quote, was my feeling that the methodological rigour and standards set by Cochrane were actually an economic barrier to entry for competitors. The Wikipedia article on barriers to entry reports:

“In theories of competition in economics, barriers to entry, also known as barrier to entry, are obstacles that make it difficult to enter a given market. The term can refer to hindrances a firm faces in trying to enter a market or industry—such as government regulation and patents, or a large, established firm taking advantage of economies of scale—or those an individual faces in trying to gain entrance to a profession—such as education or licensing requirements.

Because barriers to entry protect incumbent firms and restrict competition in a market, they can contribute to distortionary prices. The existence of monopolies or market power is often aided by barriers to entry.”

Cochrane, due to their dominance, effectively set the standards of what’s deemed acceptable (irrespective of the significant evidence to the contrary – see the previous two blog posts for further information). This effectively stifles competition. If systematic reviews could be done quickly and easily by anyone the business model of Cochrane would be severely compromised – I can see no other losers (except perhaps pharma).

Is it just a coincidence that most changes to systematic review methods over the years appear to have more to do with increasing the methodological burden (squeezing ever smaller amounts of bias out of the results) than with reducing costs?

What prompted the above post was the announcement of the winner of the Nobel Prize for Economics. Jean Tirole has won for his work on market power and regulation. The BBC reports:

“Many industries are dominated by a small number of large firms or a single monopoly,” the jury said of Mr Tirole’s work. “Left unregulated, such markets often produce socially undesirable results – prices higher than those motivated by costs, or unproductive firms that survive by blocking the entry of new and more productive ones.”

Now, that’s got to be a good link – EBM, Cochrane and the Nobel Prize for Economics!

But the point of this post is not to moan at Cochrane, but to suggest that the systematic review ‘market’ is problematic and that there appears to be little appetite to radically change things. If we want to improve care we need more systematic reviews, which means we need to innovate. And by innovate I don’t mean small iterative improvements; more substantial changes are needed.

Perhaps we could start from first principles and ask why we do systematic reviews in the first place. I used to think it was to get an accurate assessment of effect size. However, if you look at the evidence it’s fairly clear that systematic reviews – based on published trials – are pretty poor in this regard. But if it’s not that, then why do we do them? Once we can clearly articulate why, we can perhaps better understand how to produce them more efficiently.