Another book recommendation as I belatedly read books I should have read when they came out.

Bad Pharma is a book health librarians must read; indeed, anyone with an interest in how decisions are made about medicines would do well to read it (Wikipedia has a summary for the tl;dr crowd).

During the period I was reading it I ran more than my usual number of critical appraisal sessions, and the book was a great source of new examples to call on as I discussed bias, ethics and the niceties of what gets published. It does cause some issues, though: the picture of the extensive failings of our publishing system may leave people feeling somewhat deflated (it did me).

The chapters “Missing Data” and “Bad Trials” offer the richest source of improved understanding and helpful stories. The final chapter on marketing is depressing in the extent of drug companies' work in this area: more money is spent on marketing than on research, and medical education is reliant on drug marketing money.

Very few of my trainees had read the book (though that might be why they were at my training), which surprised me given the profile of the author and the subsequent media coverage of All Trials.

I was recently lucky enough to spend a weekend locked in a hotel learning about critical appraisal at a two day workshop run by the Critical Appraisal Company. The plan was to build my knowledge while picking up tips from expert tutors.

Like all good NHS activity it started early both days and had fairly average coffee. The venue was smart enough and we were well fed. With the shorter events I run, timing is definitely important in terms of fitting into available slots. I wonder if anyone has systematically assessed which times of day are best for scheduling sessions aimed at NHS staff? While I do not offer food and drink at my courses, we do need to think about these sorts of hygiene factors – how do we minimise barriers to taking the opportunity to learn?

Ahead of the course (from when we paid and for six months afterwards) we were issued with a login for the elearning materials – you can see the contents list (they also sell access to these without the face to face training session). We were strongly encouraged to complete these ahead of time, and in my view they added hugely to the value of the session. The elearning takes the form of narrated slides with accompanying handouts. The tutor on the course mentioned that these will be updated soon, but I found them steady and clear. You can jump from section to section and replay tricky bits. Something similar would be a great addition to the brief courses I run, both for learning beforehand and afterwards (or without any face to face input at all). Even in a two day session some sections went by very quickly, and knowing I could review things later was a great reassurance.

On arrival we were issued with some very slick handouts. There was a workbook with examples, exercises and reminders of the major points. Alongside this was a physically tricky to handle A3 book. This consisted of a series of full papers from journals, with appropriate IP permissions. Each paper was printed down the middle of the page with boxes either side for practical exercises aimed at pulling out aspects of the paper, checking calculations and building skills. Finally there was a copy of the new edition of the book by the course tutor (and partner), The Doctor's Guide to Critical Appraisal – a buy recommendation from me. Throughout the materials there was cross-referencing to the relevant sections of the book and of the elearning. All in all this was a very slick and integrated set of materials.

The content of the course was very similar to the elearning. The big difference was the additional degree of elaboration and the use of anecdotes to make it less dry. This was very much in line with the way I try to present similar material. Extensive use was made of clickers to add interactivity and test understanding. I think this was perhaps a little overdone, as we ended up running well behind schedule, which cut into the time spent on items later in the agenda. It was interesting to see the extent to which people were still not grasping key concepts. The clickers provided a non-threatening way to explain where people were going wrong and bring out helpful illustrations of various learning points. I know colleagues have had some success with online polls, and it merits further thought.

With the full weekend to work with, the session did include much of the methodological background that I have largely dropped from my own sessions. I could see it was helpful for people, but I think elearning and other options will be a better way to cover this in a tighter timescale. The explanation of randomisation techniques was helpful, as this is an area I know less well, and it may be something that warrants more attention than most librarians' slides I have seen tend to give it.

We spent a lot of time looking at two-by-two contingency tables, and this is something I will be adding to my sessions. At present I cover the various CER, EER, ARR, RRR and NNT calculations using an example and point to information on the table method in a handout, but I think this is an oversight. The table method gives people so much power to check results for themselves that I think it warrants some dedicated time.
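For anyone wanting to check their own working, the calculations fall straight out of the two-by-two table. A minimal sketch, using entirely made-up trial numbers chosen for round results (not from any paper discussed here):

```python
# Hypothetical 2x2 contingency table from a trial (invented numbers):
#                 event    no event   total
# control           20        80       100
# treatment         10        90       100

control_events, control_total = 20, 100
treated_events, treated_total = 10, 100

cer = control_events / control_total  # Control Event Rate
eer = treated_events / treated_total  # Experimental Event Rate
arr = cer - eer                       # Absolute Risk Reduction
rrr = arr / cer                       # Relative Risk Reduction
nnt = 1 / arr                         # Number Needed to Treat

print(f"CER={cer:.2f} EER={eer:.2f} ARR={arr:.2f} RRR={rrr:.2f} NNT={nnt:.0f}")
```

With these figures the control event rate is 0.20, the experimental rate 0.10, giving an absolute risk reduction of 0.10, a relative risk reduction of 50%, and an NNT of 10 – exactly the kind of back-of-envelope check the table method makes possible.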

Generally I came away feeling happy about the quality of the sessions I run. I focus hard on the practical application of appraisal – why something matters, with a bit less detail about what it is. The course is excellent and I would recommend it for people looking to build their skills. Librarians who have revised the subject should have no concerns about running introductory sessions. My impression is that librarians attend a lot more train the trainer sessions on critical appraisal than they deliver. People should take the plunge!

This paper covers a project where a student was paid to become a student liaison working directly for the Library. They worked 15 to 19 hours a week during term time, reporting directly to a fairly senior member of Library staff. They were set three main goals: enhance communication with the student body, articulate student perspectives and determine priorities to meet student needs, and increase student participation in library programmes.

In common with the paper about the Library street teams (discussed last time), the paper tells us about what they did but falls down on the evaluation. There are few attempts to address how the programme would be evaluated, and where figures are provided they are frequently partial. For example, we have no context for the claimed improvement in the affect score on LibQUAL+. Changes to enquiry levels are discussed, but without absolute figures.

In critical appraisal terms it falls at the first hurdle, lacking a focussed research question. Like much LIS research, we get a case study approach. The applicability of the proposed model is quite limited locally, given the very different institution involved and the large sums of money required (at least $5K in pay for the student at 2006 prices). The commitment of staff time to managing the role was also substantial.

On the positive side, we can see that many of the initiatives proposed or introduced correspond to work we have in place or under consideration or development. It also prompted lots of discussion of various paths for student engagement and ways to gain the student perspective.

So not a paper to change our practice but plenty to stimulate debate (and a nice blast from the past with them proudly reporting making 192 friends on MySpace).

I spend lots of time training on basic critical appraisal. I will have run 20 sessions over the past year (I have one more to do this morning before things grind to a halt for Christmas). This is great, as they are challenging sessions where I regularly learn new things. The quick turnover means I can also tweak as I go.

A couple of weeks ago I ran three sessions in as many days, and these reinforced a few things. It also seems timely to think about this, as I have had confirmation that I will be attending this weekend-long course in March.

I have had to run some of these sessions in constrained time slots (about an hour). In practice this means there is no time for the practical group appraisal exercise, which is a shame. However, it also made me really focus on my slides. A cut-down version does away with nearly all the material about what the different research methods are and how they are used. I found this seems to make for a more useful session – we can really focus on what people need to be doing as they read. For many (most?) attendees the methodology material is a low-level rehash and their interest is low. There also isn't enough time to do the topic much justice, even in a full two hour session. So a refreshed slide deck is in prospect for next year.

Another lesson was that I should vary my papers. I think there is a tendency in this type of training to stick to familiar favourites. It makes for less preparation, and you have the benefit of having heard others' observations. However, I think one of the more successful aspects of the course is having experience of a wide range of papers to illustrate the points. Carrying out more appraisal helps build skills and should make for a more engaging presentation. Plus it means fewer repeated anecdotes.

The one crazy tip? Running a session for a group of palliative care medics, one of them observed:

The last sentence of the introduction is nearly always the research question