Probiotics (10 April 2017)

The human gut microbiome is truly mind-boggling. We are teeming with microorganisms and their presence has been implicated not just in C. difficile colitis but in obesity, cardiovascular disease and a whole host of other diseases. The suggestion has even been made that our genetic code should be described as not only the DNA found within our cells, but as an amalgamation of that and the genes found in the microorganisms that surround and cover us – that the microorganisms are an intrinsic part of ‘us’.

​Because I’m middle class and my wife had been shopping I had a Yakult this morning (other brands are available!). I also prescribe probiotics* occasionally for patients in the unit, but that behaviour is sporadic and based on little more than ‘seems like a reasonable idea and I’ve just thought of it’.

We know that our gut microbiome is altered by critical illness, and that gut translocation is a key component of the pathophysiology of multi-organ failure. Giving a probiotic aims to ‘restore the deranged microbiome to health’. Is that plausible? And if so do we really understand the implications of our meddling?

The most recent meta-analysis is this one published (open access) in Critical Care. They used a random effects model (as described in this post).

In summary:

30 RCTs were included. Five were considered to be high quality (by the authors’ own criteria).

For the 14 trials with infection as an outcome measure, there was a significant reduction in (all) infections (RR 0.80, 95% CI 0.68–0.95, P=0.009, I²=36%).

Perhaps unsurprisingly, there was also a reduction in antibiotic duration (of a day).

There was no difference in mortality (the authors had to publish an erratum, but whilst the number changed the meaning did not. The error was pointed out in this letter, to which the response is here).

There was no difference in ICU or hospital length of stay (it always makes me suspicious when a reduction is shown in something that affects ICU length of stay but the ICU LOS remains unchanged – all other things being equal that shouldn’t be so).

There was no effect on diarrhoea.

Subgroup analysis was also undertaken. Interestingly the dose made no difference, which suggests that if probiotics are beneficial only a low dose is required – but it might also suggest that probiotics are ineffective!

In some cases there was a significant benefit in one group but not the other, yet when they were compared there was no significant difference between the groups. This was the case for the presence of L. plantarum, and for probiotics not containing L. rhamnosus or synbiotics, as well as for the use of probiotics in groups with a higher mortality.

The only positive result for the subgroup analysis was that lower quality studies were more likely to give a positive result (P=0.03). Publication bias was also identified for the overall finding of reduced infection with probiotics which is perhaps unsurprising but worth noting.

Whilst this meta-analysis only considered potential benefit, it’s clearly important to look at potential harm. The ‘bad press’, if there is any, comes from the PROPATRIA trial, which looked at probiotics in pancreatitis; the abstract is certainly enough to put you off. A subsequent meta-analysis has calmed the waters a bit as far as use in pancreatitis is concerned. The British Society of Gastroenterology guidelines make no reference to probiotics at all (they are however 12 years old).

We only have two interventions to influence the gut flora (three if you include faecal transplant) – probiotics and selective gut decontamination. Both are blunt instruments for a complex problem and both seem resigned to a series of small studies, multiple meta-analyses and patchy implementation. This is an area with huge potential, but maybe with our current understanding it’s just too difficult. The future might be exciting though!

What do you think – probiotics for all??

*By way of definitions:

A probiotic is a living microorganism which is non-pathogenic and administered to prevent disease.

A prebiotic is a substance given to aid the growth or activity of microorganisms that are beneficial to the host. In the gut this is commonly fibre to act as a nutrient.

A synbiotic is a combination of the two.

Airway assistance in the ED (10 March 2017)

You might have already seen a link on the homepage of this site to free e-learning for airway assistants in the ED. One of the challenges with FOAM resources is that it's not always clear what the background to them was or what process was undertaken in their creation. We've therefore published our method; the link to the article is here. Unfortunately it's not open access but you may be able to get it depending on your subscriptions.

Please have a read of the article; I hope you find it interesting, and there may be aspects of the method you want to apply in another setting. I would also ask that if you don't have a system in place for training your ED airway assistants you consider whether this resource might be useful to you.

If anyone has any questions about the process, please post them below and I'll do my best to answer them. If anyone has any criticisms, either post those too or feel free to keep them to yourself!

Your patient's bleeding – what's their Hb? (10 February 2017)

When a patient bleeds they lose whole blood, but we also see a reduction in Hb concentration. The reason for this is apparently ‘transcapillary filling’ which always sounded to me like it could be bullshit (see earlier post).

I’ve had a look for the research to support transcapillary filling, and unsurprisingly much of it was conducted in animals. The first human study is from almost 100 years ago (1919), with further work conducted in the 1940s when new techniques for measuring plasma volume became available. The studies were all essentially similar; fasted ‘volunteers’ were bled and then observed while they had their cardiovascular collapse and then slowly recovered. Results varied as to rate and completeness, but a restoration of plasma volume was consistently found. I would recommend reading this article as an example, if only for the descriptions of the poor volunteers. This work also concluded that the fluid shift is driven by a movement of (probably pre-existing) protein into the vascular compartment with resultant osmotic flow (Starling’s forces are central to transcapillary filling, so how we square this with the model of the endothelial glycocalyx presented in this article I’m not sure).

Another interesting finding was that transcapillary filling appeared to be delayed by exogenous noradrenaline (by increasing the venous pressure therefore increasing hydrostatic pressure). The use of pressors in the case of traumatic bleeding is contentious, and in the days of aggressive resuscitation I’m not sure this effect is still relevant – it's still interesting though.

The clinical consequence of transcapillary filling is its effect on the usefulness of [Hb] to detect bleeding and to guide transfusion. If bleeding is recognised and i.v. fluid is given, a drop could be expected through haemodilution if nothing else. But if not immediately recognised (as can be the case in intensive care), a drop in [Hb] can only be expected if transcapillary filling has occurred – which the above studies tell us requires an uncertain and sometimes lengthy time period (I also wonder whether the low protein levels we see in critical care reduce transcapillary filling further). I can’t find any data for the critically ill, although I suppose by definition, studying unrecognised blood loss can be problematic! There is however data in the setting of trauma.

This study looked at the Hb concentration for all patients presenting to a level 1 trauma centre over a two-month period, and found that a [Hb] of <10 g/dl in the first 30 mins was associated with a 95% specificity for intervention, but was only 15% sensitive. This means that if the [Hb] is low the patient is very likely to be bleeding, BUT relying on a positive test (a low [Hb]) would lead to missing many bleeding patients. There are many weaknesses of this study and I don’t agree with the conclusions, but the fact that there were so many patients without a reduced [Hb] only reinforces that transcapillary filling occurs at a varying rate if at all.
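To make those numbers concrete, here's a minimal sketch – the cohort size and bleeding rate are invented purely for illustration, not taken from the study:

```python
# Hypothetical cohort chosen only to illustrate 95% specificity with
# 15% sensitivity -- these are NOT the study's actual counts.
bleeders, non_bleeders = 200, 800
sensitivity, specificity = 0.15, 0.95

true_pos = bleeders * sensitivity        # low [Hb] and bleeding: 30
false_neg = bleeders - true_pos          # normal [Hb] but bleeding: 170
true_neg = non_bleeders * specificity    # normal [Hb], not bleeding: 760
false_pos = non_bleeders - true_neg      # low [Hb], not bleeding: 40

ppv = true_pos / (true_pos + false_pos)  # ~0.43: a low [Hb] is believable
print(f"Bleeders with a 'normal' [Hb]: {false_neg:.0f} of {bleeders}")
```

Whatever prevalence you choose, 85% of the bleeders present with a ‘normal’ [Hb].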

The same authors conducted a second study looking at whether a low Hct predicted blood transfusion (to quote the authors verbatim “At first glance, it might appear that the practical impact of this study is minimal”). Even for this outcome, the sensitivity of a Hct <30 was only 29% - only 29% of patients requiring blood had a Hct <30 (a [Hb] of approx 10 g/dL). Giving a patient blood is a human behaviour rather than a physiological endpoint, but I can see how it is used as a surrogate for bleeding. Unfortunately, despite claims made throughout the article, the authors can only conclude that “… use of the initial Hct might not help clinicians identify all patients who will require a transfusion…”.

This study took a different approach. They looked at a cohort of approx. 200 trauma patients who required surgery within 4hrs. of presentation. 43% of patients with a Hct within the normal reference range (>40 for males) had an estimated blood loss of >1000ml (which I think might be an underestimate unless their theatres are ultra-efficient – something made the surgeons take them to theatre quickly and usually that’s only visceral injury or bleeding). Once again, the data in this paper doesn’t lead me to the same conclusion as the authors that Hct is useful to diagnose significant blood loss.

If we can make sense of this by saying that transcapillary filling might not have had enough time to occur when [Hb]/Hct is measured, one strategy might be to make serial measurements. Unfortunately, this doesn’t seem to help as this refreshingly honest paper shows (when reading it you can almost see the author banging their head off the desk!). The investigators found that repeating normal [Hb] measurements in 2 hours resulted in another normal result in 70% of patients. In those whose repeat [Hb] was low (14%) it was in keeping with a known diagnosis, and there were no clinical consequences in the 16% that did not have a repeat test.

This data shows that we cannot rely on a normal [Hb] as a rule out test (but we frequently do), and in the critically ill patient the sensitivity may be even lower than in trauma (this is only my suspicion, don’t take it as fact). If the [Hb] doesn’t drop but we suspect bleeding, what should we do when our mental models are based on the combination of a bleeding patient and a low [Hb]?

By definition the intravascular volume will be reduced – if we give volume (crystalloid) that will temporise matters but at the cost of subsequent tissue oedema (incidentally the ‘weakness’ of this strategy was shown in this paper in 1932!), so is giving blood the answer? We know that a high [Hb] target is harmful in critical illness in general, in sepsis, and in upper GI bleeding, but we also know that aggressive resuscitation of the bleeding patient is required. If we give blood, it is conceivable that the [Hb] would rise further. If protein is key, what about albumin? Or hypertonic saline to increase osmolality? What about giving crystalloid to haemodilute, thereby justifying subsequent blood? - I think that’s what often ends up happening even if not by intention.

All of this perhaps raises more questions than it answers, particularly for the critically ill patient on the unit with occult bleeding. Do we really see a fall in [Hb], and is transcapillary filling the cause? The more I read the less sure I am...

Finally, if you haven’t lost the will yet here’s a review of the pathophysiology of haemorrhagic shock and here’s an article about the methods used to measure plasma volume.

Auto-PEEP (9 January 2017)

The use of positive end expiratory pressure (PEEP) when ventilating patients in critical care is pretty universal, but it’s also true that we don’t entirely know how much to use and when. The rationale for PEEP is to prevent alveolar collapse (and possibly to take some role in recruiting areas already collapsed) and improve oxygenation. The costs of PEEP however are a rise in intrathoracic pressure (with the associated haemodynamic effects) and the risk of overdistension of lung units (leading to cytokine release etc.).

If we’re looking for an evidence base to tell us how to use PEEP we’re stuck. As with all lung injury studies, the populations include patients with different pathology, background lung state, stages of disease etc. And even within that we know that the diseased lung is far from homogenous – a strategy that works for part of the lung may well be detrimental for another with the overall outcome being a balance between the benefits and harms caused.

Auto-PEEP (reviews here and here) occurs when the lung has not finished exhalation at the start of the next inspiration. Because of elastic recoil, this generates a pressure. The volume that remains (above the FRC) doesn’t go anywhere, and with subsequent breaths this results in an increasing lung volume i.e. dynamic pulmonary hyperinflation (DPH or breath-stacking). DPH doesn’t result from extrinsic PEEP (set by us) because there is no flow at end expiration – the lung has returned to FRC. Auto-PEEP has been said to occur in all ventilated patients with COPD, and about 1/3 of those without. The adverse effects of auto-PEEP are those of excessive PEEP as outlined above, but also those caused by DPH.

Auto-PEEP should be suspected whenever the flow is seen not to return to baseline (i.e. zero), but also in any patient observed to be ‘fighting the ventilator’ or to have higher than expected airway pressures. Thankfully, ventilators can measure auto-PEEP (Drager calls it intrinsic PEEP or PEEPi); they do this by occluding the expiratory limb at end expiration and measuring the rise in pressure until it plateaus.

The causes of auto-PEEP are essentially anything that limits exhalation. These include: mucus plugs causing a ball valve effect, a fixed resistance such as a partially blocked tube, a change in compliance (such as in ARDS), or a set expiratory time that is unrealistically short. The most important pulmonary cause however is airway collapse during exhalation (expiratory flow limitation, EFL), the reason being that the application of extrinsic PEEP is beneficial if that is the cause (by ‘holding the airways open’). In all other cases, the addition of extrinsic PEEP could be expected only to add to the problem by reducing the pressure gradient for expiration and increasing the total pressure.

The level of PEEP that should be applied in the case of EFL is less than the level of auto-PEEP. Because the latter is not a fixed value, a target of 80% of the measured auto-PEEP is often mentioned. If the level of auto-PEEP is exceeded, the theory goes that total PEEP will increase, thereby reducing any beneficial effect (described as the waterfall effect – the article is here).
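As a toy illustration of that 80% rule (the measured value below is invented):

```python
# The '80% rule' described above, as arithmetic -- measured value invented.
measured_auto_peep = 10                              # cmH2O, from an end-expiratory hold
suggested_extrinsic_peep = 0.8 * measured_auto_peep
print(f"Set extrinsic PEEP ~{suggested_extrinsic_peep:.0f} cmH2O")  # ~8 cmH2O
```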

Other treatment options are to:

Reduce inspiratory time or respiratory rate – both will increase expiratory time (see the sketch after this list).

Use bronchodilators if there’s a responsive element (although not empirically)

Check your circuit for obstruction (secretions etc.)
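By way of the sketch promised above – a quick bit of arithmetic with invented ventilator settings, showing how rate and inspiratory time trade off against expiratory time:

```python
# Expiratory time = total breath time minus inspiratory time.
# Settings are invented purely for illustration.
def expiratory_time(resp_rate, insp_time):
    """Seconds available for exhalation in each breath."""
    return 60 / resp_rate - insp_time

print(expiratory_time(resp_rate=20, insp_time=1.0))  # 2.0 s
print(expiratory_time(resp_rate=14, insp_time=1.0))  # ~3.3 s: slower rate helps
print(expiratory_time(resp_rate=14, insp_time=0.8))  # ~3.5 s: shorter Ti adds a little more
```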

Anyhow, after all that, the paper that triggered this look at auto-PEEP is this one:

I’m not convinced I know what the authors were expecting from this study, but it illustrates some of the points above. 100 patients with auto-PEEP of 5 or higher were ventilated with an extrinsic PEEP set at 80% of their auto-PEEP. They tested for EFL to look closer at the concept that, if present, total PEEP should not rise but if absent it should. What they found was that patients with EFL were as likely to increase their total PEEP as not, but that patients without EFL were very likely to increase their total PEEP. A respiratory rate of >20 was also predictive of an increase in total PEEP.

None of the tests described to identify EFL seem to be clinically reliable to me, and in any case I think this study again shows that lungs are not either one or the other. Artificial ventilation is just that, and also potentially harmful. PEEP is a simple manoeuvre with complex and contradictory effects just within the lungs of one patient; I don’t think we will ever have a unifying recipe for how to use it and if we do I think we’d need to be sceptical. I also don’t think we give auto-PEEP enough credit in its milder forms; the consequences of that I’m not sure of though.

I’d recommend having a look at the reviews – they go into a lot more detail if you’ve got this far…

Metronidazole resistance – worth worrying about? (4 December 2016)

The addition of Metronidazole to an antibiotic regimen is something generally done without much concern. We ‘trust’ Metronidazole not to cause resistance, and to do the job we ask of it. Why is that? I honestly can’t remember ever looking after a patient with an anaerobic infection resistant to Metronidazole.

For a bit of context, Metronidazole is active against anaerobes. There are two types of anaerobic bacteria – obligate anaerobes, which cannot survive in atmospheric oxygen (a subcategory is microaerophiles, which require oxygen but need a sub-atmospheric concentration for their survival e.g. H. pylori), and facultative anaerobes, which will use oxygen for metabolism if present but can also survive without it. The obligate anaerobes can also be divided into spore forming (Clostridium sp.) and non-spore forming species.

Non-spore forming anaerobes form part of the commensal flora of the GI and GU tracts, therefore infection is usually endogenous and opportunistic. We are absolutely teeming with bacteria (the number of commensal bacteria is greater than the number of cells in the body). In one gram of faeces there are upwards of 10,000,000,000 anaerobes – to put that number into perspective, ten thousand million seconds is 317 years! Anaerobic infections are polymicrobial, and will include organisms that have not yet been identified.

In the same way that obligate anaerobes don’t survive in oxygen, Metronidazole doesn’t work in the presence of oxygen, which explains its spectrum of activity. It is a pro-drug, and once inside the cell is reduced (gains electrons) to the active form (a free radical that destroys DNA). This doesn’t happen in the presence of oxygen because the oxygen molecule has a higher affinity for the electron than the Metronidazole – oxygen can even turn activated Metronidazole back to the pro-drug.

Bacterial resistance to Metronidazole is rare, but it does occur (there’s a couple of review articles here and here). H. pylori resistance is common (20-45%), but the resistance we’d be more interested in from a critical care perspective is that of the Bacteroides and Clostridium species.

Bacteroides resistance was first reported in 1978 in a patient receiving long term Metronidazole for Crohn’s disease. In 1998, the UK Anaerobic Reference Unit reported the incidence of Bacteroides resistance may be as high as 7.5%, but this figure is open to some interpretation as some of the samples tested were only included because resistance was suspected. Other estimates are lower, in the region of <1%. The mechanism of resistance is explained better in the review articles than I could reproduce here.

Clostridium resistance has been found in horses, and whilst resistant strains of C. difficile have been isolated in humans, they have fortunately been non-toxigenic forms.

There is, however, a twist… it could be that resistance is simply under-recognised.

In many microbiology labs, the specificity of Metronidazole is used to identify the presence of obligate anaerobes. When the plated sample is incubated in an anaerobic environment, a disc of Metronidazole is added. After growth, if there is a zone of inhibition around the Metronidazole this is taken to signify the presence of obligate anaerobes (as Metronidazole can only inhibit the growth of anaerobes). Organisms that are not inhibited by the Metronidazole are assumed to be facultative anaerobes (which have reduced susceptibility) and are not investigated further. But what if there’s a resistant obligate anaerobe?

I think it’s fair to say that Metronidazole’s niche mechanism and activity profile make it a fantastically useful drug. To lose it to resistance would be a disaster, not least because resistance to Metronidazole means resistance to all other 5-nitroimidazoles (none of which I’ve heard of – Tinidazole, Nimorazole, Dimetridazole, Ornidazole, Megazol and Azenidazole). Clearly, Metronidazole should be subject to the same stewardship as all other antimicrobials.

In the meantime, it might be interesting to ask how your lab identifies obligate anaerobes…

Subglottic suction and meta-analysis (3 November 2016)

If ever there was a publication type that encourages me to just read the abstract it’s a meta-analysis. Essentially the abstracts all seem to say the same thing:

Background – the answer to this question is unknown.

Methods - We searched for evidence, did some word-heavy methodology (that I don’t understand and gloss over) and used the GRADE system (which I’ve forgotten too many times).

Results – Some RCTs were found but they all looked at different things and the level of evidence was poor. We did find X and Y (but putting X and Y together the conclusion may or may not make sense e.g. we found a higher rate of amputations but no reduction in mobility).

Conclusions – A repeat of the main results.

What follows in the body of the paper is generally something I don’t really understand, other than the introduction, the discussion and maybe a Forest plot in-between.

The reason for this rant is that we’re re-considering whether we should start using subglottic suction (we don’t currently) and there’s this meta-analysis that’s just been published. In a fit of enthusiasm I thought I might try and de-code the methods….

The protocol for the review was published on PROSPERO, which is a database of health related systematic reviews. The advantages of using PROSPERO are that someone else can see if a review on a topic has already been done / is underway, and a comparison can be made between what was planned and what was produced. The entry for this meta-analysis can be found here. It all looks pretty lingo-free other than the ‘risk of bias assessment’ and ‘strategy for data synthesis’ sections:

Risk of bias assessment:

The tool they used for the review was the ‘Cochrane collaboration’s tool for assessing the risk of bias’. There’s a paper about that tool here, and the results of that are in figure 2 of the paper. Looks straightforward, at least in theory.

The protocol then talks about using MINORS grading for non-RCTs, however the protocol also says RCTs only will be included – not sure what that’s about (I’ve checked and the paper does only include RCTs).

It then reports that the GRADE system will be used in the article. The GRADE system doesn’t look at an individual study, but instead looks at all the available evidence about a particular outcome. There’s a link to more information here but essentially it’s a guided judgement call about the strength of the evidence with four possible outcomes. In the paper, table 2 is where to find the GRADE scores for each of the outcomes to do with subglottic suction.

Essentially therefore, they will use two standard scoring tools to look at each paper (Cochrane) and then each outcome (GRADE).

This part explains the majority of the systematic review part of the study. The meta-analysis is described in the next section….

Strategy for data synthesis:

Not the most readable. It starts with “We will provide a narrative synthesis of the findings from the included studies, structured around the type of intervention, target population characteristics, type of outcome and intervention content.”, which could be translated as: they will produce a written summary of the studies! (I remain jealous of people who can write ‘fluent science’ though, and the fact that the authors are Chinese doesn’t make me feel any better).

They will use risk ratios for dichotomous outcomes. A dichotomous outcome is when A OR B happens. There is no C outcome, or A AND B. If you code A and B as 0 and 1 it becomes a binary outcome (pedantic stats pub trivia!). Risk ratio is the ratio of how often an event occurs in one group compared to another (i.e. pneumonia in those getting subglottic suction vs those not).
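As a toy calculation (the counts are invented, not from this review):

```python
# Toy risk ratio -- invented counts, not data from the review.
# Event = pneumonia; exposure = subglottic suction vs standard care.
events_tx, n_tx = 12, 100      # 12% risk with the intervention
events_ctrl, n_ctrl = 20, 100  # 20% risk without

rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
print(f"RR = {rr:.2f}")        # 0.60: a 40% relative risk reduction
```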

For non-dichotomous outcomes, they will use something called standardised mean differences. This method standardises results to correct for differences in the way they were measured, and is done by dividing the difference in means by the standard deviation of the measurements. For example, a study compares travelling speed down the M1 vs the M25. They use mph as the unit and find the mean difference is 10mph. To standardise that, 10 is divided by the SD of all the speed measurements. Doing that would allow comparison between a different study that used km/hr as the unit of measurement. For more info about standardised means click here.
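Continuing the motorway example – the 10 mph mean difference comes from the text above, but the SD is invented for illustration:

```python
# Standardised mean difference: difference in means / SD of measurements.
# The 10 mph difference is from the example above; the 20 mph SD is invented.
mean_difference_mph = 10.0
sd_mph = 20.0

smd = mean_difference_mph / sd_mph   # 0.5 'standard deviations faster'
print(f"SMD = {smd:.2f}")

# A study reporting in km/h gives the same SMD, because the difference
# and the SD scale by the same factor (1 mph = 1.609 km/h):
smd_kmh = (mean_difference_mph * 1.609) / (sd_mph * 1.609)
assert abs(smd - smd_kmh) < 1e-9
```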

The next section talks about combining results from several studies to calculate an improved result where possible (making it a meta-analysis rather than a systematic review). To do this they plan on using a random effects meta-analysis.

There are two methods of doing a meta-analysis – fixed effect and random effect. A fixed effect model assumes that the effect of the intervention is the same in all studies, and studies only come out with different results because they are taking different samples (i.e. they are different by chance). The alternative is a random effects model. This assumes that the outcome from the intervention is not the same in different trials – this could be because the population is different, the intervention differed slightly, or the method of measurement and analysis was different etc. The different results seen between studies are in part because of chance (different samples), but also because of these differences (referred to as trial heterogeneity).

To quantify the effect of chance and heterogeneity, something called an 'I squared' statistic is used. The result comes out as a number which is the percentage of the differences between results that are due to heterogeneity. I think (although I am not sure) the difference between fixed and random effect meta-analysis in practice is down to how you use the stats software.

The punchline is that it’s worth looking at whether the use of a fixed or random effects model seems valid for a particular meta-analysis, and to remember that if a fixed effects model is used the final result is interpreted as the effect of the intervention, whereas if a random effects model is used it is an average effect across similar but different populations.
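If you want to see the machinery, here's a minimal sketch of both models – inverse-variance weighting for the fixed effect, and what I believe is the commonly used DerSimonian–Laird method for the random effects – run on five invented trials:

```python
# Fixed- vs random-effects pooling on five INVENTED trials.
# Effects are log risk ratios; smaller variance = bigger trial.
log_rr = [-0.35, -0.10, -0.60, 0.05, -0.25]
var    = [0.04, 0.02, 0.09, 0.03, 0.05]

# Fixed effect: weight each trial by 1/variance and take the weighted mean.
w = [1 / v for v in var]
fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)

# Heterogeneity: Cochran's Q, then I-squared as a percentage.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
df = len(log_rr) - 1
i_sq = max(0.0, (q - df) / q) * 100   # % of variation due to heterogeneity

# Random effects: estimate the between-trial variance tau^2 and add it
# to every trial's variance before re-weighting (DerSimonian-Laird).
tau_sq = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
w_re = [1 / (v + tau_sq) for v in var]
random_pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)

print(f"fixed = {fixed:.3f}, random = {random_pooled:.3f}, I2 = {i_sq:.0f}%")
```

Note how the random effects model pulls the trial weights closer together: adding τ² to every variance penalises the big trials proportionally more.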

The next sentence is “we will conduct sensitivity analysis based on study quality”. I’m not sure exactly what they have planned (and the methods bit of the paper doesn’t make it clearer), but I think it means repeating the analysis under different assumptions – for example excluding the lower quality studies – to see whether the overall result still holds. If anyone can explain this better please do so in the comments.

Finally, they talk about doing stratified meta-analysis to explore heterogeneity in effect estimates. This means that they will repeat the meta-analysis including, for example, only the high-quality studies, or certain patient groups etc. We see this quite a bit and not just in meta-analysis, for example in ARDS studies where a meta-analysis doesn’t show a difference so the analysis is repeated looking at only patients with severe (as was) ARDS.

In the last sentence, they drop in that they’ll assess evidence of publication bias. They don’t say how, but frankly at this point I’m thankful (in the paper they say they looked at a funnel plot – for more info about that click here).
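For the curious, a funnel plot is just each trial's effect estimate plotted against its standard error, with the axis inverted so the big trials sit at the top; asymmetry (a missing corner of small unfavourable trials) suggests publication bias. A rough sketch with invented trials:

```python
# Funnel plot sketch -- all trial data invented for illustration.
import matplotlib.pyplot as plt

log_rr = [-0.35, -0.10, -0.60, 0.05, -0.25, -0.45, -0.15]
se     = [0.20, 0.14, 0.30, 0.17, 0.22, 0.28, 0.12]

plt.scatter(log_rr, se)
plt.gca().invert_yaxis()             # precise (large) trials at the top
plt.axvline(-0.2, linestyle="--")    # a pooled estimate, for reference
plt.xlabel("log risk ratio")
plt.ylabel("standard error")
plt.title("Funnel plot (invented data)")
plt.show()
```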

If you’ve read the paper another technique appears in the method - ‘trial sequential analysis’. If you’ve got this far you will probably agree that TSA should be left until a later post. I do sometimes worry that trial design will get so complicated that only certain academics will be able to understand it. Whilst the research would be ‘scientifically faultless’ there would be something lost if as clinicians we can’t understand how the results were reached.

The original question (if you remember!) was whether we should start using subglottic suction. Based on this I’m not convinced we should. We don’t see a lot of early VAP in our unit (we’re non-neurosurgical). Even if ventilator days are reduced, the difference looks to be small (1 day), and there is only a moderate GRADE quality associated with the finding. There doesn’t seem to be any other benefit.

Congratulations if you’ve come to the end of this – shame on you if you skipped the middle part. Please use the comments below to correct anything I might have got wrong – I think it’s all OK but if you know otherwise….

Bullshit (3 October 2016)

Recently we asked for a diabetes opinion for one of our patients. My colleague duly arrived and asked if we’d continued a medication I had no idea the patient was on, or even which of the new groups of diabetes meds it belonged to. So I told him “I think so, yes” and snuck out of vision to check.

There’s not much written about bullshit, but it’s a big part of any workplace and indeed life in general. This essay by Harry Frankfurt is definitely worth a read (it became a NY Times bestseller), and a lot of what is written below comes from it (as well as personal views that as always you’re more than welcome/encouraged to disagree with).

Defining Bullshit

This is not easy as it turns out. Part of the issue is distinguishing between lying and bullshit.

The key features of bullshit are that:

Bullshitting is intentional – if you believe you’re being honest you’re not bullshitting, you’re either right or mistaken. The intent behind bullshit is to get away with something or to gain some benefit in a particular situation. In the example above it was to make it look like I knew what was going on!

Bullshit is unconnected with a concern for the truth. The bullshitter doesn’t care whether they are right or wrong (unlike the liar, who is trying to convince you of an alternative truth), they are merely saying something. The only criterion for the content is that it achieves the purpose at hand. Bullshit is therefore more about the person saying it than the content. I didn’t believe that whether we’d been giving the drug or not was important; we could easily start it straight away after all.

The bullshit is not necessarily false – the bullshitter might be right, but being right isn’t the intention; rather, it is to be believed.

A subcategory of bullshit is ‘pretentious bullshit’. This is bullshit when the desired outcome is to impress by suggesting greater merit in the bullshitter than is warranted!

It’s probably also worth highlighting a related term – that of ‘bull tasks’. These are those which are thought to be unconnected with the primary activity being engaged in (i.e. tasks which someone may feel are pointless and don’t contribute to the desired outcome). Every hospital has one or two of those!

Importantly, if a task is perceived as bull it doesn’t mean it is – but maybe more effort should be put into getting across its true value.

What’s the harm of bullshit in critical care?

The main issue with bullshit is that it shows contempt for the truth, and the consequence of false bullshit is the same as lying.

We occasionally see this when patients are presented or referred. If a fact isn’t known, a vague ‘non-answer’ is given simply to make the question go away. The history then becomes less reliable, accurate and complete with all the consequences that come with that.

From the educational standpoint it also reduces learning. If for every question there’s someone willing to bullshit an answer, what’s to make you go and find out the truth? And if you’re passing on that bullshit then you’re honest but mistaken, but still perpetuating potentially false information.

Culturally it’s also not where we want to be. Do we really want to be in a speciality where it’s more important to say something – anything – than to say “I don’t know”?

Why do we bullshit?

Critical care is a far reaching and fast moving specialty. There’s that statistic about having to read a journal every 3 seconds to keep up to date, yet we often expect ourselves to know more than we can (whether we expect it of other people too is interesting). When we don’t know something we have 3 options – say nothing (which is just rude), say “I don’t know” which isn’t good for our ego, or bullshit. With the amount of unknowns we are faced with on a daily basis, maybe some bullshit is inevitable.

Is there something about critical care or the people who work within it that makes them less likely to say ‘I don’t know’ or to use pretentious bullshit? As educators do we give the impression that we’d prefer to be ‘bullshitted’ than to not get an answer? As trainees is bullshitting seen as a useful coping strategy, and is the ability to master bullshitting ever seen as a positive attribute?

Recently Michael Gove declared that Britain had had enough of experts, and Frankfurt himself highlights that in today’s society everyone is expected to have an opinion on everything. Research is now subject to public peer review by the FOAM as well as the research communities (putting aside ongoing debates about the rigour of journal peer review). Social media FOAM is a popularity reliant medium, so there’s a risk that to maintain ‘market share’ a degree of bullshit might creep in (yes I can see the irony).

A tweet or blog can also under-represent the amount of work or thought that has gone into it or the credibility of the source (not always though). The danger is that social media may increase the feeling that ‘everyone else knows far more than me’, leading to more bullshit as a coping strategy to keep up.

Finally, some conferences are now using entertaining speakers as their USP, but how do these speakers remain entertaining? There’s only so much ‘talking like TED’ that will get you through, so there’s a potential to creep into big, bold statements that may or may not be entirely true. They will be attention catching and entertaining, but they might also be bullshit.*

So over to you…..​

Do you think bullshit can sometimes be useful?

Is a consultant simply a registrar who is more credible when they bullshit?

Can you spot the planted bit of bullshit in this article?

Would a world without bullshit be a less interesting one?

I look forward to reading your comments; next month we’ll return to something more academic!

*Before people get upset, I’m not saying FOAM and conferences are rife with bullshit but simply pointing out that it’s a ‘high risk’ area and we should keep an eye out.

Sepsis associated atrial fibrillation and stroke prevention (31 August 2016)

When a doctor diagnoses atrial fibrillation, 'the guideline' says that a CHA2DS2VASc and HAS-BLED score should be calculated, and depending on the outcome an anticoagulant given. We see a lot of AF in critical care (not as much as in CICU but that’s a different game), but we don’t follow the guidance. Or at least I don’t, but why?
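For reference, here's the CHA2DS2VASc scoring as usually published, as a quick sketch (my code, not a validated calculator – check your local guideline before relying on anything like this):

```python
# CHA2DS2-VASc as usually published: 1 point each for congestive heart
# failure, hypertension, diabetes, vascular disease, age 65-74 and
# female sex; 2 points each for age >=75 and prior stroke/TIA.
def cha2ds2vasc(chf, htn, age, diabetes, stroke_tia, vascular, female):
    score = sum([chf, htn, diabetes, vascular, female])
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 2 * stroke_tia
    return score

# e.g. a 70-year-old woman with hypertension scores 3:
print(cha2ds2vasc(chf=False, htn=True, age=70, diabetes=False,
                  stroke_tia=False, vascular=False, female=True))
```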

A systematic review published last year (open access) highlighted the lack of an evidence base for new onset AF in critical illness. The authors identified 12 studies in total, but graded them all as being low quality evidence.

The largest observational study of AF in sepsis looked at admissions to Californian hospitals over a 1 year period. In this study, AF occurred in 5.7% of patients with severe sepsis and 2.6% of those patients suffered a stroke. The issue of anticoagulation (i.e. reduction in stroke risk) was not addressed.

This study (by the same lead author) is therefore both really interesting and useful. It's observational, looking at a huge patient data-set (113 511 patients with sepsis and AF) to describe outcomes from sepsis associated AF but also current practice. The study is American, but I still think there’s context validity.

Some of the more interesting findings and points (to me anyway) are detailed below, in no particular order:

53% of patients were excluded from the study because they didn’t receive IV rate/rhythm control. The justification given was to include only those with ‘clinically significant’ AF. I can see the logic, but I’m not sure the association between IV therapy and clinical concern is a strong or logical one.

11% of the remaining patients had another indication for therapeutic anticoagulation (so were also excluded). This seems pretty high, and shows either that the same patients who get AF also have other associated co-morbidity, or maybe just goes to show how common anticoagulation is these days.

The study looked at stroke (and bleeding) in hospital. We have no idea how many of these patients went on to have a later stroke, or even in how many the AF resolved with recovery from sepsis.

520 hospitals were included, and in about 1/3 anticoagulation practice differed significantly from the mean hospital. The 1/3 are not ‘outliers’ as there is no ‘normal’ rate, but this is useful to demonstrate the influence of culture and practice as well as just individual clinician and patient factors. Maybe my practice is simply the practice of our unit?

Approximately 1/3 of patients received an anticoagulant and 2/3 didn’t - that’s pretty close to equipoise.

The patients who received anticoagulation were different from those who didn’t. The important thing here is that receiving it depended on a clinician’s decision to prescribe, not on randomisation. What I mean is that we should look at these differences with respect to decision making, not with respect to which patients should be anticoagulated. We can only guess how those factors influence decision making. Some are relatively easy to guess; patients with prior bleeding or haematological failure were less likely to be anticoagulated. Patients with heart failure received more anticoagulation – is this because they were being looked after by cardiologists? (which is possible, as if the attending physician was a cardiologist the rate of anticoagulation was higher). Patients with renal failure were less likely to be anticoagulated – could this be as simple as the need to adjust the dose making a prescription less likely? Some differences are more difficult to explain (for example women were less likely to be anticoagulated than men, patients receiving vasopressors less likely but patients in intensive care more likely).

CHA2DS2VASc could not predict ischaemic stroke in this sample; the C statistic was 0.526 (a value of 0.5 is chance, 1.0 is perfect prediction). It’s interesting they didn’t look at HAS-BLED to predict bleeding.
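The C statistic is just the probability that a randomly chosen patient who had the event was given a higher score than a randomly chosen patient who didn't, with ties counting as a half. A hand-rolled sketch with invented data:

```python
# C statistic (concordance) from first principles -- data invented.
def c_statistic(scores, events):
    pos = [s for s, e in zip(scores, events) if e]      # had a stroke
    neg = [s for s, e in zip(scores, events) if not e]  # didn't
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [4, 2, 5, 1, 3, 2, 4, 0]   # e.g. CHA2DS2VASc values
events = [1, 0, 1, 0, 0, 1, 0, 0]   # 1 = ischaemic stroke
print(f"C = {c_statistic(scores, events):.3f}")  # 0.8 here; 0.5 is a coin toss
```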

Approximately 1.5% of patients in both groups (anticoagulated and not) had an ischaemic stroke.

Bleeding was more frequent in the anticoagulated group (8.6% vs 7.1%) giving a NNH of 67.
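The arithmetic behind that NNH, using the percentages quoted above:

```python
# NNH = 1 / absolute risk increase, using the rates from the paper.
arr = 0.086 - 0.071     # absolute risk increase = 1.5%
nnh = 1 / arr           # one extra bleed per ~67 patients anticoagulated
print(round(nnh))       # 67
```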

The data relating to pre-existing vs. new AF is difficult to get your head around. Patients only receiving oral anticoagulation were excluded (excluding anyone with AF in whom the Warfarin is just continued). The rates of IV anticoagulation didn’t differ between those with new and existing AF, and the pre-existing AF group includes those where IV therapy was added to Warfarin but also those in whom the Warfarin was replaced. There are conflicting results in comparing new and existing AF and essentially I don’t think this paper really gives any answers.

We don’t have any information about the population rate of bleeding. We could assume it is 7.1% (same as in the group not given anticoagulation) but that would be a big assumption. What if the background rate is 9%? We also don’t know the population stroke rate.

So what should we make of this data? My practice of not routinely prescribing anticoagulation in sepsis associated AF goes with the majority. Even if I knew which patients were at a higher risk of stroke (which I don’t from this data) anticoagulation may not help, and could harm. I’m none the wiser as to what happens to the patient post discharge, or whether anticoagulation makes any difference long-term. For patients with pre-existing AF all bets are off.

Whilst there aren’t many answers, this paper was about something more interesting than that; it describes practice, hints at how we make decisions and informs as to what is happening to our patients now rather than what might happen if the results of an RCT can ever be replicated in the ‘real world’.

As always, please leave a comment to get some discussion going......

Something to reflect on (18 July 2016)

A recent opinion piece in BMJ careers argues that written reflection is ‘dead in the water’. I’d suggest you read it, but I personally disagree with most of it. I’ve tried to articulate my own counter-view in the five points below. What follows are my own beliefs, and I’d encourage you to leave a comment if (when?) you disagree.

1 - Written reflection is not a choice…

The General Medical Council states that every doctor should regularly reflect on their own performance, professional values and their contribution to any teams in which they work. The FICM don’t specifically mention reflection as an expected behaviour in the syllabus, but there are several behaviours that it would be hard to demonstrate if reflection wasn’t part of your approach (and one of the five interview stations for ST3 ICM is a reflective practice station).

The NHS revalidation team is also hot on reflection being core to the appraisal process, explaining that it is the reflection on the supporting information rather than the information itself that informs the appraisal.

Whilst the need to write it down is not necessarily specified, it would seem that the author of the opinion piece needs to convince some pretty influential organisations of their view.

2 - You’d be hard pushed not to reflect…

Reflection is part of so called experiential learning, one of those educational concepts that is pretty obvious once it’s written down. The most well-known description of experiential learning is Kolb’s learning cycle, paraphrased below: have a concrete experience, reflect on it, draw out an abstract concept, then test that concept through new experience – and round again.

To my mind this is integral to being human, and I doubt you’d be able to survive without it. That’s not to say that we reflect and learn from every single experience, or that all learning comes from reflection (no-one reflects on reading the anatomy of the Brachial Plexus), but we certainly learn from many experiences, and to do so requires a degree of reflection.

3 - Reflection is difficult….

To reflect on an experience is to challenge a belief. We may consciously choose not to do so (choosing to hold onto prejudices for example), struggle to come up with a new concept (finding difficulty making sense of the unexpected or idiosyncratic) or reject the experience as untrue (as a ‘freak’ exception). A requirement to reflect is a prompt to challenge ourselves and to develop as a result.

What makes this process even more difficult is writing it down – we can kid ourselves that the writing adds nothing. When talking or thinking about an event, however, it’s easier to bluff your way through, ignoring aspects of it, cutting corners and not coming up with any meaningful conclusions or challenges. The conclusions you do come up with can be difficult to articulate, but writing forces you to do so (writing this post, for example, seemed much simpler before I started it!).

Another feature that makes written reflection difficult is an extrinsic motivator i.e. doing it because we ‘have to’. Taking ownership and developing an understanding of the process might be the only way to avoid this, but to do so might require some ‘reflection on reflection’!

4 - Reflection has been misinterpreted for inappropriate means….

Reflection sometimes appears to be the solution to every problem, and its influence in the educational process is far reaching. It is a process that an individual undertakes, sometimes subconsciously, and not a process that can be done ‘to them’. When suggesting to an individual that they might want to reflect on an episode, the only possible favourable outcomes are that the individual records/verbalises an ongoing process of reflection, or that they are prompted to test whether an experience challenges their own beliefs and constructs. If it does not do either of these things, there can be no meaningful reflection; rather the ‘educator’ projects their own mental construct to the individual as an alternative which may or may not be accepted (a construct which may, of course, be completely wrong!).

The use of reflection after critical incidents etc. is rife, with the rationale that when something goes wrong we should do all we can to learn from it. Whilst there’s learning opportunities in pretty much everything, they do not all require a change in mental construct. For example, if a doctor gives the wrong dose of drug would reflection always be the right ‘remedy’? If the wrong dose was intentionally given, any reflection is merely a confession and a statement of facts would be more appropriate. If the doctor thought the dose was different (i.e. their belief was that they knew the dose and were giving the correct dose) then an educational intervention is appropriate. If the system is such that the doctor had been coerced into working a 72 hour shift, a root cause analysis would be the way forward (on the basis that the doctor already believed that mistakes are more likely when tired, and still does – it’s the system, not the belief, that needs to change).

Written reflection as punishment or an indicator that ‘something has been done’ does nothing to encourage the process.

5 - Reflection shouldn’t lead to a life behind bars…

A position statement has been issued by the local HEE teams in London and the South East recently that caused a fair bit of angst. Essentially, it was based around a trainee (I have no details of this at all) releasing a reflective piece which was then apparently used to incriminate them. The statement says to focus on the learning from the event and to maintain confidentiality.

I’m no lawyer, but I would suggest there are several aspects to this:

Confidentiality is a no-brainer, and even if the account doesn’t get you into bother, breaking confidentiality alone will. An article written by two medicolegal advisors suggests maintaining confidentiality so that the account can’t be linked to the case as evidence, but I don’t think that should be your motivation.

Access to reflective practice is said to have changed with the move to e-portfolios and online appraisal. I’m not convinced, in that surely it’s the existence of a document rather than where it’s stored that means a lawyer could ask for it?

The whole point of written reflection is to demonstrate a change in conceptual perspective. Unless therefore your pre-existing perspective was that you should deliberately or negligently do harm to a patient I can’t see the issue.

If the reflective piece does demonstrate ‘wrong-doing’ and the reflection is truthful, then the only issue with a lawyer getting hold of it is that more facts are available for a just outcome!

As I said at the beginning of this piece, please leave your own thoughts in the comments as these are only mine. Has reading this challenged your mental construct in any way?

(Any Sunderland trainees not leaving a comment will be told to write a reflective piece on why they haven’t!)

Trophic vs full feeding in acute lung injury (17 June 2016)

The EDEN trial is one of the go-to trials for feeding in the ICU. The paper can be found here, but essentially it recruited 1000 patients with respiratory failure, comparing different calorie targets for a period of 6 days. One group received 25% of requirements (the trophic feeding group), with the other aiming for full requirements (receiving 80% of goal).* The primary outcome was ventilator free days to day 28, with the study powered to detect a 2.25 day difference. 60 day mortality was a secondary outcome.

The rationale for the primary outcome was the authors’ hypothesis that reducing gastrointestinal intolerance (by the use of trophic feeding) would result in the patient being liberated from the ventilator sooner.

Does this make sense? Does gastric stasis stop people weaning? What about diarrhoea/constipation? Vomiting/regurgitation? For this trial to make sense, the authors must believe at least one of these is true and also that trophic feeding reduces GI intolerance. If not, there can be no causal difference in ventilator free days.

Is your bias that 6 days of a different feeding regimen would result in extubation more than 2 days sooner?

The results showed that the number of ventilator free days was the same in both groups, as was 60 day mortality. There were small but statistically significant differences in gastrointestinal complications favouring trophic feeding. Whether these are clinically significant is a matter of judgement.

My take therefore (and yours may be different) is that this was a negative trial either because of the low rate and severity of gastrointestinal intolerance, or because intolerance does not lead to prolonged ventilation. The difference in fluid balance is also interesting, and the conclusion is worth a good read as the authors pretty comprehensively appraise their results.

Perhaps unsurprisingly, this 1 year follow up of patients in the EDEN trial with respect to their physical function (measured by SF-36 scores) showed no difference either.

It’s also probably worth highlighting that at the same time this study was being conducted, this trial was also running – it is a smaller RCT and is single centre yet shares many similarities. 200 patients were recruited, again to trophic vs full feed (the groups received 15% vs 75% of daily requirements). Again the primary outcome was ventilator free days to day 28, again there was no difference. Neither was there a difference in mortality to hospital discharge but the improved tolerance of trophic feeding was again shown.

The whole issue of feeding the critically ill is complex, and I must admit I’ve never really got my head around the conflicting results available, in part I think because of the number of variables. If you’re as confused as I am, this review might help (if only to highlight the complexity of the problem).

Based on these papers, do you think we should continue our usual practice of initiating full feed from day 1 (which we do very well at achieving), or should we change to trophic feeding – if so in which patients and for how long?

*The difference in calories between the groups was 900 kcal/day, which is roughly the same as a Big Mac and medium fries.