Two weeks ago I was approached by Radio 4, saying they were doing a programme on nutritional therapy, presented by Ben Goldacre, and wanted to do a pre-recorded interview that would be edited.

I declined the interview for the reasons given below, but I provided Radio 4 with comments in answer to their questions, to be read out verbatim on the programme. They would not agree to reading these out unedited, however, and so did not use them.

Anyway, given the quality of these responses, I’m sure that the BBC would have loved to have been able to quote from them. I can’t cover them all here, so I will just pick out a few choice examples. What I find especially interesting is that this is not only an epic failure on Holford’s part – he seems entirely unaware of the fact, even claiming victory while digging himself in deeper.

Holford’s unique approach to research is especially clear in Food for the Brain’s attempts to defend their widely reported Child Survey: the one that we explored and discussed in detail, and that has been described as ‘mercilessly thorough’. FFTB acknowledge a number of problems with the research:

While the survey is touted as including 10,222 children, Holford admits that he ended up comparing two of the groups of children analysed; these groups contained just 32 and 42 members respectively. Most researchers would see this as a problem: for example, a handful of extended families could constitute all of one or both of these groups and thus completely skew the results. However, Food for the Brain are still proudly insisting that – for comparisons between these two tiny groups – p<0.05. Do you think it is appropriate to pronounce on the optimal diet for UK children based on 74 children from an unrepresentative sample of 10,222? Some of us would call those groups outliers.
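To put some numbers on just how noisy groups of this size are: under the usual normal approximation, the 95% margin of error on a proportion estimated from 32 respondents is enormous compared with one estimated from the full sample. A rough sketch – the group sizes are those above, but the illustrative proportion of 0.5 is an assumption (the worst case), not a figure from the report:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# Group sizes from the survey discussion above; p = 0.5 is an
# assumed worst case, not a number taken from the report.
for n in (32, 42, 10222):
    print(f"n = {n:>5}: +/- {margin_of_error(0.5, n):.3f}")
```

With n = 32 the margin of error is roughly ±0.17 (seventeen percentage points), against about ±0.01 for the full 10,222 – which is why conclusions hung on two tiny subgroups deserve suspicion.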

In the survey, “The ‘effect sizes’ were calculated by looking at the prevalence of respondents in each (high or low) consumption band giving a ‘very good’ rating.” This is a very non-standard – to the best of my knowledge, it’s completely unique – way to calculate effect size. While one may need to introduce novel approaches to statistics from time to time, these need to be argued for: for Holford to blithely use and calculate ‘effect size’ in such a unique way, without mentioning this in the survey, is simply wrong. And wrong at a very basic level.
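For comparison, the conventional effect size for two group means is something like Cohen’s d, which divides the difference in means by the pooled standard deviation – nothing like counting the prevalence of ‘very good’ ratings. A minimal sketch, with made-up rating scores purely for illustration:

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the
    pooled standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented scores for a 'high' and a 'low' consumption band:
high_band = [7, 8, 6, 9, 7, 8]
low_band = [5, 6, 5, 7, 6, 5]
print(round(cohens_d(high_band, low_band), 2))  # 1.95
```

Note that d cannot be computed at all without the group standard deviations – which is why reported ‘effect sizes’ unaccompanied by SDs are uncheckable.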

One more thing to note about the survey – again, technical language used in an idiosyncratic and unclear way, without explanation – is a simple Q&A that will stand by itself:

Q. The word “variance” seems to be used throughout not in the usual statistical sense of the word. Am I right in supposing that it is being used here as a synonym for ‘difference’?
A. Yes.

In Holford’s ‘full response’ on his own site he – entertainingly – continues in his habit of confusing food allergies and intolerances. To quote from (just one) part of the response where he does this:

While the conventional view is that IgE antibodies are responsible for most immediate onset allergies, there is growing evidence that IgG antiobidy [sic] mediated reactions, may indeed be responsible for more ‘hidden’ allergies.

Holford was also asked for “his views on a famous, some might say infamous, paper by Bjelakovic et al. [PDF] which describes the possible risks of taking certain antioxidants. Mr Holford’s article was called ‘Antioxidant Review is a Stitch Up’.”

Given his own rather classic series of blunders in both the literature review and statistical analysis of the FFTB Child Survey 2007, it is with breathtaking and rather touching bravado that Holford offers criticism of the Bjelakovic et al. meta-analysis:

[my] two main criticisms of this paper were that the one study, by a Dr Correa from the pathology department at the Louisiana State University Health Sciences Centre, that apparently skewed results for antioxidants overall towards a negative, showed a clear protective effect of antioxidant supplements against gastrointestinal cancer. I decided to contact Dr Correa and he was ‘amazed’, he said, because his research, ‘far from being negative, had shown clear benefit from taking vitamins.’ Correa told us, there was no way the study could show anything about mortality. ‘Our study was designed for evaluation of the progress of precancerous lesions,’ he said. ‘It did not intend, and did not have the power, to study mortality and has no value to examine mortality of cancer.’

Also, the summary of this study states ‘treatment with beta carotene, vitamin A, and vitamin E may increase mortality’ creating the impression these antioxidants are no good. What it failed to say in the summary, all of which are clearly stated in the results, is that ‘vitamin C given singly, or in combination with other antioxidants did not affect mortality, and selenium given singly or in combination with other antioxidant supplements may reduce mortality’. It also fails to say that ‘beta-carotene or vitamin A did not show increase in mortality if given in combination with other antioxidants’, or that ‘vitamin E given singly or combined with 4 other antioxidants did not significantly influence mortality’.

Now that is interesting: we addressed this paper in HolfordWatch’s first ever post. Aw, I’m all misty-eyed with nostalgia… Firstly, one might note that Holford’s ‘Antioxidant Review is a Stitch Up’ piece complained bitterly – and rather strangely – about quite a range of things. For example, Holford objects that:

The next way to investigate whether an analysis is a stitch up is to see if all trials are included, how trials are excluded, and what the trials actually say. Two classic primary prevention studies, where vitamin E is given to healthy people, are those of Stampfer et al, published in the New England Journal of Medicine, the first of which gave 87,200 nurses were given 67mg of vitamin E daily for more than two years. A 40 per cent drop in fatal and non-fatal heart attacks was reported compared to those not taking vitamin E supplements (1). In another study, 39,000 male health professionals were given 67mg of vitamin E for the same length of time and achieved a 39 per cent reduction in heart attacks (2). Guess what? They are not included.

However, as noted in the original post, these two studies aren’t included in the Bjelakovic et al. meta-analysis of RCTs on antioxidants because, um, they’re not RCTs. Holford’s complaint about this suggests some confusion as to what an RCT is. This is rather odd in someone that Teesside University has raised to professorial status. Happily, Holford now acknowledges that arguing for these studies to be included was incorrect (though it would have been nice if he had let us know about this, and thanked us for pointing this out, when we posted on the topic about a year ago). As will be shown below, though, it is unfortunate that Holford failed to take on board our other criticisms.

As for the criticisms of the Bjelakovic et al. article that Holford does cleave to in his response to the BBC, I will quote from this blog’s first post, where they were all addressed:

The abstract of the JAMA article says that “The potential roles of vitamin C and selenium on mortality need further study.” It therefore does say that there’s a need for further study on the role of vitamin C and selenium supplementation, and that they may reduce (or increase) mortality…The meta-analysis doesn’t explicitly say that different combinations of antioxidants may have different effects (or, for example, that antioxidants may have different effects if you exercise regularly, smoke 30 a day, etc.) It didn’t seek to analyse every possible combination of factors, and it wouldn’t have been feasible to do this…

I’m not sure how Holford concludes that the JAMA meta-analysis assumes that Correa et al’s paper is negative: the results would have been included in the meta-analysis as would the results of all the other trials with low bias risk. Others may have misread Correa et al’s article as showing that taking vitamin E could have been damaging, but I can’t see any evidence that the JAMA meta-analysis has done the same thing.

Right, after that blast from the past I’ll allow myself to look at one more point from Holford’s response to Radio 4: the rest can wait till another day. Holford also claims that:

My views on the benefits of fish oils are shared by many doctors and scientists. The recent Associate Parliamentary Food and health [sic] Forum ‘The Links Between Diet and Behaviour’, makes this clear.

No: it really doesn’t. It’s a bit odd that Holford tries to argue from the authority of the Food and Health Forum. What’s odder, though, is that we have read this report [PDF]: one may or may not agree with it, but the report does not entirely concur with Holford’s position on essential fats. For example, whereas Holford argues that “Most people are deficient in both omega 6 and omega 3 fats”, the Food and Health Forum report is more nuanced. It suggests, for example, that “If high omega-6 intake is shown conclusively to inhibit EPA and DHA absorption or synthesis, an alternative to increasing fish and seafood consumption would be to reduce the intake of LA (omega-6) as each would result in more equal tissue ratios of EPA and DHA.” [p. 11] The report also notes that in the past 30 years “intake of Linoleic acid (LA) (omega-6) has risen in many northern European countries” and that this may be problematic [pp. 10-11].

Holford is quite entitled to agree or disagree with the Food and Health Forum. After all, although he made a written submission to the Forum, there is no record of him having been called to speak there. However, to claim that they agree with his position when they actually don’t is both unfortunate and rather confusing.

In summary, then – when offered a chance to respond to some questions from the BBC, Holford has only managed to dig himself an ever deeper hole (some of the BBC’s questions look as if they overlap with our summary of outstanding issues in our Holford Myths page). Given that his responses are clearly a failure – we have barely started on the problems with these responses, and I haven’t yet heard what sounds like it will be a rather critical segment about him – one does wonder what will be left of Holford’s credibility as a scientist after the second part of The Rise of the Lifestyle Nutritionists airs.

10 responses to “Holford tries to respond to questions raised by BBC documentary. He fails.”

So a self-selected survey of 10,222 children actually ends up being a cross-comparison of 32 with another 42! So all the data analysis and pretty (if misleading) bar graphs in the report are based on 74 kids?!! Or, to put it another way, 0.72% of the respondents.

If I was a parent of one of the 99.28% who have been ignored, I’d feel a bit miffed at having wasted my time.

Although, I do wonder if PH’s original hypothesis was that the vast majority of kids would be in the v. poor/poor diet category, and, when the results showed they were almost all “neutral”, faffing around with a few outliers was all he had left to “prove” his points. Hey, why change your mind when the data doesn’t support you when you can change the analysis post-hoc?

“In the survey, “The ‘effect sizes’ were calculated by looking at the prevalence of respondents in each (high or low) consumption band giving a ‘very good’ rating.” This is a very non-standard – to the best of my knowledge, it’s completely unique – way to calculate effect size.”

To be fair to Holford and co, it isn’t completely mad to look at the proportion giving a categorical rating of “very good” – similar methods have often been used in pain research and other subjective fields (although you’d usually group all ‘good’ and ‘very good’ responses together).

just to let you know i’ve double-checked, and those are definitely NOT the questions which were sent to patrick holford in asking for a response after he refused to be interviewed. i’m going to have a dig and find out what he is responding to there.

i can clarify that those are also definitely NOT the responses he sent to the BBC, although the responses he did send were equally long and flawed.

pj – I’m a fan of effect sizes and wish researchers would use them more regularly in straightforward digests of research findings. However, we wanted to check the effect sizes because the reported ones looked wrong – which is why we asked for the SDs.

We belatedly worked out that they were just using jargon to sound technical but hadn’t actually calculated any effect sizes. Hence our annoyance: if you had wasted a lot of time trying to generate useful SDs that would approximate to anything like what they claimed were effect sizes, you might be less generously disposed. Particularly when you might strongly suspect that they didn’t realise that it is a technical term – despite all their ludicrous posturing.

This isn’t particularly coherent – I might have a bash some other time.

dvnutrix – sure, I was just pointing out that this kind of analysis can be used – I note they were sufficiently unfamiliar with statistics to be unable to put confidence intervals (or the like) on their estimates!
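For what it’s worth, putting a confidence interval on that kind of estimate takes only a few lines. A sketch using the normal approximation for the difference between two proportions – the ‘very good’ counts below are invented for illustration, not taken from the survey:

```python
import math

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Normal-approximation 95% confidence interval for the
    difference between two independent sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Hypothetical counts in groups of 32 and 42 (numbers made up):
lo, hi = diff_proportions_ci(20, 32, 18, 42)
print(f"95% CI for the difference: ({lo:.3f}, {hi:.3f})")
```

With groups this small, even a seemingly large gap in proportions (0.63 vs 0.43 here) yields an interval that spans zero – the data are compatible with no effect at all.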

Disclaimer

At the risk of sounding like Arthur Weasley, information on this blog is not intended as a substitute for advice from a qualified medical practitioner. If you have health concerns, see a Dr or dietician (a blog is not the place to diagnose a health problem).