Category: Uncategorized

This is in response to the blog post by Fiona Fox who was reacting to criticism of the Science Media Centre (SMC) for its support of the SMILE trial (1). Although both her blog and this post refer to myalgic encephalomyelitis (ME), the issues are much broader and raise major questions about the organization.

Two things immediately stand out. First, in a blog post claiming that she and the SMC are free of bias, Fox reveals her bias: she links ME to climate change and to animal experiments; she makes a loaded comment about ‘vocal critics’; she refers to ‘ME activists’; she contrasts ‘ME activists’ with friends ‘in science’. Second, in an attempt to address questions about SMC bias, Fox responds with anecdotes. ‘I’m not biased: some of my best friends are…’

For Fox’s response to carry any weight, she needed to talk about transparency and procedures. A number of questions remain unanswered:

1. What steps are taken to ensure SMC governors have no influence on day-to-day decisions taken by the SMC?
Simon Wessely, for example, is seen as one of the creators of the ‘false belief’ model of ME. He is also a governor of the SMC. How does the SMC ensure he has no influence, including indirect influence, over anything the SMC does concerning ME?

2. Who decides what research is covered by the SMC? And on what basis?

3. Who decides and on what basis which researchers are supported by the SMC?

4. Who decides, and according to what criteria, who counts as an ‘expert’?

5. Who decides and on what basis which ‘experts’ are asked to respond to any particular piece of research?

6. What steps are taken to ensure that the ‘experts’ who do respond are not self-selecting?

7. Why are experts with a clear conflict of interest allowed to give reactions to research?
Michael Sharpe is deeply involved in promoting the ‘false belief’ model of ME. Why was he allowed anywhere near the response to the SMILE trial?

Dorothy Bishop has a particular view of ME ‘as someone who is familiar with the condition both from family members and colleagues’ (2). Bishop considers that criticism of the PACE trial amounted to an ‘orchestrated and well-funded harassment campaign’. Why was she allowed to give a reaction to the SMILE trial?

Alastair Sutcliffe is Professor of General Paediatrics at UCL, where paediatrician Crawley, lead investigator on SMILE, did her PhD. Is there any link between the two that might create a conflict of interest?

8. How does the SMC ensure the ‘expert reaction’ is balanced?

9. Who determines and on what basis whether this balanced ‘expert reaction’ has been achieved?

10. What steps are taken to ensure that research supported by SMC funders is not treated more favourably?

An organization set up ‘to promote more informed science’ doesn’t seem to understand the nature of bias or the concept of a biased sample.

As every GCSE student knows, everyone is biased and the easiest person to fool is oneself. If the SMC were serious about avoiding bias, then it would have proper procedures in place to guard against its own. Fox should not need to resort to a few stories to make her case, but should be able to point to established safeguards.

The problem for the SMC, though, is that as an institution it is inherently biased. It exists to be biased. A handful of people have set themselves up as judges of what does and does not constitute ‘good science’ and who does and does not qualify as an ‘expert’. The SMC is predicated on an idea that there is one true science and that a group of ‘wise people’ can find other ‘wise people’ to make sure the media get the true picture.

It’s not necessary to be a Gove-ian sceptic of experts or a devout believer in the Kuhn cycle to know that this view of science is deeply flawed. With new methods, new technology and new approaches, science, like all knowledge, is constantly evolving. As so often happens, by concentrating on one problem (false equivalence), the SMC magnifies the risk of others (e.g. false consensus). The SMC is a brake on science’s self-correction.

That Fox either does not understand or cannot see these problems is deeply concerning. What is more, her blog post reveals something else. The gushing, schoolgirlish tone (3) is not just a question of personal style. Perhaps because of her background, Fox does not seek to argue her point, but to charm. We are the good guys, she is saying. Join us, be in our gang and on the side of true science. If you don’t, you’ll be one of them. You’ll be on the dark side, with the ME activists, the ‘vocal critics’, the climate-change deniers, the animal-rights extremists. It’s politics, and not good politics at that; rhetoric and not substance.

The SMC was set up in 2000. Perhaps it did then have a purpose, but it is hard to see why it exists today. Any science journalist worth their pay has no need for it. In the time it would take to go to the SMC for a briefing, they could read the study, contact half a dozen ‘experts’ whose views they trust, and send an email to the investigator with any questions. Everyone now is online and most researchers are on social media. Access to scientists is easy; there is no need for a go-between.

The existence of the SMC reveals a collective failure of nerve by science journalists in the UK. They have contracted out their job to a self-appointed group led by someone with no scientific qualifications. Journalists who are churning out SMC briefings are effectively saying they can’t do their own jobs. They need Fiona Fox to decide which research is important and which ‘experts’ are trustworthy.

The SMC has nothing to offer on the current issues in science. With major concerns such as the ‘reproducibility crisis’, the SMC is more of a hindrance than a help. It is entrenching the status quo; it is delaying the funerals. Fox’s failure to comprehend these issues reveals she is unsuited to her role, but in any case the Science Media Centre is no longer, if it ever has been, part of the solution. It’s become part of the problem.

‘And all around the room, pouring wine for the panellists, offering tiny pastries, and gently inquiring about everyone’s careers and interests, or simply posing, very upright, against the shiny white walls, were the correct young staff of Living Marxism.

The men wore suits, or close-fitting shirts with pressed trousers. They had disciplined hair: shaven, cropped or gelled back. Their shoes were gleaming as tap dancers’. As they stood in twos and threes, clicking their heels, coughing into their palms and clasping their hands behind their backs, something else about them became apparent. They were mostly wearing black: black shirts and black ties, black socks and black polo necks, everything spotless.

The women were similar. They wore suits and tied-back hair, or short skirts and tight tops. Few of them seemed older than 30. And, like their male colleagues, who slightly outnumbered them, they asked lots of questions. They always made eye contact. They smiled a lot, and stood very close, and tried fleeting, flirty touches. Near the end of the reception, at about midnight, a well-dressed couple in their thirties walked across. They had, as the man put it, “a driving situation”. Could I drive? Would I drive them home? He did not say where they lived. Their eyes shone pleadingly, but they seemed quite sober. We had known each other for all of a minute.’

The last of three blogs on the SMILE trial. Part one is here and part two here.

The SMILE trial was deeply flawed: criteria used were too broad, participants self-selected, it was not properly controlled and it relied upon subjective measures when participants were unblinded. Like the Lightning Process (LP) itself, it is worthless.

It was completed almost four years ago, but the results have still not been published. The concern is that this poorly conducted trial, based on criteria wide enough to include patients with generic ‘chronic fatigue’, did indeed find that some patients reported subjective improvement. If so, SMILE did nothing more than provide false scientific justification for quackery.

The trial has already been of considerable benefit to Phil Parker, a man with no professional qualifications who has designed an intervention with no scientific basis: he is said to receive a fee from each LP provider for every course participant. Since the trial was in the Bristol and Bath area, he may well himself have been one of the LP providers as he ‘leads experienced teams’ in the region.

He has also used the mere fact of the trial as a form of endorsement on his site:

‘NHS and LP
The Lightning Process has been working with the University of Bristol and the NHS on a feasibility study; full information can be found here. Two papers have been published and you can find a link to them both here:
1. The feasibility and acceptability of conducting a trial of specialist medical care and the Lightning Process in children with chronic fatigue syndrome: feasibility randomized controlled trial (SMILE study)
2. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial)’

The trial acts as an advertisement not just to patients but to potential trainers. The only way to ‘qualify’ as an LP provider is via one of Parker’s own courses which cost £2100 (including VAT). Parker insists anyone who wants to continue as a certified provider must pay him an annual licence of £495 (incl VAT) in the UK or £750 internationally. These practitioners then go out to find more potential patients to generate more money for Parker.

The actual cost of the trial is unknown, as staff and NHS costs, so-called non-core costs, cannot be calculated. These were met from public funds and must be added to the course fees for the total trial expenditure.

I made a Freedom of Information Act request to the University of Bristol to discover how much it paid for these courses and to find out whether there was some kind of arrangement with Parker. At first the university refused to say how much the courses cost, but an ICO decision rejected its claim that the information was exempt from the Act.

The mean cost of a course for trial participants was £567, less than the figure (£620) given in the paper for the then current approximate cost. I asked the university if Parker offered a cut of some kind, but they replied: ‘There is no information held relating to any discount or special deal that was arranged with the providers of ‘Lightning Process’ courses.’ If there was some sort of discount, then Parker not only benefited from the trial and had a financial interest in the outcome but actually subsidized it and so effectively part-funded it.

Twenty-five of those assigned to the intervention went through with the course: 25 × £567 = £14,175.
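As a quick check of the arithmetic, using the figures given above (25 course completers at a mean fee of £567):

```python
# Course fees paid to Lightning Process providers in SMILE,
# using the figures reported above.
participants = 25        # intervention-arm children who took the course
mean_course_fee = 567    # mean fee per course, in pounds

total_fees = participants * mean_course_fee
print(f"Total LP course fees: £{total_fees:,}")  # £14,175
```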

Over £14,000 wasted on a junk intervention.

If there is a ‘full study’ as Crawley concludes there should be, there’ll be a lot more spent on this quackery. Any slight evidence of effect will be used not just to promote this nonsense further but also, no doubt, to claim his mumbo-jumbo should be funded by the NHS.

A trial so flawed as to be worthless.
The second of three blogs on the SMILE trial. The first is here and part three here.

The SMILE trial was ‘a pilot randomized trial with children (aged 12 to 18 years) comparing specialist medical care with specialist medical care plus the Lightning Process’.
A report was published in December 2013. The trial has been over for almost four years but the results have yet to be published. A paper was submitted to The Lancet Psychiatry in August 2016 but was unsuccessful. A paper was resubmitted (to an unnamed journal, which may or may not be The Lancet Psychiatry) on 11th May 2017 (revealed in ‘Freedom of Information Request: Reference FOI17 193 – Information from the SMILE study’; decision currently under review).

The report itself, though, reveals a number of flaws in the study:

First, the choice of trial participants meant there was no real randomization.

The trial population was drawn from the Bath and Bristol NHS specialist paediatric CFS or ME service (43 patients were excluded for being too far away). It is debatable how representative such a relatively affluent area is, particularly as a course of the LP normally costs over £600. The charity, the clinic and Phil Parker, the inventor of the LP, are all located there. It’s likely that publicity and word-of-mouth have increased awareness and demand for the course in Bristol and Bath, which in turn may have attracted people with ‘chronic fatigue’ and also made those patients more susceptible to belief in the intervention.

Whether the area is representative or not, trial participants were selected from a clinic led by a paediatrician known for a particular view of the illness and a particular approach to it.

The trial was for ‘mildly to moderately affected’ patients, a group more likely to include those with ‘chronic fatigue’ rather than ME. The criteria used were broad: ‘generalized fatigue, causing disruption of daily life, persisting after routine tests and investigations have failed to identify an obvious underlying “cause”’. National Institute of Health & Clinical Excellence (NICE) guidelines recommend a minimum duration of 3 months of fatigue before making a diagnosis in children… ‘Children were eligible for this study if they were diagnosed with CFS/ME according to NICE diagnostic criteria.’
There is no mention of post-exertional malaise, which is now recognized as an essential part of ME (see, for example, the report to the US Institute of Medicine), nor of impaired cognitive functioning or even disrupted sleep. In fact, there is no more than ‘generalized fatigue’ for 3 months. Different case definitions have been shown to select disparate groups of participants and the use of such broad criteria increases once again the likelihood that patients were included who did not have ME but instead simply ‘chronic fatigue’.

Of the 157 eligible children, 28 declined to participate at the clinical assessment. The majority were ‘not interested’ (15) or said it was ‘too much’ (7).
Patients were only included ‘if the child and his or her family were willing to find out more about the study’.
Only patients who are prepared to join the study can, of course, be included, but such a test immediately filters the participants. Many patients with ME would know the LP to be worthless and so would not want to find out any more about the study.

It’s also true that for any trial involving children parents would need to approve, but again the same sort of bias presents itself. Participants were only chosen if both the parent and the child believed a trial of LP to be valuable. Since they held this belief, they would be invested in the trial and in making it a success, and so more likely to report improvement on self-measurement. Similarly, children may have felt encouraged or even under pressure to take part in the trial and then say they ‘felt better’ afterwards.

Even after this filtering of patients, more self-selection occurred: 59 did not return consent forms.
In other words, over half those eligible (81/157) explicitly (by declining to participate) or implicitly (by not returning the consent forms) showed they did not want to take part in the trial. And then of the remaining 69 who were contacted, another 13 declined. Of the 157 contacted and invited to participate, 94 chose not to.

These numbers are devastating and turn the trial into a farce:
Despite claims patients wanted more information about the LP, most clearly are not interested. The justification for carrying out this misadventure isn’t supported by the evidence.
The suspicion is that patients with ME, who knew the intervention to be worthless, refused to take part, while those with ‘chronic fatigue’ enrolled.
So many patients excluded themselves, leaving only a small number prepared to go through with the trial, that any claims of randomization are baseless. The participants had self-selected.

Even after the trial started another three allocated to the LP group dropped out. A mere 50 out of a possible 157 (32%) completed the trial.
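The percentages follow directly from the recruitment figures as reported in this post (a quick check):

```python
# Recruitment and completion figures as reported above.
eligible = 157       # children eligible at the specialist clinic
chose_not_to = 94    # declined at some stage or were otherwise lost before randomization
completed = 50       # went through with the whole trial

print(f"Chose not to take part: {chose_not_to / eligible:.0%}")  # 60%
print(f"Completed the trial:    {completed / eligible:.0%}")     # 32%
```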

Three in the SMC group left to have the LP outside the trial, which again suggests that most of the patients who took part did so only because they wanted the LP, considered it effective and saw the trial as a way of getting it at someone else’s expense.

In effect, the researchers asked for volunteers from the clinic for a free course of the LP, gave it to some and not others, then asked everyone if they were happy. The likelihood is that those who received it would be grateful and say they were content and those who did not were not. SMILE was not a randomized trial.

Second, the Specialist Medical Care (SMC) group were not properly controlled. ‘Other interventions, such as cognitive behavioural therapy or graded exercise therapy (GET), were offered to children if needed (usually if there were comorbid mood problems or the primary goal was sport-related, respectively).’
In other words those receiving SMC were also offered other interventions and were not just receiving SMC.
While, of course, patients must at all times receive treatments deemed necessary by their care providers, the other interventions mean no proper evaluation can be made of the SMC group relative to the LP group. It is not an SMC group but an SMC-and-sometimes-other-things group.

These interventions were provided ‘if needed’, which again seems reasonable: if patients are deemed to need an intervention they should receive it. But it also undermines the notion of any control. Patients are being assessed during the trial. The intervention of the assessor, one who may offer other possible treatments, is enough to mean the patients are not simply getting SMC.

There was no attempt to replicate with the SMC group the non-specific conditions of the SMC + LP group. All the participants knew full well whether they were receiving a much publicized intervention which they had been told was effective, or not. Indeed, three patients allocated to the SMC group quit the trial and went off to get LP for themselves. This lack of equipoise would have worked both ways: to influence those who were getting the intervention to ‘feel better’ and to make those in the SMC group feel they were missing out and so report negatively.

It has to be acknowledged that some of these flaws were difficult to avoid: no one can be forced to take part in a trial, parents must give their consent, patients are going to know whether they are receiving the intervention or not. But the researchers didn’t take steps to mitigate them, in particular:

Third, the trial was unblinded yet used subjective outcome measures:

‘The following inventories were completed by children just before their clinical assessment (baseline) and follow-up (6 weeks and 3, 6 and 12 months): 11-item Chalder Fatigue Scale; visual analogue pain rating scale; the SF-36; the Spence Children’s Anxiety Scale; the Hospital Anxiety and Depression Scale (HADS), a single-item inventory on school attendance and the EQ-5D five-item quality-of-life questionnaire.’

The only objective measure was school attendance, which is obviously open to confounds. So obviously, in fact, that part way through the study the measure was dropped:

‘During the study, parents and participants commented that the school attendance primary outcome did not accurately reflect what they were able to do, particularly if they were recruited during, or had transitioned to, A levels during the study. This is because it was not clear what “100% of expected attendance” was. In addition, we were aware of some participants who had chosen not to increase school attendance despite increased activity.’
In a commentary on another trial, Jonathan Edwards, emeritus professor of connective tissue medicine at University College London, is clear:

‘The trial has a central flaw that can be lost sight of: it is an unblinded trial with subjective outcome measures. That makes it a nonstarter in the eyes of any physician or clinical pharmacologist familiar with problems of systematic bias in trial execution.’

Some problems in the trial design may have been difficult to overcome, but this failure to use objective outcome measures could easily have been avoided. There was no reason why they could not have used, for example, actimeters. The choice to use subjective measures renders the whole exercise worthless.
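The general point can be illustrated with a toy simulation (not SMILE data; all numbers here are invented): in an unblinded trial with subjective outcomes, a reporting bias alone produces an apparent benefit even when the true effect is zero.

```python
import random

random.seed(1)  # deterministic toy example

def self_reported_change(n, true_effect, reporting_bias):
    # Each participant's self-reported improvement = true change
    # + bias from knowing their allocation + individual noise.
    return [true_effect + reporting_bias + random.gauss(0, 1) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

# The true effect is zero in BOTH arms; only the reporting bias differs.
control = self_reported_change(500, true_effect=0.0, reporting_bias=0.0)
treated = self_reported_change(500, true_effect=0.0, reporting_bias=0.8)

print(f"Control arm mean 'improvement':   {mean(control):+.2f}")
print(f"Treatment arm mean 'improvement': {mean(treated):+.2f}")
# The treated arm appears to do better purely through reporting bias,
# which is why unblinded trials need objective outcome measures.
```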

Edwards made his comment referring to another trial, but it applies equally to SMILE. The trial was deeply, fatally flawed. It was a nonstarter.

The report brushes over the conflicts between SMC and the LP where patients are told to use different approaches. It ignores the possibility of group-think in the LP courses. It disregards the failure to recruit sufficient numbers and the self-selection of those who did participate. It ignores the obvious flaws. And it then concludes with a recommendation for a full study.

That would be an even bigger error. Only one person has benefited from SMILE and that person would gain even more from a full study: Phil Parker. How much he has benefited will be shown in part 3.

There is no evidence the Lightning Process (LP), a mish-mash of elements of cognitive behavioural therapy, neurolinguistic programming, hypnotherapy, life coaching and osteopathy, is anything other than quackery. For decades Phil Parker has made claims for its efficacy, including as a treatment for myalgic encephalomyelitis (ME), but no proper trial has ever supported these claims.

The Advertising Standards Authority (ASA) guidance is clear:

‘To date, neither the ASA nor CAP has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.’

There are people who claim to have been helped, of course, but such claims are made for all bogus therapies. It seems that some people are simply amenable to these interventions. In addition, perhaps there are those who have become stuck in a rut, experiencing a generic chronic fatigue, believing themselves to have ME, and who are helped to kickstart their lives again by the LP. Since there is no biomarker for ME, diagnosis of the illness can be difficult: 40% of patients in an ME clinic may not actually have ME.

There is currently no treatment for ME, so it is understandable that some patients would be easy prey for and would seek more information about interventions hawked about with exaggerated claims.

Parents of children with ME were apparently contacting the charity Association of Young People with ME (AYME) (1) and asking whether it was worth trying the LP. Bewilderingly, Esther Crawley, a Bristol paediatrician and then medical adviser to AYME, instead of telling patients and parents that the LP had no scientific basis and was not worth the considerable amount of money it costs, decided to do a trial. Just as bewilderingly, the SMILE trial received funding and ethical clearance.

First, this trial should never have been allowed. Good science is not just about evidence, but about plausibility, so any such trial immediately gives a spurious credibility to the LP. Asking a question, even sceptically, can offer an implicit endorsement of its premises.

Second, it was the first study of any kind to use the Lightning Process, and it was doing so with children. There had been no opportunity to measure harms: there have been reports of patients who do not respond to the LP who then blame themselves and in desperation contemplate killing themselves. Exposing vulnerable adolescents to such a potential risk would seem particularly irresponsible.

Third, LP patients are made to accept a number of onerous conditions (such as taking responsibility for their illness) before taking the course. It is ethically questionable to ask trial participants to agree to such conditions in order to take part in a trial of a possible treatment for their illness. Making these demands of children would seem even more ethically dubious.

Fourth, patients are told to ignore their symptoms and to resume normal activity (from the SMILE study):

‘It has been a bit confusing, I have to say, because obviously we have got the [Lightning Process practitioners] approach, where, “Right, finally, done this, now you don’t need to do the pacing; you can just go back to school full time.” I think, the physical side of things, YP9 has had to build herself up more rather than just suddenly go back and do that’.

Research, backed up by patient surveys, shows the harms caused by exertion in patients with ME (see Kindlon). The recent report to the US Institute of Medicine found post-exertional malaise to be so central to the illness that it suggested a new name: systemic exertion intolerance disease or SEID. Even in disputed clinical trials such as PACE which use graded exercise therapy, patients are monitored by physiotherapists and nurses and plan a gradual increase in activity. Here service providers with no professional qualifications simply tell child patients that after three sessions in three days they should return to normal activity. It is deeply irresponsible.

Fifth, to anyone with genuine ME, that is ME as defined by the International Consensus Criteria, the Lightning Process is a form of torture. It is a physical torture simply to complete the course. Again from the SMILE study:

‘In addition to specialist medical care, children and their parents in this arm were asked to read information about the Lightning Process on the internet. They then followed the usual LP procedure (reading the introductory LP book or listening to it in CD form) and completing an assessment form to identify goals and describe what was learnt from the book. On receiving completed forms, an LP practitioner telephoned the children to check whether they were ready to attend an LP course. The courses were run with two to four children over three sessions (each 3 hours 45 minutes) on three consecutive days.’

That is a very heavy burden. The homework is taxing enough, but then to undergo three sessions of almost four hours each on three consecutive days is immense. The effort, the intensity and the busyness would be punishment to anyone hypersensitized by the illness.

It is also a form of emotional torture, as fundamental to the process is that patients take responsibility for their health, their illness and their recovery. From here, here, here and here:

‘LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.’

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

* What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

* The Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

* The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

* The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

To take chronically ill patients, who want only to get better, and spend three days attempting to brainwash them into believing their illness and recovery lie within their control is deeply unethical. Adult patients in the days after enduring this nonsense, blaming themselves for lack of improvement, have been left in such depths of despair as to want to take their own life. To expose chronically ill adolescents to such a danger was extraordinarily irresponsible.

Of course, with the broad criteria and the self-selection involved in determining who took part in the trial, it may well be that not a single participant actually had ME but had instead simply ‘chronic fatigue’. That would be even worse, though: the results may show that the LP has some effect with ‘chronic fatigue’ but would be used to claim effectiveness for patients with ME. Many children who genuinely do have ME could be gulled into paying for this nonsense only, potentially, to do themselves considerable harm.

This trial was unnecessary, gave spurious credibility to quackery and was unethical. It was also very poorly conducted, as will be shown in part 2.

1. AYME has now ceased trading and its role has effectively been taken over by Action for ME https://www.actionforme.org.uk/children-and-young-people/introduction/

‘Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.’

Queen Mary University of London (QMUL) is the responsible authority for the PACE trial, but for the purposes of this particular paper the role is played jointly by King’s College London (KCL) and QMUL. KCL and QMUL continue to refuse to release the data.

Iratxe Puebla, Managing Editor of PLOS ONE, and Joerg Heber, Editor-in-Chief of PLOS ONE, have written a blog giving their view of the arguments involved and an insight into some of their thinking in issuing the expression of concern.

One line gives cause for concern and I have written a response. I did post this response as a comment underneath their blog, but after more than 24 hours the comment still remains ‘awaiting moderation’. I have therefore decided to publish it here.

“Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.”
This line is misleading and reveals either ignorance or misunderstanding of the decision in Matthees.

First, the IT’s decisions may be appealed to a higher court. As QMUL chose not to exercise this right, opting instead to accept the decision, it clearly considered there were no grounds for appeal. The decision stands in its entirety and applies without condition or caveat.

Second, court decisions are not applied differently according to how those decisions are reached: they are full and final. Majority verdicts have no less standing. We are all familiar with the work of the UK & US Supreme Courts. Roe v Wade is not mitigated because it was a majority decision. May could not fudge the need for parliamentary approval of Brexit because the UKSC was not unanimous.

Third and above all, it is misleading to suggest there was a lack of consensus in the Tribunal.
The court had two decisions to make:
First, could and should trial data be released and if so what test should apply to determine whether particular data should be made public? Second, when that test is applied to this particular set of data, do they meet that test?

The unanimous decision on the first question was very clear: there is no legal or ethical consideration which prevents release; release is permitted by the consent forms; there is a strong public interest in the release; making data available advances legitimate scientific debate; and the data should be released.

The test set by this unanimous decision was simple: whether data can be anonymized. Furthermore, again unanimously, the Tribunal stated that the test for anonymization is not absolute. It is whether the risk of identification is reasonably likely, not whether it is remote, and whether patients can be identified without prior knowledge, specialist knowledge or equipment, or resort to criminality.

It was on applying this test to the data requested, on whether they could be properly anonymized, that the IT reached a majority decision.

On the principles, on how these decisions should be made, on the test which should be applied and on the nature of that test, the court was unanimous.

It should also be noted that to share data which have not been anonymized would be in breach of the Data Protection Act. QMUL has shared these data with other researchers. QMUL should therefore either report itself to the Information Commissioner’s Office or accept that the data can be anonymized. If it accepts the latter, the unanimous decision of the IT is very clear: the data should be shared.

PLOS ONE should apply the IT decision and its own regulations and demand the data be shared or the paper retracted.

While it is of course true that anything which brings about changes is ‘biological’, implicit in CBT is the notion that responsibility for recovery lies with the patient: if patients simply thought differently, they would no longer be ill. No one disputes psychotherapy can help with a broad spectrum of illnesses, but there is no evidence it can reverse organic damage (injury, infection, inflammation). There is no evidence CBT can address the changes in ME patients found by Lipkin, Montoya and Naviaux.

It is true that ME is likely to prove to be more than one illness, or an illness with more than one sub-type. It is also true, though, that many people diagnosed with ME do not in fact have it. This difficulty in diagnosis, due to the absence of a biomarker, causes problems for clinical trials. Many patients think the criteria used by Dr Crawley are too broad and are likely to include patients who have a generic ‘chronic fatigue’. This view is shared by the US Institute of Medicine and the US Agency for Healthcare Research and Quality; the latter recently stopped recommending CBT for ME because any claims for its efficacy come from trials which used the discredited Oxford criteria.

The other challenge for trials of interventions for ME is to distinguish between placebo, improved coping with the effects of the illness, and genuine treatment of the underlying illness. Since the severely ill do not respond at all to CBT, and the small, subjective, self-reported benefit in the moderately ill is only temporary, many patients think that claims for the effectiveness of CBT are unsafe. They are not convinced FITNET-NHS contains sufficient safeguards against this confound.

Patients agree that it is important we all work together. We would ask, though, that any research uses strict criteria to exclude chronic fatigue; takes into account the physiological changes found in people with ME; is based on plausible theory; is not based implicitly or explicitly on the notion the illness is one of false beliefs; includes meaningful patient involvement, from the broader patient community not just the established charities; benefits all patients, including the most severely affected; has, where applicable, proper controls against confounds; and unconditionally shares all anonymized data with anyone who wants to see it.

Thanks to (& #FF) Samei Huda for advice, though his help should not be seen as any kind of endorsement or agreement.

Sense About Science (SaS) exist to challenge misrepresentation of science and evidence. They advocate openness and honesty about research findings. They encourage people to #askforevidence. They agree that: we need all available information to make informed decisions about health care; hiding half the data is how magicians do coin tricks and shell games; with incomplete data we can only get an incomplete picture; outcome switching is like choosing lottery numbers after watching the draw. They ask people to contact them when there is something wrong so they can make a fuss.

In March, Rebecca Goldin posted a scathing criticism of PACE on the website of stats.org, concluding that ‘the flaws in this design were enough to doom its results from the start’. In an accompanying editorial, Trevor Butterworth of Sense About Science USA was equally critical, saying that ‘the way PACE was designed and redesigned means it cannot provide reliable answers to the questions it asked’.

After such criticism of PACE by Sense About Science USA and stats.org, would SaS, the UK organization, now support patients? They gave no signal they would, so in June I emailed SaS. I got an automatic response acknowledging receipt of my email, but no reply. I waited a few weeks and tried again. The same thing happened. In July, I wrote a letter to Tracey Brown. She didn’t even have the good manners to reply. In September I emailed Professor Paul Hardaker, chair of the trustees, asking if he could help me get an answer to my questions. He replied almost immediately, apologized and passed on my email. Soon after, Julia Wilson sent me an email.

I asked five questions of SaS:

Do they accept Goldin’s analysis of PACE and Butterworth’s criticism as valid?

Why, despite our requests, had they not made a fuss about something said by stats.org to be wrong? Would they?

Could they say where the data, the ‘results’ of the PACE trial, were available?

Did they support the attempt by the PACE investigators to extend the Data Protection Act to prevent sharing of trial data?

Would they allow us a right of reply to Michael Sharpe’s interpretation of his own study which they had been carrying on their site since last October?

By the time I received a reply the Tribunal decision had been made, ordering QMUL to release the data. My third question was redundant.

SaS conceded Sharpe’s piece contravened their editorial policy and added a rider to that effect on their website. They claimed that they had never supported the extension of the DPA, but had not had enough resources to help in the case. Not enough, it seems, to post a tweet or send a single email.

They did not say whether they accepted the analysis and criticism on the stats.org website as valid.

There then began an exchange in which Wilson singularly failed to answer simple questions and continued to use language like a politician trying to avoid an issue. They did, though, remove the part of Sharpe’s article in which he made claims for his own study.

Eventually Wilson stopped responding to my emails, so I copied in Hardaker again. This time she did reply and she did finally state that SaS accepted Goldin’s analysis as valid. According to SaS the PACE trial is flawed. Wilson then ended the exchange.

They still refuse to help in any way. They have not welcomed the Tribunal decision. Even though they agree PACE is flawed, they are not prepared to do anything about it. They do not say whether they accept Butterworth’s criticism as valid.

A number of questions remain for SaS:

Why did they ignore my emails and only answer when I contacted the chair of the trustees?

Why did they allow Sharpe to promote his own study, contrary to their own editorial policy?

Why did they not push for the release of PACE trial data?

Why did they not support patients in their case against QMUL when QMUL were attempting to extend the DPA, which would have had a stifling effect on trial transparency generally?

Why have they never welcomed the Tribunal decision?

Why did they take so long and why were they so reluctant to say they accept Goldin’s analysis as valid?