
All rapid responses

Rapid responses are electronic letters to the editor. They enable our users to debate issues raised in articles published on thebmj.com. Although a selection of rapid responses will be included online and in print as readers' letters, their first appearance online means that they are published articles. Letters are indexed in PubMed.

We have read with great interest the paper by Yeh et al., "Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial"(1). Their work will definitively help patients with acrophobia and inform the search for treatment options.
Although this is a randomized controlled trial and the findings are of great interest, we have to point out some issues in their work:

1. The sample size (23 participants) needs to be larger.
2. The authors should provide documentation of Ethics Committee approval.
3. The study was not double-blinded.
4. Did the individuals assigned to the parachute group open their parachutes upon jumping? There may be a bias affecting the statistical analysis.
5. Student's t test is inappropriate for small sample sizes. Please use the Mann-Whitney test.
6. A longer follow-up of the study group is needed.
7. Finally, more research is needed to validate the results.
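Point 5 can be illustrated with a minimal sketch of the Mann-Whitney U statistic the letter recommends. The injury-severity scores below are hypothetical, not data from the trial:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y.

    U counts, over all (x, y) pairs, how often an x-value exceeds
    a y-value; ties count one half. No distributional assumption
    about the data is needed, unlike Student's t test.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical injury-severity scores for two small groups
parachute = [0, 0, 1, 0, 2]
control = [0, 1, 0, 2, 0]
print(mann_whitney_u(parachute, control))  # → 12.5
```

A useful sanity check on the implementation: the two one-sided statistics always sum to the number of pairs, i.e. U(x, y) + U(y, x) = len(x) * len(y).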

I must confess that I did not read the article in its entirety; however, I am gobsmacked by the self-evident question being asked.
Is this satire? Are you saying that we should not research commonly held beliefs? Er, what?

The unique contribution of Yeh et al to the medical literature is insightful, engaging and amusing in equal measure(1). However, it is founded on the premise that "evidence supporting the efficacy of parachutes is weak and guideline recommendations for their use are principally based on biological plausibility and expert opinion". This is not accurate.

Parachutes are rigorously tested on a multiplicity of variables and have to comply with the extremely high standards of the Parachute Industry Association(2). Parachute testing proceeds in phases, initially using surrogate markers such as the integrity and deformation of dummies dropped from various altitudes with the test parachute deployed, then escalating to human tests with alternative safety measures in place should the parachute fail. Few, if any, would use a parachute that had never been tested, even at low altitudes, as inappropriate deployment can cause harm.

The parachute principle, the notion that no one would do a randomised controlled trial of parachute use because parachutes are obviously efficacious, may possibly underlie the sometimes permissive approach to the approval of medical devices(3). Well-intentioned and apparently scientifically sound modifications of recognised designs can have catastrophic effects on the efficacy of implants and devices, as has been seen with various hip replacements. Contrary to this ubiquitous belief within the medical profession, there is no such complacency in the aviation industry; even minor modifications of a recognised design undergo robust and exhaustive evaluation(2). This should serve as a salutary tale for our profession.

Yeh et al have failed to undertake one of the most important and basic activities in a clinical trial, namely ascertaining how well subjects adhered to the intervention, thus completely nullifying the results. Deviations in patient adherence in clinical trials can lead to biased results, reduced statistical power, and impaired causal inferences [1]. Zero adherence results in zero usefulness and wasted time and money; it also suggests that the study was unethical.

The authors stated that "the parachute[s] did not deploy"; however, the illustration of the young woman jumping with arms outstretched shows that she did not even attempt to deploy hers. In other words, the subjects failed to comply with the proper procedure for implementing the intervention. Nevertheless, the authors repeatedly (18 times by my count) included the phrase "parachute use". To give a pharmacological analogy, it is as if tablets containing an active treatment were issued to the subjects but never taken. It is therefore hardly surprising that there was no difference between the active and placebo conditions.
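The pharmacological analogy can be sketched as a toy simulation. The numbers below are entirely illustrative (an assumed 50% baseline injury risk, not the trial's data): when adherence is zero, outcomes depend only on actual use, so the randomised arms cannot differ.

```python
import random

random.seed(1)

def simulate_trial(n, adherence):
    """Toy trial: outcome depends only on *actual* parachute use.

    Each subject is randomised to an arm; those in the parachute arm
    deploy with probability `adherence`. Injury occurs with an assumed
    50% probability unless the parachute was actually used.
    Returns the injury rate in each arm.
    """
    treated, control = [], []
    for _ in range(n):
        arm = random.choice(["parachute", "none"])
        used = (arm == "parachute") and (random.random() < adherence)
        injury = 0 if used else int(random.random() < 0.5)
        (treated if arm == "parachute" else control).append(injury)
    return sum(treated) / len(treated), sum(control) / len(control)

print(simulate_trial(10000, adherence=0.0))  # arms look identical
print(simulate_trial(10000, adherence=1.0))  # benefit appears
```

With zero adherence the two injury rates are statistically indistinguishable; with full adherence the treated arm's rate collapses to zero, which is exactly the letter's point about "zero adherence, zero usefulness".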

The BMJ is to be congratulated on publishing this study, to highlight the importance of zero adherence to an intervention in a clinical trial. It is unfortunate that adherence was nowhere mentioned in the text.

So, perhaps the only reasonable response to the results of this study is "Yeh, Yeh".

It is great to read this satirical piece, coming only 8 days after our reflections on "personalized evidence based medicine", or briefly, "precision medicine" (https://www.bmj.com/content/363/bmj.k4245). There we framed the interpretation of the effects of interventions for individuals as "reference class forecasting": implicit predictions for an individual are made on the basis of outcomes in a reference class of "similar" patients treated with alternative interventions. The parachute study reminds us to always think about such a reference class. In this case I will be happy to keep jumping out of parked aeroplanes with my kids without a parachute, provided the height is less than 1 metre; then I may dream of, rather than truly belong to, the different reference class of those aboard flying planes.

What this satirical article highlights is not a flaw in the foundations of evidence-based medicine and well designed double-blind controlled clinical trials, but rather a flaw in their application, a flaw evidently still not well recognized by the British Medical Journal. A well designed double-blind study should compare the safety and efficacy of the best available current treatment with the safety and efficacy of the study treatment. A well designed randomized double-blind controlled trial need not be placebo controlled.

The flaw of this study lies in its statistical analysis, which ignores what Bayes' theorem calls the prior probability. Studying an ineffective treatment against placebo does not yield useful scientific medical information. Studying ineffective treatments against placebo, and against non-treatment, is a common ruse used to imply that treatments such as homeopathy, aromatherapy, healing touch, acupuncture, and chiropractic have scientific evidence of efficacy.
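The point about prior probability can be made quantitative with a short sketch. The power (0.8), alpha (0.05), and priors below are illustrative assumptions: by Bayes' theorem, a "statistically significant" result for a biologically implausible treatment is still most likely a false positive.

```python
def posterior_true_positive(prior, power=0.8, alpha=0.05):
    """Probability that a 'positive' trial reflects a truly effective
    treatment, given the prior probability of efficacy, the trial's
    power, and its alpha level (Bayes' theorem)."""
    true_pos = power * prior        # P(positive result | effective) * P(effective)
    false_pos = alpha * (1.0 - prior)  # P(positive result | ineffective) * P(ineffective)
    return true_pos / (true_pos + false_pos)

print(round(posterior_true_positive(0.01), 3))  # implausible therapy → 0.139
print(round(posterior_true_positive(0.50), 3))  # plausible therapy → 0.941
```

With a 1% prior, a p < 0.05 "win" leaves only about a 14% chance the treatment works; the same result for a plausible therapy is convincing. This is the sense in which trials of implausible treatments against placebo mislead.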

Dear Editor,
We read with great interest, some delight and little concern Yeh et al.'s "Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial"(1). The authors claim to have performed the first randomized clinical trial on this issue. Although their findings are of great interest and finally shed some (more) light on this very controversial topic, we have to point out some issues in their work:
1. Methods, paragraph 3: "Responses were transmitted to an online database upon landing for later analysis". FAA guidelines clearly state that portable electronic devices (PEDs) are to be used in airplane or flight mode only, "gate to gate"(2). The act of landing (as in "upon landing") is defined as the "fact of an aircraft arriving on the ground"(3). The authors' methods were therefore clearly in violation of FAA guidelines.

2. Regarding the 17th paragraph of the declaration of Helsinki:
"All medical research involving human subjects must be preceded by careful assessment of predictable risks and burdens to the individuals and groups involved in the research in comparison with foreseeable benefits to them and to other individuals or groups affected by the condition under investigation. Measures to minimise the risks must be implemented. The risks must be continuously monitored, assessed and documented by the researcher."

One must ask, have Yeh et al. taken into account the numerous, albeit anecdotal, reports of parachute use before planning their study? Have the risks of jumping from the aircraft in a stationary setting from a height of 0.6 m been sufficiently anticipated? Fig 2 clearly shows a subject not wearing any safety devices such as a helmet or padding, who seems unaware of the potential risk she is taking (hence the smiling face?).

3. Regarding the 21st paragraph of the declaration of Helsinki: "Medical research involving human subjects must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and adequate laboratory and, as appropriate, animal experimentation. The welfare of animals used for research must be respected."

Despite having performed a thorough Medline/PubMed search, Yeh et al. unfortunately seem to have overlooked our study from 2016, "Does usage of a parachute in contrast to free fall prevent major trauma?: a prospective randomised-controlled trial in rag dolls"(4), which was in fact the (very?) first randomised clinical trial regarding the use of parachutes. In our study we clearly showed the benefits of parachute use in an experimental setting and proved it to be not only beneficial but potentially life-saving!

4. Paragraph 33 of the declaration of Helsinki: "The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best proven intervention(s), except in the following circumstances:
Where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or
Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention
and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention.
Extreme care must be taken to avoid abuse of this option."

Yeh et al. have unfortunately not tested the intervention against placebo, because the intervention did not happen (the parachute did not open during free fall from 0.6 m).
In summary, we find our colleagues' study important, but randomizing human subjects to either wear a parachute or an empty backpack when jumping from aircraft seems, especially taking our scientifically sound results into account, contrary to ethical standards and leaves room for an explanation.

Sincerely
Till Burkhardt (on behalf of most, but not all of the authors (4))

According to the authors, the new parachute study points to the problem that randomized clinical trials select samples less predisposed to benefit from the intervention, a phenomenon that would promote false negative studies. The authors explain that this happens because patients who are more likely to benefit from therapy are less likely to agree to enter a study in which they may be randomized to non-treatment. This would make clinical trial samples less sensitive to benefit detection, as a partial exclusion of patients with a greater chance of therapeutic success would take place.

Caricatures serve to accentuate true traits. However, if we were to characterize clinical trial samples (the ideal world), they tend to be more predisposed to finding a positive result than the real-world target population. Therefore, this study is not a caricature of the real-world clinical trial.

Thus, the present article should lose its caricature status and be considered just a funny joke, with no ability to anchor our minds towards better scientific thinking.

As proofs of concept, clinical trials rely on highly treatment-friendly samples, obtained by applying restrictive inclusion and exclusion criteria. Differences between patients who do and do not agree to enter a study are not sufficient to generate a sample less predisposed to treatment benefit than reality.

The "joke study" commits an unusual sampling bias: it allows the inclusion of patients who do not need treatment. It would be the same as if a study aimed at testing thrombolysis allowed the inclusion of any chest pain, regardless of the electrocardiogram. Doctors who already believe in thrombolysis would see the electrocardiogram, thrombolyze ST-elevation patients, and release those who do not need thrombolysis to be randomized to drug or placebo. A joke without scientific value.

Caricature studies are useful when they anchor the mind of the community to a sharper criticism of the results of studies. However, in this case, the anchoring occurred in the opposite direction.

Firstly, when we think of the scientific ecosystem, the biggest problem is false positive studies, mediated by several phenomena: confounding bias in observational studies, outcome reporting bias, conclusions skewed toward positive findings (spin), and finally citation bias favoring positive studies. Behind all this lies the innate predilection of the human mind for false affirmations, to the detriment of true denials.

Secondly, there is the problem of efficacy (ideal world) versus effectiveness (real world). Clinical trials aim to evaluate efficacy, which can be interpreted as the intrinsic potential of the intervention to offer clinical benefit: "Does the treatment have beneficial properties?" Clinical trials therefore represent the ideal condition for the treatment to work. In the face of a positive clinical trial, we must always reflect on whether this positivity will be reproduced in the real world, which constitutes effectiveness.

Of course there is the problem of false negative studies and it should also be a concern. But the bias suggested by the funny parachute study does not represent an important false-negative mechanism. The most prevalent mechanisms of false negatives are reduced statistical power, excessive crossover in the intention-to-treat analysis and inadequate intervention applicability.

My concern is that a reader of this funny study takes the following message home: if a promising study is negative, consider that clinical trials tend to include patients less predisposed to the benefit. This message is wrong, as clinical trials tend to select samples more predisposed to the benefit. Of course there are cases, but if we are to anchor our mind, this should be in the direction of the most prevalent phenomenon.

My prediction is that this study will come to be cited by legions of believers not satisfied with negative results from well-designed studies. Just as the seminal article of the parachute has been used inadequately as a justification for many treatments that have nothing to do with the parachute paradigm under the premise that "there is no evidence at all." A recent study by Vinay Prasad has shown that most of the treatments characterized as parachute paradigm by medical articles are in fact not, many with clinical trials of negative results.

The great attention received by the parachute clinical trial is an example of how information sharing on social networks occurs. The main criterion for sharing is the interesting, unusual or amusing character, to the detriment of the veracity or usefulness of the information. In the appeal for novelty, fake news end up getting more attention than true news, as was recently demonstrated by a Science study. Although the article we are discussing should not be framed as fake news, it is neither a good caricature of the real world.

The work in question is not a caricature of the ecosystem of randomized clinical trials. It is a mere joke with the potential to bias our minds for the inadequate idea that the heterogeneity between clinical trial samples and the target population of the treatment reduces the sensitivity of these studies to detecting beneficial effects. In fact, specificity of clinical trial samples makes them easier in detecting beneficial effects than if the entire target population were included.

When the learning of science gets a fun approach it arouses great interest of the biomedical community. But we must always ask ourselves: what is the message implicit in the caricature? It is the first step for critical appraisal of such “thought experiments”.