Truth Behind Studies | Natural Health Newsletter

So, What’s the Story with Studies?

Let's start the New Year with something a little bit different--a scientific study unlike any you've seen before.

Scientific studies--and especially clinical trials--are considered the gold standard for the evaluation of most new medical treatments. If a treatment is backed by several studies--regardless of how many studies it took to get there--it's considered "good to go" in the medical community. On the other hand, if an alternative treatment comes up short in just one well-publicized study, it is derided and dismissed by the medical community as useless and a waste of money, or even worse: dangerous. More curiously--but perhaps eminently understandable considering human nature and the aura of gospel attached to studies by doctors--once "newer" studies disprove older studies concerning the efficacy of a medical treatment, it can still take decades for the medical community to act on that knowledge and disavow the now useless/dangerous treatment. Such is the miracle of studies.

For our readers, this is nothing new as I've talked about these issues extensively over the years--to the point of exhaustion. Now, don't get me wrong. I'm, by no means, trying to say that studies are useless or bad. On the contrary, they are responsible for the great advancements in science over the past several centuries. They are invaluable. But, that said, they are not what most people think they are. No single study or set of studies should be considered gospel. There are just too many places where errors can enter into a study--and once peer-reviewed and approved, there are just too many journals in which that original error can be referenced and transferred to study after study, continually reinforcing the original error. For these reasons, studies should not be thought of as absolute but, rather, as invaluable trail signs that eventually lead us on a zigzagging path to a place of true knowledge.

Leave it to researchers from Harvard to use a formal, self-referential trial to prove the point.

Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial

The objective of the study, which was published over Christmas in the BMJ, was to determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft.1 Yes, you read that correctly: they conducted a randomized controlled trial to determine if parachutes served a function when jumping from planes. And wait until you see the results before you jump (all puns intended) to any conclusions as to the merits of the study.

A total of 92 individuals aged 18 and over were screened and surveyed regarding their interest in participating in the PARACHUTE trial. Among those screened, 69 (75%) were unwilling to be randomized (I wonder why) or were found to be otherwise ineligible by investigators. In the end, 23 agreed to enroll and were randomized. The study was conducted using private and commercial aircraft between September 2017 and August 2018. Participants jumped from an aircraft (airplane or helicopter) either with a parachute (noted in the study as the "intervention" group) or with an empty backpack (noted in the study as the "control" group). The main outcome measured was the composite of death or major traumatic injury (defined by an Injury Severity Score over 15) upon impact with the ground, measured immediately after landing.

Now, to avoid your own personal injury, please remain seated before reading any further. The results of the study found that parachute use did not significantly reduce death or major injury (0% for parachute v 0% for control; P>0.9). This finding was consistent across multiple subgroups. To put that in plain English, none of the study's participants--whether they wore a parachute or not--died or were injured within five minutes or 30 days of the jump. As a result, the researchers concluded that parachute use does not reduce death or major traumatic injury when jumping from aircraft.

Make no mistake. This was an actual study with real test subjects jumping from planes and real data collected and analyzed. It was replete with scientific jargon attesting to both the intelligence of the researchers and the careful design of their study, as well as a series of compelling tables.

As they wrote in the BMJ, should the results be reproduced in future trials, it could save the global economy billions of dollars spent annually on parachutes to "prevent injuries related to gravitational challenge."

"We have performed the first randomized clinical trial evaluating the efficacy of parachutes for preventing death or major traumatic injury among individuals jumping from aircraft. Our groundbreaking study found no statistically significant difference in the primary outcome between the treatment and control arms. Our findings should give momentary pause to experts who advocate for routine use of parachutes for jumps from aircraft in recreational or military settings."

Okay, at this point, you're obviously asking, "What's going on here? There has to be a catch." And there is. Unfortunately, they stated it in typical science-study-speak, which means it's easy to have no idea what they're saying. But hang in there, and all will be made clear:

"A minor caveat to our findings is that the rate of the primary outcome was substantially lower in this study than was anticipated at the time of its conception and design, which potentially underpowered our ability to detect clinically meaningful differences, as well as important interactions. Although randomized participants had similar characteristics compared with those who were screened but did not enroll, they could have been at lower risk of death or major trauma because they jumped from an average altitude of 0.6 m (SD 0.1) on aircraft moving at an average of 0 km/h (SD 0). Clinicians will need to consider this information when extrapolating to their own settings of parachute use."

Or, as they stated in their conclusion in simpler terms: "The trial was only able to enroll participants on small stationary aircraft on the ground." In other words, the jumps were all from a height of about two feet, suggesting, in the words of the researchers, "cautious extrapolation to high altitude jumps." In a further exploration of that vein of thought, they stated, "The study also has several limitations. First and most importantly, our findings might not be generalizable to the use of parachutes in aircraft traveling at a higher altitude or velocity."
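The statistical issue the researchers flag ("underpowered") is easy to see with a little arithmetic. When a trial observes zero events, the classic "rule of three" says it can only rule out event rates above roughly 3/n. Here is a minimal sketch of that calculation; the per-arm size of 12 is an assumption for illustration, since the paper only reports 23 participants randomized in total:

```python
import math

def max_rate_consistent_with_zero_events(n, confidence=0.95):
    """Exact upper confidence bound on an event rate when 0 events
    occur in n trials: solve (1 - p)^n = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / n)

n = 12  # assumed jumpers per arm (hypothetical; the trial randomized 23 total)
upper = max_rate_consistent_with_zero_events(n)
print(f"0 deaths in {n} jumps only rules out death rates above {upper:.0%}")
print(f"quick 'rule of three' approximation: {3 / n:.0%}")
```

In other words, even taken at face value, zero deaths among a dozen two-foot jumps is statistically compatible with a death rate of one in five--hardly a basis for throwing away your parachute.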

The researchers then added the following statement to their conclusion, revealing the actual intent of the entire exercise.

"When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice."

To once again put that into English, the scientists argued that their trial highlights how misleading scientific studies can be.

"The PARACHUTE trial satirically highlights some of the limitations of randomized controlled trials. Nevertheless, we believe that such trials remain the gold standard for the evaluation of most new treatments. The PARACHUTE trial does suggest, however, that their accurate interpretation requires more than a cursory reading of the abstract. Rather, interpretation requires a complete and critical appraisal of the study."

So, What Was the Point of this Exercise? Where Do We Stand with Studies?

The parachute study illustrates just one way studies can misrepresent reality. What the researchers did not mention is that there are, in fact, many others. For example:

It first needs to be remembered that there are many different types of studies, and they vary in reliability--everything from case-control retrospective studies, to cohort studies, to interventional (aka experimental) studies such as the parachute study, all the way on up to clinical trials. Note: The perceived gold standard in studies is the double-blind, placebo-controlled trial or, barring that, the randomized controlled trial--the methodology used in the parachute study. But even that guarantees nothing; errors, bias, and false conclusions can enter into any study, thereby producing false results.

Animal studies are not what you think. Only about 4-20% of mouse studies statistically translate to humans. Although mouse trials are useful, they are far from definitive.2 Things behave very differently in humans than in animals. Think chocolate. It's one of mankind's greatest addictions, but it's deadly to dogs. Think about that for a moment. The next time you see a "groundbreaking" animal study about some human disease making headlines in the world media, remember that it has, at best, a one-in-five shot at being relevant to humans. That's one in five.

Then of course, there are researchers who fabricate and falsify the data used in their studies to advance careers or a specific agenda. Believing in those studies is obviously a big mistake.

And what about the researchers who are paid to put their names to a study they never researched or wrote but that was, in fact, written by a drug company ghostwriter from who knows what data?

Or did you know that it is totally legal for drug companies to selectively publish any favorable studies of their new drugs while unfavorable studies are hidden away, never to see the light of day? Yes, they're legally allowed to do that, and as a result, roughly half of all clinical trials--the negative, unfavorable, unflattering ones--never get reported.3
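The damage done by burying unfavorable trials is easy to demonstrate with a toy simulation. The sketch below is hypothetical (not from the article or the BMJ study): a treatment with zero real benefit is tested in 2,000 simulated trials, and only the flattering, "statistically significant" ones get "published." The published record then shows a treatment effect that does not exist:

```python
import math
import random

def two_prop_p_value(successes_a, successes_b, n):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pa, pb = successes_a / n, successes_b / n
    pooled = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return 1.0
    z = (pa - pb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
TRUE_RATE = 0.30  # drug and placebo both "work" 30% of the time (no real effect)
N = 100           # patients per arm
published = []
for _ in range(2000):
    drug = sum(random.random() < TRUE_RATE for _ in range(N))
    placebo = sum(random.random() < TRUE_RATE for _ in range(N))
    # only trials that favor the drug AND reach p < 0.05 see the light of day
    if drug > placebo and two_prop_p_value(drug, placebo, N) < 0.05:
        published.append((drug - placebo) / N)

print(f"{len(published)} of 2000 null trials look 'significant' and favorable")
print(f"average published effect: {sum(published) / len(published):.1%}")
```

By chance alone, a few percent of these no-effect trials clear the significance bar in the drug's favor, and because only those get reported, the "published" average effect comes out strongly positive even though the true effect is exactly zero.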

And then there's "outcome switching"--changing a study's outcome criteria midstream, swapping the original objectives that failed testing for more "agreeable" outcomes that seem to indicate success.

What about studies based on false assumptions like the parachute study? Yes, in this study the false assumption was intentional and satirical, but in most it is not. And unlike the parachute study, because the false assumption is not readily apparent, these studies can still become medical canon.

And what about studies based on "correct" data that come to dubious conclusions based on that data? For example, running a study on synthetic vitamin isolates but concluding that those negative results are an indictment of the natural forms of the vitamin as well is like running a study on the quality of fake Rolex watches and coming to the conclusion that all Rolex watches are junk. They are not. All your study proved is that fake Rolex watches are junk. Now, compound your error by promoting those erroneous conclusions to the gullible world press, desperate for any outrageous headline that can sell papers or boost ratings. It is false logic. It is sophistry. It is unethical. And it is ultimately dangerous in that it will encourage many people to make bad health choices when it comes to nutritional supplementation.

A related problem is when mainstream media misunderstands and publishes false narratives about medical studies--narratives that are not actually contained in the studies themselves. In fact, truth be told, for budgetary reasons, most mainstream health editors don't even read the original studies about which they publish stories. They merely republish conclusions written up for them by one or two people working for news services such as Reuters and the Associated Press, who frequently get it wrong. The problem is that with dozens of TV stations, newspapers, and online magazines republishing the same false narrative sent to them en masse by the news services they all subscribe to, fake news quickly takes on the aura of reality.

Of course, we also have false studies that get referenced over time by other studies, so that their false narratives become gospel through repetition. For example, the study that found that flu shots are 50-90% effective. They are not.

It should also be noted that there are often conflicting studies and huge disagreement within the medical community about which studies represent reality. The problem is that most people only get to hear one side of the debate--the side represented by the largest journals. For example, most people think that the medical community completely buys into the cholesterol theory of heart disease. They don't. In fact, there is huge debate on the topic and huge disagreement on the value of statin drugs. But you never hear about that, do you?

Surprise: as it turns out, according to the US government's Office of Technology Assessment (OTA), only 10% to 20% of all procedures currently used in medical practice are supported by controlled clinical studies. That's it--just 10-20 percent! And that's an optimistic number, as it assumes that all studies come to valid conclusions--something we know is categorically not true. And you probably thought everything in medicine was backed by studies?

Oh yes, and did I mention that according to a study published in the Journal of General Internal Medicine, only 11% of physicians rely on evidence-based medicine for all their treatments?4 When push comes to shove, some 90% of doctors like to go with their gut at least some of the time, even if their gut runs contrary to studies.

Now don't misunderstand what I'm saying. Studies are not useless. In fact, they are the foundation of modern medicine and modern science. Without them, we might still be treating diseases by regulating bad humors and expelling evil spirits. BUT--and this is a very, very important "but" --clinical trials are not the be-all and end-all of scientific knowledge. They often contain serious flaws and biases, not to mention being subject to the vagaries of human greed and duplicity. In the end, individual clinical trials should be considered as divining rods that at best point us in the direction of important conclusions. Certainly, multiple studies that come to the same conclusion are a better indicator than individual studies--but even then, the result is not guaranteed. As mentioned above, if the same bias is carried from study to study (confusing synthetic vitamin E with full complex, natural vitamin E, for example), study after study can end up with the same flawed conclusion, just reached multiple times. But once we understand the limitations of the different types of studies and the way errors can creep into them, we can see that decades, centuries, and even millennia of anecdotal information about herbal and nutraceutical remedies can be equally effective in "possibly" pointing the way to important healing discoveries.

And for those who might say, "Yes, Jon, that's all well and good, but who are you to attack the credibility of clinical studies? What studies do you have to support your point of view?" Well, thanks to Harvard University, I can now say, "Here ya go."

You sure NAILED that! Thank you for publishing this!
Public needs to know how terribly strange what passes for “research” can really be. One lady did some research into research...and [I quote an extremely conservative number] found more than 50% of research is manipulated to sell products.
Believe me, after working at a VA hospital that garnered research funding, I saw 1st hand what goes wonky--and it was DELIBERATE. From setting up the parameters, to prepping the site to run it, to gathering data, to reporting the data...it was ENTIRELY manipulated......and then, the product [in this case, it was those radiation beads that get implanted next to tumors--it was then called “Brachy Therapy Implants”]...went right to marketing.
They kept changing the parameters/rules; they faked the setup; they lacked proper equipment to prevent staff getting over-exposure; they instructed staff to use a pen-type dosimeter, wrong; they supposedly hired a “3rd party” to evaluate staff exposure, from their cardboard dosimeter tags, but it was another government agency; they manipulated the data from those badges, and collected badges in the middle of high-dose patients, to make it appear staff had far less exposure than they really did; they never gave “informed consent” to participants--that was more a cheerleading talk; it was pretty epic-badly done. People were harmed, but management denied radiation could have caused any harm.
They did similar with nicotine gum..but that was not as complicated.
So, it’s really important to read the entire research paper, really understand how it was set up, what they were looking for, how they conducted it, and how they evaluated it. Be aware: the synopses that precede most research papers aren’t always related to the data in the full body of the papers!

This is so true. They want to sell a pill, so they tailor the study to create a narrative so that it looks like said pill will cure something - but first - they have to create the "disease" and "show" the population that this "disease" is "deadly" and their pill (or injection, infusion, powder, etc.) is the "cure" for said disease. They have done this with cholesterol, blood pressure, diabetes, osteoporosis, acne, cancer; you name it. 90% of the supposed diseases, I believe, are made up. I also believe people's cholesterol, blood sugar, blood pressure, etc., go up and down and we shouldn't screw around with nature.
