I’m one of those odd people who enjoy distance running. I end up spending a lot of time in the company of other runners. And when we’re not running, we’re usually griping about our running injuries. As the cohort that I run with ages, the injuries are getting more prevalent. Besides the acute conditions, the chronic problems are starting to appear. Our osteoarthritis years are here.

As the available pharmacist, I get a lot of questions about joint pain. What’s reassuring, I tell them, is that they shouldn’t blame running. Osteoarthritis is common — the most frequent cause of joint pain. For some of us, it starts in our twenties, and by our seventies, osteoarthritis is virtually certain. Regardless of your level of exercise, the passage of time brings the classic osteoarthritis symptoms — joint pain and morning stiffness that worsen over time.

Osteoarthritis progresses gradually. Blame biomechanics and biochemistry. It starts with a breakdown of the cartilage matrix. Stage 2 progresses to erosion of the cartilage and a release of collagen fragments. Stage 3 is a chronic inflammatory response. The goals of treatment are to reduce inflammation and pain, and stop progressive disease. There’s no drug therapy that’s been shown to actually improve joint function. Reduce pain, or slow inflammation, yes. Analgesics, like Tylenol, and anti-inflammatories are mainstays. But repair damage? Sorry: you lose it, it’s gone. Chondrocytes don’t seem to be able to repair the overall matrix — which is made mainly of collagen.

I don’t advise people with joint pain and morning stiffness to stop running. (It’s futile to suggest this to a runner, anyway.) The idea you can “wear out” your joints is a popular image, but it’s inaccurate. In most cases exercise can continue — exercise doesn’t seem to accelerate the rate of osteoarthritis development. When the pain is persistent, I’ll suggest a physician evaluation, and describe some non-drug and non-prescription approaches to symptom management that might be appropriate.

I’m prepared now, for the inevitable question that follows: “But what about supplements? What do you think of X?” Glucosamine is a common inquiry (discussed at length here) as is chondroitin (which it is often co-packaged with). As with most supplements, their popularity is not related to good clinical evidence. Rather it seems to be secondary to perceptions of efficacy, driven by lots of anecdotes and general awareness. But the message that it’s ineffective may be getting out — I’m getting fewer inquiries about it these days. The market has shifted, and there are new products on the shelves for those with sore joints.

The latest supplement trend for joint pain may be collagen. Collagen supplements have been around for years, and their traditional popularity rests on purported effects on skin and nails. There are regional trends, with collagen-rich foods and even restaurants gaining popularity in Japan. (Anyone want a meal of pig’s feet?) The enthusiasm for collagen is now being more actively tapped for the arthritis market – perhaps it’s a way to drive demand among all ages.

Genacol, a collagen supplement, is currently being actively marketed here in Toronto, and seemingly distributed worldwide. This particular brand caught my attention not just because I see the giant image of Indy car driver and spokesperson Alex Tagliani everywhere, but because of an explicit efficacy claim made in the advertisements and on the website: “Scientifically proven to reduce joint pain.” There’s no other supplement with convincing evidence for any type of joint pain. So I went looking for the proof.

Why collagen, and what does it have to do with our joints? Collagen is the major component of connective tissue in the body — it’s about a quarter of our body mass, and is found in skin, muscle, tendons, etc. Hence the pig’s feet. All animals are mainly collagen: it’s the most abundant protein in the animal kingdom. Your leather coat? Think of it as a collagen coat. In fact, that’s where your Genacol is coming from: the collagen source for this particular supplement is European bovine skin collagen.

If we don’t have enough collagen, we’re in big trouble. There are several different types in the body, each with its own role. Remember that M. Night Shyamalan movie Unbreakable with Bruce Willis and Samuel L. Jackson? Jackson’s character had Lobstein syndrome (osteogenesis imperfecta), a congenital disease that results in the failure to produce type 1 collagen, leaving exceptionally brittle bones.

Or think of scurvy — now rare, but once the worst thing about extended sea voyages. Vitamin C is a necessary cofactor for collagen synthesis. Without it, multiple systems are affected, and biological disaster ensues, starting with your teeth falling out. History’s first clinical trial was performed by James Lind, a British Royal Navy physician, to compare a variety of scurvy cures, including the one that worked: fresh lemons.

Both scurvy and Lobstein’s are basically collagen shortage diseases. But, interestingly, guess what doesn’t work for either scurvy or Lobstein’s? Eating collagen. That’s because collagen is a protein: a triple-helix, long chain protein. It’s the product of an elaborate synthesis process that occurs throughout the body. When we consume collagen, usually in the form of food, the long chain proteins are broken down during digestion to their original amino acids. Only then can they be absorbed. Once absorbed, these amino acids are available as building blocks to support collagen synthesis throughout the body. So from a dietary perspective, your body doesn’t care (and can’t tell) if you ate a collagen supplement, cheese, quinoa, beef, or chick peas — they’re all sources of protein, and indistinguishable by the time they hit the bloodstream. The body doesn’t treat amino acids derived from collagen any differently than any other protein source. For this reason, the idea that collagen supplementation can be an effective treatment for joint pain, osteoarthritis, or any other condition, is highly implausible, if not impossible in principle.

But about that proof …

The company links to a press release with its scientific evidence for Genacol. Two clinical trials are described. A search of PubMed reveals that neither has been published. That doesn’t prevent the efficacy claims:

With Genacol, a “statistically significant” beneficial effect with relation to pain was observed in the subjects: the best result that can be obtained during a clinical trial. “These two new trials demonstrate without a doubt that millions of people suffer less with Genacol, proving the impeccable quality of the products under this brand,” says Guy Michaud, President of Groupe Genacol.

Not quite. Setting aside the grandiose claims, statistical significance isn’t enough — we want clinical significance. A tiny change in pain may be enough to be statistically significant — but is it relevant in the real world? We only have the abstracts, so the information is incomplete. However, as a general rule, if study authors don’t mention effect sizes in the abstract, it’s probably because they’re not worth mentioning.

The first trial, “A 6-month randomised, double-blind, placebo controlled study to assess the clinical benefit of a food supplement made of a proprietary collagen hydrolysate in subjects with joint pain at the lower or upper limbs or at the lumbar spine”, randomized 200 patients to 1200mg of collagen daily, or placebo. The results don’t sound promising:

At 6 months, the proportion of clinical responders to the treatment, according to VAS scores, was significantly higher in the collagen hydrolysate (CH) group 51.6%, compared to the placebo group 36.5% (p<0.05) , .[sic] However, even if there was no significant difference in the number of clinical responders at 3 months (44.1% vs 39.6%, p=0.53), there was still a higher proportion of responder in the group using collagen hydrolysate demonstrating a certain evolution. Using other pain and function assessment tools (i.e. questionnaires Lequesne, DASH or EIFEL) no significant effect of collagen hydrolysate was observed compared to placebo.
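As a rough sanity check on those 6-month responder rates, here is a minimal two-proportion z-test in Python. The arm sizes are an assumption (roughly 100 per group; the abstract doesn’t report arm sizes or dropouts), so treat this as a ballpark, not a reanalysis:

```python
import math
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 6-month responders: 51.6% (collagen) vs 36.5% (placebo);
# the ~100-per-arm sizes are assumed, not reported in the abstract
z, p = two_proportion_z(0.516, 100, 0.365, 100)
print(z, p)  # z just over 2, p a little under 0.05
```

With those assumed numbers the difference does scrape under p = 0.05, consistent with the abstract, but a result this marginal, on one of several endpoints, says nothing about clinical significance.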

The second trial is also lacking in information. “A randomized controlled trial on the efficacy of oral collagen treatment on the medial knee joint space and functional outcome among veterans memorial medical center patients diagnosed with osteoarthritis of the knee: open label and single blind” (the blinding is not described) randomized 150 patients, with 113 completing the trial. Right away, that many patient dropouts is a massive red flag, as patients that drop out usually are not responding to treatment. Patients were randomized to collagen, 1200mg per day, or the anti-inflammatory aclofenac (which I’ve never heard of) 100mg daily for five days, then as required. Both groups could use ketoprofen gel as desired. The results?

Patients in Group A significantly scored lower in the average WOMAC score from baseline to the sixth month follow up. On the other hand, patients in Group B had no significant change in their average WOMAC score after six months. There was no significant difference in the medial knee joint space measured at baseline and after six months in both groups.

There’s no additional data, so there’s no way to determine if the groups were properly matched, or if the effect observed was both real and clinically meaningful. Given the trial was single-blind, the risk of bias is large. Perhaps not surprisingly, when objective measurements (joint spaces) were taken, there were no differences between the groups.

It could be that collagen supplements provide a meaningful clinical benefit to arthritis and joint pain, but there’s certainly no persuasive evidence to suggest that’s the case. Based on what collagen is, how it’s absorbed, and how we know collagen is actually synthesized in the body, it’s highly implausible that 1200mg of additional protein consumed daily will have any meaningful therapeutic effects. Genacol, like other collagen supplements, appears to be little more than an expensive protein supplement. If you want to supplement with collagen, my suggestion is to skip the supplements, and go for a well-marbled steak. Enjoy it, but don’t expect the steak, or any collagen supplement, to relieve your joint pain.

Thanks for the article. As a runner, it’s always nagged at the back of my mind that I might be destroying my knees, and it’s good to know that OA-wise, at least, I might as well run as not. (I developed Osgood-Schlatter in puberty that never resolved, so its flareups limit my running enough as it is.)

What about the potential of the various forms of injectable hyaluronan as disease-modifying agents, however? M’boy has been undergoing that treatment (with PT) for over a year now to try to stave off joint-replacement surgery due to OA at a relatively young age, and n=1 anecdotally, his pain is reduced and his range of motion has improved…

@Roadstergal – like collagen, hyaluronan is a big molecule. If you want it intact it has to be injected. There’s a fair bit of data; on balance it does not appear that impressive. There is a nice summary of the evidence in the 2008 NICE osteoarthritis guidelines [PDF]:

On balance, the evidence seems to suggest a benefit for reducing pain up to 3 months after a series of three to five injections, although the effect size is generally small.

I would post more, but supporting my argument with too many references gets the post hung up in moderation until the thread goes stale.

“Setting aside the grandiose claims, statistical significance isn’t enough — we want clinical significance. A tiny change in pain may be enough to be statistically significant — but is it relevant in the real world? ”

Unless a standard has been set by a professional organization, “clinical significance” is an OPINION on the part of individual doctors and medical directors. I have watched medical directors go at it over whether something is “clinically significant.” The visual analog scales used in these trials are relatively insensitive tools compared to biochemical markers. If there is improvement in a joint pain evaluation by both the physician and the patient (Penn State study above), then there has to be quite a lot of improvement for it to rise above the noise in the data. Have you done any pain research?

“Right away, that many patient dropouts is a massive red flag, as patients that drop out usually are not responding to treatment.”

Again, have you done any pain research? People drop out of pain treatment studies because they are in PAIN. There are non-responders and people with adverse effects on ibuprofen and acetaminophen as well. The more the protocol strives for minimizing confounders (i.e. clearer interpretation of the data), the fewer treatment options are available to the non-responders and control patients. Rescue medications and such may not be enough so patients seek out the full range of medical treatment options and drop out.

“Genacol, a collagen supplement, is currently being actively marketed here in Toronto…”

Yep, claims backed by two unpublished studies are starting out with a considerable disadvantage. Do you have to publish Phase III trials before you can market a drug with claims (yes, I already know the answer)? And weaknesses in the study design inferred from the non-peer reviewed abstract don’t make the claim any more believable. Personally I would like to see the full paper when/if they publish it before I put too much weight on my inferences from just an abstract, but sometimes that is all we have to go on.

I sort of get it when commentators here laugh off reiki or homeopathy because they aren’t scientific practices. But when there are published RCTs countering your opinion and standardized study design methodologies that you misinterpret (with probiotics and constipation as well), I think that merits a fruitful scientific discussion so as to not mislead your fact-oriented readership. And, I promise not to throw in vitro studies at you and call them clinical efficacy studies either.

OK, I do this with a liiiiittle trepidation with this audience. There are many more nutritional products under investigation (or just forming a fan base without the need for too much science – ugh) for use in joint health. Risa Schulman, Ph.D. is a consultant in the nutritional products industry and has written a web article talking about different nutrients in joint health. She includes a lot of scientific references, but the article is intended to be read by non-scientists in the nutritional products industry (including the faithful and true believers), so she finds the upside of even limited data. Ever defended your budget to your administration? That is also a great time to find every upside you can and water down the details for a non-scientific audience.

Why am I tarnishing SBM with blatant industry propaganda (well, not my personal industry right now – I am working on famine relief issues at the moment)? Well, I thought it might help provide some perspective if you saw some actual “inside” stuff from the industry that isn’t loony. It isn’t a peer-reviewed scientific paper, but it isn’t loony when you consider true-believer salesmen are one of the target audiences. In the range of scientific hierarchy from Cochrane review (I prefer AHRQ btw) to FTC-banned health claim on a box of Airborne, this is below that median (but not below average, Risa communicates to her audience quite well). Snicker, guffaw and criticize all you want, but I am overcoming a bit of trepidation to share another (and here, controversial) POV with you:

May I ask if you wish to respond and engage in a science-based conversation?

Science-based. You keep using that word. I do not think it means what you think it means. Being science-based does not mean entering a few keywords into PubMed, finding positive citations, and then declaring that to be an argument.

Following that, I recommend you read some of the posts written at this blog about glucosamine. In addition to the one I link to in my article, see the other posts by Harriet Hall on the topic. Virtually the same plausibility issues exist for collagen. We are literally made of collagen. Wallace Sampson, emeritus editor of the SBM blog, has pointed out that the amount of glucosamine in the typical supplement dose is a tiny fraction of the available glucosamine in the body, most of which is produced by the body itself. Given we are 25% collagen, the same observation can be made with collagen supplements.

Getting into the data, if you want to look at it in a systematic way, the low plausibility, lack of pharmacokinetic data, and absence of a valid mechanism of action make any systematic evaluation difficult, if not pointless. In addition to the weak methodology, when interpreting the totality of the data you’re stuck extrapolating from different forms of collagen (e.g., skin vs. trachea-sourced), different manufacturers with different processing methods, different diseases (e.g., RA vs. OA), different disease sites, and different endpoints. Furthermore, I have not seen any study that controlled for, or measured, dietary consumption of collagen.

For all of these reasons, I focused on the trials associated with a single product, and the specific effectiveness claims made about that product.

While it is possible collagen could have a treatment effect, it is very improbable. It would not surprise me if this product has a life cycle similar to so many other products profiled here, what Mark Crislip calls the usual arc of clinical studies of CAM products:

…better designed trials showing decreasing efficacy, until excellent studies show no effect. There is the usual meta analysis or two, where all the suboptimal studies are lumped together, the authors bemoan the quality of the data, and proceed to draw conclusions from the garbage anyway.

It may interest you to know that in Canada, this product has been approved for sale by Health Canada’s Natural Health Products Directorate, with the following Recommended Use (translated from French):

“Patients in Group A significantly scored lower in the average WOMAC score from baseline to the sixth month follow up. On the other hand, patients in Group B had no significant change in their average WOMAC score after six months.”

This is the famous wrong-way-to-study-a-difference-in-differences discussed here and many other places recently. Let the alarms ring when you see this. For example, maybe they get p=.04 for group A, p=.07 for B. Scientists are obliged instead (or at least in addition) to test if the change is larger in A than B. To not do so is to be a bullshitter, and probably worse, since for such a design, that is exactly what we want to know. Not giving estimates of effect size for each should also be unthinkable.
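The error is easy to see with toy numbers. In the sketch below (all means and standard errors are invented for illustration), group A’s change from baseline is “significant” and group B’s is not, yet the between-group comparison, which is the question the design is supposed to answer, is nowhere close:

```python
import math
from statistics import NormalDist

norm = NormalDist()

def p_two_sided(z):
    """Two-sided p-value for a standard normal test statistic."""
    return 2 * (1 - norm.cdf(abs(z)))

# Invented within-group changes from baseline: (mean change, standard error)
a_change, a_se = 1.00, 0.49   # group A: "significant" on its own
b_change, b_se = 0.90, 0.50   # group B: "not significant" on its own

p_a = p_two_sided(a_change / a_se)   # about 0.04
p_b = p_two_sided(b_change / b_se)   # about 0.07

# The correct test: is the change in A larger than the change in B?
diff_se = math.sqrt(a_se**2 + b_se**2)
p_diff = p_two_sided((a_change - b_change) / diff_se)   # about 0.89
print(p_a, p_b, p_diff)
```

Nearly identical changes, one on each side of p = 0.05, and no evidence at all of a difference between the groups.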

I am actually extremely tired at the end of a long day of actual relaxation (went deep-sea fishing today and made fish tacos with fresh-made corn tortillas for dinner with some sangria) but I suppose I am a glutton for punishment.

I pulled up the full text of the article you linked JPZ. It talks about a lot more than just collagen hydrolysate, but I’ll toss up a few relevant points to discuss more tomorrow after I get some sleep.

First off, they start with basic cell culture stuff. In sum, they demonstrate that fragments of collagen will stimulate chondrocyte production of collagen in vitro. This makes sense – cartilage breaks down, and the rate of breakdown can be reasonably assumed to be reflected by how much broken collagen is lying around, thus chondrocytes react accordingly.

They then cite a study that demonstrated via radiolabeled oral collagen that “collagen hydrolysate is not completely broken down by the digestive system, but that a variety of collagen fragments, including up to 10% high molecular form collagen fragments that range from 1 to ≈10 kD, are absorbed following oral administration of collagen hydrolysate, with some individual variability.” [emphasis added]

I should look up that study, but don’t have the inclination at the moment.

Next they cite that the radiolabeled collagen ends up in cartilage preferentially. They reference another article which I will also need to look up. But ATM, the statement seems non-compelling to me. I’m very curious as to exactly what the control was that they determined there was a statistically significant amount of collagen in cartilage compared to it. If you are giving collagen vs a control, and you are saying that the collagen is in the cartilage at a statistically higher amount than control, isn’t that a “no duh” statement? I mean, the control should not be receiving collagen so….? But to be sure I’d have to look at that study which I will attempt to do tomorrow.

They then use this to conclude: “it might be reasonable to use collagen hydrolysate as a nutritional supplement to activate collagen biosynthesis in chondrocytes in humans, especially under conditions where cartilage is under considerable stress.” [emphasis added]

All cell culture stuff that seems pretty weak to me.

What about the clinical stuff? First off, there are 4 open-label and 3 double-blind studies.

The first open label was mediocre at best, especially considering it was open label. It used some objective measures of orthopedic function but “Statistical analyses were not reported by the investigators” (n=56, with 33% showing no improvement at all).

Next “Similar findings were reported in a 1982 study in which 60 juvenile patients…” Once again, 25% showed no improvement and “Statistical analyses were not provided in this report”

Next “An open-label study of 154 patients with OA provided additional evidence of the clinical effect of collagen hydrolysate.” [emphasis added] I really have to question how the authors take the first two studies and then claim this to be additional evidence. They state that 43% of the PT group (control arm) were unchanged whereas “only” 14% of the experimental arm were unchanged. Yet again, “The statistical significance of the differences between treatment groups was not reported.”

To maintain some brevity here, I’ll give a quick idea of the double-blind studies included.

One was a crossover study which showed: “Reduction in pain reported: 81% of those taking collagen hydrolysate, 23% of those taking egg albumin; A 50% decrease in analgesics: 69% of those taking collagen hydrolysate, 35% of those taking egg albumin†” The cross at the end indicates: “Statistical analyses were not reported”

The section concludes: "Although this review included several studies that did not provide key information, such as statistical analyses, that are generally accepted as standards for the evaluation of scientific data, it does provide results which suggest that collagen hydrolysate may provide symptomatic relief to some patients with OA. It is not known if the effects seen in the in vitro studies are responsible for these findings or whether other effects are involved. This question will need to be addressed in future research."

Regardless of the Bayesian prior, I am highly unimpressed. This is a bench science article that tries very hard to masquerade as a clinical paper. The "up to 10%" macromolecular absorption of collagen is also only very moderately interesting. Nothing even remotely addresses whether that is enough to equal the 0.5mM solution that was used in vitro or even if the composition or size of the macromolecule segments were the same as in vivo.

Add in the prior, and I am beyond unimpressed. And this is a 2006 article that had to dig back to 1979 to get the 7 very, very lackluster papers for the analysis.

I have no energy to go over the next article, but I think the point is sufficiently made.

So doing a PubMed search is indeed a good thing, JPZ. But reading an abstract is also very often highly misleading. I would never take such an abstract at face value considering the Bayesian prior in play here. I think it is safe to say that Scott’s argument stands quite nicely and indeed is further reinforced by the article you referenced.

On somewhat of a sidenote, is there any evidence for Vitamin E taken in large doses after exercise to prevent muscle soreness the next day? I read an article on it a few years ago (sadly, I don’t remember where) and recently poked around a bit and failed to find much of anyone talking about it either for or against. I don’t remember if they had any justification for efficacy, and it was much less than double-blind (if I recall correctly, it was a group taking the megadose of Vitamin E after exercising and a control group that took nothing), but it supposedly had a fairly significant effect that worked better the older you were.

In short, we have 4 studies with no statistics and two which simply reek of multiple comparisons. That last one especially looks like unmotivated subgroup analysis – why would anyone expect that Germans would respond differently than anybody else? Classic data mining.

Nothing properly statistically significant at all; the data is completely compatible with the null hypothesis.

“Science-based. You keep using that word. I do not think it means what you think it means. Being science-based does not mean entering a few keywords into PubMed, finding positive citations, and then declaring that to be an argument.”

OK, first, major kudos on using my all time favorite quote from Princess Bride. Second, I need to go back to a previous statement from your post:

“…the idea that collagen supplementation can be an effective treatment for joint pain, osteoarthritis, or any other condition, is highly implausible, if not impossible in principle.”

I counter your “impossible” by entering a “few keywords” into PubMed and coming up with articles that provide “plausible” support. It’s that whole trap of using words like “never” or “always” when it can be disproven with one example. I don’t “declar[e] that to be an argument,” I declare that to be “plausible” evidence (not absolute proof), and I declare that you misinterpreted VAS measures and dropout rates in pain studies.

“Kimball Atwood has written (at least) 15 posts on prior probability, Bayes’ Theorem, and what “Science-Based Medicine” means. I suggest you start with Prior Probability: The Dirty Little Secret of “Evidence-Based Alternative Medicine” and go on from there.”

I look forward to reading the two articles he mentioned. I am quite the proponent of Bayesian statistics in clinical trials, and I am glad the FDA is starting to catch on. By referring me to this article, I am going to guess that you wanted me to take away the message that assertions that have no scientific basis require more support than a few, low-powered studies that can be biased by conventional probability analysis. I understand that argument with the examples Kimball provided (distance healing and homeopathy), but there is some scientific research on collagen, so we aren’t starting out in that kind of hole for this discussion. Well, unless there is some kind of “SBM fallacy” that states, “just because I can call it CAM, and I don’t like it, I can also assume there is no evidence for it.” (that was a joke, btw)

“Following that, I recommend you read some of the posts written at this blog about glucosamine.”

Um, hasty generalization? Just because you don’t find glucosamine believable therefore collagen isn’t believable either? I understand the parallel you are drawing, and it isn’t an accurate representation of the science – it’s one of those fallacies I can’t remember at the moment.

“…lack of pharmacokinetic data…”

Did you even bother to read about the radio-labeled collagen data? It ain’t a perfect study, but radio-labeled tracer molecules provide incredibly sensitive data about tissue distribution. I’ll get to nybgrus’s being unimpressed with 10% macromolecular incorporation later.

“Furthermore, I have not seen any study that controlled for, or measured, dietary consumption of collagen.”

That is an interesting point. I’ll have to look into that aspect, but, a priori, I am not sure there is enough collagen content data on foods to make that calculation. That does make the point that ideally you would put patients on a vegan diet during clinical trials. Nice insight!

“For all of these reasons, I focused on the trials associated with a single product, and the specific effectiveness claims made about that product.”

So, I could review the lack of support for Airborne’s illegal claims about the common cold and conclude whether any of its components or related ingredients might be efficacious? Or did you mean that it would be too hard to look at Genacol in the larger context of other collagen data? You make some valid points about variability in the source material, but shouldn’t that come after reviewing the data – not before?

“Which, from a science-based perspective, is a reasonable conclusion.”

You keep using that word… OK, that seems a little silly to steal your joke. What I find odd is that your reply was really just re-expressing your opinion and introducing off-topic points from Harriet, Kimball and Mark (although as stand alone points, what they said was quite interesting). Did you want to talk about the science (although I really like your point about dietary collagen) and the points I made? On the chiropractic thread, I was telling NMS-DC that s/he was answering questions that s/he wanted to answer, not the question that was asked. I may have to see if that condition is contagious… (ok, maybe a little too sarcastic, sorry).

Now I need to start working on my reply to nybgrus. Someone who puts that much effort into a reply deserves good answers, although I must try not to be jealous of his fish tacos while writing said reply.

What I am reading is that even the supporters of the use of collagen are not claiming huge improvements with its use. Along with that, most collagen is made from sharks, and contributes to the decimation of shark populations. The end result being there is no good reason to use collagen.

Let’s make sure that we are starting from the same place in this discussion. I never, ever set out to prove that collagen is an efficacious treatment for OA. I believe it is a plausible treatment with limited supporting evidence (I know I didn’t say that before, but your comments took this discussion in a new direction). I responded to Scott’s conclusion:

“…the idea that collagen supplementation can be an effective treatment for joint pain, osteoarthritis, or any other condition, is highly implausible, if not impossible in principle.”

Which your in-depth analysis proved false. There is a little evidence of cellular-level mechanism, there is a little evidence of macromolecular incorporation, and there is a little evidence of preferential tissue uptake (i.e. into the cartilage not the liver – under the null hypothesis it would be equal) – just supportive data, not proof. So, when Scott said:

“Based on what collagen is, how it’s absorbed, and how we know collagen is actually synthesized in the body, it’s highly implausible that 1200mg of additional protein consumed daily will have any meaningful therapeutic effects.”

It is also false – because it isn’t “highly implausible.” When you said:

“All cell culture stuff that seems pretty weak to me.”

I concede that it is not proof, but is it plausible?

“The “up to 10%” macromolecular absorption of collagen is also only very moderately interesting.”

Well, how much gets there is not as important as whether the amount that gets there is efficacious. Remember, this is “up to 10%” to dispute Scott’s assertion that none of it gets there except as amino acids. By way of example, less than 1% of oral DHA reaches the brain (http://www.ncbi.nlm.nih.gov/pubmed/14523049), but even that small amount can reduce cognitive decline in otherwise healthy people (http://www.ncbi.nlm.nih.gov/pubmed/20434961).

As for the review itself, I included it to show that even a cursory lit search turns up published papers that don't agree with Scott's assertion of "impossible." I agree that it is not a great review. Anyone who pulls in open-label studies and studies without statistical analyses to make a point is REALLY reaching. The two RCTs you mentioned are kind of interesting, but I included the more recent Penn State study (the one that sleep deprived you of seeing) to show an RCT with a bit more quality. I am sure someone will get hung up on the "Despite the study's size and limitations…" quote, but it was n = 97 (and you only need n = 21 (or is it 23?) per group to detect a 1 SD difference at p < 0.05).
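For anyone wondering where that "21 per group" rule of thumb comes from: it matches the standard normal-approximation sample-size formula for comparing two means, assuming 90% power and a two-sided alpha of 0.05 (the power level is my assumption; the comment doesn't state it). A rough sketch, with the z-quantiles hard-coded so no stats library is needed:

```python
def n_per_group(effect_size_sd: float, z_alpha: float, z_beta: float) -> float:
    """Normal-approximation sample size per group for comparing two means:
    n = 2 * ((z_alpha + z_beta) / d)^2, where d is the effect size in SDs."""
    return 2.0 * ((z_alpha + z_beta) / effect_size_sd) ** 2

# Hard-coded standard normal quantiles:
Z_ALPHA_05 = 1.959964  # two-sided alpha = 0.05
Z_BETA_90 = 1.281552   # power = 0.90
Z_BETA_80 = 0.841621   # power = 0.80

print(round(n_per_group(1.0, Z_ALPHA_05, Z_BETA_90)))  # 21 per group at 90% power
print(round(n_per_group(1.0, Z_ALPHA_05, Z_BETA_80)))  # 16 per group at 80% power
```

The exact t-distribution answer is slightly larger (around 22 or 23 per group at 90% power), which may be where the "or is it 23?" comes from.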

And again, it was not my intention to prove efficacy, only to counter a complete dismissal of efficacy.

"I think it is safe to say that Scott’s argument stands quite nicely and indeed is further reinforced by the article you referenced."

Re-read Scott's comments. Both articles nicely refute his "impossible" and "implausible" claims. He assumes that oral collagen is broken down into individual amino acids, and that is not supported by data either. Actually, the amount of personal speculation that he presents as fact is quite surprising.

"Regardless of the Bayesian prior, I am highly unimpressed."

Well, I don't have any idea what the threshold is to impress you. I tend to be most impressed by breakthrough, ahead-of-the-curve research that can easily be disparaged by others as the only study on the topic. But I hope I have clarified my approach to responding to Scott's post enough that you can see my point of view. I am not presenting cell culture studies as evidence of clinical efficacy. If you find it weak, fine – I would agree with you in many cases, but is it plausible or possible that oral collagen may have any benefit in OA?

“…why would anyone expect that Germans would respond differently than anybody else?”

If I run a clinical trial with 10 study sites and one site comes out with a statistically significant difference in one of the outcomes, it could be for a lot of reasons: recruitment bias, confounders specific to a study site (pollution in China, diet in Hungary, etc.), treatment differences (e.g. some sites doing additional tests on a subset of the patient population), and so on. We generally try to handle that in the statistical analysis, not pull that population out for separate testing. Although, the Germans had a functional improvement of p = 0.007, which might just have held up to a post-hoc statistical analysis of a site x functional outcome interaction with a Tukey's adjustment for multiple comparisons. But the authors didn't do that, and you make a fair criticism of the paper.
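A Tukey's adjustment would need the full site-level data, but a crude Bonferroni bound (more conservative than Tukey's) gives a quick feel for whether p = 0.007 could survive a correction across 10 sites; the "10 sites" here is just the hypothetical from the paragraph above, not a figure from the paper:

```python
def bonferroni(p_value: float, n_comparisons: int) -> float:
    """Bonferroni-adjusted p-value: multiply by the number of comparisons,
    capped at 1.0. Conservative relative to Tukey's HSD."""
    return min(1.0, p_value * n_comparisons)

# One site out of a hypothetical 10 shows p = 0.007 on one outcome:
print(bonferroni(0.007, 10))  # roughly 0.07 -- right at the margin
```

At roughly 0.07 it sits just over the 0.05 line under the most conservative bound, which is consistent with "might just have held up" under a less conservative adjustment like Tukey's.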

What’s your take on the Penn State study? Having both the physician and patient-reported pain scales show a significant improvement gives me a bit more confidence in their results. Penn State is also one of the best sports nutrition programs around, but I don’t know these particular investigators.

From that one review, yes, that is precisely right. I did not really read anything else they wrote and focused only on their analysis of collagen. However, in skimming all the others (they included SAMe, glucosamine, and MSM as well), they concluded very similarly using very similar datasets.

As you noticed, every one of the results that broke statistical significance came from studies that ran multiple analyses, most of which did not break significance. The XKCD re: green jelly beans comes to mind. I should also add that the studies involving the radiolabeling were also animal studies, which doesn't invalidate them, but when we are talking about digestion, which involves specific proteases that will differ at least somewhat between species, it adds an additional layer of skepticism.
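The green-jelly-bean point is just the family-wise error rate: run enough looks at pure noise and the chance that at least one of them "breaks significance" climbs quickly. A minimal sketch (assuming independent tests, which real subgroup analyses usually aren't):

```python
def familywise_error_rate(alpha: float, n_tests: int) -> float:
    """Chance of at least one false positive among n independent tests,
    each run at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** n_tests

for k in (1, 5, 20):
    print(k, round(familywise_error_rate(0.05, k), 3))
# 1 -> 0.05, 5 -> 0.226, 20 -> 0.642
```

With 20 analyses at p < 0.05, you have close to a two-in-three chance of at least one spurious "significant" finding, which is exactly the jelly-bean cartoon.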

So yes, very much in line with the null hypothesis. And if you actually read the original article, it seems painfully clear to me that they are taking bench science results and straining to have them add a positive Bayesian prior to the highly equivocal clinical studies. In other words, they like the clinical studies because they seem maybe slightly positive (and indeed they specifically add that collagen has not been shown to be harmful) and then use the bench science to justify the plausibility of the trials. But it really is highly unconvincing.

Seems you are currently active on the comments at the moment, so I’ll take a second to say hi and quickly address a couple points whilst I take a look at a couple more papers as I said I would in my first comment. I am also leaving in my thought process and mistakes, rather than editing everything neatly.

Scott's comment about the implausibility/impossibility seems to me a bit of editorializing, but certainly within the realm of reason. Knowing the nature of digestion and of the incorporation of amino acids and macromolecules into structural tissue immediately gets you to implausible. That is a very fair statement. Adding the "impossible" is the editorializing. I do see your point about being able to invalidate an absolutist argument with a single exception, but I would counter that your attempt there at least appears nit-picky and really doesn't demonstrate much beyond a discussion of semantics.

I wouldn't call my analysis "in-depth," but it was at least more thorough than a cursory look. However, to really prove Scott's point I will have to go into those other articles. I'll put out my predictions right now.

Firstly, there is no delineation as to exactly what the macromolecular composition of the radiolabeled collagen in the cartilage is. This is important because there would then be no way of knowing whether that particular composition (because it is a digestion product) will induce the same results in the chondrocytes as the in vitro prep.

Secondly, I would be willing to bet that at the "up to 10%" level, that is not sufficient to mimic the 0.5 mM concentration used in vitro, so the effect can't be assumed at a biological level. And actually, I just looked again and have been making a mistake: it isn't 0.5 mM, it is 0.5 mg/mL. So I'll see if I can find a way to compare. They also note that the molecular composition is quite variable, since the "up to 10%" figure includes anything that is at least 1 kDa and up to around 10 kDa. BTW, a typical amino acid residue is roughly 115 Da, which means we are talking about roughly 10 to 100 amino acids. In doing deeper reading, it seems that they actually used collagen hydrolysate in the cell culture – NOT a broken chain.

it was shown that treatment of cultured chondrocytes with 0.5 mg/mL collagen hydrolysate over a culture period of 11 days induced a statistically significant, dose-dependent increase in type II collagen synthesis of the chondrocytes (p < 0.01 compared with untreated control cells)

Collagen is a variable-length protein, but according to a paper on its physicochemical properties (PDF), the beta chain is 200 kDa and the alpha is 100 kDa, whereas the collagen hydrolysate was around 50 kDa with a wide distribution. The point is that what they are finding in the cartilage is at most 1/5th of a molecule, while they are using the whole molecule, with its inherent variability, to stimulate the in vitro chondrocytes.
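The size arithmetic in the last two paragraphs can be checked directly; the 115 Da residue mass and all the molecular weights below are the figures quoted above, not measurements of mine:

```python
AVG_RESIDUE_DA = 115.0                    # rough average amino acid residue mass
FRAGMENT_RANGE_DA = (1_000.0, 10_000.0)   # absorbed macromolecules: 1-10 kDa
HYDROLYSATE_DA = 50_000.0                 # hydrolysate used on the chondrocytes
ALPHA_CHAIN_DA = 100_000.0                # intact collagen alpha chain
BETA_CHAIN_DA = 200_000.0                 # intact collagen beta chain

# Residue counts of the absorbed fragments:
low, high = (mw / AVG_RESIDUE_DA for mw in FRAGMENT_RANGE_DA)
print(f"absorbed fragments: roughly {low:.0f} to {high:.0f} amino acids")

# Largest absorbed fragment vs the 50 kDa hydrolysate actually used in vitro:
print(f"fragment/hydrolysate ratio: {FRAGMENT_RANGE_DA[1] / HYDROLYSATE_DA:.2f}")

# And the hydrolysate itself vs the intact chains:
print(f"hydrolysate/alpha: {HYDROLYSATE_DA / ALPHA_CHAIN_DA:.2f}, "
      f"hydrolysate/beta: {HYDROLYSATE_DA / BETA_CHAIN_DA:.2f}")
```

So the biggest fragments found in cartilage are at most about 1/5th of the hydrolysate used to stimulate the chondrocytes, which is itself only a quarter to a half of an intact chain.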

Thirdly, the control group is something I need to look at. If they find that there is more in the cartilage than the liver, that is one thing. But they did not actually say that in the review – they simply stated that there was more in the cartilage than in control.

In experiments with radiolabeled collagen hydrolysate, it has been shown that a significant amount of collagen hydrolysate-derived peptides reach cartilage tissue within 12 h after administration (p < 0.05 compared with control animals)

So that is what I mean when I say a look at the control is necessary. To me that does not imply they compared liver vs cartilage concentrations, but I will check to be sure.

I think at this point that is likely enough before I start confusing things with too much speculation. I’ll take a look at those other articles real quick and come back (I still have some time this morning).

The point I'm trying to make is that this is how I think of these things, and why, from my training and education, saying that collagen uptake has a clinical effect is highly implausible – which is what Scott was trying to say. Saying there is a mechanism here is a stretch, at least, but also you seem focused on the fact that some potential molecular pathway exists – i.e. the plausibility of molecules. I think of plausibility in people, and I think that is what Scott was referring to. To me, saying "impossible" seems hardly a stretch in that context.

Given that this is for pain management, I’m having a hard time figuring out what the point of it is in the context of today’s pain management strategies. I assume the main draw vs. traditional pharma would be a presumed reduction in side effects? But in the end the patient is still medication-dependent if the effects aren’t permanent; it’s just a different kind of medication.

Pain management, these days, is based around increasing self-efficacy and reducing catastrophizing and fear, with the goal for most people being to reduce or eliminate external pain management options that make people dependent on the health care system for relief. So regardless of whether collagen works, which it seems it probably doesn't, what's the point? For side effect reduction to be meaningful, if that is the case with collagen, it would still have to provide at least the same amount of relief as medication. But as I said, medication is a dependency that many patients do not want to carry forever, for a lot of other reasons: expense, having to remember to take the meds, embarrassment if they have to be taken in public, feelings of helplessness without the medication, what happens (a significant pain increase) if the meds are missed for some reason, etc.

So in reading through both articles, I will admit the data is a little more compelling than I had speculated.

The first piece, about the absorption into the cartilage, has pretty decent methodology and demonstrates that at least the molecular-weight distribution of the proteins absorbed seems to overlap the molecular-weight distribution of collagen hydrolysate. They admit in the article that there are some issues with the methodology, but as a non-expert at that level of bench biochem, it seems to be the most robust approach available. However, their data demonstrate only a relative increase; there is no possible way to determine the absolute amount. In brief, they compare the absorption and distribution of radiolabeled proline versus radiolabeled collagen. They find that the proline distributes evenly in all tissues, as does the collagen, except that a peak of 2.6-fold more collagen was found in the cartilage than proline.

Now that is interesting. But is it clinically interesting? Honestly, I just don't think so. A 2.6-fold higher accumulation of collagen doesn't reflect the reality of how much collagen is actually already there. The studies are looking at what, roughly 1200 mg of oral intake? Distribute that evenly amongst the tissues of a 70 kg man and then make it 2.6 times more concentrated in cartilage. To make the math easy, let's call it 2 g of collagen. Back of the envelope, I get something on the order of 7 × 10^-2 mg/mL in the cartilage (let's just treat the person as all water for comparison's sake, since that will also yield the highest increase in actual collagen level). That is still a far cry from the 0.5 mg/mL they used in the in vitro chondrocyte stim study. Now, I know this is all VERY rough, but the point is that even the most generous assumptions leave a sizable gap below the in vitro concentration.
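A sketch of that back-of-the-envelope dilution, using the stated assumptions (a 2 g dose, a 70 kg person treated as 70 L of water, and the 2.6-fold cartilage enrichment; every figure here is as rough as the original):

```python
DOSE_MG = 2_000.0            # "call it 2 g" of oral collagen
BODY_WATER_ML = 70_000.0     # 70 kg person treated as all water
CARTILAGE_ENRICHMENT = 2.6   # fold enrichment over an even distribution
IN_VITRO_MG_PER_ML = 0.5     # concentration used in the chondrocyte stim study

cartilage_mg_per_ml = DOSE_MG / BODY_WATER_ML * CARTILAGE_ENRICHMENT
shortfall = IN_VITRO_MG_PER_ML / cartilage_mg_per_ml

print(f"estimated cartilage concentration: {cartilage_mg_per_ml:.3f} mg/mL")
print(f"in vitro concentration is ~{shortfall:.0f}x higher")
```

And this is the generous version: fold in the "up to 10%" macromolecular absorption figure from earlier and the gap to 0.5 mg/mL widens by another order of magnitude.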

This is actually something that we here at SBM find very regularly: plausible mechanisms (by a stretch) + equivocal data = a claimed clinical result, except that the concentrations and distributions going from bench to body are often orders of magnitude apart.

The other study that was cited, the cell culture chondrocyte stim study, I simply could not manage to find for some reason. I found piles of papers referencing it, but getting the full text is proving slightly more difficult, and I am losing motivation to find it. It is also reasonably immaterial at this point. Even if I concede that the molecular composition was identical, the point is that the concentrations must differ considerably, and there is no way to actually compare, since the other study only gives a relative increase, not an absolute one.

BTW – that Penn State study also references these same guys. Seems like they are big on the bench biochem of collagen and high MW protein stuff.

In sum, I'd say it is still highly implausible. Even completely on its own, the data doesn't demonstrate that the chondrocyte stim conditions are approximated by ingestion of collagen. Moreover, against the background of how much collagen is already there and turned over on a daily basis, it is even less compelling.

Some of my speculations were proven false, but I believe the crux of it remains. All of this stuff is interesting from a purely biochemical standpoint, but it really just doesn't seem to translate well to a full-fledged clinical model.

So I suppose whether your critique is a nit-pick depends on what question you are asking. Is it implausible/impossible from a biochemical cell culture perspective, or from a clinical perspective in a full-fledged human being? In the former, sure, you're right. But in the latter, I'd say Scott is still pretty much on the money. And SBM is truly about the latter.

Sorry for cutting it a little short – I actually have to mobilize at the moment. This was my "last hurrah," if you will. Tomorrow I am on a flight back home – the Australian portion of my training is at an end. I now have to finish packing and spend some time with friends before hopping on that plane. Hence the deep sea fishing last night: done with exams and about to head home!

So if I drop off the face of the earth for a few days, that is why.

As usual JPZ – always a pleasure hashing out some science with you. You are always up front that clinical medical science is not your field, but hopefully this discussion helped to bridge a bit of a gap in understanding why I would think it is reasonable for Scott to say what he did. At least, that was my goal.

Best wishes all, until the next time when I am sitting by the beach back home drinking my morning coffee and trying to catch up with the discussion I will have missed.

I am really interested in your insights from those articles. I didn’t take the time to read them all because I was so focused on countering (perhaps pedantically as you suggest) what came across to me as an absolutist argument about collagen.

Saying there is no evidence to support collagen's utility in OA and saying there is some evidence are two very different things to me. Perhaps it has to do with how I approach science in my work. I take hints of mechanisms or efficacy that are teased out of other experiments or the literature, form a hypothesis, and test/modify that hypothesis with a series of increasingly rigorous experiments culminating in an RCT. So, "some" evidence to me is a glass half full.

I guess I can understand looking at the same data from the perspective of a physician who sees insufficient evidence to recommend it to his/her patients in comparison to better understood treatments like drugs. That judgement is practical and correct (especially in the case of collagen), but it can come across as saying “some” evidence means the glass “is” empty – like Scott did, editorializing or not. You wouldn’t recommend a drug still in Phase I trials to your patients either, but likewise you wouldn’t say there is no evidence that drug is ever going to work. That’s not the best example, but it is the one that just came to me.

If I were assigned a project to test the use of collagen in OA, the existing data would be more than enough for me to lay out a detailed pre-clinical and clinical research program. I’ve started programs based on one blip on one graph before, so this looks like a gold mine in comparison. Again, efficacy has not been proven – yet.

I understand the whole basis for taking what you know about digestion and feeling it is impossible for a peptide to be absorbed intact. That is one of the fun things in science – what you thought was true may be overturned in a day. I've worked with DHA quite a bit, and the field went through the whole process of thinking mg to g quantities of a fat could never survive digestion and reach the brain. Then we found out that less than 1% of the dose reaches the brain, and the critics said that small an amount could never produce any benefit. Nowadays, there is plenty of empirical evidence of efficacy in the brain, so that 0.6% of the DHA must be doing something. Maybe that's why I feel I can say "I understand" – because I have already had my preconceived notions overturned in the past. But I refrained from saying "impossible" at the time, so perhaps I don't understand that perspective fully.

Thanks for giving me some more thoughts to think. It is really helping me to evolve my perspective in this discussion, and has got me thinking about how one defines “some” data. I may have to chew on that thought for a while.

“Given that this is for pain management, I’m having a hard time figuring out what the point of it is in the context of today’s pain management strategies. I assume the main draw vs. traditional pharma would be a presumed reduction in side effects? But in the end the patient is still medication-dependent if the effects aren’t permanent; it’s just a different kind of medication.”

I wanted to thank you for hitting the nail squarely on the head! You have pointed out the most important data gap between what we know about oral collagen and OA now and what we need to know in order to fit it in the big picture. Even if collagen is proven effective, you have to know how effective it is compared to other pain treatments. When you are comparing a nutritional product to something as cheap as generic ibuprofen, you need to know if it is worth the money (as you mention, fewer side effects with equal efficacy might be one potential outcome from proper RCTs).

Scott: “Science-based. You keep using that word. I do not think it means what you think it means. Being science-based does not mean entering a few keywords into PubMed, finding positive citations, and then declaring that to be an argument.”

I’m all for snark on blogs. It’s what the web is all about, isn’t it? But, it’s good to be aware that snark still communicates a lot to people who have any social sensitivity.

This one said to me “I am indignant that you questioned my authority and I am putting you in your place.”

This is fine, as a reader, I like to know up front if the writer is indignant, defensive and less likely to consider or acknowledge the possible flaws in their argument.

Now I end up wondering if the Princess Bride quote wasn't some sort of cosmic irony. "Inconceivable" does sound a lot like "implausible." SBM keeps using that word…

@micheleinmichingan
That’s the last time I use Gorski’s line. I promise. But if we’re going to have a “science-based” discussion on a topic, ideally we should agree on what being “science-based” means. No particular authority implied. It took me a long time before I understood the difference between EBM and SBM, and working my way through Kimball’s posts was crucial to forming that understanding.

I’m prepared now, for the inevitable question that follows: “But what about supplements? What do you think of X?” Glucosamine is a common inquiry (discussed at length here) as is chondroitin (which it is often co-packaged with). As with most supplements, their popularity is not related to good clinical evidence.

There is one recent study which does show clinical evidence for chondroitin:

I am about to head to the airport, but since my partner was sleeping when I woke up I indulged in a bit of reading over my coffee.

I am glad that I have given you something to think about. That really was the point of my posts. I’m happy to have others weigh in on my thought process as well, which is why I put it all out there like that.

In my mind, biochemical curiosities are exactly that: curiosities and very little more. So while I agree that there exists a potential mechanism, I would actually much prefer to see more bench work demonstrating that the concentrations and composition of the collagen actually induce the same chondrocyte reaction in vivo as they do in vitro. The problem is that making the leap from "collagen stims chondrocytes + cartilage absorbs 2.6 times more collagen than proline + the SDS-PAGE profile of rat gut sac collagen is very similar to the collagen that stims the chondrocytes = humans will see improved OA symptoms and disease progression via oral collagen" leaves a lot of gaps. And as we stress here, human physiology is extremely complicated and muddied by that same organ we use to study it all in the first place. So unless the clinical data comes out absolutely, resoundingly positive, with things like histological confirmation of improvement and objective functional improvement as well… well, the Bayesian prior just isn't good enough based on what we have so far.

Going from what we have to any clinical trial, no matter how rigorous, will only prove useful if the data is truly that good. Anything else will be equivocal and give us more useless data. So when I hear Scott say collagen for OA "is highly implausible, if not impossible in principle," that really rings true for me. And to reasonably force him to amend his statement would (almost certainly) not involve clinical trials but more of those detailed bench studies that I alluded to above. Until then, all those gaps that I can see add up to exactly what Scott said – especially since the clinical data backs that up by being very equivocal, with improvements primarily in highly labile and suggestible metrics (pain and self-reporting).

Please correct me if I am wrong in my thinking, Scott. I hate speaking for others, but I feel like you would probably agree with me.

Well, now I really need to get in the shower and drive to the airport or I’ll risk missing my flight!

Thanks for the excellent discussion, JPZ. This is the sort of discussion I wish I were able to have with some of our other denizens, who shall remain unnamed.

I clicked on the wrong link and wiped my whole comment. So, nybgrus, I'll have to reply to you later about my "science-based" (quoted because I don't seem to be qualified to define that to the satisfaction of SBM) counter-arguments about your interpretations of the data. My theme when I wiped my own post was that SBM couldn't give a diddly-squat about being science-based, judging by my interactions with your contributor and resident dietary supplement expert, Scott Gavura.

(Well, and Steven Novella on the "Alpha-Brain" post, but that was more a case of his personal opinion about the dietary supplement regulatory environment winning out over facts – you folks can check it yourself with a search. And it is important to keep in mind that, based on what I have witnessed so far on SBM, if someone criticizes a drug company over criminal activity, then we need to keep it in context, but if someone points out an illegal and irresponsible claim on a dietary supplement, it is proof of how horrible the industry acts – you know, a scientificish-wise comparison.)

Scott’s post on “Constipation Myths and Facts” provided only two new facts to support his conclusion that, “Overall, not encouraging. And little reason to recommend their use.”

1) He referred to a paper (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2799919/) that called itself a systematic review. I provided a position paper (http://www.worldgastroenterology.org/assets/downloads/en/pdf/guidelines/19_probiotics_prebiotics.pdf), written by a remarkable panel of scientific experts (PubMed their publications yourself; I have to conserve my 3 citations), that said the methods used by the review paper were scientifically invalid. I brought this to his attention, and he gave me this tautology: "I called it a systematic review because it was a systematic review: 'Systematic review of randomised controlled trials: Probiotics for functional constipation'. The methodology notes the authors followed the Cochrane handbook for systematic reviews of interventions." So, if I do a systematic review of pain relief and conclude it doesn't work for sinus congestion, can I score points because I "said" it was a systematic review – and even more points if it sounds like a Cochrane review? The conclusions of the review paper are false because they conflated studies based on a false premise. Read the position paper.

2) He said, "That's the opinion of some regulators, too. The European Food Safety Authority has largely rejected general health claims for probiotics," which shows he didn't actually research this comment. I provided a link to a scientific organization's response to the EFSA rejections (http://www.isapp.net/docs/ISAPP_responds_to_EFSA_oct09.pdf), which was co-authored by Glenn Gibson, who may be the foremost expert in probiotics and prebiotics in the world today (again, PubMed the references yourself). So, these "rejections" do not support Scott's thesis that the EFSA agrees with his unscientific point of view. Read the response.

So, we take a scientifically invalid review and a misinterpretation of European regulations, conclude, "Overall, not encouraging. And little reason to recommend their use," and then stand by it when diametrically opposed data are provided? Scott thinks so; in fact, he doesn't even need to address actual science to prove it. No really, go look at his post and see if you think he bothered with a scientific response.

You don't agree that the 2010 systematic review by Chmielewska, which I referred to, is a systematic review. You provided a link to the World Gastroenterology Organisation's 2008 guideline (WGO) on prebiotics and probiotics [PDF], which you noted "said the methods used by the review paper were scientifically invalid." Note that the WGO paper precedes the systematic review by two years, so had a statement to that effect actually appeared, the authors should be calling the JREF and collecting their million-dollar prize.

Looking closer, the WGO paper you cited does not discuss the use of probiotics for the treatment of constipation, which was the topic of the post. It does not discuss the use of systematic reviews to evaluate probiotic studies. It does not refer to any of the papers discussed in the systematic review (Mollenbrink, Koebnick, Yang, Banaszkiewicz, Bu). It does state that evidence for one product is only applicable to that product. However, Chmielewska did not perform a meta-analysis of the data in the systematic review, so the relevance of the WGO paper with respect to the systematic review I cited is unclear.

Furthermore, when I looked at the WGO’s own practice guideline for constipation, [PDF] it does not list probiotics as a recommended treatment option. In fact, it doesn’t mention probiotics at all:

Treatment of constipation is symptomatic. Available studies have concentrated on therapies with fiber and different laxatives. Whilst therapy with fiber and with laxatives has some benefit in improving the quantity and quality of BMs, there is no clear evidence regarding which laxative is superior. Treatment should be graded and should start with lifestyle and diet changes. Any medication that can cause constipation should be stopped if possible. Further steps include the use of bulk-forming agents, osmotic laxatives, and possibly pelvic floor physiotherapy. If these fail, a next step can be the use of contact laxatives, enemas, and prokinetics.

In response to my statement that in 2010, the EFSA largely rejected health claims for probiotics, you stated “the EFSA process does not define what constitutes sufficient information” and further that you “provided a link to a scientific organization’s response to the EFSA rejections” and then linked to a 2009 letter [PDF] from an organization called the International Scientific Association for Probiotics and Prebiotics. This letter appears to be a special pleading for different testing standards:

ISAPP cannot comment on the validity of specific dossiers, but wants to emphasize that a standard of reasonable levels of evidence for health benefits of foods and supplements is needed, and it is not prudent to expect a level of substantiation of efficacy of these probiotic foods, equivalent to what is required for pharmaceutical agents.

The pertinence of the 2009 document you cite to the specifics of the 2010 EFSA evaluation is unclear.

I looked closer at the ISAPP guidance related to constipation. In a 2009 consensus statement [PDF] they make the following statement with respect to probiotics and IBD:

The consensus was weaker when it came to specific symptoms such as constipation. This appears to be linked mainly to the lack of clear definitions of constipation or what constitutes a ‘normal’ transit time, leading to wide variations in inclusion criteria and outcome measures; this prevents accurate evaluations of studies involving probiotics. Furthermore, the situation is quite different between adult constipation, where some studies have shown beneficial effects of probiotics, and childhood constipation, which has a different etiology and where it seems probiotics are not clinically effective.

So the organization that you cite is on record as making a statement largely in alignment with the systematic review you criticized, which concluded,

In summary, this systematic review demonstrates that the data published to date do not yet provide sufficient scientific evidence to support a general recommendation about the use of probiotics in the treatment of functional constipation. Until such data are available, we believe that the use of probiotics for this condition should be considered investigational.

But, looking closer, the EFSA has given specific guidance to industry. The EFSA guidance on this issue makes the following points [PDF]:

• extent to which a cause and effect relationship is established between consumption of the food/constituent and claimed effect
– for the target group under the proposed conditions of use
• all of the evidence from pertinent studies weighed – overall strength, consistency & biological plausibility
• human data central for substantiation – hierarchy of evidence – design and quality of individual human studies
– studies in animals or in vitro may provide supportive evidence

In assessing each specific food/health relationship that forms the basis of a claim, the NDA Panel makes a scientific judgement on the extent to which a cause and effect relationship is established between the consumption of the food/constituent and the claimed effect (for the target group under the proposed conditions of use). All the evidence from the pertinent studies (i.e., studies from which scientific conclusions can be drawn for the substantiation of the claim) is weighed with respect to its overall strength, consistency and biological plausibility, taking into account the quality of individual studies and with particular regard to the population group for which the claim is intended and to the conditions of use proposed for the claimed effect. A grade is not assigned to the evidence. While studies in animals or in vitro may provide supportive evidence, human data are central for the substantiation of the claim. This procedure is in agreement with the hierarchy of evidence as described in the EFSA guidance. The NDA Panel considers the rationale/evidence on the biological plausibility of the claim based on the data provided by the applicant to support the substantiation of the claim.

Which is to say, efficacy statements need to be backed by relevant evidence, where the biological plausibility is a factor in the consideration.

Well, it seems I did get my comment held up in moderation with only 3 citations. Heck, I should have put in 30 citations like I wanted to do! Since it is, you know, criticism of the official bloggers – you good readers might see it come out of moderation in a week (if ever). In the meanwhile, I’ll start on the second half of that post, and try not to get this one blanked in moderation.

Just a thought, isn’t it weird how this system discourages you from providing links to support your POV? It is more likely to get your POV posted if you provide no links than it is if you provide tons of scientific evidence to back your posts via links. Wow, that just came to me.

Thank you for looking into my points of disagreement, and you were very insightful in teasing out the key points!

“You don’t agree that I referred to the 2010 systematic review by Chmielewska as a systematic review. You provided a link to the World Gastroenterology Organisation’s 2008 guideline (WGO) on prebiotics and probiotics, [PDF] which you noted “said the methods used by the review paper were scientifically invalid.”…It does state that evidence for one product is only applicable to that product. However, Chmielewska did not perform a meta-analysis of the data….”

Well, I guess it is a “systematic review” with regards to their using a system to search for references, but so would a systematic review of diseases associated with words more than 10 letters long ending in “q.”

My ONLY intent by mentioning the WGO guideline was to make the point that comparing constipation data where the treatments involved different genus and species of bacteria is not a valid comparison to assess “probiotic” efficacy because “probiotic” describes a group as heterogeneous as “antibiotic” or “analgesic.” I agree that the authors did not do a meta-analysis, but they did assume that by comparing five different organisms from five different studies that they could make the broader statements about “probiotics” in general.

So, if we break out the individual studies, Mollenburg, et al. 1994 used E. coli Nissle 1917 in adults and found a statistically significant increase in stool frequency that on average exceeded NIDDK guidelines for defining constipation in the treatment group and was well below it in the control. There was also an astonishing 90% drop in the frequency of hard stool. Koebnick, et al. 2003 used L. casei Shirota in adults and saw statistically significant improvements in the occurrence of moderate and severe constipation, degree of constipation, defecation frequency, occurrence of hard stools, and degree of stool consistency. Yang, et al. 2008 used B. lactis DN-173 010 in adults and found significant improvements in stool frequency, straining, and stool consistency. Any one of those three organisms seems to work to a greater or lesser extent in constipation.

For children, Bu, et al. 2007 used L. casei rhamnosus Lcr35 in children and found that treatment success in relieving constipation (return to >3 BM/week) was statistically significant (p<0.01), with an RR of 7! There were other statistically significant results in pain and defecation frequency. The ONLY trial that showed no benefit used L. rhamnosus GG as an adjunct to lactulose treatment (Banaszkiewicz, et al. 2005). Well, either L. rhamnosus GG doesn’t work on constipation at all, or it doesn’t add to the efficacy of lactulose.
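For readers unfamiliar with the shorthand, a relative risk (RR) of 7 just means the treated group’s rate of treatment success was seven times the control group’s. A minimal sketch with made-up numbers (NOT the actual Bu et al. data):

```python
# Hypothetical counts chosen only to illustrate how an RR of 7 arises;
# these are NOT the actual Bu et al. 2007 data.
treated_success, treated_n = 14, 20   # 70% of treated children reach >3 BM/week
control_success, control_n = 2, 20    # 10% of control children do

risk_treated = treated_success / treated_n
risk_control = control_success / control_n
relative_risk = risk_treated / risk_control

print(f"RR = {relative_risk:.1f}")  # RR = 7.0
```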

The only organism to show no efficacy was the only one used as an adjunct to a current treatment. So the problem isn’t that many of these organisms don’t work. The problem is that we don’t have a lot of data on each species.

“Furthermore, when I looked at the WGO’s own practice guideline for constipation, [PDF] it does not list probiotics as a recommended treatment option. In fact, it doesn’t mention probiotics at all:”

The WGO practice guidelines only list three categories of treatment (laxatives, enemas and prokinetics), and, of those, they only review laxatives (prokinetics are never mentioned again). My interpretation would be that this practice guideline was never meant to be a comprehensive review of potential treatments and no conclusion can be drawn from its omission other than that probiotics are not already a recommended treatment. If your point is that only EBM-proven constipation treatments can be considered, I go back to my point that there are plenty of SBM treatments worthy of additional consideration.

From your original post:
“For constipation, their effectiveness hasn’t been demonstrated though.”

But you quoted the ISAPP as saying (focusing on the part relevant to your statement):
“…Furthermore, the situation is quite different between adult constipation, where some studies have shown beneficial effects of probiotics, and childhood constipation, which has a different etiology and where it seems probiotics are not clinically effective…”

My reading was that their “have shown beneficial effects” counters your “effectiveness hasn’t been demonstrated.” I suppose you and ISAPP could have different definitions of effectiveness.

“But, looking closer, the EFSA has given specific guidance to industry.”

Cart before the horse. EFSA threw out 180 probiotic health claims in October 2009, but only 10 of those were for lack of evidence. The rest were for incomplete paperwork. The first DRAFT guidance was issued in May 2010, the presentation you linked was given during the comment period in December 2010. The FINAL draft was issued in May 2011 (http://www.efsa.europa.eu/en/efsajournal/pub/2170.htm).

From your original post:
“And little reason to recommend their use. That’s the opinion of some regulators, too. The European Food Safety Authority has largely rejected general health claims for probiotics.”

They didn’t reject health claims, they sent back incomplete paperwork. Your thesis is not supported by the evidence you presented.

“The EFSA elaborates in the 2010 consultation paper…”

That is the draft of the 2011 final guidance, and you seem to be using it as ex post facto support for dismissal of the 170 probiotic health claim applications in 2009 (probably submitted in 2007).

You brought up many interesting points that I needed to clarify. I’ve seen remarkable commentators come through SBM stating that it is their god-given right to take any dietary supplement regardless of whether there is any safety or efficacy data. But, I think a nutritional product with some data on efficacy (adequately-powered RCTs without fatal flaws) and supported by other data (epidemiology, less convincing studies, etc.) and with abundant evidence of safety, could be helpful to use. There seem to be an awful lot of other definitions for “efficacious” running around here too.

I fly across the Pacific and the collagen/OA article turns to sh&t! haha… sorry, I couldn’t resist.

But seriously – I was actually curious as to the response from my last post from you JPZ – namely the magnitude of the Bayesian prior in this case based on the deeper reading of the basic sciences work and how that applies to clinical study of collagen in OA.

I’ve read Dr. Atwood’s posts (and I’ll be the first to admit I should read them a few more times), and I believe the point of SBM is that the magnitude of said prior depends on the question you are trying to ask. And scaling something up from a minute biochemical standpoint to a full-fledged human being makes that prior… well, orders of magnitude smaller. That is the perspective of SBM.

I can feel that you are frustrated JPZ (and BTW, I have lost giant comments as well and I feel your angst there as well), especially in regards to how we define SBM. I really would recommend going through Dr. Atwood’s posts, but I’ll try and give my distillation of it (and if anyone can counter or refine that, I’d be more than welcoming).

Basically, the concept is that not enough emphasis is placed on the basic sciences when evaluating extremely dubious modalities. Because human-scale physiological processes are extremely complex and introduce unknowns in varying ways (and because of the quirky nature of statistics), things which should have no effect sometimes seem to have one. That is where the basic sciences come in, telling us firmly that physics and chemistry mean homeopathy can’t work.

However, the basic tenets of EBM still do exist and are valid. There is a reason why basic sciences is at the bottom of the evidence hierarchy – even in SBM. So while a modality with an extremely implausible basic sciences MOA can be discounted based on that, simply having a hypothetical MOA for a modality won’t even come close to validating it.

sCAMsters (and nutrition scientists, which I will intentionally leave separate here) selectively underplay or overplay the basic sciences. In most cases it is underplaying – hence Cochrane reviews stating that “more research is needed” on homeopathy. Or how it may be disproven for asthma, but “more research is needed” for teething.

Nutrition scientists tend (at least in my experience, for whatever piddle that may be worth) to overplay it. In the same way that an engineer can look at a complex machine and just “know” that a widget in that specific spot just won’t work, despite all the basic data that says it could plausibly work there, so can a physician “just know” that oral collagen simply can’t have any sort of clinically significant effect on OA. That isn’t another “way of knowing”; it is just integrating a whole slew of knowledge in different aspects of medicine to realize that the Bayesian prior is just very low, even if many can’t or don’t verbalize it that way. The data to date support that assumption. Just because there exists a theoretical pathway doesn’t mean it is actually viable in real life. I could theoretically swim back to Australia, but it just ain’t gonna happen. I would call that “implausible, or nearly impossible” even though everything necessary for me to theoretically do it truly does exist. It is the same with the collagen/OA story here. Every individual step described is scientifically feasible and plausible. But the combination of them, scaled up to a clinical level, diminishes that plausibility so dramatically that it really is essentially impossible, or at the very least highly implausible. The bench data support that view, and the clinical data do as well. And that is the Bayesian prior that SBM employs in this case.

I’m not sure if that was really clear – we may be just talking past each other a bit at this point and I apologize for that if it is the case. It was a (probably last) stab at trying to convey what I mean since my take on what you have written so far is that you have missed that distinction. It seems obvious to me, but I know enough that it may very reasonably not be obvious at all.

I think what you are describing is a “gestalt” where (as I think of it) you have reached a level of knowledge about a subject such that you see the whole of that knowledge clearly (even if incompletely). Enough parts fit together in your mind to make the sum greater than its parts, and, if you attempt to fit new knowledge into this whole, the gestalt gives you intrinsic insight as to whether it “fits” or not. I went through this when I studied immunology: suddenly, it was so much clearer than before. The beauty of it is being able to see the big picture. The downside comes from the blind spots, assumptions, and cognitive and personal biases imposed. You create them when you extrapolate across the gaps.

For collagen and OA, the Bayesian prior will likely be derived subjectively. Scott Gavura’s approach has been to dismiss any evidence that he feels does not fit his gestalt, which seems somewhat contrary to Kimball Atwood’s post on the Bayesian prior (quoting O’Hagan & Luce): “‘the whole evidence’ – not omitting relevant information (preferably a consensus that pools the knowledge of a range of experts).” Evidence seems to work on a sliding scale on SBM.

My approach has been to challenge his exclusion or overlooking of data that I feel should be included in the formulation of the Bayesian prior. I have supported my viewpoint with scientific references (as much as possible within the citation limits), and many of these have been turned aside based on opinions. I am always surprised when commentators attempt to falsify scientific evidence with “I am not impressed” or “that study is not big enough” (without a power calculation) or “that would never work” or any one of the other lovely fact-free terms bandied about on SBM. You can feel free to accuse me of falling victim to the third point in the same comment from Kimball Atwood, “‘nothing but the evidence’ – not contaminated by bias or prejudice.” But who doesn’t have bias?

Do I have enough evidence to design a collagen-OA RCT with a testable dose, viable outcome measures, effect size estimates to calculate sample size, knowledge of major confounders, and a population recruitment profile? Yes, I do. If I understand the statistics correctly, we could set a subjective Bayesian prior based on present knowledge, and then, after the completion of the study, calculate the posterior distribution and see how far it has moved (I think that is right).
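The prior-to-posterior update described above can be sketched with a beta-binomial model. Every number below is hypothetical, chosen only to show the mechanics of combining a subjective prior with trial data; it is not a claim about the actual collagen-OA evidence:

```python
# Encode a skeptical subjective prior on "response rate attributable to the
# supplement" as a Beta(a, b) distribution, then update it with the results
# of one hypothetical RCT. All numbers are made up for illustration.
a, b = 2, 18                     # skeptical prior: mean = 2/20 = 0.10
prior_mean = a / (a + b)

successes, n = 12, 60            # hypothetical trial: 12 responders out of 60
a_post = a + successes           # standard beta-binomial conjugate update
b_post = b + (n - successes)
posterior_mean = a_post / (a_post + b_post)

print(f"prior mean:     {prior_mean:.3f}")      # 0.100
print(f"posterior mean: {posterior_mean:.3f}")  # 0.175
```

The gap between the two means is one way to quantify how far a single (hypothetical) study moves a subjective prior.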

Let me pull out another one of my odd examples: If a high throughput screening system identifies a molecule that binds to a receptor that might affect OA – what’s the Bayesian prior? Almost non-existent. Do we stop because there is no chance we could get to an OA drug? What if the molecule survives the in vitro screen – only 1 in 20,000-50,000 do? Do we have enough evidence to believe it will work in humans yet? You get the picture, right? All health-related treatments have to progress from tiny amounts of data to considerably more data supporting common, widespread use. Nutritional products come with a lot of scientific information (good and bad) that has to be sorted out to understand the potential health benefits – my gestalt.

Sad to say, all I am seeing again is different people drawing different lines in the sand on the “Evidence” beach when discussing a potential, subjective Bayesian prior. Funny enough, WHERE the lines get drawn doesn’t seem to be based on “evidence” at all – just opinion about what should be “enough evidence”! LOL

I do understand your points btw. Seriously. But I think you’ve missed mine – I included your studies in said prior.

Now, let’s pretend that the clinical trials had never been done and we ONLY had the bench work you’ve cited. The prior is small, but not infinitesimal. I am not trying to claim that a clinical trial should NEVER be done for cases like this one. But that doesn’t change the fact that the prior is still small, thanks to the gestalt you speak of.

Of course a well-designed, well-powered study will show something. But we actually have some studies on it, and they are all equivocal at best. That gives us a post-test idea of what our prior actually should be, and it is even smaller than before. Correct me if I am wrong, since I also am not a statistician, but if you have a prior that is small, then do clinical studies (even not terribly great ones) and they come up really equivocal, doesn’t that demonstrate that the assumptions we made in our gestalt assessment of the size of the prior were likely correct? And that includes the notion that, mechanism aside, the average 70 kg person contains roughly 14 kg of collagen (at an estimated 20% of body mass I read somewhere). How is 2 g or even 100 g of oral collagen supposed to change that?
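The body-mass arithmetic in that last point is easy to check. Note that the 20% collagen figure is the rough estimate quoted above, not an established value:

```python
# Rough scale comparison: oral collagen dose vs. total body collagen.
# The 20% body-mass figure is the estimate quoted in the comment above.
body_mass_kg = 70
collagen_fraction = 0.20
body_collagen_g = body_mass_kg * collagen_fraction * 1000   # about 14,000 g

for dose_g in (2, 100):
    share = dose_g / body_collagen_g
    print(f"{dose_g:>3} g oral dose is {share:.4%} of total body collagen")
# a 2 g dose is roughly 0.014% of the body's collagen; even 100 g is under 1%
```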

So we look at all that evidence, including the existing (albeit poor quality, for the most part) clinical trials and RCTs, and determine that for any future clinical trial the Bayesian prior is very small. Implausible, in fact. And I might be willing to call it nearly impossible, if I were editorializing a bit.

I think if all we had were the bench work and not a single trial done, we’d be a bit more willing to say, “Hey, it’s unlikely, but it could work. Let’s give it a go.” But the trials have been done, and it all jibes. That’s all I have been trying to say.

BTW – if you watch Gorski’s talk he put up from the Florida talk he has a breakdown of the SBM/EBM thing. Not that it says much I haven’t.

“Getting into the data, if you want to look at it in a systematic way, the low plausibility, lack of pharmacokinetic data, and an absence of valid mechanism of action makes any systematic evaluation difficult, if not pointless.”

–This is the big hang-up for “SBM:” what was Lind’s basis of “plausibility?” What was Lind’s “pharmacokinetic data?” We venerate his actions because he made an astute observation, trialed it with controls for a range of biases, and evaluated his a priori hypothesis with a leading, relevant clinical outcome.

I myself am skeptical of many CAM ideas because I recognize that the body tightly regulates what nutrients pass from the gut to the bloodstream, and the body manages blood flow very well. Eating more protein does not make a wound heal more quickly, and does not make muscles bigger. Increasing blood flow by magnetism does not make a sore muscle resolve more quickly.

But there can be unlikely but real effects out there. Red yeast rice does lower cholesterol. Once one vitamin was recognized, there came the idea that there could be other substances playing the vitamin role, and the rest were discovered tout de suite. A supposed remedy should not have to clear a “biological plausibility” committee before advancing to a trial.

The weakness of earnest CAM advocates is the willingness to avoid the clinical trial test.

(The un-earnest CAM advocates have no weakness except the love of money.)

Pardon the premature post above. I was trying to point out some other studies I have not mentioned before, plus some other points of discussion.

In the end, I wanted to include the EFSA, “Scientific Opinion on the substantiation of a health claim related to collagen hydrolysate and maintenance of joints pursuant to Article 13(5) of Regulation (EC) No 1924/2006″ final report (http://www.efsa.europa.eu/en/efsajournal/pub/2291.htm) saying that there is not enough evidence right now for an Article 13(5) health claim. It is not a comprehensive review (which is the fault of the company submitting the petition), but it is not a bad quality review. I think Scott Gavura’s opinions could have referenced this EFSA decision for considerably more authoritative support. This is a “present state of knowledge” review about use of joint health claims right now, and I tend to agree with EFSA on that basis.

What you and I don’t agree about is how to view the scientific information making up this “insufficient” database when making decisions about how to move forward. I can fault your apparent biases, and you can fault my apparent overplay of minor points. I can claim that you are dismissing potentially viable products, and you can claim that I am wasting time and money on projects that will never work because you know they won’t work.

For example, I had a program that would have had a CRP-lowering nutritional product ready to launch as soon as the results of the JUPITER study were known, but the project was killed mid-stride by the medical director, who said that CRP was not yet a well-recognized risk factor for heart disease. He knew it wouldn’t work. Brilliant guy otherwise, though.

I’ll look more into Kimball Atwood’s and other posts clarifying their interpretation of the Bayesian prior. I did study stats (6 courses) in grad school, and I did the whole “Task force on reducing sample size in clinical trials” at a Pharma company. There are a lot more elegant aspects of Bayesian theory, but SBM seems poised to always get caught up in the initial, subjective determination of the Bayesian prior, where bias can most affect decision making. Also, it seems slippery here to define where you are starting, what your next step should be, and where you are going (especially the latter) with a Bayesian analysis.

Maybe this is just another case of agree to disagree.

Why is it that I hang out at SBM? Where I can add new (citable) insights to a discussion about dietary supplements, I had hoped that sharing them could enrich the discussion. But, I guess my lesser goal here has been to ask strident dietary supplement critics to review and revise their POVs when I provide facts that disprove some of their statements. The responses are generally 1) ignore the facts, 2) disbelieve the facts based on personal belief, or 3) turn the tables and expect me to prove the nutritional product in question works to their personal satisfaction (which I couldn’t care less about doing, since the author picks weak cases to begin with). I’d rather discuss the point I raised or hear how I missed the point (e.g. the EFSA decision above).

And, hey, I am always open to feedback on how I can be more effective in contributing to discussions on SBM! Thanks.

“I myself am skeptical of many CAM ideas because I recognize that the body tightly regulates what nutrients pass from the gut to the bloodstream, and the body manages blood flow very well. Eating more protein does not make a wound heal more quickly, and does not make muscles bigger. Increasing blood flow by magnetism does not make a sore muscle resolve more quickly.”

Honestly, they are good points. I most certainly do not want to be the guy that lets his biases and pre-conceived notions about something affect his interpretation of the evidence.

I just sat in on a Sigma Xi lecture on neuroscience today after meeting a guy named Stan White. Stan made a point about Bayesian ideas using cell phones as an example. In brief, the notion was that cell phones can’t cause brain damage because their radiation is non-ionizing and the output is too low to harm neurons. Then someone else demonstrated that it could harm the DNA repair proteins, which could then lead to more oxidative damage and thus cause said brain damage. I didn’t have a chance to follow up with him at that point, but my response would have been that the proteins can be replaced quite readily.

The point is that you can often come up with some sort of idea that might change a Bayesian prior – down or up (and then back down again). But my thought is that if you need to postulate an extra step like that (the protein damage) likely your prior is actually still pretty small. And a biologist like myself would readily recognize that damaging those proteins likely wouldn’t lead to much of an effect because they can be replaced (Stan is a mechanical/aerospace guy).

So what do you do in that situation? Well, since cell phones are used so much by so many people, you study it! And it has been. And no effect has been shown.

In regards to the collagen stuff, the same sort of scenario applies – so I don’t begrudge studies. But studies have been done (and I am looking at the better ones, not the crappy ones I slammed in the review) and they don’t show an effect. Same as the cell phones. So the Bayesian prior is low – that much I hope you can at least admit – and the data so far show that there is no effect. That means that either there really is no effect, the studies were all very poor and couldn’t show an effect, or the effect size is so small as to be a wash in the noise. Maybe it is option 2, but my take on the lit is not that (unless you want to claim that there is not one single study that is even half decent). So taken all together, that means the prior is indeed low. Doing a larger, more robust, higher-powered study actually might show an effect. But if that is what it takes to tease out an effect, what is the clinical utility of it? I’d say none, so why bother?

If the claim is that the way in which it is administered is simply wrong and that doing it differently could lead to a large(r) effect size, then I can’t really argue with that, except to say that what I know (and what those here much more experienced than I know as well) makes me think that is just pretty unlikely. But that also falls into the pot of stuff that does indeed get lost by the wayside. Even Gorski has written about it. A necessary by-product of doing science rigorously, efficiently, and accurately is that some stuff that is actually worthwhile will be missed. Sometimes, most certainly, due to bad assumptions on priors. But the converse is chasing every little thing and throwing good money after bad.

So alright, I’ll concede that saying “near impossible” is uncalled for editorializing (sorry Scott!) but calling it very implausible and thus not particularly worthy of extensive follow up (and most certainly not worthy of recommending it to patients) is still, IMO, quite valid.

In your example of the guy who killed your CRP product… yeah, he may have killed a great product because of biased priors. But, as science has taught us – especially in medicine – it is far more likely he killed a product that would have been a waste of time and money. Sure, maybe his reasons were not as sound as they could have been (or maybe they were, and it was still an accident that he got it right just by the odds), but there isn’t much you can do about that.

Again, I agree that it should not be recommended to patients based on the current evidence. I never set out to prove that oral collagen works in OA. In the amount of time I was willing to devote to a dietary supplement-health outcome I generally don’t find very interesting right now, I did see some studies showing efficacy in some parameters, but you appear to have invested more time and energy, so I am more than willing to revise my “might work” to “inconsistent support” and lower the Bayesian prior we perhaps would have both established by taking more and more evidence into account.

We could perhaps debate issues of study design, dose, intervention duration, etc. to clarify whether “no evidence” or “some evidence” is the better characterization of the available clinical evidence, but it really doesn’t affect the core discussion beyond a game of getting you to admit there is some evidence or your getting me to admit the evidence is crap. Since we both seem to readily admit when we are wrong, that discussion would REALLY be nitpicking and a waste of our analytical skills. Also, I am not and have never been interested in trying to prove it works, I expressed my dismay to Scott with his using flawed standards of reference and ignorance of testing methodology to evaluate data – it turns out Scott was saved from being a scientist by an actual lack of proof for collagen working for OA. By contrast, the actual points that I cited and would defend regarding DHA, etc. have not received one comment. Come on, let the skeptics look at Karin’s study I cited – THAT one I ACTUALLY cited and would defend based on MY conclusion of efficacy.

As for the CRP dietary supplement, the JUPITER trial showed that CRP lowering was a causative risk factor in CVD. There was a lot of compelling epidemiological, animal study, and mechanistic data leading up to that clinical trial. In industry, if you want to place a product on the market with sound data supported by a government/academic study, you need to do a risk:benefit analysis years ahead of time based on limited science. He got it wrong because he wanted a sure thing.

To me, the core discussion here is what SBM is about. I understand taking health-practices to task for having no scientific evidence to support them other than vague and flawed trials. I understand taking Cochrane and NCCAM to task for not dismissing health-practices with no basis in science (somehow I would love to get them to comment on “pogo-stick jumping on carrots” as a treatment paradigm for XYZ – “further studies are needed with greater randomization of carrot cultivars and pogo-stick brands” LOL).

What I don’t understand is the default dismissal on SBM of any case where there is some but incomplete evidence. I sort of understand it when everyone scoffs at an uninformed and strident individual making unsupported comments about how horrible and dismissive the staff and commentators are being (I still think it is mean), but that isn’t always the case. If I cite references showing that someone is using the wrong standard for evaluation, or that they have misrepresented the regulatory standard, or that they didn’t take into account other data (and I back all of it up with citations – up to two a post), I have to say that the staff actually does appear to scoff at these things or ignore them.

And, for the newcomers reading this post – I don’t want you to think that I got dismissed once and got angry or something – I am resigned, not at all angry. Here are the most recent examples: Citing three well-controlled, large epidemiological studies to counter the vitamins and mortality discussion (*crickets chirp*). Pointing out conveniently overlooked information from the paper about hoodia and weight loss (*crickets chirp*). Pointing out that the dismissal of probiotics and constipation was based on false premises (got a dismissal response followed by another false premise response that ignored the previous posts). And this collagen thread, where I focused on pain study methodology and absolutist comments, but got saddled with defending collagen-OA (lots of comments, but they were focused on me defending something I didn’t seek to defend).

I am still amazed that this whole collagen-OA thing got turned around on me to defend when I was criticizing bad reasoning on Scott’s part – it is probably my fault for posting articles to counter a claim of “impossible” without realizing these would be turned back on me as though I said “it is a miracle cure!” Once again, micheleinmichigan’s wisdom comes through saying that commentators aren’t arguing with me, they are replaying arguments they have had hundreds of times before. Many people talk about miracle cures in my world, so I can understand the frustration with them.

Honestly, I didn’t follow this website to watch people virtually sit around and fist-bump one another over how well they dissed some “sCAMer.” But, while I have looked for it, I haven’t seen much if any open-minded debate (other than among a very small subset of the commentators) beyond how those horribly wrong sCAMer opinions are killing people (well, some are, in all honesty). Actually, it all strikes me more as a sermon or prayer meeting to decry “sCAMsters” (which seems REALLY weird to me considering that this is apparently a collection of skeptics). And the thing about firing up the faithful is that you can encourage them to worship false gods (e.g. FoxNews does this well). Rather than encouraging them to make decisions informed by science, you encourage them to make decisions informed by their hatred of CAM. Be honest with yourself, pastor, before you answer.

And this leads us back to the Bayesian prior. If you can’t accept and evaluate all of the data, you cannot establish the prior. SBM is great at pointing out that the prior is almost non-existent for unscientific disciplines – and it is! But where there is “some” science, the constant preaching about the non-existent data becomes a “faith-based” rejection of that “some” data.

I imagine I will be dismissed as some quack sCAMster trying to sell you snake oil to blind you to the truth of nasty, nasty nutritional products. I suppose that is the easy answer, and the easy path to a clean conscience. I am just going to continue to learn from informative debates, and share my insights on the portion of the nutritional products industry where good science is being done (though just not here). I spoke with Marion Nestle (Food Politics book) last year about my blog and my enjoyment of her work. Her reply was to tell me that all nutritional product companies need to be shut down, and, if I had anything nice to say about them, I must be a shill. I encourage anyone who agrees with her statement to buy and read her books, because I have (no really, I love her stuff!).

I may stop in to read nybgrus’ reply and occasionally catch up on some of the interesting subjects here, but I really don’t see how I can do much to honestly debate viewpoints or educate the audience with actual science. It has been amazingly educational to hear everyone’s comments, but you need to know when to hold ‘em and when to fold ‘em.

I’m honestly saddened by your response and feelings right now JPZ. I think that was a bit of a compliment my way at the very end, so I thank you for that.

As I have said before, I don’t think we are too far off in our thinking. I can’t help but think that the issue at hand is one of a basic-sciences viewpoint versus a clinical-medicine viewpoint, but maybe I am just pigeonholed into that thinking.

I’m happy to try and give a more thorough response tomorrow, if you would care to read it, but it is very late and I have had a very nice dinner and, um, a few glasses of wine so I simply can’t tonight.

@JPZ, I had a longer comment basically agreeing with some of your criticism and having to do with the variability of the direction of the discussion, which I lost due to my iPad turning off (Grrr). Maybe I’ll try again later; I can’t face rewriting it now.

But I’ll give you a summary. Having read the blog for a while, I noticed an ebb and flow in the willingness of commenters and even other writers to question the finer points of arguments presented here. Now there seems to be a more ‘with us or against us’ tone. At other times there has been a more ‘even though I’m against SCAMs too, I’ll question a poor argument’ tone. Possibly, if you check back in a few months, the trend will have changed. Possibly you should send some of your evidence-minded nutritional science friends over to sway the trend.

Currently, I’d welcome a little more non-troll diversity of opinion. I’ve never been much of a choral fan.

In a different direction: I don’t want to dispute your overall point, but I do think it’s not quite fair to include the hoodia discussion in your list. Your comment there did not come across as needing a response from HH. It seemed more like an observation of the behind-the-scenes thought process on the hoodia research, with some possible next steps. HH’s article was clearly focused on consumers using hoodia today: the quality of the product, its side-effect and environmental profiles. You appeared to agree with her conclusion that she wouldn’t recommend taking hoodia supplements for weight loss today. I don’t think that HH concluded that hoodia was implausible for weight loss or that it was a dead-end research avenue (unless my memory is wrong).

You are correct, HH’s article was a clear and well-written discussion of the evidence, and my posts there essentially agreed with her conclusions. My point there was that the cited study was not a total fait accompli. There was the potential for some of the Unilever internal data mentioned to be published, and there were other scientifically valid lines of evidence to be pursued. If HH’s intent was to place her well-informed opinion about hoodia on SBM for later cross-reference, she COULD have included potential open topics, but she was not misinforming the readership by leaving them out. Still, it is, to a much lesser extent, an example of how SBM currently defaults to “no evidence” when there is still a “glass half-full” scientific point to be made.

I still plan to lurk a little to pick up on topics, but I doubt I will post much if at all. If any topic with valid scientific support but not 2 RCTs with n=300+ and p<0.001 on all outcome measures is "no evidence," "implausible," or "impossible," then that approach is unscientific. If false premises and faulty reasoning can't be challenged by citations, then that approach is unscientific. I am a scientist.

@nybgrus

Sorry for the late reply; I was in the hospital for an allergy I didn't know I had. My previous statement was a compliment. You are among the few I have met here who are willing to engage in the kind of scientific discussion I hoped to find here. I think HH beats you out for tolerance in talking to trolls, though. Perhaps some of my point of view is a bias on my part toward basic science, but I don't think basic science can be dismissed summarily. I have also run clinical research programs for years, and clinically testing a premarket product involves using pilot studies, animal studies and mechanistic research to inform the major outcomes. If those are not worthy of discussion, then there would be no clinical data. The SBM approach works great for throwing out non-scientific practices, but it really can't seem to handle anything (from a positive perspective, at least) that isn't ready for a clinical practice guideline. Sometimes I wonder what the ratio of positive to negative comments might be here. Maybe that is one source of the negativity.

Keep engaging commentators the way you have. In my opinion, it is really a MAJOR contribution to the quality of scientific discussion here. If you need to reach me, my blog linked to my JPZ handle here is probably the easiest.

I am glad you brought this topic up. Another therapy being used for joint pain by orthopedists is “platelet-rich plasma injections.” In this therapy, they draw the patient’s own blood, spin it down to obtain platelet-rich plasma, and then combine that plasma with a small amount of anticoagulant and B12. This is then injected directly into the joint, with three treatments given one week apart.

While eating collagen doesn’t have anything to do with vitamin C deficiency, nobody here mentioned that vitamin C deficiency is not the only factor in improper collagen synthesis. The amino acids lysine and proline are involved too, and the first one is essential — that is, its body status depends on dietary intake.

Thus, increasing dietary collagen will raise lysine intake and may account for better collagen synthesis. Some parts of lysine metabolism also seem to diminish with age.

Obvious supplement choices are therefore vitamin C and lysine/collagen, especially because they are not toxic even in gram doses and are very cheap (collagen can be made at home).