In the very first sentence of the introduction, the authors state that “There is an ongoing international debate as to the necessary length of mammalian toxicity studies in relation to the consumption of genetically modified (GM) plants including regular metabolic analyses (Séralini et al., 2011).” I find it interesting that Seralini cites himself as proof of this… I did not look up the reference or search to see if this is an actual international debate, or if it is simply Seralini vs the world on this point. But I digress. A reasonable person would certainly agree that long-term studies of food products sound like a good idea, and so it is easy to side with the authors that this type of research is needed.

But if we compare the life span of rats with the life span of humans, the concept of “long term” is not at all similar. And this is where I think the Seralini study falls apart. It boils down to the fact that this study lasted for 2 years, and used Sprague-Dawley rats. To those of us who don’t do rat studies, 2 years probably seems like a reasonable “long term” duration for a study (it did to me at first glance). However, it seems that for the specific line of rats they chose (Sprague-Dawley), 2 years may be an exceptionally long time.

A 1979 paper by Suzuki et al. published in the Journal of Cancer Research and Clinical Oncology looked at the spontaneous appearance of endocrine tumors in this particular line of rats. Spontaneous appearance basically means the authors didn’t apply any treatments (like feeding them GMOs or herbicides). They just watched the rats for 2 years and observed what happened in otherwise healthy rats. When the study was terminated at 2 years (the same duration as the Seralini study), a whopping 86% of male and 72% of female rats had developed tumors.

Below I provide the results of a very basic simulation using R. I’ve also provided the R code in case anyone would like to repeat or modify this little exercise (the output of each command is shown after the code). Let’s assume that the Suzuki et al. (1979) paper is correct, and 72% of female Sprague-Dawley rats develop tumors after 2 years, even if no treatments are applied. If we randomly choose 10,000 rats, each with a 72% chance of having a tumor after 2 years, we can be pretty certain that approximately 72% of the rats we selected will develop a tumor by the end of 2 years.

## Create a sample of 10,000 female rats. Each rat we choose
## has a 72% chance of developing a tumor after 2 years.
SD.Female <- sample(c(0, 1), 10000, replace = TRUE, prob = c(0.28, 0.72))
## The mean of this vector of 0s and 1s tells us the
## proportion of rats that developed tumors, by chance.
## 0 = no tumor; 1 = tumor
mean(SD.Female)
[1] 0.714

In our very large sample of 10,000 simulated rats, we found that 71.4% of them will develop tumors by the end of a 2-year study. That’s pretty close to 72%. But here is where sample size becomes so critically important. If we only select 10 female rats, the chances of finding exactly 72% of them with tumors are much lower. In fact, there is a pretty good chance the percentage of 10 rats developing tumors could be MUCH different from the population mean of 72%. This is because there is a greater chance that our small sample of 10 will not be representative of the larger population.
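Just how different can be quantified with the binomial distribution. This quick sketch is my own addition to the exercise, using base R’s dbinom and pbinom:

```r
## With 10 rats and a 72% tumor probability, the chance of seeing
## each possible number of tumors (0 through 10):
round(dbinom(0:10, size = 10, prob = 0.72), 3)

## Probability that a group of 10 lands outside 6-8 tumors,
## i.e. fewer than 6 or more than 8 tumors out of 10:
pbinom(5, size = 10, prob = 0.72) + (1 - pbinom(8, size = 10, prob = 0.72))
```

That tail probability works out to roughly 0.19, so about one group in five will land outside the 60–80% range purely by chance.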

UPDATE: 9/20/2012 – See the comment from Luis below for a more elegant way to set up the 9 groups. It also allows you to more easily change the probabilities (only one time, instead of 9) if you want to see the impact if the probability of tumors is 50 or 80% instead of 72%. Thanks Luis!
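For anyone who wants to reproduce the randomization itself, here is one way to draw the 9 groups in R. This is a sketch of my own; the group labels Control and t1–t8 follow the naming used in this post, and the counts will differ from run to run:

```r
## One possible randomization: 9 groups (columns) of 10 female rats,
## each rat having a 72% chance of developing a tumor after 2 years.
p.tumor <- 0.72
groups  <- c("Control", paste0("t", 1:8))
sim <- sapply(groups, function(g)
  sample(c(0, 1), 10, replace = TRUE, prob = c(1 - p.tumor, p.tumor)))

colSums(sim)  ## tumors in each group
sum(sim)      ## total tumors across all 90 rats
```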

The 9 groups (in columns) of 10 rats each represent one possible randomization of the rats used in the Seralini study. Let’s assume that “Control” is the control group, “t1” is the first treatment group, and so on. If we look at all 90 simulated female rats chosen for the experiment, 62 rats (about 69%) would develop tumors after 2 years, even if no treatments were applied. Again, that’s not too far away from our known population mean of 72%.

But here’s the important part: simply by chance, if we draw 10 rats from a population in which 72% get tumors after 2 years, we get anywhere from 5 (“t2”) to 10 (“t1”) rats in a treatment group that will develop tumors. Simply due to chance; not due to treatments. If I did not know about this predisposition for developing tumors in Sprague-Dawley rats, and I were comparing these treatment groups, I might be inclined to say that there is indeed a difference between treatment 1 and treatment 2. Only 5 animals developed tumors in treatment 2, and all 10 animals developed tumors in treatment 1; that seems pretty convincing. But again, in this case, it was purely due to chance.
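We can also ask how often an extreme spread like 5-versus-10 shows up when no treatment does anything at all. The sketch below is my own extension of the exercise, repeating the whole 9-group experiment many times with base R’s rbinom and replicate:

```r
## Repeat the 9-group experiment (10 rats per group, p = 0.72) many
## times, and count how often at least one group has 5 or fewer tumors
## while another group has all 10 -- with no treatment effects at all.
p.tumor <- 0.72
n.sims  <- 10000
extreme <- replicate(n.sims, {
  counts <- rbinom(9, size = 10, prob = p.tumor)  ## tumors per group
  min(counts) <= 5 && max(counts) == 10
})
mean(extreme)  ## proportion of experiments with a 5-vs-10 spread
```

Even this worst-case spread turns up in a noticeable fraction of the simulated experiments, which is exactly the problem with only 10 rats per group.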

So my conclusion is that this study is flawed due to the choice of Sprague-Dawley rats, and the duration (2 years) for which the study was conducted. Sprague-Dawley rats appear to have a high probability of health problems after 2 years. And when there is a high probability of health problems, there is a high probability that just by chance you will find differences between treatments, especially if your sample size for each treatment is only 10 individuals.

The possible explanations are legion, but with several different kinds of estrogen receptors with different actions in different tissues, compounds that block a receptor at one concentration but activate it at another, compounds that interact with different kinds of hormone receptors in different ways, and differential effects in different species–it’s no wonder the results with mixtures are themselves so mixed. The one thing that doesn’t leap out here as being involved, among a sea of likely possibilities, is the GM corn itself.

UPDATE: October 4 – The European Food Safety Authority (EFSA) has released a statement on the Seralini study. Their conclusion (emphasis mine):

EFSA notes that the Séralini et al. (2012) study has unclear objectives and is inadequately reported in the publication, with many key details of the design, conduct and analysis being omitted. Without such details it is impossible to give weight to the results. Conclusions cannot be drawn on the difference in tumour incidence between the treatment groups on the basis of the design, the analysis and the results as reported in the Séralini et al. (2012) publication. In particular, Séralini et al. (2012) draw conclusions on the incidence of tumours based on 10 rats per treatment per sex which is an insufficient number of animals to distinguish between specific treatment effects and chance occurrences of tumours in rats. Considering that the study as reported in the Séralini et al. (2012) publication is of inadequate design, analysis and reporting, EFSA finds that it is of insufficient scientific quality for safety assessment.

and:

Séralini et al. (2012) draw conclusions on the incidence of tumours based on 10 rats per treatment per sex. This falls considerably short of the 50 rats per treatment per sex as recommended in the relevant international guidelines on carcinogenicity testing (i.e. OECD 451 and OECD 453). Given the spontaneous occurrence of tumours in Sprague-Dawley rats, the low number of rats reported in the Séralini et al. (2012) publication is insufficient to distinguish between specific treatment effects and chance occurrences of tumours in rats.

Comments

And for those who have questioned whether the peer-review panel should have even accepted the Seralini study for publication in the journal Food and Chemical Toxicology…

Quote from the journal Nature, 25 September 2012: “José Domingo, a toxicologist at Rovira i Virgili University in Reus, Spain, and a managing editor of Food and Chemical Toxicology, says that the study raised no red flags during peer review. Domingo, who last year authored a critical review of safety assessments of GM plants, has previously complained about the lack of independent feeding studies of GM foods.”

I feel that the comments above put the “statistics and probability only criteria” comments in perspective.
———————————————
The bottom line is that this paper was reviewed and accepted by a prestigious scientific journal. If there is “real scientific concern” concerning any point in the paper, the person(s) concerned can submit a “Letter to the Editor”:
“Letters to the Editor will be limited to comments on contributions already published in the journal; if a letter is accepted, a response (for simultaneous publication) will be invited from the authors of the original contribution. All Letters to the Editor should be submitted to the Editor in Chief, A. Wallace Hayes at the following address: awallacehayes@comcast.net.”
The above quote is from: http://www.elsevier.com/wps/find/journaldescription.cws_home/237/authorinstructions#N10BB9

(Please note the number of signatures in the link.)
———————————————
To those who do not understand: our understanding of Nature generally proceeds in steps, where each step results in a published, peer-reviewed scientific paper that is then extended or refuted by other scientists, whose research is in turn submitted to scientific review for publication. Below are links to two 2012 published, peer-reviewed scientific papers on another aspect of the health effects of glyphosate use.

I think the point brought up by Jefferson Santos about using the same strain as the Monsanto paper is totally valid. Also, it is strange to say that “Sprague-Dawley rats appear to have a high probability of health problems after 2 years.” If you consider the expected life span of these rats (~2 years), it would be as if one said “humans appear to have a high probability of health problems after 70 years”.

ALSO when Seralini’s research is attacked as inadequate, it is useful to compare it to the data and research on which regulatory approvals of GMOs are being made. This document by the NGO CBAN does exactly that in a Canadian context, and what it shows is startling.

Seralini’s recently published research was on Monsanto’s GM Roundup-Ready corn NK603, and studied over a two-year period the impacts of consumption of this GM corn on rats, with and without Monsanto’s Roundup herbicide, and the impacts of Roundup alone.

Monsanto, by point of comparison, published a 13 week “safety assurance study” with rats fed NK603 corn, but this wasn’t until some 4 years after the Canadian regulators (Health Canada) had already approved Monsanto’s NK603 GM corn for human consumption. (Remember the health problems that Seralini’s study highlighted only emerged after 13 weeks.)

Canada’s regulators have not conducted any tests on NK603, or on any other GM food. In 2001, Health Canada approved Monsanto’s GM NK603 for human consumption, based on a data package submitted by Monsanto. This data is not accessible to the public or to the wider scientific community, so nobody knows or can comment with certainty on its contents.

Health Canada did publish a 3-page summary of their 2001 decision and this makes no reference to a feeding trial, but “does refer to a gavaging study (typically a few days long), in which mice were force-fed a high dose of the single purified protein coded for by the modified Roundup Ready gene”, i.e. it didn’t involve the actual GM corn.

As CBAN also notes, the Royal Society of Canada’s 2001 Expert Panel on the Future of Food Biotechnology found with regard to the data behind Health Canada’s decisions to approve GM foods, “there is no means of independent evaluation of either the quality of the data or the statistical validity of the experimental design used to collect those data.” Although the Royal Society’s Expert Panel, set up at the request of the Canadian Government, made a series of recommendations for improving GM regulatory decision making, these have all been ignored.

Pretending it’s a matter of chance or probability doesn’t prove anything.
IT DOES NOT PROVE THAT TUMORS WERE NOT RELATED to the products.

What do we do then ?

Will we devise new experiments, with more precise and sound conditions? That would be standard science…

Or will we just yell, saying the experiment is bad, bad, bad… implying that gmos are safe ?

Because that’s the point, now.

Or maybe the question of gmos safety is closed, thanks to Monsanto and friends’ experiments ?
By the way, have you checked these the same way you did with this one ?

I’d bet you’ve not been able to do so, because their data are trade SECRET… and more: these non-existing data have been “used” to justify GMO safety (that’s how the European authority EFSA works: it relies on PARTIAL DATA to approve, or not, a product).
GM producers should be FORCED to show their data, but the system is entirely corrupted already. Public institutions rely on nothing to devise public policies.

Did you check all the Monsanto data concerning this GMO (or any other) ?

What is the most important: challenging ONE experiment that argues against, or protesting the FACT that authorizations have been given WITHOUT GOOD SCIENCE BEHIND THEM?

The GMO issue is not just a matter of ‘science’. It has much wider political, economic and environmental dimensions too.

Pretending it’s only a matter of science, when this science is entirely under the control of those who sell the product (because of patents and trade secrets), is JUNK thinking at best, corrupted thinking at worst.

It is quite a leap to think that the tumor numbers from the 1979 Suzuki study are directly comparable to the numbers in the Seralini study. Do you suppose rats have a glowing digital Number of Tumors readout over their heads? Any tumor counts are entirely dependent on the criteria for what counts as a tumor and the methodology used to find them.

Actually not a leap at all, when you bother to do some research/understand how rats are bred for use in scientific research. The breed used in this paper is not a sample of wild rats – it’s a specific strain that was developed (through inbreeding) specifically so that each rat was almost completely genetically identical to the other rats from the same strain. Their near genetic identity is very valuable in scientific research where your aim is to elucidate information about specific genes in regards to risk for disease, etc. The fact that the rats are identical (nearly) means that the effects that may pop up in the study can be accurately linked to genes that may be responsible for the outcome you are observing. So as far as your comment about these rats not being “directly comparable to the ones” in the 1979 Suzuki study – well you are clearly misinformed. These rats are not only “directly comparable” but are actually nearly genetically IDENTICAL to those rats – they were specifically designed and bred to accomplish this. So yes, in this case, these rats may as well have a “glowing digital Number of Tumors readout over their heads.”

AY, you either didn’t read my comment very well or are suffering from some seriously fuzzy thinking, because your comment did not address mine at all. My comment was not about the rats’ comparability; it was about the numbers’ comparability, about HOW such numbers are determined. It does not matter if the rats were clones; my comment still applies. You utterly and completely missed my point.