Researchers handed the media a flawed paper, but forbade any consultation with outside experts.

Very few members of the public get their information directly from scientists or the publications they write. Instead, most of us rely on accounts in the media, which means reporters play a key role in highlighting and filtering science for the public. And, through embargoed material, press releases, and personal appeals, journals and institutions vie for press attention as a route to capturing the public's imagination.

This system doesn't always work smoothly. Just this year, we've seen a university promote a crazed theory of everything and researchers and journals combine to rewrite the history of science in order to promote their new results. But these unfortunate events are relatively minor compared to a completely cynical manipulation of the press that happened last week.

In this case, the offenders appear to be the scientists themselves. After getting a study published that raised questions about the safety of genetically modified organisms (GMOs), the researchers provided advance copies only to press outlets that signed an agreement barring them from consulting outside experts. A live press conference and the first wave of coverage appeared before those experts could weigh in, and many of them found the study to be seriously flawed.

Science journalism and the embargo system

Each week, reporters around the world get a jump on the scientific community. Nearly a week before the new editions of major journals are released, the press gets a chance to download many of the papers that will appear within them. That access is predicated on a simple agreement: nobody runs any news stories about the contents until after a date and time set by the journal. This embargo system is why key scientific findings tend to appear everywhere at the same time, with hundreds of similar stories published within minutes of each other.

When it works well, the embargo system provides a valuable sanity check on media hype. While preparing their stories, reporters are allowed to share the papers with relevant experts and scientists with contrary opinions, who can warn the public about the possibility of over-interpretations or shaky data (provided they, too, agree not to publicize the results early).

This system has its flaws—embargoes can be capricious or get broken, and the sources can be selective about who they allow to access the papers. But it can provide what's essentially an additional level of peer review before the results are set out before the public.

Manipulating the system

The important checks provided by that system have now been systematically undermined by a group of French researchers, primarily at the University of Caen. The researchers managed to get a paper published in the journal Food and Chemical Toxicology. Their paper examined the long-term health of rats fed a diet supplemented with either the herbicide Roundup or a crop engineered to tolerate high levels of Roundup. In their study, the researchers claim to have found that both the herbicide and the GMO crop reduced the lifespan of the rats and caused a high incidence of tumors.

People with relevant expertise, when given a chance to look at the study, found significant and systematic flaws with both the experimental approach and the data. Presumably, this is precisely why these scientists went out of their way to avoid giving them the chance to look.

At the blog Embargo Watch, Ivan Oransky has tracked how the researchers controlled access to their paper. Any journalist who wanted an advance copy was required to sign a non-disclosure agreement before receiving one. That agreement prohibited the outlet from sharing the results with any outside experts before the embargo lifted. In other words, if a press outlet wanted to be one of the first to cover the story, it would have to run the story without having any experts sanity-check the paper.

The manipulation didn't end there. The embargo lifted during a live press conference held by the researchers in London, in cooperation with the Sustainable Food Trust. The SFT conveniently had a press release prepared, one claiming that the research was "supported by independent research organization, CRIIGEN." However, the release neglected to note that the paper's lead author, Gilles-Éric Séralini, is on the CRIIGEN board.

Problems with the research

We looked into the study immediately after it began appearing in a variety of outlets. While we didn't have the sort of expertise in toxicology needed to critique some of the details, a few things stood out. First, there appeared to be no dose sensitivity for either Roundup or the level of GMO food provided—they saw the same effects at any of the doses they tested. In addition, the GMO food produced the exact same effects as the Roundup, something the authors didn't provide a reasonable explanation for.

But these problems were only the beginning. As more critical reports began to appear and scientist/bloggers looked at the results, major issues became clear. The authors used a strain of rats that is prone to tumors late in life. Every single experimental condition was compared to a single control group of only 10 rats, and some of the experimental groups were actually healthier than the controls. The authors didn't use a standard statistical analysis to determine whether any of the experimental groups had significantly different health problems. And so on.
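To see why a 10-rat control group is such a problem, here is a rough illustration with made-up counts (they are not figures from the paper): with only 10 animals per group, even a doubling of tumor incidence typically fails a standard significance test. A minimal sketch using a hand-rolled two-sided Fisher's exact test:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)
    def p_table(k):
        # Hypergeometric probability of k successes in the first row
        return comb(col1, k) * comb(n - col1, row1 - k) / denom
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs + 1e-12)

# Hypothetical counts, NOT from the paper: tumors in 6 of 10 treated
# rats vs 3 of 10 controls -- a doubling of incidence.
p = fisher_two_sided(6, 4, 3, 7)
print(round(p, 2))  # ~0.37, nowhere near the usual 0.05 threshold
```

With groups this small, only an enormous effect could reach significance, which is why comparisons against a 10-rat control carry so little weight.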

The experts who weighed in were dismissive. One called the work "a statistical fishing trip" while another said the lack of proper controls meant "these results are of no value." One report quoted a scientist at UC Davis as saying, "There is very little scientific credibility to this paper. The flaws in the test are just incredible to me."

However, by this point, the promotion of the paper already had its desired effect. Both the European Union and French governments were asking their food safety organizations to look into the results, which may have implications for France's attempt to ban GM crops.

Ethical failures all around

If the previous examples we've covered have been about scientists who went too far in promoting their work, at least in those cases the goal appears to have been publicity. In this case, the researchers clearly seem to be focused on achieving political ends, namely a ban on the use of genetically modified crops. All indications are that they performed sloppy science, presented it as indicating something it doesn't, and then knowingly manipulated the press coverage of their work, all in order to ensure the paper had an outsized impact in the public sphere.

In that, apparently, they were unknowingly aided by the peer reviewers who allowed the article to be published. Food and Chemical Toxicology is a small, specialized journal, but there's little excuse for its reviewers not to have demanded a basic statistical analysis of these results. (We've reached out to the journal's Editor-in-Chief, but hadn't received a response at publication time.)

But the use of a nondisclosure agreement to limit press coverage should have set off alarm bells within the press. As Ivan Oransky put it in his Embargo Watch report, this threatens to turn reporters into little more than stenographers, copying down only what researchers want them to say, and performing no independent evaluations. Journalist Carl Zimmer, on his blog, calls out the AFP and Reuters (Oransky works for a different group within Reuters) for having agreed to the conditions in the first place. He writes, "This is a rancid, corrupt way to report about science."

In this case, however, the press wasn't reporting about science at all. It was simply being used as a tool for political ends.

Promoted Comments

There are several articles about the French GMO paper in Le Nouvel Observateur magazine. The following article answers some of the concerns and explains the reasons behind some of the choices made:

1) The number of rats, 200 rats in 10 groups of 20 rats each, is exactly the same as the numbers chosen in Monsanto's own 90-day study, except that the CRIIGEN studied many more parameters than Monsanto. If this number was a real problem, then every single Monsanto GMO should be taken off the market immediately, because it's on the basis of similar numbers that these GMOs were authorized.

/sigh, no, you design the trial according to the number of parameters you want to test and de-convolute. If you're doing a proper ANOVA, more parameters means a lot more animals. Seriously, there's experiment design software out there to help you do this, because it can be rather complicated.
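A back-of-the-envelope version of that design arithmetic, using the standard two-proportion sample-size formula (the tumor rates here are hypothetical, not values from the paper), shows how quickly the required group size grows:

```python
from math import sqrt, ceil

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    """Approximate animals per group needed to distinguish incidence p1
    from p2 at alpha = 0.05 (two-sided) with 80% power."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * pbar * (1 - pbar))
           + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical: reliably detecting a jump from 30% to 60% tumor incidence
# already needs ~40+ animals per group -- four times the 10 actually used,
# and that's for a single endpoint, before correcting for many parameters.
print(n_per_group(0.30, 0.60))
```

Testing more parameters only makes this worse, since multiple-comparison corrections tighten the effective significance threshold and push the required group sizes higher still.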

Chimel31 wrote:

2) Same for the type of rats, which are used worldwide for toxicology studies, and have the advantage of being very uniform (profile, weight) and biologically and physically stable. These rats are indeed prone to developing tumors, but there were about twice as many tumors among the rats eating GM corn as in the control group.

Twice the number means nothing without proper stats to determine whether this was significant. That's it. Really. End of story. Given the fact these rats are prone to tumours and were kept well into old age, the background noise is going to be a pain. This could have been mitigated by choosing a different strain or, indeed, a different species.

Chimel31 wrote:

Several more concerns were answered. The raw and processed data of the study will be made public over the next months, so the data can be reprocessed according to whatever statistical methods you choose.

Really not good enough. It's bad enough when companies sit on data; it's equally reprehensible when supposedly independent research groups do it. All the papers I ever published had raw data freely accessible from day one; anything less would be poor science indeed.

The rest of what you say is rather irrelevant or, indeed, points to the authors' bias. I agree Monsanto are to be condemned for the way they handle science too, but it has no bearing on this paper, or the actions of its authors, which are questionable at best.

Here is a counter-response from the GM Watch group: http://www.gmwatch.org/latest-listing/51-2012/14217

Including:

1) The rats were the same breed as in Monsanto's trial.

2) The size of the control group met standard practice: "The experimental groups were 20 animals [10 male + 10 female] and therefore the control group should be 20 animals."

3) An expert statistician was part of the research team and this was certainly not a "fishing trip". Significance in many liver and kidney parameters is shown and highlighted in Tables 1 and 2.

4) We are not dealing here with a regular poison effect where increasing the dose will increase toxic effects. [...] hormonal system disturbances [...] are known to display nonlinear responses. See Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses, Vandenberg et al., 2012.

The radio show in the link from al.truisme (thanks for your own blog!) had a person from CRIIGEN saying that the peer-reviewed journal scrutinized the experiments for 8 months, so it looks like the study passed all peer-review criteria, including the statistical methods used, and there were no shortcomings in their minds, or they wouldn't have published the paper.

The varieties of maize used in this study were the R-tolerant NK603 (Monsanto Corp., USA), and its nearest isogenic non-transgenic control. These two types of maize were grown under similar normal conditions, in the same location, spaced at a sufficient distance to avoid cross-contamination. The genetic nature, as well as the purity of the GM seeds and harvested material, was confirmed by qPCR analysis of DNA samples.

#1 is not a positive, due to the difference in the length of the trials.

#2 is a lie -- whether 10 or 20, the control group is absurdly small, and fails to provide enough data points.

#3 is a gross distortion... the use of a poorly-described 'multiple regression' instead of the standard ANOVA without sufficient explanation is a red flag. The statistical expert appears to be an expert in fudging the numbers.

#4 is a hybrid of special pleading and moving the goal posts.

#3 is a gross distortion... the use of poorly-described 'multiple regression' instead of the standard ANOVA without sufficient explanation is a red flag. The statistical expert appears to be an expert in fudging the numbers.

Especially this. No matter how you cut it, there's little to no justification for the stats they've used. I suspect the results aren't significant when analysed properly, if indeed that's possible with the sample size used. (NB: the appropriate sample size is dictated by the size of effect you would like to detect. There is no "standard" as such.)
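For what it's worth, the standard check is not hard to run. A minimal one-way ANOVA sketch, computing the F statistic for a control plus three dose groups (the organ-weight numbers are invented for illustration, nothing from the paper):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across several groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variation: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    # Within-group variation: scatter of individuals around their own group mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented organ weights for a control and three dose groups:
groups = [
    [4.1, 3.9, 4.0, 4.2, 3.8],  # control
    [4.0, 4.1, 3.9, 4.3, 4.0],  # low dose
    [4.2, 3.8, 4.1, 4.0, 3.9],  # mid dose
    [4.1, 4.0, 4.2, 3.9, 4.1],  # high dose
]
f = one_way_anova_f(groups)
print(round(f, 2))  # a small F means no evidence the group means differ
```

You'd then compare the F statistic against the F distribution with (k-1, n-k) degrees of freedom. The point is simply that this analysis is routine, which makes its absence from the paper conspicuous.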

There are several articles about the French GMO paper in Le Nouvel Observateur magazine. The following article answers some of the concerns and explains the reasons behind some of the choices made:

1) The number of rats, 200 rats in 10 groups of 20 rats each, is exactly the same as the numbers chosen in Monsanto's own 90-day study, except that the CRIIGEN studied many more parameters than Monsanto. If this number was a real problem, then every single Monsanto GMO should be taken off the market immediately, because it's on the basis of similar numbers that these GMOs were authorized.

2) Same for the type of rats, which are used worldwide for toxicology studies, and have the advantage of being very uniform (profile, weight) and biologically and physically stable. These rats are indeed prone to developing tumors, but there were about twice as many tumors among the rats eating GM corn as in the control group. The rats that ate the most GM corn had about the same number of tumors as the control group, but they started getting them in the 4th month, mostly between the 11th and 12th months, compared to the 23rd-24th month for the control group.

3) The rats' diet is similar to the one used by Monsanto, except for the 0-11-22-33% of GM corn it contained.

Several more concerns were answered. The raw and processed data of the study will be made public over the next months, so the data can be reprocessed according to whatever statistical methods you choose. Séralini apparently had to get court orders to obtain the Monsanto studies submitted to the EU for this same GM variety of corn. European deputy Corinne Lepage is now pushing for these studies and their raw data to be made public.

The study cost €3.2M, financed by the Swiss foundation Charles Léopold Mayer and several French hypermarket chains. It was conducted in complete secrecy, with the scientists using only strongly encrypted emails and no phone calls, and even building a mock-up honeypot study. This paranoia and secrecy were justified by past experiences of other scientists who had their studies' publication cancelled after pressure from Monsanto. A book by Séralini ("Tous cobayes !", "Everybody a Guinea Pig!") about this study is published today, and the movie based on the book will also be shown in Paris starting today (http://touscobayes.tumblr.com/). There is even a "making of" video about the study. Expect more media coverage now that the temporary embargo has been removed or is about to be.

I would also like to point out that the 3 pictures of the large rat tumors are found well inside the paper, not displayed as prominently as in the articles about the paper, so this part of the manipulation seems to have been performed by the press in this case, not by the authors of the paper.

I'm sorry, but that's a stupid argument. People have irrational fears of things all the time, and highlighting something a lot of people are irrational about is not always a good idea. Think of the proposals to label cell phones with the level of radio waves you can expect to receive a few inches from your head. There is zero reason to think that this electromagnetic radiation does anything to you, and plenty of good reasons to think it does nothing, but huge numbers of people are convinced that they're allergic to Wi-Fi and that cell phones cause cancer.

The argument about "labeling" GMO food sounds like the idea of mandating that cell phones be labeled by how much EMF you get from holding them. It's a label that means absolutely nothing in scientific terms but plays directly into people's irrational fears. A similar baseless scare turns people away from vaccinations, with a demonstrably harmful effect on the population at large through the loss of herd immunity.

There can be good arguments for labeling foods, but the "You have nothing to fear if it's really safe!" line completely ignores the existing levels of crazy bullshit floating around, as demonstrated in this very thread, where people confuse Roundup Ready corn for Bt corn and worry about it rotting their guts if eaten fresh.

I agree that there are a lot of people who don't know WTF they're talking about who happen to be against GMOs, and I also agree the entire controversy seems tailor-made for the tin-foil hat crowd. On the flip side, however, there are a lot of people so used to standing up for science and reason against feelings and brainwashing that they seem to automatically come down on the side of promoting GMOs as safe, proven science just because GMOs are the result of science, and because they responded to an argument against GMOs that may not have been well-constructed, or was fruit hanging too low to pass up. But this is hardly a black-and-white issue with tons of irrefutable data going back millennia, i.e. this is not every climatologist on the planet vs. the Heartland Institute. This is pretty much Monsanto and the people in their employ generating (what may as well be) ALL the data (and ALL the implied bias) on the subject and bringing that nuke to the slap-fight that those with little or no funding whatsoever showed up to ask questions at. If all the facts are in favor of GMOs at this point, I suspect it's only because Monsanto has not allowed other facts to be generated so far. They're surely not going to fund anything that has the potential to hurt their profits.

Now, I'm not a genetic engineer, but I did stay at a Motel 6 last night, and with respect I have to disagree that wanting to know more about the safety of GMOs constitutes "irrational fear". Specifically, I want fully-accredited scientists who are not funded or linked in any way to Monsanto, Cargill, BASF, Bayer, or Aventis to do long-term studies and issue properly peer (unbiased peers, please) reviewed publications. Then I can avoid GMOs just to boycott Monsanto. Because we all do have access to what is officially known as a British Butt-Ton of facts regarding Monsanto and their business practices, not to mention the hubris of attempting to PATENT genes and living things, sentient or not (again, so far).

I find it impossible to believe anyone could not have serious questions about anything such a company promotes if they had read even just the Wikipedia article about Monsanto - and that would be the most favorable mention outside of their own press releases. If one could define a company as evil, Monsanto fits that bill perfectly. To assert they are guilty of corporate wrongdoing is like saying Zyklon-B isn't good for you (or that I make pretty good segues), because Monsanto has partnered with IG Farben since 1967 (yes, I know I mentioned Aventis above).

Edgar Monsanto Queeny wrote:

"I recognized my two selves: a crusading idealist and a cold, granitic believer in the law of the jungle." ~ Edgar Monsanto Queeny, Monsanto chairman, 1943-63, "The Spirit of Enterprise", 1934.

^FUCK that guy! PCBs, Aspartame (and the 167 reasons to avoid it according to the NIH in 1991!), and Agent fucking Orange. Dude! WTF?!? Google Monsanto and just look at the negative reactions from, well, Earth. Everyfuckingbody can't be a conspiracy theorist. Wheels, I've actually been on here a long time, and even out of all the regulars here whose writings I typically admire, you are the only one I can't recall ever making a statement I disagree with. I really do consider you to be one of the hallmarks of logic and reasoned debate here. Don't let the unproven science of GMOs or the defense of ZOMG Monsanto (of all that's evil in this world) be the exception(s). I don't automatically assume GMOs in general are bad just because Monsanto is the leading proponent, but that is a good enough reason not to automatically assume they're safe and commercially benign. Srsly. Monsanto's bad, mmmkay? Not kidding. They suck.

This is pretty much Monsanto and the people in their employ generating (what may as well be) ALL the data (and ALL the implied bias) on the subject and bringing that nuke to the slap-fight that those with little or no funding whatsoever showed up to ask questions at. If all the facts are in favor of GMOs at this point, I suspect it's only because Monsanto has not allowed other facts to be generated so far.

Whoa, whoa. Monsanto are a proper pain in the arse, but they are not the only people working on GM. Some very well respected university labs look at GMOs, with a mix of funding sources.

Moreover, GM is in reality an extension of the bacterial genetic engineering we've been doing, safely, for decades. There's no good reason to suspect the technology suddenly becomes dangerous because we put it in plants. The real risk is in the environmental impact, and the questionable nature of pursuing crop monocultures. Even then the risk varies on a case by case basis.

Now, I'm not a genetic engineer, but I did stay at a Motel 6 last night, and with respect I have to disagree that wanting to know more about the safety of GMOs constitutes "irrational fear". Specifically, I want fully-accredited scientists who are not funded or linked in any way to Monsanto, Cargill, BASF, Bayer, or Aventis to do long-term studies and issue properly peer (unbiased peers, please) reviewed publications.

Just to add, scientists aren't accredited by any official independent body. We gain our Ph.D.s from a university, and then that's it. Incidentally, I've nothing to do with any of those biotech companies, or plants for that matter (which are boring, boring). My research area was parasitology and now covers antibody applications. (FWIW, which is little to be honest, as my saying so has no bearing on the facts, nullius in verba and all that...) Moreover, you do demonstrate why companies aren't too keen to label food as GM. You've conflated labelling "this contains GM" with safety information; the two are not equivalent. While I support your right as a consumer to know what's in your food (and boycott as you see fit), merely stating contents says absolutely nothing about safety.

Granted, but that just expands the pool of raw material you have to work with. You're still shaping the genome to suit your goals. DNA is DNA; it makes no difference whether a gene is from a related plant, a bacterium, or an octopus that glows in the dark, what matters is what it does once introduced into the target genome.

The problem is that you don't know what side effects such a modification will cause in future generations of the same species, and of totally different species that interact with it, because such a modification was never available in nature to begin with.

Ever heard of a little thing called mutation? Any gene can pop up in almost any organism, given enough time. The "problem" you speak of exists with every new variation. Of anything.

igor.levicki wrote:

When it comes to GMO some of you people are so arrogant and confident as if the whole meaning of every possible genome on the planet was decoded and mapped out and as if making new species by combining whatever comes to mind is a game where caution and consideration for long term effects on eco-system can be safely ignored. This is not some strategy game on your computer where you can load your last save if you fuck up.

What kind of attitude is "we don't have to prove it is safe, we should just let it out of the lab"?

It reeks of arrogance, selfishness, and it is governed by corporate greed for profit and desire for control and power.

Whereas your attitude is "don't do anything new, regardless of how likely or unlikely the possible negative consequences". If everybody thought as you, one in three women would still be dying in childbirth.

Nobody says that care should not be taken with introducing new variations of plants or animals. What I object to is people like you, for whom no proof will ever be good enough. It reeks of Luddism, or mindless and hypocritical worship of the status quo.

igor.levicki wrote:

Deamon wrote:

Mother Nature didn't go "hmmm, need to keep these genes separate, better put them in separate species that can't crossbreed". Your argument amounts to "it can't happen in nature, so it should never be done."

And your argument is "regardless of what is possible in nature, if we can make it happen, let's do it for the sake of science"?

Is that so you can prove human superiority? Beat your chest and say "we can do anything we want"? Or maybe it is some other reason but I am pretty sure you are not ending world hunger with that shit.

Trying out new things is how we learn things. What does whether it happens by chance (which is what nature is) have to do with human endeavor? Human history is one long story of us fighting back against the way man was "meant to live". Which is to say brutal, violent, and short. Our nature is to kill anyone who isn't family or part of our tribe. We prove our human superiority every day by curbing our violent nature enough so we can live in groups of more than 50 people without smashing each other's heads in. Why should the fact that random chance hasn't spat out a certain variation yet matter to us?

And the cause of world hunger is poverty, not a lack of food.

igor.levicki wrote:

Are you telling me that there is no reason for species to be separate? At all? That they have randomly developed as they are now? Or maybe you don't have a good answer to that question? If you don't, why are you trying to change anything before understanding it fully? That is what I am trying to understand.

It is you that does not understand it. Not only is there indeed no reason for species to be separate, there is in fact no such thing as separate species, and there never was. The notion of "species" is a consequence of the human method of making sense of the world by giving everything a label and putting them in categories. Reality is not so neat and tidy. There are only individual organisms. Groups of organisms may have a sufficiently similar genome that viable reproduction is possible, but it's a continuum, not a discrete distribution with clear boundaries. All living organisms share a common ancestor. There is no separation between species, just degrees of relation.

They developed through evolution, duh. And while natural selection is not random, the variation that natural selection acts upon for the most part is. So yeah, the form nature takes today is mostly chance; if you reran the planet from a few billion years ago, you'd end up with something completely different.

On a side note, trying to change what you don't fully understand is how you learn more about it. Fully understood stuff is boring.

igor.levicki wrote:

Deamon wrote:

There's still no reason to assume that a gene from a fish is more dangerous than one from another plant, though. Would you still say it's unnatural and dangerous if the same gene showed up in a strawberry plant through random mutation?

There is no proof that it isn't more dangerous. Since I am not the one making it but I am asked to consume it as if it were a natural product, the burden of proof should be on those who make it, no?

You are only interested in immediate effects such as toxicity -- if it doesn't kill you or make you sick it must be good, right? I am sorry, but I totally don't agree with that. There is a much wider picture out there. What happens if that gene gets transferred to other plants by cross-pollination in successive generations? What if it mutates and transfers to, say, a common weed (or any other plant whose growth is hard to control) and in doing so becomes toxic to, say, the domestic bee? Can you imagine the consequences? Can you prove it cannot happen?

Cross-pollination? You get a hybrid. Duh. Or if you're only dealing with a single-gene difference, you get a mix of GM plants and "normal" ones. You know, just like every other crop variation in the history of agriculture. You've given no reason why genes from GM variations are somehow more dangerous.

Mutations? Again, can happen to anything with DNA. In fact, most food crops we cultivate today are the result of mutations that would be negative in nature, but happen to make the plant more useful for humans. And they are random. There is nothing in a gene inserted through GE that makes it more likely to mutate, mutate into something harmful, or transfer that mutated gene to other plants. Nothing.

You can never prove a negative; that's no reason to sit on your backside and never do anything. And the truth is, even if you could, you'd just pull another "danger" out of your ass. You demand absolute proof of absolute safety, knowing full well that's impossible. It's scaremongering, pure and simple, no different from the panic over cell phone radiation or terrorist threats. Except the latter has actually happened.

igor.levicki wrote:

A study paid by Monsanto? The one that says GMO is safe?

Pro-something study means that it was manipulated exactly like this one, only it happens much more often and it is never reported on in this fashion.

So you are saying that no study that didn't say GMO is dangerous went beyond 90 days? Despite the counterexamples given by several posters in this very thread?

igor.levicki wrote:

Deamon wrote:

No, it is because, all too often, intelligence becomes nothing more than a means to be wrong with confidence. Smart people are very good at cherrypicking facts that confirm their existing biases.

I think you got it all wrong. Intelligent people are those who are more aware of how little they know -- intelligent person can never be as confident as an ignorant one (source).

If only that were the case. Sadly, we are all prone to confirmation bias, and being intelligent is no guarantee of using that intelligence properly. That is why the scientific method took so long to get off the ground; its emphasis on constant doubt, on considering all knowledge as provisional, and on constantly trying to prove yourself and everyone else wrong is profoundly counter to the way our brains work.

igor.levicki wrote:

Deamon wrote:

His notion of species sounds like it's from a creationist's handbook, like the "created kinds" that were made by God separately and should forever remain separate.

It is not important who or what made them that way. It is important to understand why before trying to change it.

There is no "why". There is a "how", but nature has no reasons for anything, except in the purely statistical sense that an individual with variation A has a better chance to reproduce than one with variation B. And as I mentioned before, the distinction between species is an artifact of human thought processes, not something that exists in reality.

This is pretty much Monsanto and the people in their employ generating (what may as well be) ALL the data (and ALL the implied bias) on the subject and bringing that nuke to the slap-fight that those with little or no funding whatsoever showed up to ask questions at. If all the facts are in favor of GMOs at this point, I suspect it's only because Monsanto has not allowed other facts to be generated so far.

That's nuts. That's conspiracy thinking. Monsanto may be a big and powerful organization of scumbags out to make a buck, but they do not control everything related to GMOs. Even as influential as they are, it's extremely unlikely that the only reason GMO studies tend to turn up null when looking for health effects is that Monsanto has their strings connected to everything.

This kind of accusation is of a piece with those who dismiss all climate science just because the government funds so much of it. It's not about the strength of the evidence accumulated so far (as you asserted); it's about the realistic chances of one evil organization manipulating everything throughout the world, especially all the scientists. Go back and read anything I've written in response to people who assert that all climate scientists are corrupt or incompetent, promoting a hoax and keeping mum about it just to get more government bucks, and see how much of it would have to apply to any argument about Monsanto controlling everything to do with GMO crops.

Why do you think this is a special case? You had better have a damn good reason besides what you've given so far, which is essentially "Monsanto's a bunch of jerks!" Extraordinary claims require extraordinary evidence.

Quote:

I don't automatically assume GMOs in general are bad just because Monsanto is the leading proponent, but that is a good enough reason not to automatically assume they're safe and commercially benign.

Except that acceptance of their safety seems to rest on the inability of researchers to consistently find anything wrong with them under proper experimental conditions. I have not reviewed the literature on this very much, but so far there does not seem to be a clear and present danger despite decades of studying the issue. At this point it's looking more and more like the "ZOMG CELLPHONE RADIATION!" scare than anything else.

This raises the question of whether the groups were large enough to distinguish a real effect from random fluctuations, particularly given the large number of comparisons.

Quote:

2) The size of control group met the standard practice “The experimental groups were 20 animals [10 male + 10 female] and therefore the control group should be 20 animals”.

The suggestion that the size of the control group should be determined based on "standard practice" is either ignorant or deceptive. There is no "standard" control group size that is appropriate for all study designs. The size of the control group should be determined by an appropriate statistical power analysis which takes into account the experimental design, including the number of comparisons, and the statistical method to be used.
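To make this concrete, here's a quick sketch of such a power analysis in Python (using scipy; the 1 SD effect size, 5% alpha, and 80% power target are illustrative assumptions for a two-sample t test, not numbers taken from the paper):

```python
import numpy as np
from scipy import stats

def two_sample_t_power(d, n, alpha=0.05):
    """Power of a two-sided, two-sample t test with n animals per group
    and a true group difference of d standard deviations."""
    df = 2 * n - 2
    nc = d * np.sqrt(n / 2)                 # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)  # critical value under the null
    # Probability the noncentral t statistic lands in the rejection region.
    return (1 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

# With only 10 animals per group, a full 1 SD effect is detected ~56% of the time.
print(round(two_sample_t_power(1.0, 10), 2))

# Smallest group size reaching 80% power for the same effect.
n = 2
while two_sample_t_power(1.0, n) < 0.80:
    n += 1
print(n)  # ~17 animals per group
```

The point isn't these particular numbers; it's that the required group size falls out of the design and the target power, not out of "standard practice".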

Quote:

3) An expert statistician was part of the research team and this was certainly not a "fishing trip". Significance in many liver and kidney parameters are shown and highlighted in the Tables 1 and 2.

Standard ANOVA statistical tests with correction for multiple comparisons are not provided. There is no discussion or citations provided to demonstrate that the highly nonstandard method used corrects appropriately for multiple comparisons for this specific study design.

Quote:

4) We are not dealing here with a regular poison effect where increasing the dose will increase toxic effects. [...] hormonal system disturbances [...] are known to display nonlinear [dose responses]; see Hormones and Endocrine Disrupting Chemicals: Low Dose Effects and Nonmonotonic Dose Responses, Vandenberg et al. 2012

This is disingenuous. Yes, dose-effect curves can saturate such that above a certain level, further increase in dose does not further increase harmful effects, and it is not impossible that eating a high dose of Roundup could somehow be more healthy than eating a low dose. Nevertheless, there should be a dose-effect curve; harmful effects should vanish at a sufficiently low dose. Failure to observe this tends to suggest artifact, such as false positives from incorrect statistics or (possibly unconscious) bias due to the lack of proper blinding of the groups.

Safe or not, the problem with GMOs is that the companies making them then get absolute control of the food industry, to the point of demanding royalties from farmers who never willingly used GM seeds but whose crops were contaminated via pollen.

Could an organic farmer with a "contaminated" field sue the company making the GM seed for ruining his organic crop?

3) An expert statistician was part of the research team and this was certainly not a "fishing trip". Significance in many liver and kidney parameters are shown and highlighted in the Tables 1 and 2.

#3 is a gross distortion... the use of poorly-described 'multiple regression' instead of the standard ANOVA without sufficient explanation is a red flag. The statistical expert appears to be an expert in fudging the numbers.

ANOVA is a statistical tool derived from gambling statistics which applies poorly outside of that world. It assumes a normal distribution of probable outcomes, and can't handle other distributions. It uses the null hypothesis to make black and white decisions in a world that is only in multiple shades of color. It forces researchers to shoehorn their experiments into this highly unnatural mathematical paradigm, rather than intuitively exploring the world. And when it actually works all you get is a correlation between events, never the cause.

Statistics is the very last thing to consider in an experiment, to put some error bars around your conclusions. But most of the scientific world is now innumerate, and place this feeble mathematical tool above all reason. They design their experiments around a mathematical model which they don't understand and doesn't apply to what they are doing, and trust results from a program they did not write or test more than their own intuition.

We are going to have to agree to disagree, big time. And I'm guessing the experimental application of statistics isn't *your* strong point either:

tfernsle wrote:

[ANOVA] when it actually works all you get is a correlation between events, never the cause.

Is a caveat that applies to *all* statistical analyses, even the highly questionable ones used in this paper. I lament the statistical knowledge of many a biologist but most are aware of the foregoing. BTW if you're worried about normality reducing the suitability/power of ANOVA, well guess what? You can check that. Q-Q plot anyone?
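For instance, checking normality is a one-liner (a hypothetical sketch; the data here are simulated, not from any study -- `probplot` computes the quantile pairs a Q-Q plot draws, and Shapiro-Wilk gives a formal test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_data = rng.normal(loc=50.0, scale=5.0, size=200)  # plausibly normal
skewed_data = rng.exponential(scale=5.0, size=200)       # clearly not normal

# Shapiro-Wilk: a small p-value is evidence against normality.
w1, p_normal = stats.shapiro(normal_data)
w2, p_skewed = stats.shapiro(skewed_data)
print(round(p_normal, 3), round(p_skewed, 8))

# probplot returns the (theoretical, ordered) quantiles a Q-Q plot would draw;
# a correlation r near 1 means the points hug the diagonal, i.e. roughly normal.
(osm, osr), (slope, intercept, r) = stats.probplot(normal_data, dist="norm")
print(round(r, 3))
```

So "ANOVA assumes normality" is not a gotcha; it's an assumption you can, and should, check.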

“... the protocol [and] the statistical analysis traditionally used by a number of petitioners, including Monsanto, show certain weaknesses that make it impossible to conclude with sufficient certainty that there are no health and environmental risks associated with GMOs. These weaknesses are now largely accepted outside the HCB”

“... the protocol [and] the statistical analysis traditionally used by a number of petitioners, including Monsanto, show certain weaknesses that make it impossible to conclude with sufficient certainty that there are no health and environmental risks associated with GMOs. These weaknesses are now largely accepted outside the HCB”

Amusing that you should recycle that quote; let's have a look at a little more of that paper:

"The most fundamental point to bear in mind from the outset is that a sample size of 10 for biochemical parameters measured two times in 90 days is largely insufficient to ensure an acceptable degree of power to the statistical analysis performed and presented by Monsanto. For example, concerning the statistical power in a t test at 5%, with the comparison of 2 samples of 10 rats, there is 44% chance to miss a significant effect of 1 standard deviation (SD; power 56%). In this case to have a power of 80% would necessitate a sample size of 17 rats."

Hmm, 10 rats not enough? Wise words indeed. Tu quoque? I think so. I can only wonder why they didn't think about power in their most recent paper... Honestly, we get the fact Monsanto do not do a good job of the science either (perhaps purposefully so), but that doesn't make similar behaviour ok just because someone you support performed the work.

Amusing that you should recycle that quote; let's have a look at a little more of that paper:

"The most fundamental point to bear in mind from the outset is that a sample size of 10 for biochemical parameters measured two times in 90 days is largely insufficient to ensure an acceptable degree of power to the statistical analysis performed and presented by Monsanto. For example, concerning the statistical power in a t test at 5%, with the comparison of 2 samples of 10 rats, there is 44% chance to miss a significant effect of 1 standard deviation (SD; power 56%). In this case to have a power of 80% would necessitate a sample size of 17 rats."

Hmm, 10 rats not enough? Wise words indeed. Tu quoque? I think so. I can only wonder why they didn't think about power in their most recent paper... Honestly, we get the fact Monsanto do not do a good job of the science either (perhaps purposefully so), but that doesn't make similar behaviour ok just because someone you support performed the work.

How many thoughtful people such as yourself are calling Monsanto's actions the same as they call Seralini's? Is there an asymmetry in action if not in mind?

The Seralini et al. paper calls for more study, but Monsanto's flawed approach was given a pass by so many and thus put their products on the plates of millions. Perhaps the asymmetrical imbalance should be tipped the other way?

How many thoughtful people such as yourself are calling Monsanto's actions the same as they call Seralini's? Is there an asymmetry in action if not in mind?

I'm a scientist; that makes me and my colleagues professional cynics. Lots of scientists take a dim view of Monsanto; you can see letters to journals on the subject fairly regularly. The same can be said for pharmaceutical industry studies. Furthermore, the whole point of publishing our work is so other scientists can replicate and *critique* the science. A paper in a journal is the bare minimum to get our attention, but that's all it should buy; acceptance takes time and weight of evidence.

It's a pity that tenure-track pressure, commercialisation and politicisation poison the well; too many are tempted to rush or perform science by press release these days (which is not science at all IMPO; I'm looking at you, Craig Venter). I'm unsure if better lines of communication (in recent times) make the issue worse, though it certainly feels like it. Pick up almost any big journal these days and you'll see plenty of hand-wringing editorials on the subject.

ANOVA is a statistical tool derived from gambling statistics which applies poorly outside of that world. It assumes a normal distribution of probable outcomes, and can't handle other distributions. It uses the null hypothesis to make black and white decisions in a world that is only in multiple shades of color. It forces researchers to shoehorn their experiments into this highly unnatural mathematical paradigm, rather than intuitively exploring the world. And when it actually works all you get is a correlation between events, never the cause.

Statistics is the very last thing to consider in an experiment, to put some error bars around your conclusions. But most of the scientific world is now innumerate, and place this feeble mathematical tool above all reason. They design their experiments around a mathematical model which they don't understand and doesn't apply to what they are doing, and trust results from a program they did not write or test more than their own intuition.

This is confused on several levels. ANOVA is not "derived from gambling" but from fundamental mathematical principles of probability and statistics (although in introductory courses, it is often illustrated with examples from gambling, which tends to be familiar to students). It is true that ANOVA assumes a normal distribution. This is the case because (contrary to what you believe) many real world processes are observed to follow a normal distribution. This is understandable based upon the fundamental mathematics of probability. In particular, the Central Limit Theorem proves that means of random variables will tend to approach a normal distribution when the number of samples is large enough, even if the individual variables are not normally distributed.
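A quick numerical illustration of the Central Limit Theorem at work (toy data, nothing to do with the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A strongly non-normal distribution: exponential (theoretical skewness = 2).
raw = rng.exponential(scale=1.0, size=100_000)

# Means of samples of 100 drawn from that same skewed distribution.
means = rng.exponential(scale=1.0, size=(10_000, 100)).mean(axis=1)

print(round(float(stats.skew(raw)), 2))    # close to 2: heavily skewed
print(round(float(stats.skew(means)), 2))  # ~0.2: the means are nearly normal
```

The individual draws are nowhere near normal, but their means already are -- which is exactly why normal-theory tests work on so many real-world measurements.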

The null hypothesis is used to answer a very fundamental question: given two sets of conditions, is there any difference at all? This black-and-white question is merely the starting point for addressing the "shades of gray" in the real world--there is no point in discussing how much two "shades of grey" differ if you don't have any evidence that they differ at all. And from a statistical point of view it is easier (i.e. requires a smaller sample size) to show that there is some difference than to determine how much difference there is.

Nevertheless, one can fall into error by assuming a normal distribution. There are in fact statistical tests (known as "nonparametric") that make no assumptions about the statistical distribution.

Of course, not knowing the statistical distribution introduces additional uncertainty, which means that studies using nonparametric statistical tests have lower power, and for that reason require even larger numbers of animals/subjects than tests that assume a normal distribution. If the size of the groups is too small to reliably detect a statistical difference using ANOVA, then it will also be too small to detect a difference using a nonparametric test.
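The power cost is easy to see by simulation (an illustrative sketch; the 1 SD shift and groups of 10 are assumptions chosen to mirror the discussion, not the paper's actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, d, sims = 10, 1.0, 4000   # 10 per group, true effect of 1 SD

t_hits = mw_hits = 0
for _ in range(sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)   # "treated" group shifted by d
    if stats.ttest_ind(a, b).pvalue < 0.05:
        t_hits += 1             # parametric t test detects the effect
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < 0.05:
        mw_hits += 1            # nonparametric Mann-Whitney U detects it

# On normal data the t test detects the effect slightly more often.
print(t_hits / sims, mw_hits / sims)
```

Both rejection rates hover around 55%, with the nonparametric test a bit behind; either way, 10 animals per group misses a full 1 SD effect nearly half the time.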

I am probably naive, not being a scientist, but what is the importance of statistics in this particular study for the big results? Say, if all the females in the 11% GMO diet group developed tumors while only half of them developed tumors in the control group, it does mean that GMOs in this concentration cause twice as many tumors, right? Statistics can only interpret the raw results, give us precise figures, and confirm whether or not the results from the other 22-33% GMO diet groups follow the same pattern, but it seems to me that the trend is visible to all regardless of which method you use.

As such, the health concerns are real, or, if you are still denying the results, it looks like they warrant more such independent long-term studies, this time with the proper funding so you can have, say, 100 rats in each group instead of 20, using proper statistical methods, etc. At least one good result of this study is that Monsanto will not be able to shut down or discredit future studies and the scientists who perform them as they did for dozens of years.

Séralini only got €3.2M from one foundation and 2 supermarket brands for this private 2-year study, let's hope we can get a better public study with full transparency.

I recommend watching the Genetic Roulette documentary. It raises some serious questions about global health that should really be answered by more studies, like the relationship between GMOs and auto-immune diseases, autism, etc. I won't say that all the evil in the world comes from GMOs, but the explosion of health disorders across the U.S. is more likely to come from our food than from the air we breathe, and yet we don't dare investigate this because of corporate pressure, when it's literally our lives we're talking about.

I am probably naive, not being a scientist, but what is the importance of statistics in this particular study for the big results?

Absolutely critical. Basically, you need the statistics to tell you whether the Roundup or GMO maize did anything at all, or whether the results are entirely determined by chance.

Quote:

Say, if all the females in the 11% GMO diet group developed tumors while only half of them developed tumors in the control group, it does mean that GMOs in this concentration cause twice as many tumors, right?

Wrong. Let's take two groups of randomly selected animals and do nothing at all to them. It is almost certain that one group will have more cancers than the other, because development of cancer, whether spontaneously or chemically induced, is influenced by random factors. Particularly with small groups, there is some probability that one group will have twice as many cancers as the other, just by chance. You can make this probability larger by breaking down your cancers by type--brain cancer, stomach cancer, etc. If you look at 10 different organs, you can increase the likelihood that one group will have twice as many cancers in some organ as the other group--purely by chance--by as much as 10-fold. The more different measures you look at, the greater the likelihood that you will see a big difference in one or more that is totally due to chance.

This is what the statistics calculates: given the study design, how likely is it that the differences observed are entirely due to chance, and have nothing to do with the treatment you gave them (feed, in this case)? The likelihood of big differences arising by chance in a complex study can be surprisingly high. This is what "statistically significant" means: that a calculation has been done to show that differences between the groups as large as those that were observed (a "false positive") are unlikely to have arisen by chance. The general minimal standard for this is less than 1 chance in 20 that differences that great could have arisen by chance (less than 1 in 100 is preferred). There is a standard accepted method for doing this calculation, called ANOVA, and it is quite suspicious if a paper of this sort does not provide the results of doing this calculation; it strongly suggests that the authors are trying to conceal the fact that there is a strong chance that the GMO/Roundup diet did nothing, and that the apparent excess cancers are entirely due to chance.
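That intuition is easy to check with a toy simulation (hypothetical numbers; 20 measured parameters and 10 rats per group are chosen purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_params, n_rats, sims = 20, 10, 2000   # 20 parameters, 10 rats per group

false_any = bonf_any = 0
for _ in range(sims):
    # Both groups drawn from the SAME distribution: any "effect" is pure chance.
    control = rng.normal(0.0, 1.0, (n_params, n_rats))
    treated = rng.normal(0.0, 1.0, (n_params, n_rats))
    pvals = stats.ttest_ind(control, treated, axis=1).pvalue
    if (pvals < 0.05).any():
        false_any += 1                   # at least one spurious "significant" parameter
    if (pvals < 0.05 / n_params).any():
        bonf_any += 1                    # same check after Bonferroni correction

print(false_any / sims)  # roughly 1 - 0.95**20, i.e. ~64% of studies "find" something
print(bonf_any / sims)   # back down to roughly 5% after correcting
```

With 20 uncorrected comparisons, about two-thirds of completely null studies show at least one "significant" difference, which is why uncorrected significance claims across many liver and kidney parameters mean so little.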

This is confused on several levels. ANOVA is not "derived from gambling" but from fundamental mathematical principles of probability and statistics (although in introductory courses, it is often illustrated with examples from gambling, which tends to be familiar to students).

ANOVA compares standard deviations and means of samples, methods from probability theory which originated in the 18th century and was largely motivated by an interest in gambling. These methods were then expanded to other suitable stochastic processes, in particular particle physics and finance. In the early 20th century Gosset's use of the t-test to characterize raw materials for brewing inspired Fisher to develop/popularize applying probability theory to biological statistics in the form of hypothesis testing, the basis of ANOVA.

Sample mean and standard deviation make a lot of sense in the context of gambling; you've got your dice or coin designed for equal probability, thrown in a way on a surface that does not favor any one outcome. Life is exactly the opposite of that, it seeks out the optimal circumstances to favor itself. When we take a bunch of monkeys out of the forest, stick them in cages and apply ANOVA analysis to them, we're assuming the most unnatural thing that just happened to them was the green dye we injected into their dogfood. When you feed a group of chihuahuas and great danes the mean of the food the dogs need, you get fat chihuahuas and starving danes. The widespread use of ANOVA has led us to perceive dogs as interchangeable, cages as homes, and neighbors as statistical data points. It prevents us from seeing the uniqueness of individuals and the briefness of time. It promotes the idea that a mean even exists, that there is such a thing as normal.

Statistics has its usefulness, you want to know how many people are coming to dinner. And probability has its usefulness, you'd like to know how accurate your statistics were, and approximately how hungry your guests will be. But if you ignore individuality, things like allergies and dietary preferences, you may have a disastrous meal. Designing experiments to fit a mathematical model is to place theory above empiricism, it is to invite people to dinner without discussing the meal.

This is very frustrating; we still need several independent long-term studies since this one is so flawed. I'd rather have the scientists look at fewer parameters (they said they analyzed more parameters than the Monsanto 90-day study did) but use a statistically adequate number of rats. Now we're back to square one: it looks like there could be something wrong with GMOs, but this study cannot answer that question.

I wish John Timmer would have made that clear in his article. Thank goodness there are so many smart people commenting on Ars, and idiots like me not even aware they are showing their incompetence from the stupid questions they ask...

Really, tfernsle, I disagree with so much of what you say I struggle for words. I'll give you this, though: it's a nice bit of sophistry to try and undermine statistics by conflating it with gambling. The origins of statistical treatments are many and varied, including cups of tea:

Does that mean they can't be applied outside of their original scenarios? No of course not.

tfernsle wrote:

Statistics has its usefulness, you want to know how many people are coming to dinner. And probability has its usefulness, you'd like to know how accurate your statistics were, and approximately how hungry your guests will be. But if you ignore individuality, things like allergies and dietary preferences, you may have a disastrous meal. Designing experiments to fit a mathematical model is to place theory above empiricism, it is to invite people to dinner without discussing the meal.

Not really, no. Biological data is inherently variable; how do you tell, for example, what a person's predisposition to an allergy is without stats? Metaphorical dice are rolled every time a genome is formed; without large datasets and a means of sorting out messy numbers, biology would be in a lot of trouble. In your example, say you're catering at a concert: you want to know if it's worthwhile putting something hypoallergenic on the menu and, if so, how much to order. How do you do it? By extrapolating from public health data, of course: statistics.

Even relatively uniform animals of the same strain will have enough differences to alter their response to a given drug/treatment or even old age. Simply put without statistics you've not a hope in hell of confidently determining whether an effect is real or not. (Which is a pity really because biology would be laughably simple otherwise.)

I've got further bad news for you: electrons... we only probably know where they are. Never look at quantum mechanics; even Einstein was initially disappointed.

Sample mean and standard deviation make a lot of sense in the context of gambling; you've got your dice or coin designed for equal probability, thrown in a way on a surface that does not favor any one outcome. Life is exactly the opposite of that, it seeks out the optimal circumstances to favor itself.

And one of the things that comes out of both mathematical analysis and practical observation is that mean and standard deviation are useful measures over a broad range of conditions in which there is variation due to random or chaotic factors (which is pretty much everywhere in the real world), even when the conditions you cite are not met. For example (since you seem stuck on gambling), it is also possible to characterize the behavior of a weighted die, or a die thrown on an irregular surface, using mean and standard deviation.
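A toy sketch of exactly that (the weighting is hypothetical, picked just to make the die obviously unfair):

```python
import numpy as np

rng = np.random.default_rng(3)

# A die weighted to land on 6 half the time -- nothing like a fair coin or die.
faces = np.arange(1, 7)
probs = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])

rolls = rng.choice(faces, size=100_000, p=probs)

# Theoretical mean: sum(face * prob) = 4.5; sd = sqrt(23.5 - 4.5**2) ~ 1.80.
print(round(float(rolls.mean()), 2))  # ~4.5
print(round(float(rolls.std()), 2))   # ~1.8 -- stable, repeatable, informative
```

The die is as "unequal-probability" as it gets, yet its mean and standard deviation are perfectly well defined and perfectly useful for predicting its behavior.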

Quote:

When we take a bunch of monkeys out of the forest, stick them in cages and apply ANOVA analysis to them, we're assuming the most unnatural thing that just happened to them was the green dye we injected into their dogfood.

I don't know anybody in science who makes such a ridiculous assumption. Use of ANOVA has nothing whatsoever to do with "natural" or "unnatural" (which are, in any case, imprecise terms that have no useful meaning in science).

Quote:

When you feed a group of chihuahuas and great danes the mean of the food the dogs need, you get fat chihuahuas and starving danes. The widespread use of ANOVA has led us to perceive dogs as interchangeable, cages as homes, and neighbors as statistical data points. It prevents us from seeing the uniqueness of individuals and the briefness of time. It promotes the idea that a mean even exists, that there is such a thing as normal.

Nonsense. ANOVA stands for "analysis of variance" where variance is a measure of how the individuals in a group differ. If individuals were not unique, there would be no need of a concept such as standard deviation or variance. Any scientific analysis recognizes that individuals in a group are not identical, and respond in different ways to the same conditions. This is very elementary.

Quote:

Statistics has its usefulness, you want to know how many people are coming to dinner. And probability has its usefulness, you'd like to know how accurate your statistics were, and approximately how hungry your guests will be. But if you ignore individuality, things like allergies and dietary preferences, you may have a disastrous meal. Designing experiments to fit a mathematical model is to place theory above empiricism, it is to invite people to dinner without discussing the meal.

This is a caricature of statistics which has nothing to do with how statistics is used in practice or how scientists think. Indeed, currently a major area of research is genome wide association studies (GWAS), which are devoted toward the study of individual differences down to the genetic and epigenetic level. And the fundamental concepts of statistics, including measures of central tendency and dispersion such as mean and standard deviation, continue to be applicable and invaluable in making sense of the huge amounts of data that are being acquired regarding the nature of individual differences.

...it looks like they warrant more such independent long-term studies, this time with the proper funding so you can have, say, 100 rats in each group instead of 20, using proper statistical methods, etc. At least one good result of this study is that Monsanto will not be able to shut down or discredit future studies and the scientists who perform them as they did for dozens of years.

Séralini only got €3.2M from one foundation and 2 supermarket brands for this private 2-year study, let's hope we can get a better public study with full transparency.

Monsanto's intellectual property rights may prevent this, which may be part of the reason for the high level of secrecy in the Séralini et al. study. (The other part being the levers of political power pulled by Monsanto.)

"Under U.S. law, genetically engineered crops are patentable inventions. Companies have broad power over the use of any patented product, including who can study it and how."

For example: “In 2001, the seed company Pioneer, owned by Dow Chemical, was developing a strain of genetically engineered corn that contained a toxin to help it resist corn rootworm, an insect pest. A group of university scientists, working at Pioneer's request, found that the corn also appeared to kill a species of beneficial ladybug, which indicated that other helpful insects might also be harmed. But, according to a report in the journal Nature Biotechnology, Dow said its own research showed no ladybug problems, and it prohibited the scientists from making the research public. Nor was it submitted to the EPA.” http://articles.latimes.com/2011/feb/13 ... s-20110213

“while university scientists can freely buy pesticides or conventional seeds for their research, they cannot do that with genetically engineered seeds. Instead, they must seek permission from the seed companies. And sometimes that permission is denied or the company insists on reviewing any findings before they can be published…”

“Conducting research requires funding, and today’s research follows the golden rule: The one with the gold makes the rules.”

“What is the impact of the flood of corporate cash? “We know from a number of meta-analyses that corporate funding leads to results that are favorable to the corporate funder,” says Schwab. For example, one peer-reviewed study found that corporate-funded nutrition research on soft drinks, juice, and milk was four to eight times more likely to reach conclusions in line with the sponsors’ interests. And when a scrupulous scientist publishes research that is unfavorable to the study’s funder, he or she should be prepared to look for a new source of funding.”

The woman got eight out of eight trials correct. Do you really need a statistical measure to believe her? Look at the raw data of this GMO study; what does your gut tell you?
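That example sounds like Fisher's famous "lady tasting tea" experiment, and the arithmetic behind the gut feeling is short enough to show (assuming the classic design; if the eight trials were instead independent guesses, the number changes but stays small):

```python
from math import comb

# Classic Fisher design: 8 cups, 4 of each kind, and the taster
# knows the split, so she picks which 4 were milk-first.
p_fisher = 1 / comb(8, 4)      # one correct labelling out of C(8,4) = 70

# Alternative reading: 8 independent yes/no guesses, each a coin flip.
p_independent = (1 / 2) ** 8   # 1 in 256

print(p_fisher, p_independent)
```

Either way, pure guessing succeeds less than 2% of the time; the "statistical measure" is just that gut feeling made precise.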

Gift wrote:

Does that mean they can't be applied outside of their original scenarios? No, of course not.

When one translates an English sentence into a mathematical problem, and then translates the solution back to English, certain nonsensical entailments bleed through in the transfer. If you have three cows and I take away four, do you have negative one cow and I four? If from a party of nine I say half left, does that mean there are 4.5 people remaining?

I'm not saying ANOVA can't be applied outside of gambling, but recognize the assumptions of your model. In cognitive-neuroscience terms, the metaphorical extension used to go from the source domain of English to the target domain of math introduces elements which did not exist in the original English sentence. Negative cows exist in math-land, but identical rats do not exist in the real world. Biological organisms, cell-signaling pathways, life: these involve millions of factors, and trying to pin things down to one variable is inherently ridiculous. Rats aren't dice to be rolled. We say we want to know if Roundup causes cancer, but we keep asking if Roundup is correlated with cancer. In this scenario there is no sample size large enough to prove GMOs cause cancer.

Gift wrote:

Biological data is inherently variable, how do you tell, for example, what a person's predisposition to an allergy is without stats?

Sample mean and standard deviation make a lot of sense in the context of gambling: you've got your dice or coin designed for equal probability, thrown on a surface in a way that does not favor any one outcome. Life is exactly the opposite of that; it seeks out the optimal circumstances to favor itself.

And one of the things that comes out of both mathematical analysis and practical observation is that mean and standard deviation are useful measures over a broad range of conditions in which there is variation due to random or chaotic factors (which is pretty much everywhere in the real world), even when the conditions you cite are not met.

If we know the mean and standard deviation of air temperature at a location over the course of a year, that tells us little about the current air temperature. If we know the average number and distribution of trees in the county, we still don't know what kind they are. Practical observation immediately captures essential information that probability is helpless to describe. Life fluctuates in time and space and is poorly characterized by the linear fit of a mean. Life responds to its environment in an adaptive manner no tumbling rock could imitate.

Very few natural (read: not man-made) signals are best characterized by a Gaussian distribution. I have worked with a wide variety of signals (radar, EEG, mass-spec, etc.), and there is no noise, just signals you don't want (birds, heartbeat, hydroxyl group, etc.). They are best characterized specifically and extracted as needed; the only signals which fit a normal distribution are the relatively small ones from the hardware.

trrll wrote:

For example, (since you seem stuck on gambling) it is also possible to characterize the behavior of a weighted die, or a die thrown on an irregular surface using mean and standard deviation.

What do you do when the die thinks and can choose its result? How do you characterize that?

There are probably ways around this, such as getting the seeds in secret from Canada as CRIIGEN did, or performing the study from a country that is not a signatory to these international IP treaties. But I suspect the EU can obtain the seed legally from Monsanto itself for toxicity-study purposes, since the company is seeking to open the EU market.

Yeah, the link to Dr. Arpad Pusztai's video posted earlier mentioned the effect of Bt GMOs on ladybugs, so we might want to study Bt corn (or soybeans or cotton) for good measure too.

Now I almost regret systematically using Bt when planting out young leeks in organic farming. It wouldn't affect ladybugs, but the leek moth and leaf-miner flies have their own predators, such as some wasps, that must be affected too. Companion planting with carrots still seems to be the best prevention.

Quote:

If we know the mean and standard deviation of air temperature at a location over the course of a year, that tells us little about the current air temperature.

Yes, it also tells us little about the price of corn in China. But that's all right, because it doesn't purport to tell us either of those things, and nobody thinks it does. We use statistics for the things it does well. We use other approaches for the things that it does not. You are complaining that a hammer is not a screwdriver.

Quote:

If we know the average number and distribution of trees in the county, we still don't know what kind they are.

Or any of a thousand other things that these measures don't purport to tell us.

Yes, statistics is math, not some kind of magic oracle. There are many things that statistics can't do. We use statistics for what it can do. Like, for example, to estimate whether a particular type of food will appreciably increase your risk of a certain disease when relevant individualized information is unavailable.
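As a sketch of that kind of group-level estimate (the counts below are invented for illustration, not from any real feeding trial), even with no individualized information at all, statistics can quantify how much an exposure shifts risk:

```python
# Hypothetical 2x2 table: disease counts in exposed vs. unexposed groups.
# All numbers are invented for illustration only.
exposed_cases, exposed_total = 12, 100
control_cases, control_total = 4, 100

risk_exposed = exposed_cases / exposed_total   # 0.12
risk_control = control_cases / control_total   # 0.04

relative_risk = risk_exposed / risk_control
print(f"Relative risk: {relative_risk:.1f}")
```

In this made-up table the exposed group's risk is three times the control group's; whether that difference is more than chance would then be a job for exactly the significance tests discussed in this thread.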

Quote:

Very few natural (read: not man-made) signals are best characterized by a Gaussian distribution.

And as anybody knows who has gotten beyond an introductory course in the subject, statistics can be applied to any kind of distribution.

Quote:

I have worked with a wide variety of signals (radar, EEG, mass-spec, etc.) and there is no noise, just signals you don't want (birds, heartbeat, hydroxyl group, etc.).

Which is exactly what statisticians mean by the term "noise." PoTAYto, PoTAHto.

Quote:

They are best characterized specifically and extracted as needed; the only signals which fit a normal distribution are the relatively small ones from the hardware.

Again, this odd obsession with normal distributions. There are many kinds of distributions, and statistics can be applied to all of them. The normal distribution just turns out to be particularly useful because it frequently crops up in real world situations, due to the Central Limit Theorem.
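The Central Limit Theorem is easy to see in a quick simulation (an illustrative sketch, not tied to any data in this thread): averages of draws from a strongly skewed, decidedly non-Gaussian distribution still cluster normally around the true mean, with a spread that shrinks like 1/sqrt(n).

```python
import random
from statistics import mean, stdev

random.seed(42)

n, trials = 30, 5000
# Exponential with rate 1: heavily skewed, true mean 1, true std dev 1.
sample_means = [mean(random.expovariate(1.0) for _ in range(n))
                for _ in range(trials)]

# The sample means concentrate near the true mean of 1...
print(mean(sample_means))
# ...with a spread close to the CLT prediction, 1/sqrt(30) ≈ 0.183.
print(stdev(sample_means))
```

No individual draw looks remotely normal, but the averages do; that is why the normal distribution keeps showing up even for "non-Gaussian" signals.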

trrll wrote:

What do you do when the die thinks and can choose its result? How do you characterize that?

Many aspects of the behavior of thinking organisms can also be characterized in useful ways by using statistics. That's why we have such things as polls.

If all else is equal, the smaller the sample size, the smaller the statistical power of subsequent statistical tests (where power is the probability of correctly rejecting the null hypothesis when it is false). So the small size of the treatment groups is a factor that does not help in finding either subtle or moderate effects of the treatment.

Wouldn't the small size increase the probability of type II errors? And isn't incorrectly retaining the null hypothesis a major concern in feeding safety trials?

They'd better be using the most sensitive statistical tests to recover some of that statistical power. So what would be the most sensitive statistical tests?
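On the power point, the usual normal approximation for a two-sided, two-sample comparison at alpha = 0.05 gives power of roughly Phi(d * sqrt(n/2) - 1.96) for standardized effect size d and n animals per group. Here is a sketch with a hypothetical moderate effect (d = 0.5 is my illustrative choice, not an estimate from the study):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(effect_size, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sided, two-sample test (normal approximation)."""
    return normal_cdf(effect_size * sqrt(n_per_group / 2.0) - z_alpha)

# Hypothetical moderate effect (Cohen's d = 0.5):
for n in (10, 20, 100):
    print(f"n = {n:3d} per group -> power ~ {approx_power(0.5, n):.2f}")
```

With 20 rats per group, a moderate effect of this size has only about a one-in-three chance of being detected, i.e., the type II error rate is high; going to 100 per group pushes power above 0.9, which is exactly why the small groups are the study's weak point.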