Wednesday, July 5, 2017

By now, many people have seen the two new studies claiming harm from neonicotinoids to honeybees, one conducted in Europe and one in Canada. There have also been several articles criticizing the work for overstepping the data and drawing unsupported conclusions. I have a few issues with these studies beyond what has already been stated, so I'll share some of them here; I won't touch on the criticisms that others have already covered.

Measuring everything without correction

In the European study, the researchers collected quite a few data points and took many different measurements, 258 in total. Despite all of this, the researchers did not adjust for multiple measurements. From the supplemental materials and methods: "We did not apply Bonferroni corrections as the lack of independence for the majority of the response variables (e.g. different life stages of honey bee) meant that there was no valid level for the correction." This is a serious issue that I'll explain below.

First, it's important to understand why correcting for multiple measurements is vital. When many things are measured, the likelihood of finding false positives increases with the number of measurements. To counteract this, researchers adjust the significance threshold to limit the risk of false positives (called Type I errors in statistics). The Bonferroni correction is a common method for doing this: it shrinks the per-test significance threshold as more measurements are taken. Some researchers dislike it because it tends to be conservative; however, not using it, or another method for correcting multiple tests, introduces false positives into research.
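To make this concrete, here is a minimal sketch (illustrative only, and assuming fully independent tests, which the study's 258 measurements were not) of how the chance of at least one false positive grows with the number of tests, and how the Bonferroni correction counters it by dividing the threshold by the number of tests:

```python
def family_wise_error(m, alpha=0.05):
    """P(at least one false positive) across m independent tests at level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni_threshold(m, alpha=0.05):
    """Per-test significance threshold after a Bonferroni correction."""
    return alpha / m

for m in (1, 10, 50, 258):
    print(f"{m:4d} tests: chance of a false positive = {family_wise_error(m):.3f}, "
          f"Bonferroni threshold = {bonferroni_threshold(m):.5f}")
```

At 258 independent tests, the chance of at least one false positive at the uncorrected 0.05 level is essentially 100%, which is why leaving the threshold unadjusted is so risky.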

In the supplemental materials and methods, the researchers claim that they were not taking independent measurements. However, this is not entirely accurate. The best way to describe these measures is semi-independent. As an example, the number of brood and their health directly impact the future number of worker bees and the survival numbers the next winter. However, the number of workers the following winter does not impact the number of brood the previous spring. Furthermore, their reason for not using the Bonferroni correction does not make sense, as the measurements are semi-independent. The Bonferroni correction is only one method to reduce the rate of false discovery. It may or may not be the right one in this case, but there are other methods for controlling the false discovery rate of dependent data. It is very risky not to perform any correction on a large data set like this when so many measurements are being taken. It really doesn't look like a statistician was consulted for this paper, and the quality suffers as a result.
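One such alternative is the Benjamini-Hochberg step-up procedure, sketched below with made-up p-values for illustration (the Benjamini-Yekutieli variant, which divides the threshold by an extra factor of sum(1/i for i in 1..m), remains valid under arbitrary dependence, so semi-independence is no obstacle):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its step-up threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    # Reject the k hypotheses with the smallest p-values
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
print(benjamini_hochberg(pvals))  # prints [0, 1]
```

Note that several p-values below 0.05 survive only an uncorrected analysis; the step-up procedure rejects just the two smallest. That is exactly the kind of pruning the European study skipped.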

One only has to look at the data to see that not correcting for multiple comparisons could be what has led to the confusion surrounding the paper. The figure below shows each measurement in each of the three locations for both of the neonicotinoids tested. The results are confusing, and there is no consistent result by treatment or country; the positive and negative results are mixed together with no particular link to a given treatment or country. This looks like the type of result you would see if the study contained false positives from not correcting for multiple comparisons. Because of the lack of correction, which is commonly applied even when measurements are semi-independent, we can't draw conclusions from the statistical tests that were run on this data set.

The results of the European honeybee/neonicotinoid study compiled by Dr. Peter Campbell Sr., environmental specialist and head of product safety research collaboration, Syngenta, and first published by Jon Entine and Henry Miller. Shared here from Thoughtscapism. The light green cells are neutral results from neonicotinoids on honeybee health, the dark green cells are positive impacts of neonicotinoids on honeybee health, red cells are negative impacts from neonicotinoids on honeybee health, and white cells represent measurements that were not taken due to lack of bees. CLO = clothianidin and TMX = thiamethoxam.

Other issues

To talk about the other issues with the conclusions of the European study, we need to set aside everything I just said about multiple measurement corrections and assume that the results are correct. Even if the results are accurate, there are still problems with the interpretation of those results that make the conclusion that neonicotinoids harm honeybees problematic.

Lack of a consistent effect

Looking at the raw data presented above, one thing is clear: there is no consistent impact of neonicotinoids on any of the measurements taken. For example, the number of larval cells at flowering for thiamethoxam (TMX) improved in response to seed treatment in Germany, showed a negative response in Hungary, and showed a neutral response in the UK. For clothianidin (CLO), it was neutral in all three locations. The other critiques focused on the number of positive and negative effects among all the measurements, but they didn't address a key issue here. If a particular neonicotinoid were having an impact on honeybee health, the effect should be consistent across locations (this is why field researchers conduct experiments in several locations). If neonicotinoids in general were having a negative impact, then the effect should have been consistent across locations and treatments. Lacking this, the conclusion that neonicotinoids are negatively impacting honeybee health is not supported. This conclusion was toned down in the article's discussion, but the press release presented it as a firm conclusion and led to quite a bit of confusion in the reporting. It's another example of the disturbing trend of science by press release, where conclusions are touted without the needed supporting data. This often leads to bad science reporting, as press releases frequently overstate results. I've previously spoken about this issue here.

What about the varroa mite and the viruses?

The authors briefly touch on the differences in varroa mite infestations in the different countries. On average, Germany had the lowest rate of varroa mite infestation at 1.04% (+/- SE 1.00), followed by Hungary at 2.12% (+/- SE 1.34), with the UK highest at 8.05% (+/- SE 1.34). The authors then mention that the UK hives received a different treatment than those in Hungary and Germany, adding yet another layer of variation that isn't accounted for. However, no mention is made of testing for any of the viruses that the varroa mite transmits. Simply put, this is a huge oversight, for this reason: both the varroa mite and many of the viruses it transmits weaken the immune system of bees. In fact, the varroa mite and at least one of these viruses, deformed wing virus (an iflavirus), can act synergistically to reduce honeybee health to the point that the colony collapses. This is an excellent review on how the varroa mite, honeybee virus infection, and nutrition all impact honeybee health. This review also discusses bee viruses and provides details on some of the more common ones. With the clear data linking honeybee health decline to honeybee viruses, it is not appropriate to measure honeybee health without addressing these viruses. The viruses could weaken the honeybee immune system to the point that the bee is more susceptible to other factors, such as neonicotinoids, making it seem as though those factors alone affect honeybee health. This can lead to incorrect conclusions, such as that neonicotinoids negatively impact honeybee health.

The Canada study: just a single season

I'll now address some of the concerns I had with the Canadian honeybee/neonicotinoid study. The biggest problem I have is that they based the laboratory exposure levels on a single growing season rather than monitoring neonicotinoid levels over multiple years. Field work needs to be replicated for at least two years, and often three or more years are required. One of the reasons is that pesticide residues can vary from year to year. Let's stop and think about that for a second. If pesticide residues vary from year to year, does it make sense to use a single growing season to determine what a field-relevant dose of neonicotinoids is? All the authors can state is that the residue levels seen in the fields tested that year had an impact when given to bees. They cannot use results from a single year to draw blanket conclusions for all areas and years. For all we know, the seed treatment could have been excessive that year (accidents do happen), and the converse is equally true. This is precisely why replication across years is crucial for field work. Multiple years of measurements are needed to draw conclusions of any value.

What about the viruses and varroa mite?

In the supplemental materials and methods, the authors of the Canadian study state this: "We actively managed the colonies during the season, including adding empty honey ‘supers’ (i.e., a shallow 5-11/16” D x 19-7/8” L x 16-1/4” W chamber) and removal of swarm cells, but we did not chemically treat the colonies to control hive pests or diseases." In addition to not controlling hive pests or diseases, they also did not measure them. This matters: previous work has clearly demonstrated that infection with bee viruses negatively impacts honeybee foraging behavior, and Nosema also negatively impacts honeybee behavior. Not accounting for these diseases, let alone controlling for them by treatment, is a huge misstep, as it introduces potential variation that the study cannot account for. Because of this, we do not know whether disease altered the foraging behavior of the studied honeybees and caused them to gather more neonicotinoid-containing pollen where it was more plentiful (in the "treated" areas). Previous work has demonstrated that pesticides further reduce the health of diseased honeybees, so ignoring this important finding in the previous literature does not make sense.

The bottom line

It's difficult (even impossible) to account for all factors in a field study. However, if a known factor has been shown to reduce the very thing you are measuring, then it must be accounted for in the experimental design. Neither paper is exceptionally well designed for an agricultural study. Each has issues that make it hard to draw conclusions from the data due to confounding factors that were not accounted for. If these papers had been sent to an entomology or agronomy journal, the issues in each would have held up both papers. Neither study adequately addresses previous research on the topic in its experimental design, introduction, or discussion. The publication format could be partially responsible for this, but these issues should have been addressed in the design of the experiment. Publication of weak studies like these only serves to confuse people, especially when coupled with poor science reporting that relies heavily on press releases.

Wednesday, May 10, 2017

Non-enveloped, head-tail structure. Head is about 60 nm in diameter. The tail is non-contractile, has 6 short subterminal fibers. The capsid is icosahedral with a T=7 symmetry. Via ViralZone.

Viruses and their hosts are engaged in a constant evolutionary battle, with one developing defenses and the other subverting those defenses. Along the way, it is thought that some viral adaptations are lost as the host mutates their targets, but this isn't known for sure. To answer this question, scientists used bioinformatics software to reconstruct an ancient ancestral thioredoxin, a small protein that is vital in many organisms, for E. coli. Thioredoxins are also proteins that many viruses interact with and suppress, as they are involved in defense against viruses. The researchers found not only that the reconstructed thioredoxin was functional in E. coli (a non-functional thioredoxin is lethal), but that the cells were highly resistant to infection by the T7 bacteriophage.

This work was proof of the concept for the researchers. Their end goal is to use this technique with crop plants to develop novel sources of plant virus resistance by using an ancestral thioredoxin in place of the modern version. They outline the following steps that would need to be taken:

"(1) A known proviral factor in a plant is selected as a target. Obviously, this factor would be a protein that is hijacked (or suspected to be hijacked) by the virus or viruses for which we wish to engineer resistance.

(2) The known sequence of this protein is used as a search query in a sequence database.

(3) The sequences recovered from the search (belonging to homologs of the targeted protein) are aligned and the alignment is used as input for ancestral sequence reconstruction.

(4) The “modern” proviral factor is replaced by a reconstructed ancestral counterpart. Actually, ancestral sequences for many phylogenetic nodes can be derived from a single alignment of modern proteins and, therefore, the replacement could be actually performed with many different ancestral proteins leading to many engineered plant variants.

(5) The engineered plant variants are screened for fitness under conditions of interest (normal growth conditions, for instance) and for virus resistance."

The use of reconstructed ancestral genes is a newer area of research, and it could be a novel source of diversity for plant breeding efforts. With CRISPR-Cas systems, it would be relatively straightforward to "swap" out the modern version of a protein for a reconstructed ancestral version. The researchers are still developing this technique for plant virus resistance, but it holds promise and I look forward to the results.

Monday, March 27, 2017

Note: I've decided to go back and post some of my older TMV Facebook page posts as blog posts to preserve them and make them more accessible. I decided to start with the post I made for my infographic on the acellular pertussis vaccine and how we got to this point.

Today's post is a bit different, but I would ask that you read my entire post before reaching a conclusion about what I'm saying. Part of being a scientist is following the data no matter where it leads and how uncomfortable it is. With that, I present the case of the acellular pertussis vaccine and how we got to the point of questioning the effectiveness of the vaccine. However, as I point out below, scientists are not the ones responsible for this vaccine being far less effective at preventing asymptomatic infections; we have the anti-vaccine movement to thank for that.

The acellular pertussis vaccine has been under heavy scrutiny the last few years, and for good reason. Although it is a very safe vaccine, it just isn't nearly as effective as the older whole cell version. There have been several studies questioning the effectiveness of the acellular pertussis vaccine in preventing asymptomatic transmission, including the infamous baboon study, which still found that the acellular vaccine provided robust protection against all but asymptomatic infections. New research strongly points to the acellular vaccine preventing serious infection and illness but not asymptomatic infections. This was the conclusion of a study that found the genetic variation of pertussis was too great for its spread to have been limited by the vaccine. Coupled with several other reviews, including a Cochrane Review, of how effective the vaccine is at preventing serious and mild cases of pertussis, it appears that asymptomatic infections may indeed be happening in vaccinated populations.
However, intentionally unvaccinated populations are still at a higher risk of pertussis than the vaccinated population. But how did we get to this point? By all measures, the whole cell pertussis vaccine is highly effective and offers superior protection compared to the acellular vaccine. Why would we abandon it for an inferior vaccine? It turns out that fear and the birth of the modern anti-vaccine movement caused this. Back in the early 1980s, a television news channel released an "exposé" of the whole cell pertussis vaccine that inflated the risk of febrile seizures due to the vaccine. In actuality, the risk of febrile seizures is quite low, at 6-8 out of 100,000 kids vaccinated, with the risk confined to the day of vaccination. However, parents and lawmakers demanded a safer vaccine, and as a result the acellular pertussis vaccine was created and released in the early 1990s. It was deemed a success at the time, as it is incredibly safe. But we sacrificed effectiveness for safety. As early as 1999, researchers began questioning the effectiveness of the acellular vaccine compared to the whole cell version. The latest research is the final nail in the coffin, as it were, for the acellular pertussis vaccine. Does this mean we should scrap the idea of a pertussis vaccine? Of course not. But we need to use science to improve the whole cell vaccine, and perhaps identify the component (or components) of the whole cell vaccine that induces lasting immunity, so that we can generate a vaccine that is both safe and effective.

Sunday, March 19, 2017

Let me start off by saying that as a scientist, I firmly believe that scientists should speak more to the public and let our voices be heard. However, I have to join the growing number of scientists who won't be participating in the march. Part of my problem with the group and the movement stems from the fact that it is disorganized and has been co-opted by those advocating pseudoscience. Others have expressed different concerns: the political fallout from speaking out; the march becoming a liberal movement rather than sticking to what is important, namely cuts to science funding; the march being too political; and the march being a potential trap, given the low levels of support for scientists among those in the lowest tax brackets. Others have weighed the pros and cons of the march and discussed some of the issues surrounding it.

A while ago, I commented on a post about the importance of values in science and how morality plays into the scientific method. Ethics in science is incredibly important. I trust that my fellow scientists are being truthful in what they report to the community. People who commit fraud are eventually caught and removed from the profession. There are numerous examples of this, from Andy Wakefield to Olivier Voinnet, being caught and disciplined. Retraction Watch chronicles this self-policing, and although the system can always be improved, it still works and frauds are eventually outed.

Imagine my surprise when I saw a comment in the group about science in the US being bought and paid for. This type of conspiracy theory had been cropping up more and more in the group, along with other pseudoscience in general. So I commented that I found the idea of scientists being bought and paid for offensive, and that this type of attitude had no place in a group meant to organize scientists and let our voices be heard. I often see the claim that scientists are bought and paid for coming from people who have no connection to science or scientists, and this case was no different. I may have been a little curt in my reply, but I never would have guessed the response from other members. I was told that I was being elitist and snobbish in my tone, and that I was being divisive for pointing out that the idea that science as a whole is bought and paid for is offensive. When I pointed out that pseudoscience had no place in a movement for science, I was told that all thoughts and opinions should be given equal footing. That was the point at which I left the group.

I've shared my concerns that the march could be co-opted by antiscience groups, weakening the message it is trying to send, with a few friends of mine. Sadly, this seems to be what is happening, as the march has recently partnered with the Center for Biological Diversity. On the surface, this seems okay; however, this group is rabidly anti-GMO and often repeats bad science when discussing GE crops. Stephan Neidenbach addresses some of these misconceptions here and was the first to point out that this antiscience group is taking part in the march. But this isn't the only questionable group to partner with the march. They've also partnered with the Union of Concerned Scientists, a group that is anti-GMO and anti-nuclear power. Another problematic partner is the Center for Science in the Public Interest, which holds problematic positions on artificial sweeteners and food dyes. No, aspartame does not cause cancer, and the link between artificial food dyes and hyperactivity is tenuous at best. Earth Day Network is another troublesome partner, as they have posted anti-GMO stories on their Facebook page. There are many other partners that are fantastic scientific organizations, but my fear is that the event is going to be tainted by the organizations that do not hold science in the same regard. Much like my experience in the main FB group, I wonder if these pseudoscience organizations are being included for "diversity of opinion." If so, it hurts the message that the march is trying to send. Scientific facts are not based on opinion, but rather on careful analysis of collected data. Just because someone has a differing opinion, that opinion does not rise to the same level of evidence as a scientific fact. So trying to be "inclusive" of other opinions is not helpful when advocating for science. After all, some might claim that human involvement in climate change is a matter of opinion and not scientific fact.
Or they might state that differences in opinion mean that vaccines can be dangerous too.

Science is not a buffet where people can pick and choose the parts they like and disregard the rest. It is a method for examining the natural world and answering how and why things happen. Climate change denial, young earth creationism, and anti-vaccine and anti-genetic engineering arguments are not equal to the science on those topics. It's incredibly sad to see a group that purports to stand up for all science willingly partner with groups that are antiscience or hold antiscience positions. Although many other partners actively promote all science, and I do believe it's important for scientists to speak, I don't want to add credibility to antiscience rhetoric because, let's face it, these groups are going to use their partnership with the march to amplify their own antiscience messages. I just can't be a party to that.

Tuesday, February 14, 2017

I recently added the following infographic to my page, but lost the write-up I had crafted to explain the terms fully. Luckily, Mommy, PhD happened to have the post open on her phone and sent me a copy of what I wrote. I'm copying the text from my lost write-up, along with some additional information, here.

I've been contemplating whether or not to discuss this topic, as it's a bit outside my normal area of expertise and page focus. However, the issue of fake news has only gotten worse recently, with some people calling everything they don't like fake news. There is a lot of confusion over what fake news is and what it means, so I thought it might be helpful to define it. I decided to take four types of misinformation and come up with clear definitions for them. Even though each is a distinct type of misinformation, there is a lot of overlap between them.

Fake news is entirely made up with the intent to misinform, often with the goal of making money off of the misinformation. It can still be used as propaganda and can use click bait titles to drive traffic to the story; however, not all propaganda or click bait is fake news. Fake news is also similar to satirical news stories like those in The Onion or The Science Post, but satirical news is meant to entertain, not intentionally deceive, despite some people being fooled by it. A lot of pseudoscience sites will use fake news to intentionally deceive people and scare them into buying a fake cure for the very thing the site just scared them about. Science advocates have been pushing back against fake science and health news for a while now, but the pushback is now going mainstream.

Propaganda can come in several different forms. It can be fake news, badly reported news, or news that is one-sided but accurate. What makes a news story propaganda is the intent to paint a government or entity in a favorable light while leaving out any and all information that may paint that entity in a bad light. There may or may not be truth in what is reported as propaganda, but only one side is told.

Click bait is an interesting situation. Click bait uses deceptive or sensational titles to draw in views, and often the title does not reflect the actual content. Click bait is often used in badly reported science (for example, "students invent a nail polish that detects drugged drinks," which doesn't reflect that this is still in the idea stage and not a viable product yet) and often leads to mistaken conclusions from people not versed in reading scientific papers. Legitimate news stories can use click bait, and fake news stories often use click bait titles to drive traffic (for example, "you won't believe what <insert product here> does to cure <insert condition/disease here>").

The last type of misinformation is common but quite different from the others. Sometimes legitimate sources of information will make a mistake in what they report. One thing that sets this type of misinformation apart is that legitimate news sources will offer a retraction and explain their error. The problem here is that some people seize on an isolated incident of a legitimate source misreporting to declare everything from that source fake news. Making an error does not make a news source fake news; it means that humans are involved in the news process. Mistakes happen from time to time, and a good source will own its mistakes.

Sometimes an even flimsier justification is used: some people will declare anything they don't agree with fake news. Since fake news has a negative connotation, calling something fake news that isn't is an attempt to denigrate the source. Trying to discredit an argument by denigrating the source is a type of logical fallacy known as Poisoning the Well: you malign a source of information to discredit anything it says, regardless of whether it is accurate. It's not a good way to have an open and honest dialog with anyone.

Hopefully these definitions and brief explanations help explain why fake news is different from other types of misinformation and why it can be dangerous. Additional reading on fake news and other types of misinformation can be found here and here. I'd like to thank a journalist friend of mine, Reaux Packard, for looking over my definitions and helping to make sure I'm on the right track here.

Monday, January 2, 2017

There has been quite a lot of talk about the latest paper from Seralini's group, which claims that there are substantial metabolome differences between genetically engineered corn and non-GE corn. The paper was published in an online journal run by the Nature group (and not in Nature itself, as some websites are claiming). At first glance, this paper seems to detail some seriously concerning results. However, when one examines the methodologies used, several glaring issues emerge that challenge the conclusions drawn from the results presented. Many others have addressed the methodological problems with this study, but I'd like to focus on the corn lines that were used and the claim that they were isogenic, as the entire experiment hinges on using the correct lines. To start, I need to explain what an isogenic line is, as most people (even scientists outside the plant sciences) do not know. When discussing isogenic lines, we mean not a single line but at least two lines: genetically, these lines differ by only a few genes and are identical beyond that. Achieving this is nearly impossible, so researchers use near-isogenic lines (which are at least 99% genetically identical). Generally speaking, near-isogenic lines (NILs) are not available for purchase and must be generated. To do this, a donor plant with the gene of interest (in this case, a GE trait like glyphosate resistance or Bt production) is crossed with what is called the recurrent parent (see figure below). Using genetic markers (in a process called marker-assisted selection), progeny with the appropriate genes are then backcrossed against the recurrent parent, and the trait is selected for until the progeny are 99% genetically identical to the recurrent parent. It takes time and effort to generate an NIL like this, and there simply are no shortcuts.

Figure caption: In plant breeding, selected individuals are crossed to introduce or combine desired trait characteristics into new offspring; this necessitates numerous generations of backcrossing to establish the desired trait characteristics fully. Each successive backcross increases the genetic similarity of the new offspring to the recurrent parent, e.g. 75% similar at BC1 through to 99.2% by BC6. These numbers are based on how much of the recurrent parent genome can be theoretically regained at each step; however slight variations can occur. Marker-assisted methodologies that utilize DNA markers to enable selection of plant individuals that contain the greatest number of favorable alleles can reduce the number of generations required to get close to 99% similarity as adopted in the generation of the inbred variants of this study. From Harrigan et al., 2016 via PMC. DOI: 10.1007/s11306-016-1017-6
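The similarity figures in the caption follow directly from the expected halving of the remaining donor genome at each backcross. A minimal sketch of that calculation (expected values only; as the caption notes, real generations vary slightly around these numbers):

```python
def recurrent_parent_fraction(n):
    """Expected fraction of the recurrent parent genome at backcross generation BCn.

    The F1 is 50% recurrent parent; each backcross halves the remaining
    donor genome, giving 1 - (1/2)**(n + 1) at BCn.
    """
    return 1 - 0.5 ** (n + 1)

for n in range(1, 7):
    print(f"BC{n}: {recurrent_parent_fraction(n):.1%}")
```

This reproduces the caption's range: 75% similarity at BC1 rising to 99.2% at BC6, which is why six or so backcross generations (or fewer with marker-assisted selection) are needed before a line can be called near-isogenic.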

So what do NILs have to do with this new study and why do they matter? In order for the authors to clearly demonstrate that the differences seen are due to the GE trait (NK603, resistance to glyphosate), they need to use lines that have nearly identical genetic backgrounds. This is because it is well known that different lines have different transcript expression patterns, metabolomes, flavors, etc. For example, this paper by Wen et al. looked at the differences in the kernel metabolomes from different corn lines. This is not unexpected as different lines can have different phenotypes. One only has to look at a seed catalog to see the variation that is available for crops such as tomatoes or apples and yet these are the same species. Because of this, a NIL is needed to demonstrate that observed differences are due to the gene of interest and not the normal variation that is seen between lines.

In this study, the authors state that they are using isogenic lines in several places, but in the materials and methods they state they used the "closest" isogenic lines. For example, they state in the abstract that they used isogenic lines (see figure below).

In the materials and methods they state this:

These are not isogenic or near-isogenic lines. The DKC stands for DEKALB seeds, which is owned by Monsanto, and the numbers are the identifiers used by the company to denote the line. These numbers are akin to a catalog number used to sell seed. The lines offered one year may not be the same ones offered the next. It's also important to note that these numbers do not denote lineages, which are carefully guarded trade secrets for seed companies. The authors did not provide any information on the lineage of the two lines (DKC 2678 or DKC 2675 [which was also labeled as DKC 2575 in some places in the manuscript]), and just because they have similar numbers, that does not mean they are genetically related. To help illustrate this, I found a DEKALB catalog from 2012. On page 5, there is a line, DKC 27-55, that has VT Double PRO technology (a stacked Bt trait). On page 6, there is a line called DKC 27-45 that has a single Bt trait (YieldGard Corn Borer) and RoundUp Ready 2. DKC 27-55 was new for 2012, while DKC 27-45 had been on the market for a while and was a recommended line for growing silage (for more on what silage is, see this video from the Peterson Farm Brothers harvesting silage for their cattle). Although these two lines have similar DKC numbers, they have different traits and very different uses. Beyond just picking two lines with similar catalog numbers, there is another issue with their choice of lines: they used hybrids.

With hybrids, two unrelated lines are crossed and the resulting progeny have higher yield through a process called hybrid vigor. It's very common for corn to be hybridized, and it's something to be mindful of in a study looking at genetic effects, as it can introduce a source of genetic variation if the same parent is not used in the hybridization process. If the hybrids did not share the same hybridization parent, then they would not be isogenic even if the original two lines were isogenic (and no evidence is provided that they were). The authors ordered seeds that were already hybridized and provided no information on which parent was used to make the hybrid. Since there is no guarantee that the two hybrids share the same genes from the same hybridization parent, these cannot be considered isogenic. To treat them as such is bad science.

The choice of lines used in this study introduces several major sources of variation that are impossible to account for. Because the lines are not isogenic (or near-isogenic) and there is no information on whether they were hybridized to the same parent line, it is impossible to say if the observed differences are due to the transgenic trait or due to the fact that lines with differing genetics were used. The risk of misinterpreting the results on this basis is far too great, and this type of experiment is too expensive to waste money like that. This is one of the reasons why researchers will generate their own NILs. Other issues with the study include the poor plot design that lacks randomization (or any other standard design for an experiment like this), the lack of replication (different blocks within the field, other locations, repeated growing seasons, etc.), and the lack of information about these lines. No pedigree is offered and these lines are no longer on the market (I couldn't find any information on them online), so we can't say for sure how likely it is that they are closely related. This is bad science, and this paper never should have made it through the peer review process with these and other major methodological issues intact (not to mention all of the grammatical and typographic errors).

This type of experiment really required the use of NILs that the researchers made themselves by crossing and backcrossing. Luckily, such an experiment was published prior to the submission of this paper. Harrigan et al. (2016) examined this same issue, and the experimental design of that study was superior in every way. The Harrigan study generated NILs for the same trait that the Seralini paper examined (NK603): four transgenic lines, four conventional counterparts, and the recurrent parent (as a control). These lines were hybridized with two different female testers and planted in a randomized complete block design with three replicate blocks in three separate locations (Illinois, Minnesota, and Nebraska). The metabolomes were then measured by GC-MS. They found few differences between the lines, and the genetic variation between lines accounted for more of those differences than the GE trait, for which no trait-specific effect was found.
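For readers unfamiliar with the design, a randomized complete block design simply means every treatment appears exactly once in every block, with the planting order shuffled independently within each block. Here is a minimal sketch (the line names are placeholders standing in for the Harrigan entries, not their actual identifiers):

```python
import random

def rcbd_layout(treatments, n_blocks, seed=42):
    """Randomized complete block design: every block contains each
    treatment exactly once, in an independently shuffled order."""
    rng = random.Random(seed)
    layout = []
    for block in range(1, n_blocks + 1):
        order = treatments[:]   # each treatment appears once per block
        rng.shuffle(order)      # order randomized independently per block
        layout.append((block, order))
    return layout

# Placeholder entries: 4 transgenic lines, 4 conventional counterparts,
# plus the recurrent parent as a control (mirroring the Harrigan setup).
lines = ([f"GE-{i}" for i in range(1, 5)]
         + [f"conv-{i}" for i in range(1, 5)]
         + ["parent"])

for block, order in rcbd_layout(lines, n_blocks=3):
    print(f"Block {block}: {order}")
```

Blocking like this lets field-position effects (soil, drainage, edge effects) be separated from line effects, which is exactly what the Seralini plot design could not do.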

As I stated above, there are numerous other issues with the methodology used in the Seralini paper that are not limited to just the use of improper lines. Generating NILs is time and labor intensive, but if you are truly trying to answer this type of question it is simply something that must be done. You cannot take shortcuts with experiments like this and you cannot assume that just because two lines have similar numbers in a seed catalog that they are related. Shortcuts in science lead to bad results that are a waste of time and money.

Thursday, December 29, 2016

With the resurgence of Andy Wakefield through his "documentary" Vaxxed, his prior work has come back into focus. It's clear that his work as a researcher contains many errors of omission and commission, but why is the scientific community certain that something nefarious happened in his prior work? There is of course the excellent work from Brian Deer showing how the paper published in The Lancet was riddled with fraud and undisclosed conflicts of interest (being paid nearly half a million pounds to show that the MMR is dangerous is kind of a conflict of interest). That investigation led to Andy losing his medical licence in the UK. However, much of the focus has been on the 1998 Lancet paper and not as much on his other work. I'll discuss one of his other papers here, as it is incredibly clear that, at best, inept data analysis and sample handling took place. In 2002, Andy was an author on a paper entitled "Potential viral pathogenic mechanism for new variant inflammatory bowel disease" published in Molecular Pathology (the PubMed Central deposit can be found here). The paper uses qPCR detection of the measles virus using TaqMan chemistry (see image below).

This is how TaqMan qPCR works.

The results of the paper look normal to the lay person; however, if a researcher who is familiar with qPCR looks at the raw data from the paper, an enormous problem is evident. I rarely share YouTube videos; however, this video from C0nc0rdance explains qPCR and why the issue is such a huge deal.

For those who don't want to watch the video, the problem is simple. qPCR is done using a thermocycler that uses a laser to detect a fluorescent target and is linked to a computer. The software on the computer automatically does the calculations for determining what is positive and what is not. One of those calculations sets the threshold that separates a positive reaction from a negative one. In the 2002 study, the threshold was manually lowered so that samples that should have been negative came up as positive. These samples all crossed after cycle 30, which means only a small quantity of material was being detected. Additional analysis of the raw data by Dr. Bustin (an expert on qPCR) found that these were false positives due to contamination. So we have two major issues to contend with here. First, the samples were contaminated (qPCR is sensitive to contamination, so extra care has to be taken to prevent it). Second, the data were improperly analyzed: the Ct threshold was improperly lowered so that these samples became positive. At best, this is poorly conducted science that should be retracted. At worst, if the threshold was knowingly altered to generate positives, this would be scientific misconduct. It's not likely that the contamination was intentional, as it was present in low quantities (why spike a sample with an amount that would look like contamination rather than an amount that would clearly be positive?), and some of the researchers involved in this study published a subsequent study that found no measles virus when the experiment was replicated (three independent labs tested each of the samples), so the adjusted analysis was probably done out of ignorance rather than malice.
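To make the threshold issue concrete, here is a small, self-contained sketch (the curves and numbers are invented for illustration, not taken from the paper) showing how a Ct value is just the first cycle at which fluorescence crosses the threshold, and how dragging the threshold down into the baseline noise turns a flat, non-amplifying trace into an apparent late "positive":

```python
def ct_value(fluorescence, threshold):
    """Return the first cycle at which fluorescence crosses the
    threshold, or None if it never does (a negative reaction)."""
    for cycle, signal in enumerate(fluorescence, start=1):
        if signal >= threshold:
            return cycle
    return None

# Toy 40-cycle traces (arbitrary fluorescence units).
# A true positive rises exponentially; baseline noise only drifts upward.
true_positive  = [0.01 * (1.8 ** c) for c in range(1, 41)]
baseline_noise = [0.05 + 0.002 * c for c in range(1, 41)]

software_threshold = 1.0    # set by the software, above the noise band
lowered_threshold  = 0.115  # manually dragged down into the noise band

print(ct_value(true_positive, software_threshold))   # 8: early Ct, real positive
print(ct_value(baseline_noise, software_threshold))  # None: correctly negative
print(ct_value(baseline_noise, lowered_threshold))   # 33: late false "positive"
```

Note that the manufactured "positive" appears after cycle 30, exactly the region where the suspect calls in the 2002 paper fell; with the threshold where the software put it, the same trace never crosses at all.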

But this paper (and several others) did have a positive impact on the scientific community. It led to the creation of the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines. These standards specify the minimum information that must be included with a paper using qPCR data: how the samples were extracted, how the assay was validated, what controls were used, and how the data were analyzed. Many of the top journals have adopted these guidelines, and many researchers follow them even when publication doesn't require it (it's just good science).

Information needed to fulfill the MIQE guidelines.
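As a rough illustration of what this reporting looks like in practice, the categories above could be captured as structured metadata alongside the results. The field names below are my own paraphrase of the checklist headings, not the official MIQE item identifiers, and the values are hypothetical:

```python
# Hypothetical sketch of MIQE-style reporting categories; the real
# checklist enumerates many specific items under each heading.
miqe_report = {
    "sample": {
        "extraction_method": "column-based RNA extraction",
        "quantification": "spectrophotometry",
    },
    "assay_validation": {
        "primer_sequences_reported": True,
        "amplification_efficiency": 0.95,
        "limit_of_detection": "10 copies/reaction",
    },
    "controls": {
        "no_template_control": True,
        "no_reverse_transcriptase_control": True,
    },
    "analysis": {
        "threshold_setting": "software default (any manual change documented)",
        "software_version": "v2.3",
    },
}
```

Had the 2002 paper been held to reporting like this, the manually lowered threshold would have had to be disclosed up front rather than discovered later in the raw data.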

So even though Andy Wakefield is at best an incompetent researcher (and a con artist at worst), he has done something good for science. As a result of his work (and the work of others), the standards for qPCR reporting have been raised so that results are standardized across disciplines and publications. It does bring up an interesting question: how many of Wakefield's previous studies are faulty, and what should be done about them? In this case, a correction should have been submitted, at the very least, once it became clear that the data analysis was not done properly and that the resulting conclusions are suspect.

However, one thing is clear: Andy Wakefield is not a virtuous crusader for the truth who has been unfairly attacked by the forces of evil. Despite the narrative he tells, his work speaks for itself. It is full of errors and improprieties that make all of it suspect. His actions are what led to him being shunned by the scientific community. Andy shouldn't be blaming anyone but Andy.

This page is part of an effort to flood social media with science and to engage the public in science-related issues. I work with plant viruses and the arthropods who vector them but I'll cover all aspects of virology here from human pathogens to archaea viruses and everything in between.
Some general rules: Please be respectful of each other and we'll get along just fine. Spam will be deleted without notice. Remember to have fun and science hard!
Note: The views expressed on this page do not necessarily represent the views of my employer and are mine alone.