But the story wasn’t as simple as that. Science has learned that a paper describing the new findings, already accepted by the Proceedings of the National Academy of Sciences (PNAS), has been put on hold because it directly contradicts another as-yet-unpublished study by a third government agency, the U.S. Centers for Disease Control and Prevention (CDC). That paper, a retrovirus scientist says, has been submitted to Retrovirology and is also on hold; it fails to find a link between the xenotropic murine leukemia virus-related virus (XMRV) and CFS. The contradiction has caused “nervousness” both at PNAS and among senior officials within the Department of Health and Human Services, of which all three agencies are part, says one scientist with inside knowledge.

*rubstemples*

*rubstemples*

*headdesk*

Sooooo… the ‘positive’ paper was submitted by Harvey Alter to PNAS. Harvey Alter is a NAS member, so this paper had, um, ‘a different peer review history’ than you all are accustomed to. Normally, you send a paper to a journal, and they send the paper out to three reviewers, and they give the journal editor a thumbs-up or thumbs-down. When you are a NAS member submitting to PNAS, you pick your own reviewers and send in your own reviews. So, you could, like Lynn Margulis, send a controversial paper out to seven people to get two ‘good enough’ reviews, and be accepted for publication.

Just to be 100% clear, Alter isn't doing anything 'wrong', even if he sent it out to 50 people to get two good reviews. He wouldn't be doing anything 'sneaky' and it's not 'cheating'; that's just the way things work at PNAS. But he must understand that it's a valid concern for Average Joe/Jane scientists who also know how PNAS works. And considering the crap PNAS got for publishing that insane 'caterpillars and butterflies are two different species!' weird-ass paper for Margulis, I don't blame them at all for being 'nervous' about not looking-before-they-leap into this controversy.

… the journal’s editor-in-chief, cell biologist Randy Schekman of the University of California, Berkeley, sent the paper out for further review after government agencies requested the publication delay. That review came back with requests for additional studies, the scientist says.

Again, this does not mean Alter did something sleazy or wrong. It also doesn't mean that Alter's reviewers are idiots or gave him softball reviews. But people are more critical in anonymous reviews than in non-anonymous reviews for colleagues. The PNAS editor didn't want to jump into anything before sending it out to a few reviewers of their own, and those reviewers wanted a few more experiments. No big whoop, no massive government conspiracy, just exactly what would have happened if Alter had sent the paper to a non-PNAS journal.

Western Blot– They looked for antibodies to XMRV in 51 CFS patients and 53 matched controls, 121 random blood donors, and sera from 26 people infected with a retrovirus (HTLV, HIV-1, and/or HIV-2). Their positive control sera lit up the appropriate bands for Gag and Env. None of the other samples had any antibodies to XMRV.

ELISA– They also sent the CFS + matched samples off to another lab for blinded testing– this other lab had no idea which samples were from CFS patients, and which were from the matched controls. With ELISAs, you put a LOT of the protein of interest (Gag, Env) into a well, instead of just a band in a gel in a Western, and look for reactivity. They found a couple of weakly positive samples for Gag (not Env), one CFS, one control– but those couldn't be confirmed with a different test. They were also sent positive and negative controls, which they were also blinded to, and they properly identified them.

PCR– They used the same damn PCR as WPI, and then a separate primer set they designed. They could see 10 copies of XMRV diluted in human genomic DNA for their controls. They couldn't find it in whole blood or PBMC from the CFS+matched set, or 41 totally different (not the ones used in the Western test) blood donors.

They also sent their stuff to another lab for blinded testing, with blinded positive and negative controls. Positive were positive, negative were negative, and none of the experimental samples had any XMRV.

They used the same definition of CFS (1994 CDC Fukuda Criteria) as the Science paper. They used the same PCR conditions. They had two different labs blindly double check their findings two different ways. They couldn't find XMRV in healthy blood donors to an appreciable degree, much like BigPharma (WPI also did not find antibodies to XMRV in healthy donors, but for some reason is sticking with their 3.7% PCR number).

So what did this paper do ‘wrong’?

What's the excuse now?

And what did Alter do that was so magically different that his lab could replicate the Science paper?


Comments

Re #95
yes, quetzal, evidence of the virus in serum is what I meant to say. Um, testing for antibodies is also one of the things Switzer attempted: “utilized a Western blot (WB) assay that showed excellent sensitivity to MuLV and XMRV polyclonal or monoclonal antibodies.” The van Kuppeveld group seem to have checked for the virus as integrated into the human genome in stored blood samples. The Erlwein group looked for the virus itself.

The Groom group looked for both the virus and serological responses and found: “Some serum samples showed XMRV neutralising activity (26/565) but only one of these positive sera came from a CFS patient. Most of the positive sera were also able to neutralise MLV particles pseudotyped with envelope proteins from other viruses, including vesicular stomatitis virus, indicating significant cross-reactivity in serological responses. Four positive samples were specific for XMRV.”

A Japanese symposium found antibodies (2/32 prostate and 5/300 healthy) and the virus (1 prostate) in blood, with a low prevalence (5/300) in the healthy population.

So I see several groups (studying whichever disease) finding antibodies, and some people finding virus in tissue; in both of these there is usually a low prevalence in the general population (except Groom, which was just confused overall). Switzer doesn’t match the prostate antibody findings. Lombardi’s general prevalence seems similar to the others who were able to find the virus at all. The Japanese symposium did report finding the virus itself in one prostate cancer patient’s blood. Other attempts to find the virus in blood failed (so far).

Fischer group seems to think it’s difficult to find even in tissue: “However, the finding of XMRV in PBMCs from patients with chronic fatigue syndrome is controversial because multiple studies in Europe have failed to detect XMRV (6–8). Similarly, frequency of XMRV in prostate cancer samples ranges from 0 to 23%, depending on geographic restriction of the virus or, more likely, diagnostic techniques used (PCR, quantitative PCR, immunohistochemistry) (1–3,9,10).”

Yes, exactly what you said, the things various groups are looking for are different. Methods are varied. There don’t seem to be any actual replication studies for CFS. Which leaves wide open the question: how important are the technique differences? Actual replication studies are necessary to answer this question (exact technique match with new samples).

Re: #82

ERV, yes, they should have found evidence of XMRV if Lombardi’s numbers are right (or approximately right) and if their technique was good. Out of 147 samples (Switzer, I think the largest), one would expect 3-4 positives (2-3%) from the low general prevalence found in other studies. Sadly, you can’t necessarily expect a bump from ME/CFS patients because of the way the definition is tortured (besides the Reeves example, you have the British equivalent, Wesseley, where they positively diagnose CFS for any patient for whom they cannot readily find an alternate diagnosis for their symptoms). So at this point, statistically one would expect about 5 of Switzer’s 51 fatigued patients to have ME/CFS (because of the 10-fold inflation due to broadening).
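The expected-positives arithmetic above can be sanity-checked with a quick sketch (the sample size of 147 and the 2-3% prevalence range are the figures quoted in the comment; the binomial calculation itself is mine):

```python
# Sketch: if general-population XMRV prevalence were ~2-3%,
# how many positives should a 147-sample study expect,
# and how likely is a complete miss (zero positives)?

n = 147  # samples (commenter's figure for Switzer)
for p in (0.02, 0.03):
    expected = n * p              # mean of a binomial(n, p)
    p_zero = (1 - p) ** n         # probability of zero positives
    print(f"p={p:.0%}: expect {expected:.1f} positives, "
          f"P(zero)={p_zero:.1%}")
```

Even at the low end of the quoted prevalence, a complete miss has only about a 5% chance (and about 1% at 3%), which is the commenter's point: either the prevalence figures or the detection technique has to give somewhere.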

Then you have the added complication that these were people drawn from the community (in Switzer), not from medical centers (the way the lower figures are produced). A Korean study says "CF [chronic fatigue broadly] was common but CFS was rare in community-based primary care settings" (and community-based recruitment, with half the participants not seeing a doctor at all, makes it rarer still). You don't find many strict-Fukuda CFS patients by looking this way. We seek medical help. [Typically we don't get much help, but we do try] So it's possible that no participants had ME/CFS, or that maybe 1-3 fatigued participants did.

As far as statistical significance goes, this number is irrelevant in a sample of 51, but if they had any ME/CFS patients and if Lombardi and the Japanese blood bank symposium (and the tissue sample prevalence rate) are right, they should have found a very small positive set, if their technique was good.

So, no, the varied results don't seem unusual to me, especially in light of #93. It doesn't much matter what it was they didn't find this time; some of these non-XMRV "CFS" studies include coauthors who aren't credible regardless of what they failed to find, because they tend to fail to find any physiological correlate of CFS/ME (this goes for the Switzer, Erlwein, and van Kuppeveld studies, and possibly the Groom study as well).

For important context, read a recent NYT article, which provides background for understanding the skepticism toward "no findings" studies. It's otherwise good, except that it notes without comment a CDC finding that childhood trauma predisposes to CFS–an unnecessary question to ask, since we already knew fatigue in general can be correlated with trauma but ME/CFS is not, and since there are other, more useful things we could research with our scarce resources, especially given what is known about CFS/ME… yet the CDC and the other "no significant/useful pathophysiological findings" groups continue to tout this and similar "findings", and the study is featured prominently on the CDC's CFS page.

I understand ERV has a similar bias against WPI, and I agree that some statements made by some people, regarding comparison of Lombardi and Switzer studies, were unprofessional. While not excusing or condoning that, I think it's a little more understandable in context. Unprofessionalism is present on both sides and it seems to have started with the CDC (as you can see from the NYT article above). The whole history of CFS study is full of fraud, (2), scandal, feud, tepidity, bias, prejudice, and abuse. It's not unusual that this ruckus extends to the XMRV hype, too. This is all very unfortunate for us patients (both the CFS patients and the other fatigued patients whose conditions are also not being appropriately studied)–and for science–but it is what it is.

If it comes down to a choice between some jerks who do science and some jerks who do philosophy (their pet psychosocial model is evidently unfalsifiable to them: ergo, not science), I’ll go with the jerks doing science.

Re #96-#99:

No studies showing an increased risk of CFS/ME from blood transfusion probably means no one has checked. If the topic had been examined, you'd have found a "no risk" study when looking for "risk" studies. Meaning: we don't know and can't provide evidence either way. So far.

We do already know viral infections can be a trigger, so unless XMRV is somehow causative, it isn't needed, I agree. There are theories about how this retrovirus could be causative (create the observed deficiencies in the immune system) and those are theoretically interesting, but we can't know that without longer studies (of a different type than those looking for a possible correlation), and, as has been mentioned, it's equally possible that some other cause (genetics, for instance) leads to immune deficiencies which allow pathogens (viruses, etc.) to hang out instead of being fought off and degraded.

We also know a lot about the pathophysiology of ME/CFS, so another virus, even one that potentially infects the majority of the ME/CFS population, also isn't needed to validate the condition as a disease.

Soapbox: We know little, however, about non-CFS (non-other-diagnosis) fatiguing conditions, and one would hope these would be studied separately, however they can be divided into discrete groups. These patients are also being hurt by the "fatiguing illness" rubrics because their conditions aren't being studied.

@daedalus2,
I think you might be correct that reactivated herpes viruses are a result of a dysregulated immune system; not necessarily XMRV. I don't discount the possibility. I do think there is a hereditary component since my mother and grandmother had symptoms going way back. I also believe there is some relevance to the "hygiene hypothesis" in that certain illnesses are taking advantage of modern, sterile, hyper-sensitive immune systems. Really going out on a limb: I've noticed that with some of the patients I've met, many are either grandchildren or children of people who were younger/youngest members of huge families. As in the crap genes got dumped on the youngest of many-kid families, and that ended up manifesting as CF in later generations. Just an observation.

Another new and hyperbolic paper: Proteins of the XMRV retrovirus implicated in chronic fatigue syndrome and prostate cancer are homologous to human proteins relevant to both conditions. [From Nature Precedings]

Because I’m a nasty stalker, I looked up his blog and found him on facebook. He has a PhD in biochemistry (well, he says he has on fb, so it MUST be true!) but I think he may now have retired. It IS one of the beauties of NCBI that anyone can do a Blast on just about anything, but I sort-of feel this guy has used his internet powers for evil. Or at least, he’s massively misinterpreted his findings. Doesn’t sequence homology in the genes this guy’s pulled out just result from… I don’t know, convergent evolution? I can’t do the maths to work out the odds of getting five amino acids in a row by chance, but it can’t be THAT rare. AND retroviruses have been jumping in and out of our genomes for millions of years, so it’s not that unusual that they have left remnants behind that might have been co-opted because they conferred some sort of advantage.

JohnV’s calculation is correct, if his assumptions were correct. They aren’t, of course – some base pairs are more common than others, some amino acids are coded by more triplets than others, and some amino acids are more prevalent than others for other reasons – but it is useful for a rough glance at how “surprising” the results are.

Note that the human genome is roughly 3×10^9 bp, or about 10^9 aa if you ignore alternate reading frames. More than 98% of the human genome is non-coding, so let’s estimate a total of 1.6×10^7 aa in human proteins (ignoring alternative splicing – this is the total size of human exonic sequences). The chance of a specific 5 aa sequence appearing at a given position is (1/20)^5 = 3.125×10^-7, so a naive calculation is that for every specific 5 aa sequence there are likely (3.125×10^-7)×(1.6×10^7) = 5 matches in human proteins. Alternatively, we could say there is a 1-(1-3.125×10^-7)^(1.6×10^7) = 99.3% chance that any given 5 aa sequence can be found somewhere in human proteins.

The actual probability calculations would, as JohnV pointed out, be much more complex. But even the simplified equations point out that these results are likely convergent.
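The naive estimate above can be reproduced in a few lines (equal amino-acid frequencies are assumed, which, as noted, is false but fine for an order-of-magnitude check; the 1.6×10^7 aa proteome size is the comment's own figure):

```python
# Naive pentapeptide-match estimate: how often does a specific
# 5-amino-acid sequence show up in the human proteome by chance?

p_match = (1 / 20) ** 5    # chance a given 5-aa window matches, 20 equally likely aa
proteome_aa = 1.6e7        # rough total aa in human exonic sequence

expected_hits = p_match * proteome_aa              # mean number of matches
p_at_least_one = 1 - (1 - p_match) ** proteome_aa  # chance of >= 1 match

print(f"per-window match probability: {p_match:.3e}")        # 3.125e-07
print(f"expected matches in proteome: {expected_hits:.1f}")  # 5.0
print(f"P(at least one match): {p_at_least_one:.1%}")        # 99.3%
```

So under even this crude model, virtually every pentapeptide is expected to occur somewhere in human proteins, which is why a shared 5-aa run is weak evidence of anything.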

That said, I don’t believe his primary argument is that these sequences are true homologies (though he does appear to make that assertion at one point), but rather that the immune system is generating antibodies in response to XMRV that also target regular human proteins because of these identical sequences. The problem is that he is begging the question. Nowhere does he cite research demonstrating that having one specific pentapeptide makes a protein susceptible to being targeted by an antibody.

With our back-of-the-envelope calculation, it would seem unlikely that that would be the case. If an antibody only needed to match a single pentapeptide, nearly every antibody would target one or more human proteins. This is unlikely, to say the least. And as it turns out, structure is a very large portion of antibody binding. Having the same pentapeptide, but located in a completely differently shaped protein, will make it very difficult for an antibody raised against XMRV to target a human protein.