It’s well known that HIV mutates rapidly and continuously in infected people. An individual is infected with a handful of HIV viruses, but quickly becomes the host of a vast cloud of virus genomes, with the dominant strain of HIV evolving over time.

There are several factors selecting which HIV sequences are dominant at any one time. The most interesting force selecting mutations, at least in my view, is the immune system — this has been shown in many papers; one of the most recent comes from Bruce Walker’s empire1, in a paper I may talk more about some time — but other factors, often lumped together as “viral fitness”, are also important. A successful immune response to HIV is probably one that forces the virus (in its attempts to avoid the immune system) to lose overall fitness.

HIV isn’t unique in this, of course, although it’s probably the clearest example. Other persistent viruses very likely do the same thing, with mutations and selection driving the virus toward a more “fit” phenotype. An unusual example was just described in the Journal of Virology.2

Poliovirus and vaccination

The oral poliovirus vaccine is remarkably safe; even though the vaccine is live virus, it causes paralysis in only 1 out of 3 million vaccinees.3 (See the chart to the right4 for rates of wild polio vs. rates of vaccine polio — note that the inset, showing total cases of wild polio, has a log scale. Click for a larger version.) That’s remarkable because the virus — both the vaccine virus and the wild-type virus — has a very high mutation rate, even higher than HIV’s. So you might wonder why the vaccine strain doesn’t rapidly mutate back to a virulent form as soon as it gets into the host and starts replicating — that is, why doesn’t the virus mutate toward a more fit form, as happens with HIV?

A big part of the answer, in the case of polio, is the immune system. Unlike HIV, the immune system normally rapidly controls poliovirus replication; that means that the virus only has a narrow window between infection and clearance in which to find a virulent sequence.

In that case, what if the immune system doesn’t work properly? If the virus isn’t eliminated by the immune system, it should have more chances to find a virulent variant. In fact, as you’d expect, immune deficient people are at much higher risk of vaccine-associated polio disease — thousands of times higher; the risk is something like 3000 cases of disease per million vaccinees.
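To put numbers on that “thousands of times”: here’s a back-of-envelope sketch using the round figures quoted above (both rates are approximate, so treat the result as an order of magnitude, not a measurement):

```python
# Approximate rates quoted in the text (not exact epidemiological figures).
risk_normal = 1 / 3_000_000               # ~1 paralysis case per 3 million vaccinees
risk_immunodeficient = 3_000 / 1_000_000  # ~3,000 cases per million vaccinees

fold_increase = risk_immunodeficient / risk_normal
print(f"~{fold_increase:,.0f}-fold higher risk")  # ~9,000-fold
```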

Mutation of the vaccine strain

This has all been known for quite a while. The new paper2 looked in detail at the evolution of vaccine-strain polio in an immune deficient (hypogammaglobulinemic) patient over a couple of years. (This was actually a re-analysis of an old case; the patient was infected in 1963, as part of a study that I’m pretty sure would never pass an ethics committee today. Normally, known immune deficient patients are not, of course, given oral polio vaccine, because of the obvious risk. Although it does happen that people are vaccinated and later found to be immune deficient, in this study, an immune deficient patient was deliberately infected, along with a treatment that was hypothesized to protect him. That meant stool samples, with excreted poliovirus, were saved from the initial stages, and we are now at the stage where it’s technically feasible to sequence large chunks, and in some cases whole genomes, of hundreds of the excreted viruses.)

There were lots of interesting findings in the evolution that I’m going to ignore here. The bottom line is that the vaccine virus mutated fast (six changes within the first 21 days) and continuously, and before long reverted to wild-type characteristics; ultimately the patient was excreting polioviruses that were “essentially wild-type viruses in their measurable properties”.

That means that it’s possible for vaccinated immune-deficient patients to act as functional reservoirs of wild-type poliovirus, for a long time. The child in this study died in 1966, but there’s a case of an immune deficient person who has excreted vaccine-derived virus for over 20 years.5 That has obvious implications for eliminating polio from a country. I think there are also implications for HIV vaccination — a vaccine that forces the virus into an unfit state, which is a goal of some vaccine strategies, could produce a vast reservoir of virus that can rapidly switch back into a wild-type, virulent form in unvaccinated people. That would probably still be better than what we have today, but it’s something to think about.

Of course, even that rate is higher than anyone would like; as the rates of authentic polio have dropped to nearly zero, the relative risk of the vaccine has increased; and so the current recommendation is for the killed polio vaccine, which is even safer.[↩]

Vaccine Policy Changes and Epidemiology of Poliomyelitis in the United States. Alexander et al., JAMA. 2004;292:1696-1701.[↩]

Fred Goldberg has just published a paper that may have interesting implications for Jon Yewdell’s DRiPs theory.

Over a decade ago,1 Yewdell proposed that, first, protein translation sucks in terms of accuracy, so that many defective proteins are produced; second, that these defective proteins are rapidly degraded, which was why they hadn’t been identified before; and third, that these defective proteins are the dominant source of T cell (MHC class I) epitopes. It had long been known, of course, that MHC class I epitopes are produced as a byproduct of protein degradation; Yewdell’s suggestion was that because defective proteins are the most abundant class of degraded proteins, they are also the dominant source of MHC class I epitopes.

Lots of people were uncomfortable with the concept of sloppy translation, since I think it was generally believed (perhaps without much evidence) that translation is a very accurate process. That particular issue never bothered me very much, for reasons I’ve discussed earlier. (And there seems to be support for this concept, too; see the paper I quoted from a couple of weeks ago, that concluded that some 20% of newly-synthesized proteins are defective.)

However, it did bother me very much that there were quite a few examples of MHC class I epitope production that was clearly not linked to degradation of newly synthesized, defective proteins. As we interpreted Yewdell at the time, he wanted to propose that the vast majority of MHC class I epitopes came from DRiPs (“Defective Ribosomal Products”, a terrific acronym that has probably helped contribute to the success of the theory). Either he has subsequently softened his stance, or we overinterpreted his proposal, or — most likely — both; I think I’m fairly comfortable, now, with the current model that a significant fraction of epitopes come from DRiPs, and a significant fraction do not. What exactly a “significant fraction” is, remains to be determined.

One prediction Yewdell has made from his DRiPs theory is the presence of “immunoribosomes”. He reasons that if sloppy translation by ribosomes is a major source of MHC class I epitopes, then in the presence of inflammation (i.e. when there is the potential for infection, when you’d want to increase T cell surveillance) you would expect translation to become even sloppier, generating even more epitopes. (This, like the “immunoribosome” name, is an argument based on proteasomes and immunoproteasomes, which conceptually do something very similar.) This, I think, has been a much less successful prediction.

Fred Goldberg’s paper offers a bit of weak support to the concept,2 though it is far from vindication.

Because this work is done in yeast, which of course lack anything like MHC-based immunity, there’s no reason to expect immunoribosomes here. What Medicherla and Goldberg did find, though, is that cellular stress leads to differential degradation of proteins, with newly-synthesized proteins being particularly targeted for destruction. (There are a lot of other interesting things about this paper, including the evidence that these rapidly-degraded proteins require ubiquitination, which has been a little controversial. But I’m just going to talk about the one aspect of the work here.)

“Cellular stress” in this case was due to things like heat shock and toxins like paraquat, but mammalian cells undergo cell stress when they’re infected with viruses, among other things, so this might be something that could be adapted to immune responses. Their suggestion is that “many cytosolic proteins proceed through a prolonged ‘fragile period’ during which they are sensitive to degradation induced by superoxide radicals or increased temperatures” and that this fragile period exists because many proteins need to interact with binding partners or whatever before they become resistant to misfolding and degradation.

Because it is unlikely that the folding of many proteins would require 30-60 min, this “fragile period” presumably represents the time necessary for postsynthetic modifications, proper multimer formation, and localization, which together contribute to thermal resistance.

This fragile period apparently lasts an hour or so, longer than Yewdell had described for his DRiPs,3 and Goldberg seems to hint that such fragile, incompletely assembled proteins may be a more plausible source of rapidly-degraded proteins than DRiPs.

This fragile population, in Medicherla and Goldberg’s hands, represents a rather small fraction:

In yeast growing at 20 or 30°C, such rapidly degraded components comprise 3-4% of newly synthesized proteins, but the short-lived fraction reached 13% after shift to 38 or 42°C, and 10-13% of the proteins synthesized in the presence of cadmium or paraquat at 30°C. These treatments accelerated the degradation of 10% of proteins that would otherwise be long lived so that they behaved like short-lived components.

(My emphasis.) Nevertheless, it represents a 4-fold increase in degradation, and potentially (if these findings can be extended to cells with an MHC class I system) a 4-fold increase in antigen presentation. What’s more, if the cellular stress was triggered by a viral infection, the targeted proteins would probably be disproportionately biased toward viral over cellular proteins. All in all, this sounds more like the concept Yewdell was pushing toward with his immunoribosomes: Should we call it “Immunostress”?
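For what it’s worth, the “4-fold” checks out if you take the midpoint of the 3–4% baseline (a rough sketch of the arithmetic, not the authors’ own calculation):

```python
baseline_fraction = 0.035  # midpoint of the 3-4% short-lived fraction at 20-30 C
stressed_fraction = 0.13   # ~13% short-lived after shift to 38-42 C

fold_change = stressed_fraction / baseline_fraction
print(f"~{fold_change:.1f}-fold increase")  # ~3.7-fold, i.e. roughly 4-fold
```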

I posted a chart, last week, showing the spectacular reduction in measles cases and measles deaths following the introduction of measles vaccination in the mid-1960s. Anti-vaccine loons often dismiss such charts by claiming that they only demonstrate the effect of sanitation, or something — as if sanitation was only introduced into the US in 1965. In any case, here are some more data, showing the effect of measles vaccination at very different times, in very different countries.

Finland had a problem with measles, as well as mumps and rubella, in the 1970s. The vaccine coverage was about 70%.1 It’s important to note that for measles, which is probably the most contagious disease known to man, very high vaccine coverage (probably over 90%) is necessary to protect against outbreaks — this has been shown by modeling as well as by experience.
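That “over 90%” threshold drops out of the simplest epidemic model: in a well-mixed population, the fraction that must be immune to block outbreaks is 1 − 1/R0. The R0 values below are the textbook range usually quoted for measles, not figures from the sources cited here:

```python
# Herd-immunity threshold from the basic SIR model: 1 - 1/R0.
# R0 = 12-18 is the commonly quoted range for measles.
for r0 in (12, 18):
    threshold = 1 - 1 / r0
    print(f"R0 = {r0}: ~{threshold:.0%} of the population must be immune")
```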

So the 70% coverage was far too low to offer protection against outbreaks, and there were an average of about 100 deaths per year in the 5 million or so Finnish population. (That would extrapolate to, what, about 6000 deaths per year in the US, to put this in context with the previous chart.) In 1982, a national vaccination program was put in place for measles, and after 1986 (when an extra push was put in place) coverage increased to 97%.2
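That extrapolation is simple population scaling, assuming a US population of roughly 300 million (my round number, used only for scale):

```python
finland_deaths_per_year = 100
finland_population = 5_000_000
us_population = 300_000_000  # rough figure, for scale only

us_equivalent = finland_deaths_per_year * us_population / finland_population
print(us_equivalent)  # 6000.0 deaths per year
```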

Here’s the chart showing measles incidence in Finland. Note that this is a log scale, not a linear scale, on the Y axis. Also pay attention to the dates: Remember, 1982 national vaccination — 1986, final push.

OK, that’s Finland in the 1980s – a rich Scandinavian country, with excellent sanitation and so forth. Here’s a very different situation: three poor African countries in the early 21st century. Measles vaccine coverage in Burkina Faso, Mali, and Togo was very low at the turn of the century, between 33% and 69%.3 In Dec ’01 to Jan ’02, a series of nation-wide projects boosted vaccine coverage among children to over 95% in each country. Here’s the effect on cases and deaths:

It would be hard to find more different circumstances than those of Finland and Burkina Faso; yet in each case, increasing measles vaccine coverage to the proper level vastly reduced measles cases and deaths.

It’s happening again; vaccinations have been so successful that Americans are becoming complacent and forget what “disease” is. Snake-oil peddlers and profiteers are selling alarmism, people decide not to vaccinate, and, guess what, diseases surge back. The latest preventable outbreak is measles;1 there are already over 130 measles cases in the first half of this year, twice the annual average for the past decade. And for the first time in years, most of these cases, some 85%, are local rather than imported.2

These cases are being spread by the unvaccinated; by people who, from fear, apathy, or ignorance, have avoided getting their children vaccinated.

What did measles look like before vaccination? Here’s a chart showing case numbers before and after vaccination in the mid-1960s. Note the scale, in hundreds of thousands of annual cases.

But measles is an amusing and harmless childhood disease, isn’t it?

That’s not showing the permanently brain-damaged survivors (about twice as many as deaths), or the thousands of hospitalizations.

Of the 131 cases in this year’s outbreaks, “112 (91%) were unvaccinated or had unknown vaccination status. Among these 112 patients, 95 (85%) were eligible for vaccination, and 63 (66%) of those were unvaccinated because of philosophical or religious beliefs.”1

I don’t know about the parents of those 63 children, but personally I’m philosophically opposed to having my children risk a lifetime of brain damage.

I don’t actually have much to say about this paper1 from Kay Grünewald’s group — it’s on the mechanisms by which herpes simplex virus enters cells, specifically how the viral and cellular membranes fuse — but the images (using cryoelectron tomography) are so gorgeous I feel like I have to mention it. Subscription-only, I’m afraid, but check it out if you can.

It’s rapidly becoming accepted, if not quite dogma, that T cell quality (rather than, or as well as, quantity) is a critical factor in controlling HIV infection. (I’ve talked about T cell quality several times previously. What it means, simplified, is that antiviral cytotoxic T cells can have a range of different functions, and those CTL with multiple functions seem to do better at controlling HIV than those with only one or a handful of functions.) As a result, there’s a lot of interest in developing vaccines that induce multi-functional CTL, in the hope that those vaccines will better control the virus itself. A recent paper1 from Norm Letvin’s lab, though, supports the concept but doesn’t offer a lot of encouragement for the vaccine strategy.

Letvin’s group vaccinated monkeys against immunodeficiency virus using several different vaccine strategies, and evaluated the quality of the antiviral CTL elicited by those vaccines. As we have now come to expect, there were big differences in both the quantity and the quality of T cells with the various approaches. No surprises so far.

Next, they challenged the vaccinated monkeys with virus. Again, as expected, the monkeys that controlled the virus best had the largest and most multifunctional CTL response to the challenge (“both the magnitudes and functional profiles of the virus-specific CD8+ T cells generated by vaccination were associated with control of viral replication following SHIV-89.6P challenge”).

The unexpected part, though, was that the vaccine response didn’t tell you anything about the challenge response. That is, even though some vaccines gave lots of multifunctional T cells and others gave relatively little, that did not correlate with the eventual response after challenge; “Although the different vaccination regimens generated qualitatively different virus-specific T-cell populations, those differences were lost following the virus challenge.” Letvin’s group concluded that the similar levels of virus after challenge overrode the vaccine pattern.

The good news — kind of — is that all of the vaccines seemed to work relatively well. After challenge, “the profile of cytokine production by the virus-specific T lymphocytes in the control monkeys was heavily biased toward cells that produce only IFN-γ, while the virus-specific T lymphocytes of all of the experimentally vaccinated monkeys following challenge were uniformly polyfunctional.” That is, even though the vaccines didn’t ultimately differ from each other, vaccination did lead to a different, and probably better quality, CTL response than in unvaccinated monkeys.

This might suggest that even testing CTL quality as well as quantity after vaccination may not be very predictive. However, it’s also possible that the monkey model is once again being deceptive. For example, if their suggestion that the challenge dose re-set the CTL quality is correct, this might be highly sensitive to both the number of infecting viruses and, even more, to the precise kinetics of early viral replication; and there are a myriad of other differences as well. The bottom line, though, is a reminder that we really don’t understand antiviral immunity very well in any system, let alone the baroque interactions between HIV and the immune system.

Primary infection and late-stage infection were estimated to be 26 and 7 times, respectively, more infectious than asymptomatic infection. High infectiousness during primary infection was estimated to last for 3 months after seroconversion, whereas high infectiousness during late-stage infection was estimated to be concentrated between 19 months and 10 months before death.

Missense errors in translation occur at rates of one per 10³–10⁴ codons; at an error rate of 5 × 10⁻⁴, 18% of proteins expressed from an average-length (400 codon) gene contain at least one missense substitution.
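That 18% is just the probability of at least one error among 400 independent codons, 1 − (1 − p)^L:

```python
error_rate = 5e-4   # missense errors per codon
gene_length = 400   # codons in an average-length gene

p_at_least_one = 1 - (1 - error_rate) ** gene_length
print(f"{p_at_least_one:.0%}")  # 18%
```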

XPlasMap is a DNA/plasmid mapping program for MacOS 10.4 or higher. The present version is beta, version 0.96. I’ve finished adding all the new features that I want in v1.0, and have stomped all the bugs I know of. My plan is to continue testing for a week or so (I expect I’ll turn up a few more bugs); then compile and test on a couple of other computers with different OSes. At that point I could use some help, if anyone is interested in beta testing. If that goes smoothly I will release it (probably late August/early September) as “v.0.99 beta”, fix any bugs that turn up, and release a non-beta version in fall some time.

What’s new in XPlasMap 0.99?

New features include:

Annotation, with text, arrows, circles, etc.

Convert groups of enzymes to multiple cloning sites, and vice-versa

Auto-positioning of enzymes and text

Copy, cut, and insert fragments by selecting restriction sites

Import from EMBOSS format, as well as GenBank, FastA, and plain text sequences

Export to a plain-text description of the plasmid (not all that useful, but someone asked for it)

Improvements include:

Fine-grained control over fonts

Improved control over resolution of exported images (PNG and JPG)

Much more responsive List View

More control over text positioning

(Click on the image above to see a larger version illustrating annotation and fonts.) As well as a host of interface improvements (at least, I think they’re improvements), there are extensive changes under the hood that I hope will make the program more maintainable and stable.

Our results suggest that in some cases, the lower replication capacity of HIV-1 isolates in LTNP1 and ES2 may be the result, rather than the cause, of suppressed evolution: a qualitatively superior HIV-1-specific immune response that limits viral replication will prevent evolution toward greater fitness. … we conclude that the immune system of ES9 is controlling viral replication by at least two different mechanisms: there is a direct inhibition of viral replication by polyfunctional HIV-1-specific CD8+ T cells that proliferate in response to autologous viral peptides, and there is selection for and maintenance of escape mutations that have a negative impact on viral fitness. Vaccines that elicit CD8+ T cells with both properties may be very effective at controlling HIV-1 replication.