All you want in a vaccine is that (1) it doesn’t do any harm, and (2) it prevents disease. When you’re running initial tests on a potential vaccine, though, you often can’t actually include (2) in the tests — especially for a human vaccine — because it’s rarely acceptable to infect your volunteers with, say, HIV. Instead, you identify surrogate measures, like levels of antibody or numbers of T cells, and you judge your vaccine on those surrogate measures at first. If your vaccine doesn’t induce a lot of antibody, say, or cytotoxic T lymphocytes (CTL), or whatever it is that you’re measuring, then it’s back to the drawing board.

The problem with that approach, of course, is that we still don’t understand the immune system all that well, so the surrogate measures that get used may not be ideal. We’ve seen this with a number of HIV vaccines, which in preliminary tests induced great surrogate measures but ended up not protecting against disease. This (among other things) has led to a recent focus on the quality of immune responses as well as the quantity. A recent paper emphasizes this, coming at the issue from a very different direction.

Malaria vaccines have been a huge challenge to develop. Getting strong immune responses against protective antigens has been difficult, and moving candidates into clinical settings has usually shown rather underwhelming efficacy. One of the approaches tested, as a way of safely inducing immune responses, was genetic immunization (DNA vaccines). In this approach, rather than a protein antigen, you inject your patients with DNA encoding the antigen of interest. The DNA gets taken up by cells, which express your antigen of interest, and that antigen then (hopefully) induces an immune response.

As I understand it, this approach has worked pretty well in mice, but not so well in humans. Immune responses in humans vaccinated with DNA have usually been very low; so low that (based on these surrogate measures) the vaccines were abandoned and people weren’t challenged with the pathogen.

… it became clear that, for reasons that remain poorly understood, the same high immunogenicity could not be reproduced in humans. Genetic immunisation of volunteers could induce at best CTL cells but low, or absent, CD4 T-cell and antibody responses. The immunogenicity was so low that, although reported, several clinical trials were considered unsuitable for publication. In many of these trials, challenge by the infectious pathogens was either not possible or decided against in view of the limited responses induced.1

Pierre Druilhe, at the Pasteur Institute, decided to test the approach more directly, with a challenge experiment.1 His group vaccinated some chimps with a DNA vaccine against a malaria antigen, an approach that had previously led to very weak immune responses in humans. Here again, the immune responses were not very exciting: there was no humoral (antibody) response, though there were detectable MHC class I- and MHC class II-restricted T cell responses. But the effect on malaria infection was dramatic: there was sterilizing immunity, with no detectable parasites, in 3 of the 4 chimps.

The authors comment that “these findings suggest that the relative scarcity of an immune response should not necessarily exclude the assessment of the protective efficacy of a vaccine candidate when ethically feasible,” which is a reasonable suggestion, though I don’t think their data really show that this was a “scarce” immune response. Did they really see a low response when you consider T cells, or is it actually a strong response that is strictly biased toward the cell-mediated arm? They also left out some information I’d consider critical: how the T cell responses here compared to those from a more conventional vaccine approach. They cite an earlier paper2 showing some protection against schistosomiasis in cattle after a DNA vaccine, in spite of undetectable antibody responses, to argue that DNA vaccines protecting despite a low immune response may be a general effect; but since the schistosomiasis study didn’t even look at T cell responses, as far as I can see, it doesn’t answer that question either.

Overall, I think this is not a very strong paper. It’s just a handful of animals (and some of their stats are highly questionable; I don’t think you can challenge 4 animals twice and call that a cumulative N=8), and they don’t show how the T cell responses compare to other approaches, so I don’t know whether this is even a “weak” immune response. Still, it’s an interesting observation, and it emphasizes that it’s important to understand the quality of an immune response as well as the quantity.
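To see why counting each animal’s two challenges as independent trials is misleading, here’s a quick sketch (the 0-of-4 control figures are purely illustrative, not taken from the paper) using a hand-rolled one-sided Fisher exact test: the same 75% protection rate is not significant at the true group size of 4, but appears significant if you pretend each challenge is a separate animal and call it N=8.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    probability of seeing >= a 'protected' in row 1 under the null."""
    n_row1 = a + b              # size of group 1 (vaccinated)
    k_col1 = a + c              # total protected across both groups
    n_total = a + b + c + d
    denom = comb(n_total, n_row1)
    x_max = min(n_row1, k_col1)
    # Hypergeometric tail: P(X >= a)
    return sum(comb(k_col1, x) * comb(n_total - k_col1, n_row1 - x)
               for x in range(a, x_max + 1)) / denom

# True group size: 3/4 vaccinated chimps protected vs a hypothetical 0/4 controls
p_n4 = fisher_one_sided(3, 1, 0, 4)   # ~0.071 -- not significant at 0.05

# Treating each animal's two challenges as independent doubles every count
p_n8 = fisher_one_sided(6, 2, 0, 8)   # ~0.0035 -- "significant", but spuriously so
```

The protection rate is identical in both tables; only the pretend sample size changed, which is exactly why a cumulative N=8 from 4 twice-challenged animals overstates the evidence.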

I can imagine that somebody called Pierre Druilhe might speak with a Breton accent, but did he really say

“these findings suggest that the relative scarcity of an immune response should not necessarily exclude the assessment of the protective efficacy of a vaccine candidate”

The claim that “the relative scarcity of an immune response should not necessarily exclude the assessment of the protective efficacy of a vaccine candidate when ethically feasible” is great. However, the Plos One journal is not peer reviewed and I still had some concerns about this paper.

Zimmorn: However, the Plos One journal is not peer reviewed and I still had some concerns about this paper.

As I said, “Overall, I think this is not a very strong paper” — so I certainly agree with you. But technically, PLoS ONE is peer reviewed. It differs from (most) other journals in that the reviews are not supposed to consider the relevance and impact of the articles, but the articles are peer reviewed for technical quality:

The peer review of each article concentrates on objective and technical concerns to determine whether the research has been sufficiently well conceived, well executed, and well described to justify inclusion in the scientific record. … Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of ‘importance’ required by a given journal, PLoS ONE uses peer review to determine whether a paper is technically sound and worthy of inclusion in the published scientific record.

It leads to an interesting mix of papers; surprisingly, to me, there’s a relatively low percentage of trivia, but there do seem to be a number of papers like this one that could (in my opinion) have been weeded out by the technical-quality criterion.

