Rotavirus Vaccine in the NICU: Is it Safe?
Reviewed by Terri Stillwell, MD

In 2009, the Advisory Committee on Immunization Practices (ACIP) updated its recommendations regarding administration of rotavirus vaccine. In that update, the maximum age for the first dose of the vaccine was raised to 14 weeks and six days. Given the risk of viral shedding and potential horizontal transmission to other inpatients, the ACIP maintained its recommendation for preterm infants that rotavirus vaccine be given at the time of discharge from the neonatal intensive care unit (NICU).

In the case of extremely premature infants and those with complex congenital medical conditions, hospitalization can be prolonged, with nearly one-fourth of patients remaining hospitalized at 14 weeks and six days of life, thus making them ineligible for rotavirus vaccine under current recommendations. A recent article in Pediatrics addresses the safety of rotavirus vaccine administration in the NICU setting. The institution in the study routinely administered rotavirus vaccine if an infant was receiving other scheduled two-month vaccinations and was tolerating enteral feeds.

During the study time period, 96 patients received rotavirus vaccine while still in the NICU. A majority of those patients (76 percent) were either asymptomatic or symptomatic but unchanged from baseline after receiving the vaccine. Twenty-four percent had changes in their clinical baseline post-vaccination, none of which were attributed to rotavirus vaccine.

As a comparator, 801 unvaccinated patients, hospitalized in the same geographic location as the vaccinated patients, were monitored for clinical changes. Only 10 patients (1 percent) exhibited gastrointestinal symptoms within 15 days of a nearby infant receiving the vaccine, and upon further review all 10 had other medical explanations for their clinical change.

Despite the limitation that rotavirus testing was not routinely performed in these patients, the study suggests that the risk of post-vaccine shedding may not be as great as previously thought. Further studies are needed to address this issue and the potential use of rotavirus vaccine in infants in this setting.

A minority of patients with tuberculosis are concurrently infected with drug-susceptible and drug-resistant strains of the organism. Much about such individuals remains unknown, including the pathogenesis of infection with organisms of different drug susceptibilities, optimal therapeutic regimens, and whether there are differences in clinical outcomes. To help address this last point, investigators conducted a retrospective cohort study of 475 patients with multidrug-resistant (MDR) tuberculosis in Botswana; their findings appear in a recent issue of The Journal of Infectious Diseases.

Phenotypic heterogeneity in drug-susceptibility test (DST) results was defined as the recovery of isoniazid- and rifampicin-susceptible M. tuberculosis within three months of treatment initiation for MDR tuberculosis. Outcomes were classified as “good” (cure or treatment completion) or “poor” (treatment failure, treatment default, or death). Time to culture-conversion was also studied. There were no baseline between-group differences in bacillary burden, prevalence of extrapulmonary tuberculosis, number of effective drugs in the treatment regimen, or HIV infection status.

Thirty-three (7 percent) of the patients had heterogeneity in phenotypic DST results. Nineteen of these 33 (58 percent) had poor outcomes, as compared with 106 of 442 (24 percent) patients without such heterogeneity (P < 0.001). This difference was driven by (and only significant among) HIV-infected patients (unadjusted HR 3.4; 95 percent CI, 1.8-6.4). HIV-infected patients also had significantly longer times to culture-conversion, and nearly half of HIV-infected patients with phenotypic DST heterogeneity did not achieve culture-conversion, as compared with 15 percent of patients without phenotypic DST heterogeneity.

These intriguing results raise a number of questions, including ones about surveillance for phenotypic DST heterogeneity, particularly among HIV-infected persons, and ones about optimal treatment, such as the inclusion of isoniazid and/or rifampicin in the medication combination for such patients. Implications include not only individual patient health but, given the longer times to culture-conversion among HIV-infected patients with phenotypic DST heterogeneity, public health as well.

Bacterial skin and skin-structure infections are among the most common reasons for adult hospitalization in the United States. These infections are caused primarily by Staphylococcus aureus and Streptococcus species. The emergence of methicillin-resistant S. aureus has complicated treatment, as currently available agents can be toxic, face emerging antibiotic resistance, or require daily intravenous administration.

In the June 5 issue of The New England Journal of Medicine, researchers report the results of a double-blind, double-dummy, multicenter randomized trial in acute bacterial skin and skin-structure infections. Intravenous dalbavancin (a lipoglycopeptide with a long plasma half-life and activity against Gram-positive pathogens), given on days one and eight, was compared with intravenous vancomycin for at least three days followed by linezolid.

The primary outcome of interest, an early clinical response at 48-72 hours, did not differ between treatment groups: 79.7 percent in the dalbavancin group versus 79.8 percent in the vancomycin/linezolid group. The secondary end-point of response at the end of therapy was likewise similar: 90.7 percent with dalbavancin vs. 92.1 percent with vancomycin/linezolid.

Adverse events were more common in the vancomycin/linezolid group, although these were primarily limited to nausea, diarrhea, and pruritus.

Dalbavancin may be a welcome addition to the fight against Gram-positive infections, and the possibility of once-weekly dosing offers a considerable advantage over currently available agents. However, pricing has not yet been released, and the potential for the development of resistance with continued exposure to subtherapeutic levels has not yet been assessed.

In a multicenter, prospective study of ICU patients published in the April 15 issue of Clinical Infectious Diseases, researchers sought to examine the pharmacokinetics (PK) and pharmacodynamics (PD) of β-lactam antibiotics in critically ill patients and correlate established PK/PD determinants of efficacy with clinical outcomes.

Patients treated with β-lactams had blood antibiotic levels measured halfway through—and at the end of—a dosing interval. The percentage of patients in whom free drug levels were maintained above the minimum inhibitory concentration (MIC) of the infecting organism for 50 percent and for 100 percent of the dosing interval (50 percent T>MIC and 100 percent T>MIC, respectively) was calculated. These parameters were then correlated with a positive clinical outcome, defined as completion of the treatment course without change or addition of antibiotic therapy for 48 hours after completion.
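To illustrate the attainment definitions above, a minimal sketch (with hypothetical concentrations and MIC values, not the study's actual analysis) checks the two sampled free-drug levels against the infecting organism's MIC:

```python
# Hypothetical sketch: classify PK/PD target attainment for a beta-lactam
# from the two free-drug levels sampled per patient (midpoint and end of a
# dosing interval), mirroring the 50% T>MIC and 100% T>MIC definitions above.

def attainment(free_mid: float, free_trough: float, mic: float) -> dict:
    """Return whether free drug stayed above the MIC for 50% / 100% of the interval."""
    return {
        "50%T>MIC": free_mid > mic,      # above MIC at the interval midpoint
        "100%T>MIC": free_trough > mic,  # above MIC at the end of the interval
    }

# Example: free levels of 16 and 2 mg/L against an organism with an MIC of 4 mg/L
result = attainment(16.0, 2.0, 4.0)
print(result)  # {'50%T>MIC': True, '100%T>MIC': False}
```

In this scheme a patient meets the 100 percent T>MIC target only if the trough level still exceeds the MIC, which is why trough sampling is the stricter of the two checks.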

Significant limitations were noted in an accompanying editorial. Most prominently, MIC data were unavailable for a significant number of patients: only 73 percent had an organism recovered, and of these only a third had an MIC determined. For patients without a measured MIC, a “worst case scenario” was assumed, using the highest European Committee on Antimicrobial Susceptibility Testing (EUCAST) MIC among all organisms against which the drug is active. This may have overestimated the proportion of patients failing to achieve 50 percent T>MIC. Additionally, 62 percent of patients received combination antibiotic therapy, which may have influenced clinical outcome; results were not stratified on this basis.

Despite these limitations, this study raises questions about standard dosing regimens and adds interesting data on the PK/PD of β-lactams in critically ill patients, a topic that warrants further study.

In a report recently published online in Transplant Infectious Diseases, researchers describe their genomic analysis of 570 CMV-positive plasma specimens. In this study, specimens submitted by physicians to a commercial reference laboratory for resistance testing were evaluated by nucleic acid sequence analysis. Overall, 28 UL97 and 43 UL54 mutations (all previously confirmed to confer antiviral drug resistance by marker transfer experiments) were included in the analysis. Associations between CMV viral loads and the presence of resistance mutations were examined.

Antiviral drug resistance mutations were detected in 176 of 570 specimens (30.9 percent). Across the 570 specimens, 173 UL97 mutations (17 distinct) and 69 UL54 mutations (29 distinct) were identified. The mean viral load for specimens without antiviral resistance was 3.92 log10 copies/mL (range, 1.72-6.99 log10 copies/mL), while that for specimens with any resistance mutation was 3.93 log10 copies/mL (range, 2.03-7.15 log10 copies/mL).

Statistically significant differences were not observed when viral loads were compared between specimens with and without resistance mutations, whether UL97 mutations only, UL54 mutations only, or both. Likewise, there was no association between CMV viral load and the presence or absence of resistance mutations when paired specimens from a subset of patients (n=85) with multiple specimens (mean, 2.36 samples) were analyzed.

The study is limited by pre-selection bias and the lack of clinical data. However, this work does augment the current understanding of drug-resistant CMV and demonstrates that CMV viral load alone is not predictive of drug-resistant CMV.