A systematic review and meta-analysis found fecal immunochemical tests (FITs) to be moderately sensitive and highly specific with high overall diagnostic accuracy for detecting colorectal cancer (Ann Intern Med 2014;160:171–81). The authors emphasized, however, that the diagnostic performance of FITs depends on the cutoff value for a positive test result.

The researchers conducted the analysis because several professional associations have endorsed FITs in place of fecal occult blood tests (FOBTs) for their purported improved performance characteristics. However, the scientific literature has shown quite varied FIT results, with reported sensitivities for detecting colorectal cancer ranging from 25% to 100%, and specificities often higher than 90%. This has left questions in the clinical community about how best to apply FITs in colorectal cancer screening, the optimal number of stool samples for testing, the optimal cutoff value for positive results, and whether any particular FIT is better than others.

The authors considered 19 studies involving a total of 113,360 patients. They found that FITs had a pooled sensitivity of 79% in detecting colorectal cancer, and a specificity of 94%. The overall diagnostic accuracy of FIT was 95%.

The researchers were not able to identify an optimal cutoff value for colorectal cancer screening, but they did find that a cutoff value <20 µg/g had the best sensitivity and specificity and the lowest negative likelihood ratio.
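The likelihood ratios follow directly from sensitivity and specificity. A minimal sketch using the pooled estimates above (79% sensitivity, 94% specificity; illustrative only, as the cutoff-specific values in the meta-analysis differ):

```python
# Illustrative calculation: likelihood ratios from sensitivity and
# specificity, using the pooled FIT estimates reported above.
# Not the authors' code; the cutoff-specific values differ.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary diagnostic test."""
    lr_pos = sensitivity / (1 - specificity)  # how much a positive result raises disease odds
    lr_neg = (1 - sensitivity) / specificity  # how much a negative result lowers disease odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.79, 0.94)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # LR+ = 13.2, LR- = 0.22
```

A lower negative likelihood ratio means a negative FIT result more strongly reduces the post-test probability of colorectal cancer, which is why the authors flagged it alongside sensitivity and specificity when comparing cutoffs.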

Overall, no single commercial FIT performed markedly better or worse than the others. Studies that used quantitative FITs did report lower sensitivity than those using qualitative FITs, but the quantitative tests' pooled sensitivity improved after the researchers recalculated it without the now-discontinued OC-Hemodia test.

Sepsis Risk Model Promising for Risk-Stratifying Pediatric Patients

An updated version of a previously developed sepsis risk model prospectively estimated the probability of death reliably in a heterogeneous test cohort. These findings lay the groundwork for this model to be used as a benchmark to objectively evaluate septic shock outcomes, and to conduct risk-stratified analyses of clinical data, according to the authors (PLoS ONE 2014;9:e86242).

A consortium of researchers previously reported development and validation of the pediatric sepsis biomarker risk model (PERSEVERE), which incorporates five biomarkers along with clinical variables to assess risk of 28-day mortality in children hospitalized with septic shock. The five biomarkers are C-C chemokine ligand 3, interleukin 8, heat shock protein 70 kDa 1B, granzyme B, and matrix metallopeptidase 8.

After initially developing and validating PERSEVERE, the researchers combined the derivation and validation cohorts, updated PERSEVERE, and then sought to test the prognostic value of the updated version in an independent test cohort. The latter involved 182 subjects at 17 participating institutions. All were 10 years old or younger, had been admitted to the pediatric intensive care unit (PICU), and met pediatric-specific criteria for septic shock. All serum samples were obtained within 24 hours of a patient's presentation to the PICU.

The overall actual mortality in the test cohort was 13.3%, compared with 9.3% predicted by PERSEVERE. Using a risk cutoff of 2.5%, the researchers found that PERSEVERE had a sensitivity of 83% for predicting mortality, specificity of 75%, positive predictive value of 34%, and negative predictive value of 97%.
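The reported predictive values are consistent with the stated sensitivity, specificity, and observed mortality. A short sketch of the standard Bayes calculation (illustrative only, not the authors' code), using the 13.3% observed mortality as the prevalence:

```python
# Illustrative calculation: deriving PPV and NPV from sensitivity,
# specificity, and prevalence (here, the 13.3% observed mortality).
# Shows why PERSEVERE's NPV is high despite a modest PPV.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) via Bayes' theorem."""
    tp = sensitivity * prevalence              # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.83, 0.75, 0.133)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 34%, NPV = 97%
```

Because mortality is relatively uncommon even in this high-risk cohort, most positive predictions are false alarms (low PPV), while a low-risk classification is rarely wrong (high NPV), which is what makes the model attractive for risk stratification.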

Fructosamine and Glycated Albumin Comparable to HbA1c in Predicting Diabetes and Microvascular Disease

New research shows that fructosamine and glycated albumin not only are strongly associated with incident diabetes and diabetes-related microvascular disease, but also have prognostic value comparable to HbA1c (Lancet Diabetes Endocrinol 2014; http://dx.doi.org/10.1016/S2213-8587(13)70199-2). The findings suggest that these two analytes might be useful complements to HbA1c in clinical practice, especially when HbA1c testing is not available, or when HbA1c results might be considered unreliable.

Fructosamine and glycated albumin are markers of short-term, 2–4 week glycemic control, but neither is used routinely in clinical practice. On the other hand, HbA1c, a measure of long-term glucose exposure in the blood, has been the primary test used to manage diabetes, and in 2010 also was recommended as a diagnostic test for the disease. However, HbA1c has some limitations, including interference from hemoglobin variants and from conditions such as hemolytic anemia and pregnancy that can affect the validity of HbA1c results.

The authors measured fructosamine and glycated albumin in 11,348 participants without diabetes and 958 with diabetes, as part of the Atherosclerosis Risk in Communities (ARIC) studies. All ARIC participants included in the analysis had undergone ARIC's second clinical examination between 1990 and 1992, as well as the third visit, when retinal photographs were taken. The outcomes of interest were the associations of fructosamine and glycated albumin with risk of incident diabetes, retinopathy, and incident chronic kidney disease (CKD) during 2 decades of follow-up.

The researchers found that hazard ratios for incident diabetes were 4.96 for fructosamine and 6.17 for glycated albumin at values above the 95th percentile. Both markers also were strongly associated with retinopathy, and they predicted incident CKD almost as well as HbA1c, although the reverse was true for predicting incident diabetes.

The authors used a standard commercial assay to measure fructosamine, but employed a novel enzymatic method for glycated albumin. Both showed "excellent" performance, with coefficients of variation ≤3%. However, both assays have limitations, including susceptibility to alterations in serum protein turnover and to conditions such as liver disease, hyperuricemia, and thyroid dysfunction.

Based on their findings, the authors suggested that fructosamine and glycated albumin testing might be particularly useful when short-term measurement of glycemic control is important, such as for monitoring changed treatment regimens.

IGRAs Six to Nine Times More Likely Than Tuberculin Skin Testing to Yield False-Positive Results

A longitudinal study involving 2,563 healthcare workers tested for latent tuberculosis (TB) infection at four institutions found that, compared with tuberculin skin testing (TST), interferon-γ release assays (IGRAs) were six to nine times more likely to have false-positive results (Am J Respir Crit Care Med 2014;189:77–87). The findings suggest that individuals newly converting from negative to positive results should be retested to identify false-positive results.

For decades, TST has been the mainstay of TB testing, but it has low sensitivity and a subjective end point, and its results can be influenced by prior bacillus Calmette-Guérin (BCG) vaccination or by infection with nontuberculous mycobacteria. However, TSTs have annual conversion rates <1% in most U.S. hospitals. IGRAs offer the advantages of requiring just one patient-provider interaction to obtain results and of not being affected by prior BCG vaccination. However, studies have shown them to have high rates of positivity, negative-to-positive conversion, and positive-to-negative reversion.

The study involved healthcare workers undergoing annual occupational screening for TB. In addition to TST, the researchers evaluated two IGRAs, QuantiFERON-TB Gold In-Tube (QFT-GIT) and T-SPOT.TB, both of which are U.S. Food and Drug Administration-cleared for diagnosing TB infection. Overall, 5.2% of participants had positive TST results, 4.9% positive QFT-GIT results, and 6% positive T-SPOT.TB results. A baseline positive TST but negative IGRA was strongly associated with BCG vaccination, with an odds ratio of 25.1. Conversion rates ranged from 0.9% for TST to 8.3% for T-SPOT.TB. Of T-SPOT.TB and QFT-GIT converters, 77.1% and 76.4%, respectively, had negative results when retested 6 months later.

Based on these findings, the researchers called into question the existing practice of routine serial testing of healthcare workers at low risk for TB infection.

KDIGO Panel: No Need to Routinely Check LDL-C Levels in CKD Patients

Patients newly diagnosed with chronic kidney disease (CKD) should have lipid profile testing with measurement of total cholesterol, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol, and triglyceride levels, but follow-up lipid testing is not needed in most patients (Ann Intern Med 2014;160:182–9). These were two of 13 recommendations contained in a guideline on lipid management in CKD made by a workgroup of the Kidney Disease: Improving Global Outcomes (KDIGO) organization.

The authors specified that after patients undergo initial lipid profile testing, it is "unnecessary" to measure LDL-C in situations in which the results would be unlikely to change management. They also found no direct evidence that routine lipid testing improves clinical outcomes or adherence to lipid-lowering drugs. LDL-C, the authors wrote, is not suitable for assessing coronary risk in CKD patients. The guideline’s de-emphasis on using LDL-C levels to manage statin therapy mirrors that of controversial new guidelines on assessing and managing cardiovascular disease risk issued by the American College of Cardiology and American Heart Association.

The panel recommended statin therapy in all adults age 50 or older with CKD who have an estimated glomerular filtration rate (eGFR) ≥60 mL/min/1.73 m², and in those who are at least 50 years old with eGFR <60 mL/min/1.73 m² but who have not been treated with chronic dialysis or kidney transplantation.