UM Study Finds Inconsistent Scoring Of Newborn Intensive Care Units

ANN ARBOR — Scoring methods commonly used to evaluate Newborn Intensive Care Units are inconsistent, according to new research from the University of Michigan.

The research, published last week in the journal Pediatrics, compared 10 well-known scores that have been developed to evaluate NICUs. The researchers found more differences than similarities.

“This raises the question: do these scores level the playing field well enough, or are scores still somewhat unfair? And what more can we learn about the major causes of mortality for infants in neonatal intensive care? By doing research to improve tools to adjust hospital scores, we believe that it will be possible to improve care for these very vulnerable infants,” says Stephen W. Patrick, M.D., lead author of the study and a fellow in the University of Michigan’s division of neonatal and perinatal medicine at C.S. Mott Children’s Hospital.

Parents and payers want to know which hospitals do the best job of taking care of newborns — especially newborns with life-threatening illnesses, Patrick says. Considerable effort currently goes into helping the public understand the quality of care hospitals are providing, using scores like these applied to NICUs.

Patrick and his UM co-authors — Matthew M. Davis, M.D., associate professor in the Child Health Evaluation and Research Unit, and Robert Schumacher, M.D., professor of neonatal-perinatal medicine — looked at 10 different neonatal mortality risk adjustment scores, including the Clinical Risk Index for Babies and the National Institute of Child Health and Human Development “calculator.” The scores differed substantially in intended purpose, spanning areas such as research, clinical management and performance assessment.

The scores are also inconsistent in timing of data collection and inclusion of co-morbidity indicators.

Giving scores to hospitals is trickier than it may seem — largely because some hospitals take care of especially high numbers of very sick babies, and their scores can look worse than hospitals taking care of healthier babies. In other words, hospitals with sicker infants are taking a harder ‘test,’ says Patrick.

The researchers stress that an evaluation or scoring process is essential, but more meaningful comparisons are needed.

“To make fairer comparisons, researchers have developed different ‘risk adjustment’ techniques over the last 20 years,” Schumacher said. “But our research shows that these adjusted scores may not always level the playing field when comparing one hospital to another. Moreover, some of these tools are being used in ways they were not originally intended. We hope additional research in this area can both improve the care for patients and allow for reliable comparisons of institutions.”

This work was supported by a grant from the Robert Wood Johnson Foundation Clinical Scholars Program.