Let's share the glad tidings: Johns Hopkins is again ranked as the number-one hospital in all the land. I've written about this before, sharing my misgivings about ranking hospitals. What is the methodology? How sensitive is the ranking to random error, bias, and qualities of hospitals that have nothing whatsoever to do with their actual quality, such as reputation? What are we supposed to do with that information, who really uses it, and do they get better care as a result?

There are enough misgivings here to fill several chapters of a book, and in fact one chapter of mine is devoted to them. But the problem with measurement extends far past the ranking of hospitals. Doctors are being ranked this way, too, with the idea that public reporting of such information will help people make better choices about their health.

At the same time, many are urging our health care system toward greater patient-centeredness. Various research teams are developing measures to quantify how well a given visit with a physician enables shared decision making on the part of the patient-doctor pair.

So, when presented with an array of numbers - the rank of the hospital, the quality of the doctor, and the patient-centeredness of the practice - which one should the patient choose? Do we ask patients, as a whole, which ranking they find most important? Is each person to mix up a batch of numbers to find whatever aggregate satisfies their preferences?

These are big questions. As I outline in my book, there is evidence that precious few patients or doctors actually use these rankings. Perhaps if we include patient-centeredness in the mix, and automatically generate a weighted average (or some other statistical combination of measures) that corresponds to patients' preferences, people will feel like they are getting the best doctor they can find. That would be something to truly celebrate.
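To make the weighted-average idea concrete, here is a toy sketch of what such a preference-weighted composite might look like. Every score, weight, and name below is invented for illustration; no real rating system works this way in particular.

```python
# Toy sketch of a preference-weighted composite score.
# All scores, weights, and measure names are hypothetical, not real ratings.

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of quality measures, with weights reflecting one
    patient's stated priorities (weights are normalized to sum to 1)."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# A hypothetical patient who cares most about patient-centeredness:
scores = {"hospital_rank": 0.9, "doctor_quality": 0.7, "patient_centeredness": 0.8}
weights = {"hospital_rank": 0.2, "doctor_quality": 0.3, "patient_centeredness": 0.5}

print(round(composite_score(scores, weights), 2))  # 0.79
```

A different patient, weighting the hospital's rank most heavily, would get a different composite from the very same scores - which is exactly the point about aggregates matching individual preferences.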