Let’s fix performance measurement for physicians

I have a “like-hate” relationship with clinical metrics, performance measurements, and other such things. By now, almost all physicians live with them in the form of insurer “report cards,” PQRS, and “meaningful” use. Some of us have even more exposure to them by participating in patient-centered medical homes and accountable care organizations.

Why “like”? Because I believe they can help you to know how you’re doing. Happy patients, full schedules, phones ringing off the hook with new patient requests, and the belief that you’re doing a good job delivering care aren’t enough. Few things are more sobering than seeing data on the percentage of your diabetes patients who are not at goal, those with hypertension whose pressures are not under control, or those who haven’t undergone colon cancer screening. I know that many question the relevance of some of the clinical measures, which often look at intermediate and perhaps less meaningful outcomes or report on process, but they can be more informative than the gut sense that we have on how we’re doing our jobs.

In addition to “going viral,” Berwick’s and Wachter’s comments were hailed as vindication by many who were warning us of the dangers of the performance measurement movement, including my friend Dr. Robert Centor, who blogged that ten years ago he and others “understood what could go wrong and predicted the current problems. Of course, we were labeled kooks back then.”

Some of the measures align with what I wanted to know as a physician trying to deliver the best care — the percentage of my diabetic patients who are poorly controlled or my immunization rates, for example. Others seem much less relevant, if not outright silly. As much as I think that obesity is a health risk, I don’t think that the fact that I didn’t counsel my healthy, low-risk, and lean 20-year-old with a BMI of 26 to lose weight should count against me. Also, why do I have to track and report that I recorded a family history?

Enter the electronic health record (EHR). At first glance, it should have helped us with measuring quality, since after all, everything that we needed was in electronic form, so finding what we needed should be as easy as counting how many quotation marks I used in this column. We could do what was very difficult, if not impossible, with paper charts or claims data. In some ways, that was true, when the data was in a form that was easily extractable. However, entering in the EHR that I discussed smoking cessation with a patient wasn’t enough — it had to be in a “structured data” field that the reporting software could find.

Instead of cutting steps, the EHR added more than a few. Just last week, I was handed the latest “cheat sheet” of templates that my IT staff created to make it easier to add structured data to facilitate performance measurement. There’s a template for patients who are obese, one for those with diabetes who can’t take aspirin, and another for those with a history of myocardial infarction who can’t tolerate a statin. The list covers an entire page. I won’t have to use these templates often — my medical assistant and others do much of the template entering — but the fact that despite our having these expensive computer programs, we require such workarounds says a lot.

I welcome the calls for a more thoughtful approach to measurement and for EHR products that make it more seamless and less intrusive. The American College of Physicians’ (ACP) 2012 paper on performance measurement had many recommendations that, if followed, would prevent much of our current suffering: for example, minimizing the burdens of collecting data, using EHRs to facilitate (not complicate) the process, and, most importantly, that “performance measures that have not been shown to improve value to include higher quality, better outcomes, and reduced costs (and higher patient and physician satisfaction) should be removed from performance-based payment programs.”

Recently, ACP joined other professional societies, CMS, health plans, and other stakeholders to form the Core Quality Measures Collaborative, which seeks to align measures among payers, reduce the burdens of measurement, and make the whole process more meaningful.

It was supposed to be that if we provided high-quality care to our patients, the measurements would reflect that. Instead, the mantra is that if we score well on our measures, then that means that we provided high-quality care. In other words, the cart has become the horse. It’s time to fix that.

Yul Ejnes is an internal medicine physician and a past chair, board of regents, American College of Physicians. His statements do not necessarily reflect official policies of ACP.




Dr. Drake Ramoray

“Some of the measures align with what I wanted to know as a physician trying to deliver the best care — the percentage of my diabetic patients who are poorly controlled”

I am part of a small private single-specialty group in an underserved area. We have a several-month wait list for appointments. While we don’t submit our A1c levels for PQRS, we have been tracking them internally for three years. Over this time frame, our rate of A1c levels below 9% has not significantly changed (except for one doctor). Furthermore, the newer the provider, the worse the A1c numbers. Why, you may ask? Inexperience, perhaps? (I would argue obviously not; we are talking about an A1c of 9%.) No, not at all. In fact, the youngest doc had the only meaningful improvement over three years. Why?

The reason is new patients. Because we are in an underserved area, our practice is not in the habit of holding onto well-controlled diabetics unless they are extremely complicated (on a pump, for example). So we see patients for six months to two years or so and then, when satisfied with their control, return them to their PCP. Over the three years, our influx and outflux of patients was nearly constant. Every patient who becomes well controlled and is released to their PCP is replaced by a new patient with a terrible A1c. The new doc’s numbers improved not because of a change in quality of care, but because as her practice matured she had fewer slots for new patients. In our office, the only improvement in numbers came from the physician who, over that time frame, took on fewer new, poorly controlled patients. When compared at the state level, we have some of the worst A1c levels in the entire state, sharing that distinction with, you guessed it, other endocrinologists. So at minimum we get bad publicity if the numbers are made public, and at worst the diabetes specialists get paid less to care for the most complicated diabetics.

The problem with pay for performance is that it assumes that doctors are too lazy, incompetent, or unmotivated to do better. As it stands, our practice is not submitting A1c metrics to private insurers or the government. If I am ever required to do so, and especially if the data were made public, the best thing I could do for myself would be to move to a more affluent zip code or stop taking new diabetics (or, more likely, both). How exactly is that supposed to be good for the community (or your notion of population health)?

“that “performance measures that have not been shown to improve value to include higher quality, better outcomes, and reduced costs (and higher patient and physician satisfaction) should be removed from performance–based payment programs.””

Here is the rub. Metrics should be shown to improve patient care before being foisted upon physicians in the first place — you know, evidence-based medicine and all. Why are we spending millions of dollars and millions of physician hours annually to compile statistical metrics that are neither accurate nor useful? Of course, because the big insurance companies, hospitals, and government have their own interests at heart and good representation by their organizations, while physicians get the AMA, ABIM, and ACP.

PW

I would like to propose a thought:
Let’s say docs had the ability to have small patient panels, and spend as much quality time as needed with their patients, and were paid accordingly.
Let’s say the docs could develop strong relationships with these patients and, using those relationships, apply their own “quality measures,” including happy patients and a full waiting room, as well as numerical measures such as BPs and A1cs.
Let’s also assume that not all patients are going to automatically fall in line with our diet, exercise, and smoking recommendations, nor will all willingly expose their backsides for a colonoscopy, yet they can still have a good, strong relationship with the doc, so they know that if they have a problem, the doc will come through for them.
I don’t see a problem with this scenario, do you?
Instead we have administrative minions and physician “leaders” expounding on how we should practice, all the while driving us and our patients apart from each other. This, I have a problem with.

PW

“Or, more accurately, measurement for the sake of financially punishing doctors.”

No no no, you’re supposed to threaten, cajole, and bully them into following the diet and exercise advice and taking their meds, until they comply or leave and become another doc’s problem.

Dr. Drake Ramoray

I would agree with this statement if it weren’t for the patient satisfaction surveys. Of course, if my choice is between having a cadre of patients with excellent A1c levels and terrible patient satisfaction surveys (from chasing all the non-compliant/difficult patients away) or having terrible A1c levels made public with great patient satisfaction surveys, I’d have to go with good A1cs and bad patient reviews. Such self-defeating metrics doctors have made for themselves. Isn’t it one of Dr. G’s rules to pick the physician with the most experience and worst bedside manner?

PW

I can’t post the videos here, but my hero for lousy bedside manner is Doc Martin.

GoCougs

I think he vocalizes every physician’s internal monologue. Great show.

Thomas D Guastavino

You can’t fix stupid. Performance measures as they currently exist need to be dumped.