Health care cost is the central measure for gauging the impact of UHG’s provider rating tool. To investigate the impact of provider ratings on cost, we completed a claims-based analysis using data from UHG. The unit of analysis was health plan participants continuously enrolled over two years. Individuals were chosen based on the deployment of the provider rating tool within a specific UHG geographic market. UHG currently has full claims data available for over forty million subscribers in markets spanning the United States. In most markets, UHG covers approximately 20% of eligible enrollees, on average.

To answer our research questions we used a quasi-experimental design where we tracked the health care cost and utilization of a specific subscriber and dependents over a two-year period from 2005 to 2006. The tool was not available to consumers in 2005, so this serves as the pre-tool base year. However, UHG collected information that enabled us to create provider rankings for 2005 and thus to calculate a difference score described below. In 2006 the tool was introduced in selected markets, and it was introduced in more markets in 2007.

Data for our study came from two large employers with over 8,000 covered lives whose insurance contracts are all managed by UHG. We had access to medical and pharmacy claims and enrollment data for two years: pre- and post-exposure to the provider ranking system.

2006 was also the year in which the two employers had ‘full replacement’ of their PPO/POS plans with CDHPs. Neither firm had prior experience with CDHPs. Of these two employers, Firm #2 adopted a Health Reimbursement Arrangement (HRA) and a Health Savings Account (HSA) in 2006, while Firm #1 adopted only an HSA in 2006. Because exposure to the provider ranking system occurred simultaneously with full replacement, we cannot generalize the findings to employers that adopted the provider rankings but did not implement full replacement.

We selected employees who were enrolled in the employers’ health benefits programs for two continuous years. This provided a cohort sample with which to identify the effects of the provider rankings. Firm #1 had higher cohort retention, with 61.6% of the first-year population also present in the second year; Firm #2 had a lower retention rate of 47.2%. These cohorts include not only the employees but also their spouses and dependents. As a result, even if a firm has relatively low employee turnover, changes in coverage among spouses and dependents can substantially reduce the size of a continuous cohort. Across both firms, the cohort sample comprised 3,928 continuously enrolled subscribers, spouses, and dependents.

The demographics of our study sample are described in Table 1. We see that Firm #1 has a slightly older population (34.1 years of age versus 33.9) and a higher share of dependents (37.3% versus 29.5%). Firm #1 is also associated at baseline with a higher illness burden, as computed from claims data based on the Johns Hopkins ACG system (Weiner, 1991), and the presence of serious health events that could be catastrophic.2

Table 1 – Study Sample Demographics.

Variable                              Firm 1    Firm 2
Age (years)                           34.118    33.928
Female=1, else 0                       0.527     0.439
Baseline Illness Burden                3.406     2.472
Catastrophic Shock=1, else 0           0.268     0.234
Enrollee is subscriber=1, else 0       0.375     0.445
Enrollee is spouse=1, else 0           0.252     0.258
Enrollee is dependent=1, else 0        0.373     0.295
Observations (total=3,928)             2,464     1,464

One of the critical variables for this analysis is the ‘provider portfolio index’ of quality and efficiency. This index is derived from UHG’s provider rating system. The concept of a portfolio index is similar to that of a person holding a portfolio of different stocks with their associated rates of return. The portfolio index works in the following fashion. A patient will see different physicians, each with a different UHG provider rating. To get an aggregate measure of the quality of the patient’s providers, one needs a numeric score for each provider; one then weights the extent of exposure to a given provider by either reimbursement or service contact with that physician. For example, if a patient sees two physicians where one has a quality rating of 3 and the other a rating of 1 (3 is the best possible score and 1 the worst), the unweighted average portfolio score would be 2.0. However, if the patient saw the 1-rated physician for 90% of all expenditures and the 3-rated physician for 10% of all expenditures, the reimbursement-weighted portfolio score would be 1.2; if the percentages were reversed, the score would be 2.8. Thus, simply taking the average without accounting for exposure could lead to different results. An alternative and more traditional approach is to identify a usual source of care and then associate the provider rating score with that physician. The concern with this method is the array of different providers with whom patients can come into contact and the significant variation in their provider ratings. The portfolio approach considers the effect of all providers, with their variation in efficiency and quality.
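As an illustration, the expenditure-weighted portfolio score described above can be computed as follows (a minimal sketch; the function name and data layout are ours, not UHG’s):

```python
def portfolio_score(exposures):
    """Expenditure-weighted provider portfolio score.

    exposures: list of (rating, share) pairs, one per provider, where
    rating is the provider's numeric score (1-3) and share is that
    provider's share of the patient's total allowed expenditures.
    Shares are normalized here, so they need not sum exactly to 1.
    """
    total = sum(share for _, share in exposures)
    return sum(rating * share for rating, share in exposures) / total

# Two physicians rated 3 and 1 with equal exposure: unweighted average, 2.0.
equal = portfolio_score([(3, 0.5), (1, 0.5)])   # 2.0
# 90% of expenditures with the 1-rated physician, 10% with the 3-rated one.
skewed = portfolio_score([(1, 0.9), (3, 0.1)])  # approximately 1.2
```

Reversing the shares in the second call yields approximately 2.8, reproducing the worked example above.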

To use the portfolio approach, we needed a numeric score from which to construct a weighted portfolio score. We translated UHG’s provider star rating system into numeric values in the following way:

Value   Situation (under UHG’s provider star rating system)
1       No provider rating3
2       Good quality rating only
3       Good quality and efficiency ratings
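This mapping can be sketched directly (the function name and boolean inputs are ours; we assume a provider without a good quality rating falls into the lowest category):

```python
def numeric_rating(good_quality, good_efficiency):
    """Map UHG star-rating components to the 1-3 numeric scale above."""
    if not good_quality:
        return 1  # no provider rating
    if good_efficiency:
        return 3  # good quality and efficiency ratings
    return 2      # good quality rating only
```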

The rationale for placing quality over efficiency is the patient’s perspective: given that most health care costs from a significant unplanned or discretionary procedure are borne by the insurer/employer and not the patient, we assume patients care more about quality than efficiency.

With a patient-level provider portfolio score, we can measure any changes in the patient’s portfolio score from the pre-ranking year to the post-ranking year. A reduction in the portfolio score might be due to lack of access or an overriding desire to maintain a relationship with a provider, regardless of quality or efficiency. An increase in the portfolio score would indicate increased interest in physicians who are efficient and practice with high quality.

Our econometric method for answering question #1 is a nonlinear regression in which we identify the factors associated with an improvement in the provider portfolio score. Specifically, the dependent measure equals 1 if the difference between the 2006 physician portfolio score and the 2005 physician portfolio score is greater than 0, and 0 otherwise. Factors considered as affecting the change in portfolio score are age, gender, firm, contract holder status (i.e., employee, spouse, or dependent), baseline illness burden, and the catastrophic health shock variable. The provider portfolio rating was weighted by total allowed expenditures, which include amounts paid by both the health plan and the consumer.
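The binary dependent measure can be sketched as follows (the function name is ours):

```python
def improved_portfolio(score_2005, score_2006):
    """1 if the expenditure-weighted portfolio score rose from the
    pre-tool year (2005) to the post-tool year (2006), else 0."""
    return 1 if score_2006 - score_2005 > 0 else 0
```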

To examine the second research question, we test whether those who upgraded their provider portfolios had statistically significant differences in expenditures and the use of preventive services. We used a difference-in-differences regression model to test the impact on cost of those who switched or remained with their physicians using methods similar to those used in our prior empirical analyses (Parente, Feldman, and Chen, 2008; Feldman, Parente, and Christianson, 2007).
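The logic of the difference-in-differences comparison, in its simplest two-period, two-group form, is sketched below (a stylized illustration using group mean expenditures, not the authors’ full regression specification; the names are ours):

```python
def did_estimate(pre_switchers, post_switchers, pre_stayers, post_stayers):
    """Two-period difference-in-differences estimate of the cost effect:
    the pre-to-post change in mean expenditures for patients who upgraded
    their provider portfolios ('switchers'), minus the same change for
    patients who did not ('stayers')."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_switchers) - mean(pre_switchers)) - (
        mean(post_stayers) - mean(pre_stayers))

# Switchers' costs rise by 10 on average, stayers' by 30:
# the estimated effect of switching is -20.
effect = did_estimate([100.0, 100.0], [110.0, 110.0],
                      [100.0, 100.0], [130.0, 130.0])
```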

We also used descriptive statistics to gauge the scale of the switching effect, as well as the cost differences between patients who switched in a manner consistent with the star rating and those who did not switch. Analytic files with cost and preventive care measures were constructed from claims data provided by UHG. We used a set of preventive care measures developed in previous collaborative research with clinicians at the University of Pennsylvania (Pollack et al., 2008).

Survey Disclaimer

According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number. The valid OMB control number for this information collection is 0990-0379. The time required to complete this information collection is estimated to average 5 minutes per response, including the time to review instructions, search existing data resources, gather the data needed, and complete and review the information collection. If you have comments concerning the accuracy of the time estimate(s) or suggestions for improving this form, please write to: U.S. Department of Health & Human Services, OS/OCIO/PRA, 200 Independence Ave., S.W., Suite 336-E, Washington D.C. 20201, Attention: PRA Reports Clearance Officer.