
A new survey, by Fidelity Investments and the National Business Group on Health, reported that the use of employee incentives to promote health and wellness continues to rise. The Fidelity survey covered 147 companies with between 1,000 and 100,000 employees. The average employee incentive rose 65% to $430 last year from $260 in 2009.

While incentives can be used in all sorts of ways, one of the most common is promoting completion of a health-risk assessment (HRA), the idea being that employers can then identify the patients at highest risk for poor clinical outcomes and enroll them in targeted wellness and disease management programs.

When I first joined the care management industry and began to evaluate wellness programs, I had two fundamental questions: 1) what is the validity of the self-report HRAs that are so common in the industry, and 2) how well do incentives improve participation and longer-term behavior change? Here is what we found:

We examined more than a dozen employers who had simultaneously incentivized completion of a self-report HRA and biometric screening. Participation was between 70 and 80% across employers, providing an ideal scenario for assessing the validity of the self-reported HRA. Examining more than 5,000 patients, we found that the percent of members failing to report or under-reporting their risks at baseline ranged from about 20% to more than 60%, depending on the particular measure, as shown below. An example of under-reporting would be a member who reports a total cholesterol of 180 mg/dL (low risk) when the biometric test finds that the actual value is 250 mg/dL (high risk).

Percent of Members Failing to Report or Under-Reporting Their Risk

For individual patients, the under-reporting was not limited to a single risk factor: nearly 50% of respondents failed to report or under-reported three or more risk factors. The reasons for under-reporting are multiple, likely reflecting a lack of prior testing, faulty memory, or confusion about the different biometric measures. Of course, some of the gap also reflects an unwillingness to share health-related data with an employer due to confidentiality concerns, embarrassment, questions about the program's value, and so on. Regardless of the reason, the impact of the poor-quality data on an employer's ability to identify high-risk patients was profound: across these employers, the biometric scores identified 18 percent of responders as high risk, whereas the self-report HRA identified only 1 percent. In other words, the self-report missed more than 90% of the high-risk patients.
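A back-of-the-envelope check of that last claim, using the aggregate rates reported above (the cohort size is taken from the article; the per-employer data are not available, so this is purely illustrative):

```python
# Sketch of the high-risk identification gap, using the reported aggregate
# rates. Assumes the best case for the HRA: every self-report flag is a true
# biometric high-risk patient.

n_responders = 5000                          # approximate cohort from the article
high_risk_biometric = 0.18 * n_responders    # 18% flagged by biometric screening
high_risk_self_report = 0.01 * n_responders  # 1% flagged by the self-report HRA

missed = high_risk_biometric - high_risk_self_report
miss_rate = missed / high_risk_biometric

print(f"High risk by biometrics:  {high_risk_biometric:.0f}")
print(f"High risk by self-report: {high_risk_self_report:.0f}")
print(f"Share of high-risk patients missed: {miss_rate:.0%}")  # ~94%
```

Even under this most charitable assumption, the self-report HRA misses roughly 94% of the patients the biometrics flag, consistent with the "more than 90%" figure above.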

Improving the quality/validity of responses to self-reported HRAs is no small task given the complex nature of the problem. However, the alternative of incentivizing biometric scores, while increasingly popular, raises fundamental questions about the employer’s role in promoting wellness.

A second note of caution relates to the effectiveness of incentives.

2. Incentives can increase enrollment in a wellness program, but their ability to drive longer-term behavior change and, ultimately, outcomes has yet to be demonstrated.

Incentives can be quite effective in promoting completion of an HRA, enrollment in an online wellness program, or other programs that represent a low "cost" to the patient to enroll. However, incentives for participation have yet to be shown to produce longer-term behavior change in employer-based wellness programs. Perhaps more concerning, there is a notable lack of research demonstrating that incentives paid for health improvement are effective. This is not particularly surprising, as we know that non-adherence to healthy behaviors (e.g., medication compliance) is a multi-factorial problem in which financial considerations play only a small role, if any, for the majority of employees.

As evidenced by this recent survey, employers are spending a growing amount of money on employee incentives, perhaps with an overconfidence in the effectiveness of these programs given the research to date. Furthermore, as many companies will inevitably pass the cost of these incentives on to workers in the form of higher premiums, prudent purchasing becomes even more critical.

For one large employer, asthma patients who volunteered to participate in a care management program that included copay waivers for asthma controller medications as well as three educational mailings were compared to asthma patients who chose not to participate in the program. The study found a 10 percentage point higher medication possession ratio (MPR) in the year after program implementation for the treatment group compared with the control group (54% versus 44%).
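For readers unfamiliar with the metric, MPR is simply the share of the observation period covered by dispensed medication. A minimal sketch, with invented fill data for illustration:

```python
# Medication possession ratio (MPR): total days' supply dispensed divided by
# the number of days in the observation period. The fills below are invented
# purely to illustrate the calculation.

def mpr(days_supplied, period_days):
    """Fraction of the period covered by dispensed medication (capped at 1)."""
    return min(sum(days_supplied), period_days) / period_days

# e.g., four 30-day fills and one 45-day fill of a controller over a year
fills = [30, 30, 30, 30, 45]
print(f"MPR: {mpr(fills, 365):.0%}")  # 165/365, about 45%
```

A patient like this one sits near the control group's 44%; the intervention group's 54% corresponds to roughly five weeks' more covered days per year.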

The critical question surrounding this study's validity is the comparability of the control and intervention groups. Research has shown that patients who choose to enroll in a behavioral change program are more motivated to improve their behavior than patients who do not enroll. As evidence of the difference in motivation between the two groups, 99% of patients in the intervention group versus 25% in the control group were enrolled in a traditional disease management program. A second piece of evidence of selection bias is that 74% of intervention patients were using an inhaled steroid prior to program enrollment, versus 64% of control group subjects. Given these differences, the most one can confidently conclude from the study is that patients who chose to enroll in the program were more compliant with their steroids than patients who did not enroll. That difference is likely explained in part by selection bias and in part by the copay waivers; it is not possible to determine from the data how much each component contributed.

Viewing these results in light of other studies of value-based insurance design (VBID) provides further support for the flawed control group. VBID evaluations have found a 2 to 4 percentage point increase in MPR following a copay waiver, depending on the therapy class. Yet this study reported a 10 percentage point improvement, a 2.5- to 5-fold greater effect than previous studies. Advocates will likely argue that this larger effect size was due to the combination of copay waivers and education, but research on educational mailings suggests otherwise.

To the authors' credit, they acknowledge the study's key limitation: a stronger study would have compared this employer's asthma patients (both enrolled and not enrolled in the program) to another employer population with comparable clinical and sociodemographic characteristics. Finding a comparable group is challenging but not impossible, and it provides a much stronger design for making a causal interpretation. Given what has been seen to date in other research, I would expect this more robust type of comparison to show a compliance improvement of anywhere from zero to 4 percentage points.

The study also examined asthma-related pharmacy and medical costs. After controlling for covariates and baseline differences in costs, the intervention group had a lower (but not statistically significant) adjusted monthly asthma-related medical cost at 12 months of follow-up compared with the control group ($18 vs. $23, respectively; p = .067). Asthma-related pharmacy costs were higher ($89 vs. $53, respectively; p < .001). Summing these two measures, total monthly asthma-related costs for the year after program implementation were higher for the intervention group ($107) than for the control group ($76). However, the authors never reported total asthma-related costs as I just did, and the study abstract mentions only all-cause medical costs, reporting that pharmacy costs increased, other medical costs decreased, and there was no difference in overall costs.
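The totals above follow directly from the per-member monthly figures the study reported; the arithmetic is worth making explicit, since the paper itself never performs it:

```python
# Reconstructing the total asthma-related cost comparison from the reported
# per-member monthly figures (medical + pharmacy).

medical = {"intervention": 18, "control": 23}   # adjusted monthly medical cost ($)
pharmacy = {"intervention": 89, "control": 53}  # monthly pharmacy cost ($)

totals = {g: medical[g] + pharmacy[g] for g in medical}
for group, total in totals.items():
    print(f"{group}: ${total}/month")
# intervention: $107/month vs. control: $76/month
```

On the study's own numbers, the program cost roughly $31 more per member per month in asthma-related spending.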

The use of non-participants as a control group, while known to be a weak research design, sometimes occurs because of its convenience and employer preference, but it precludes any causal conclusions. The discussion of overall medical costs in the abstract as the primary endpoint, rather than asthma-related medical costs, may reflect a classic reporting bias, or "spin" as others have called it. It is a questionable practice to watch for; I have seen it used elsewhere in the pharmaceutical policy literature, such as in step therapy evaluations, in the absence of any plausible explanation.

The study and application of health outcomes research to the management of pharmaceuticals is a messy business. Research tools range from the large randomized controlled trial to the small, self-reported health outcomes study. Considerable uncertainty still exists about the best methodology for many areas of inquiry, and commercial interests and publication bias run rampant. While pharmaceutical manufacturers are the most-studied offenders, all health care vendors are potential violators; and of course, bias is not limited to those with commercial interests.

A study published earlier this year in JAMA once again brought awareness to the extent of the bias problem, with headlines reporting "Science for Sale." Focusing on top medical journals, the researchers reviewed over 600 studies from 2006 that had reported statistically non-significant primary outcomes. They then conducted a detailed analysis of 72 of those they considered to be of the highest quality, all randomized controlled trials. Upon detailed review, they found that 50 percent of the articles misrepresented the data in the study conclusions, leaving the impression that the treatments were effective even though the primary results indicated otherwise. This "spin," which the study authors define as "specific reporting strategies, whatever their motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results," also appeared in nearly 60% of the conclusions found in the study abstracts.

If the conclusions in 50 percent of the studies published in top medical journals are being spun, what is the magnitude of distortion in health outcomes research related to pharmaceuticals, where the methodological choices are greater, the standards less well defined, prior study registration far less likely, and the quality of peer review often suspect? The issue of bias only compounds decision-makers' challenge in reviewing and applying the rapidly growing body of health outcomes research to make well-informed decisions about their pharmacy benefits. Recognizing this challenge, our plan in the months ahead is to laud those who use the right evaluation methods for health outcomes assessment, to call out those who do not, and to provide simple tools for decision-makers to increase their knowledge and their ability to see through the rhetoric. Finally, by adding another voice on the side of rigorous analysis, we hope the truth about what works and what doesn't will continue to crowd out studies that are merely repackaged marketing.