Report Cards Decide Medical Care Program Success

We're in the midst of what trend-watchers call the Information Age, the era in which data is a valuable possession. The quest for data has spawned a by-product of the Information Age--the Age of Standards--that calls for the use of norms and program-specific data on results. Improvement in available data and the search for valid norms for comparison are changing the shape of the behavioral health industry, impacting the way programs, providers and treatments are assessed and perceived.

This impact will be felt through behavioral health outcome measurements, which are essentially program and provider report cards that make it easier to judge the effectiveness of medical care. The trouble is, the industry can't agree on a standard definition of treatment success.

Couple this with the fact that outcome measurements often reflect more the nature of the methodology than the actual outcome of the treatment program, and you have data that can be everything and nothing, depending on what's compiled, who's analyzing it and who's reading it.

Outcomes, as they exist now, could be considered the most valuable and deceptive pieces of information that purchasers of care consider when they make decisions. Buyers shopping for managed care organizations or EAPs will be inundated with information touting which managed care organization or EAP has the best outcomes, the best success records and the best potential for saving money.

These data represent a double-edged sword. The concept itself is fundamentally sound. As is the case with the standardized Scholastic Aptitude Test (SAT), outcomes could be used as an indicator to predict how well one treatment program will do compared with other programs. Unlike the SAT, however, outcomes aren't quantified using one set of structured measures, all channeled into an unbiased evaluation clearinghouse that assigns grades on a scale from 400 to 1,600.

Currently, treatment outcome measures include an array of data measured on a variety of scales, many of which can't be used in straightforward comparison with other programs. For example, can you compare re-admission rates based on a six-month follow-up with those seen 12 months or two years after treatment? What can you say about these rates if patients who drop out of treatment before their programs are complete are included in one program's final tally but excluded from another's?
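The dropout question above is easy to see with numbers. The sketch below uses hypothetical figures (the programs, counts, and the `readmission_rate` helper are all invented for illustration) to show how two programs with identical completer outcomes can report very different rates depending solely on whether dropouts are counted:

```python
# Illustration with hypothetical numbers: the same completer outcomes
# yield different "readmission rates" depending on methodology.

def readmission_rate(completers_readmitted, completers_total,
                     dropouts_readmitted=0, dropouts_total=0,
                     include_dropouts=False):
    """Return the readmission rate as a fraction, optionally folding
    patients who dropped out of treatment into the tally."""
    if include_dropouts:
        readmitted = completers_readmitted + dropouts_readmitted
        total = completers_total + dropouts_total
    else:
        readmitted = completers_readmitted
        total = completers_total
    return readmitted / total

# Program A reports only patients who completed treatment:
rate_a = readmission_rate(12, 100)
# 12 of 100 completers readmitted -> 12.0%

# Program B has the same completer outcomes but counts its dropouts:
rate_b = readmission_rate(12, 100,
                          dropouts_readmitted=15, dropouts_total=25,
                          include_dropouts=True)
# (12 + 15) of (100 + 25) -> 21.6%
```

Neither figure is "wrong"; they simply answer different questions, which is why rates from two programs can't be compared until the underlying methodology is known.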

For several reasons, the behavioral health industry increasingly emphasizes treatment outcomes as a barometer of success. Some see published outcomes as a means to establish a competitive marketing edge. A growing number of national providers and managed care firms report these data as part of their marketing programs. Some use outcomes as a primary means to evaluate the quality of the treatment programs and monitor ongoing care, an approach seen in HMOs and other managed care companies.

Regardless of the motivation, purchasers often demand objective standards by which to judge programs. Much of the treatment outcomes data available today are subject to bias, misinterpretation and subjectivity. Until consensus is reached and standards are agreed to, the myriad definitions and methodologies currently employed to measure treatment outcomes will yield results only as trustworthy as the integrity of the underlying data.

Purchasers must look beyond the data and consider the factors that affected the results. Here are some ways to do just that:

1) Find the key person. Managed care organizations and, many times, other treatment providers generally have reporting or research staff members who play key roles in the selection of the data (outcome and other) that's presented to corporate clients. Although these individuals aren't easily identifiable, it's important to find them because they can explain the data in a straightforward manner. A good place to start your search is with the person who supplied the data.