Health Insurance

By John C. Goodman


The Birth of the “Blues”

In the 1930s and 1940s, a competitive market for health insurance developed in many places in the United States. Typically, premiums tended to reflect risks, and insurers aggressively monitored claims to keep costs down and prevent abuses. Following World War II, however, the market changed radically. Hospitals had created Blue Cross in 1939, and doctors started Blue Shield at about the same time. Under pressure from hospital and physician organizations, the “Blues” won competitive advantages from state governments and special discounts from medical providers. Once the Blues had used these advantages to gain a monopoly, the medical community was in a position to refuse to deal with commercial insurers unless they adopted many of the same practices followed by the Blues. The federal government also later adopted some of these practices through its Medicare (for the elderly) and Medicaid (for the poor) programs.1

Cost-Plus Finance

Four characteristics of Blue Cross/Blue Shield health insurance fundamentally shaped the way Americans paid for health care in the postwar period.

First, hospitals were reimbursed on a cost-plus basis. If Blue Cross patients accounted for 40 percent of a hospital’s total patient days, Blue Cross was expected to pay for 40 percent of the hospital’s total costs. If Medicare patients accounted for one-third of patient days, Medicare paid one-third of the total costs. Other insurers reimbursed hospitals in much the same way. For the most part, physicians and hospital managers were free to incur costs as they saw fit. The role of insurers was to pay the bills with few questions asked.

Second, the philosophy of the Blues was that health insurance should cover all medical costs—even routine checkups and diagnostic procedures. The early Blue plans had no deductibles and no copayments; insurers paid the total bill, and patients and physicians made choices with little interference from insurers. Therefore, health insurance was not really insurance; it was prepayment for the consumption of medical care.

Third, the Blues priced their policies based on “community rating.” In the early days, this meant that everyone in a given geographical area was charged the same price for health insurance, regardless of age, sex, occupation, or any other factor related to differences in real health risks. Even though a sixty-year-old can be expected to incur four times the health care costs of a twenty-five-year-old, for example, both paid the same premium. In this way, higher-risk people were undercharged and lower-risk people were overcharged.

Fourth, instead of pricing their policies to generate reserves that would pay bills not presented until future years (as life insurers and property and casualty insurers do), the Blues adopted a pay-as-you-go approach to insurance. This meant that each year’s premium income paid for that year’s health care costs. If a policyholder developed an illness that required treatment over several years, in each successive year insurers had to collect additional premiums from all policyholders to pay those additional costs.

Even though most health care and most health insurance were provided privately, the U.S. health care system developed into a regulated, institutionalized market dominated by nonprofit bureaucracies. Such a market is very different from a truly competitive market. Indeed, the primary reason that the medical community created the Blues was to avoid the consequences of a competitive market—including vigorous price competition and careful oversight of provider behavior by third-party payers.

Hospital prices are one area in which consumers immediately notice that the medical marketplace is different. Even today, most patients cannot find out in advance what even routine surgical procedures will cost them. When discharged, they receive lengthy itemized bills that are difficult for even physicians to understand. Thus, the buyers of hospital services (i.e., the patients) cannot discover the price before buying and cannot understand the charge after the purchase.

Contrast this experience with the market for cosmetic surgery. Because neither public nor private insurance any longer covers cosmetic surgery, patients pay with their own funds. And even though many parties are involved in supplying the service (physician, nurse, anesthetist, and the hospital), patients are quoted a single package price in advance. Moreover, during the past decade, the real price of cosmetic surgery actually fell, while prices of other medical services rose faster than the rate of inflation. Consumers spending their own money have achieved something that few health insurers have.2

Managed Care

For all its faults, the cost-plus approach to health care finance worked tolerably well until the establishment of Medicare and Medicaid in 1965. These two programs unleashed a tidal wave of new demand. Partly in response, an era of technological innovation emerged, with opportunities to spend expanding in every direction. Since there were no market-based mechanisms to restrain these pressures, double-digit annual increases in health care spending were inevitable.

The system began to unravel in the 1970s and 1980s. Large employers began to manage their own health care plans, started paying hospitals based on set charges rather than on costs, and negotiated price discounts. Through the Medicare program, the federal government began paying hospitals fixed prices for surgical procedures (the Prospective Payment System). Health maintenance organizations (HMOs) emerged as competitors to traditional fee-for-service insurance.

In 1980, fewer than ten million people were enrolled in HMOs. Today, more than seventy-four million are, about one in four Americans. Three-fourths of all employees with health insurance are covered by some type of managed care.3 What difference has this change made?

First of all, it has meant fewer choices for patients and doctors. Only a few years ago, a person with private health insurance could see any doctor, enter any hospital, or (with a prescription) obtain any drug. Today, things are different. In general, patients must choose from a list of approved doctors covered by their health plans. But because employers switch health plans and employees often switch jobs, long-term relationships between patients and physicians are hard to form. Moreover, many people cannot see a specialist without a referral from a “gatekeeper” family physician or even get treatment at a hospital emergency room without prior (telephone) approval from their managed care organization. Patients who fail to follow the rules may have to pay part or all of the bill out of their own pockets.

Under managed care, doctors’ choices have been curtailed even more than patients’ choices. Not long ago, most doctors ordered tests, prescribed drugs, admitted patients to hospitals or referred them to specialists, and performed procedures based on their own experience and professional judgment. No longer. Now doctors who want to be on the “approved” list must agree to practice medicine based on a health plan’s guidelines. For most doctors, the guidelines mean fewer tests, fewer referrals, and fewer hospital admissions. By the end of the 1990s, though, managed care plans faced a backlash from patients and doctors. Politicians threatened to create a patients’ bill of rights. In response, the plans began to loosen their control over patient access to specialists and expensive treatments, and the rate of increase in health care costs began to rise.

Consumer-Driven Health Care

As the twenty-first century began, many large employers and some large health insurers became convinced that a market-based solution was the answer to U.S. health care problems. Consumer-driven health care (CDHC), defined narrowly, refers to health plans in which employees have personal health accounts from which they pay medical expenses directly. The phrase is sometimes used more loosely to refer to defined contribution health plans under which employees receive a fixed-dollar contribution from an employer to choose among various plans. Those who opt for plans with rich benefits pay more of their own money in addition to the employer’s contribution, while those who choose bare-bones health plans contribute less of their own money.

As early as 1996, a federal pilot program was launched, allowing the self-employed and employees of small businesses to have tax-free Medical Savings Accounts (MSAs) in conjunction with high-deductible health insurance.4 In 2002, a U.S. Treasury Department ruling allowed large companies to implement similar plans, called Health Reimbursement Arrangements (HRAs).5 And, as of January 1, 2004, all nonelderly Americans who have high-deductible health insurance can also have Health Savings Accounts (HSAs).6

Regardless of the acronym, the idea behind all these efforts is pretty much the same: to empower individual patients and encourage them to make the tough choices between health care and other uses of their money. The proponents expect to unleash into the medical marketplace an army of savvy consumers who can compare prices, investigate quality, bargain for services, and so on. Among the expected responses of suppliers are “focus factories”—highly efficient producers who specialize in treating only a few diseases. Yet, even if consumer-driven health care performs as well as advertised, five serious problems with the health care system remain.

Problem One: Medicare and Medicaid

While change has been rapid and swift in the private sector, government programs have been slow to evolve. Medicare today still resembles the Blue Cross plan that it copied forty years ago. That is why the program does not cover prescription drugs, although a partial drug benefit is being phased in. Medicare will pay to amputate the leg of a diabetic, but will not pay for the chronic care that would have made the amputation unnecessary. It will pay for hospitalization for a stroke victim, but will not pay for drugs that might have prevented the stroke in the first place. Medicaid, whose program specifics differ from state to state, exhibits similar inefficiencies.

One-third of Medicare dollars go to patients in the last two years of life; and because Medicare is use-it-or-lose-it, the only way to get more benefits is to consume more care. There has been some movement toward private-sector options. Roughly one in six seniors is enrolled in a private-sector HMO; under Medicaid, the figure is close to one in two. However, no HSAs are available in either program, other than a very limited pilot program for the chronically disabled.7

These two enormously expensive programs are the fastest-growing programs at the state and federal levels. Medicare costs one thousand dollars for every person in the country, or roughly four thousand dollars for a family of four. Medicaid costs even more. As a result, many families pay more in taxes for other people’s health insurance than they pay for their own.

Problem Two: Private Sector Spending

Medical research has pushed the boundaries of what doctors can do for us in every direction. As a result, we could probably spend the entire gross domestic product on health care in useful ways:8

• The Cooper Clinic in Dallas now offers a comprehensive checkup (with a full body scan) for about $2,500. If everyone in America took advantage of this opportunity, the U.S. annual health care bill would increase by one-half.

• More than nine hundred diagnostic tests can be done on blood alone, and one does not need much imagination to justify, say, $5,000 worth of tests each year. But if everyone did that, U.S. health care spending would double.

• Americans purchase nonprescription drugs almost twelve billion times a year, and almost all of these purchases are for self-medication. If everyone sought a physician’s advice before making such purchases, we would need twenty-five times the current number of primary care physicians.

• Some 1,100 tests can be done on our genes to determine whether we have a predisposition toward one disease or another. At, say, $1,000 a test, it would cost more than $1 million for a patient to run the full gamut. But if every American did so, the total cost would be about thirty times the nation’s total output of goods and services.

Note that these are all examples of information collection; carrying them out would not cure a single disease or treat an actual illness. If, in the process of performing all these tests, something that warranted treatment were found, spending would be higher still.
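The magnitudes in the examples above can be checked with back-of-the-envelope arithmetic. The sketch below assumes round figures for the early 2000s (a U.S. population of about 290 million, GDP of about $11 trillion, and annual health spending of about $1.5 trillion); these assumed totals are not from the article itself.

```python
# Assumed round figures, circa the early 2000s (not from the article):
POPULATION = 290e6        # U.S. population
GDP = 11e12               # gross domestic product, dollars per year
HEALTH_SPENDING = 1.5e12  # national health spending, dollars per year

# A $2,500 comprehensive checkup for everyone
checkup_total = 2_500 * POPULATION
print(f"Checkups for all: {checkup_total / HEALTH_SPENDING:.0%} "
      "of current health spending")            # roughly one-half

# $5,000 of blood tests per person per year
tests_total = 5_000 * POPULATION
print(f"Blood tests for all: {tests_total / HEALTH_SPENDING:.0%}")  # roughly double

# The full panel of 1,100 genetic tests at $1,000 each
gene_cost_per_person = 1_100 * 1_000
gene_total = gene_cost_per_person * POPULATION
print(f"Genetic panel: ${gene_cost_per_person:,} per person, "
      f"about {gene_total / GDP:.0f}x GDP if everyone did it")
```

Under these assumptions the arithmetic reproduces the article's claims: roughly half of current spending for universal checkups, roughly a doubling for universal blood testing, and about thirty times GDP for universal genetic testing.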

The spread of HSAs will encourage people to make choices between health care and other uses of money, but HSAs are designed mainly for small-dollar expenses. A possible solution for high-dollar expenses is to adopt the casualty model of insurance familiar to homeowners and automobile buyers. Insurance pays for the repair of a hail-damaged roof, but the homeowners are usually free to upgrade (or downgrade), and roof repairers function as the homeowners’ agents rather than as agents of the insurers.9

Problem Three: Lack of Health Insurance

About forty-five million Americans do not have health insurance, and that number, though not the percentage of the population, has been rising.10 Approximately 75 percent of episodes without health care coverage are over within one year. About 91 percent are over within two years. Less than 3 percent (2.5 percent) last longer than three years.11

At least four government policies have contributed to this problem and made it much worse than it needs to be. The first is the tax law. Most people with private health insurance receive health insurance as an untaxed fringe benefit. Middle-income employees effectively avoid a 25 percent income tax, a 15.3 percent tax for Social Security (half of which is paid by employers), and perhaps another 5 or 6 percent state and local income tax. Thus, almost half of every dollar spent on health insurance through employers is a cost to government. In contrast, most of the uninsured do not have access to tax-subsidized insurance. To become insured, they must first pay taxes and then purchase the insurance with what is left over.12
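The "almost half" figure can be sketched by summing the article's example marginal rates. This is a simplified calculation: the exact wedge varies by taxpayer, and a naive sum ignores interactions such as the deductibility of the employer's payroll share.

```python
# The article's illustrative marginal rates for a middle-income employee:
federal_income = 0.25   # federal income tax
payroll = 0.153         # Social Security payroll tax (employer + employee halves)
state_local = 0.06      # state and local income tax (upper end of 5-6 percent)

# Naive sum of the rates -- "almost half of every dollar"
combined = federal_income + payroll + state_local
print(f"Combined marginal tax rate: {combined:.1%}")

# Equivalently: to buy $1.00 of insurance with after-tax income, an
# uninsured worker must first earn
needed = 1 / (1 - combined)
print(f"Pretax earnings needed per dollar of insurance: ${needed:.2f}")
```

The asymmetry is the point: an insured employee gets a dollar of coverage for a dollar of pretax compensation, while the uninsured worker must earn nearly two pretax dollars to buy the same coverage.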

A second source of the problem is the extensive system of free care for uninsured people who cannot pay their medical bills. Several studies estimate that we are spending about one thousand dollars per uninsured person per year in unreimbursed medical care, a practice that clearly rewards people who are uninsured by choice.13 A sensible solution would be to use the free-care money to subsidize (say, through a tax credit) private health insurance premiums for the uninsured. However, the local governments that maintain the health care safety net do not have that option.

A third source of the problem is state government regulations, including laws that mandate what is covered under health insurance plans. Under these laws, insurers are required to cover services ranging from acupuncture to in vitro fertilization, and providers ranging from chiropractors to naturopaths. Coverage for heart transplants is mandated in Georgia, and for liver transplants in Illinois. Mandates cover marriage counseling in California, pastoral counseling in Vermont, and sperm bank deposits in Massachusetts. Studies estimate that as many as one in four uninsured people have been priced out of the market by such regulations.14

A fourth problem (discussed below) is that legislation has made it increasingly easy for people to obtain insurance after they get sick.

Problem Four: Lack of Portability

One disadvantage of employer-based insurance is that employees must switch health plans whenever they switch employers. In the old fee-for-service days, this defect imposed less of a hardship because employees were generally free to see any doctor under any plan. Today, however, changing jobs often means changing doctors as well. For an employee or family member with a health problem, that means no continuity of care. Individually owned insurance that travels with employees as they move from job to job would allow employees to establish long-term relations both with insurers and with doctors. Yet, portable health insurance is largely impossible under federal tax and employee benefit laws. The reason: in order to get tax-subsidized insurance, most people must obtain it through an employer; but employers are not allowed to buy individually owned insurance for their employees with pretax dollars.

Problem Five: Lack of Actuarially Priced Insurance

An increasingly common feature of insurance markets is “guaranteed issue” regulation, which forces insurers to sell to all applicants, no matter how sick or how well they are. Perversely, this practice, when combined with community rating, encourages healthy people to avoid high premiums and stay uninsured. After all, why buy health insurance today if you know you can buy it for the same price after you get sick? Under “pure” community rating, insurers charge the same price to every policyholder, regardless of age, sex, or any other indicator of health risk. Despite the fact that health costs for a sixty-year-old male are typically three to four times as high as those for a twenty-five-year-old male, both pay the same premium. “Modified” community rating allows for price differences based on age and sex, but not on health status.
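The incentive problem can be made concrete with a hypothetical two-group pool. The dollar figures and pool shares below are illustrative assumptions, not data from the article; only the four-to-one cost ratio between older and younger policyholders comes from the text.

```python
# Hypothetical expected annual health costs (illustrative numbers):
expected_cost = {"age 25": 1_000, "age 60": 4_000}  # 60-year-olds cost ~4x
pool_share = {"age 25": 0.5, "age 60": 0.5}         # assumed equal-sized groups

# Pure community rating: everyone pays the pool's average expected cost
premium = sum(expected_cost[g] * pool_share[g] for g in expected_cost)
print(f"Community-rated premium: ${premium:,.0f}")

# The young are overcharged relative to their actuarial cost...
overcharge = premium - expected_cost["age 25"]
print(f"Overcharge on 25-year-olds: ${overcharge:,.0f}")

# ...so if they drop coverage (knowing guaranteed issue lets them buy
# later when sick), the premium must rise to the remaining pool's cost:
premium_after_exit = expected_cost["age 60"]
print(f"Premium once low-risk buyers exit: ${premium_after_exit:,.0f}")
```

Each round of exits raises the premium for those who remain, which is the adverse-selection spiral that guaranteed issue plus community rating sets in motion.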

Ironically, many large corporations community rate insurance premiums to their own employees, even though not required to do so by law. To the extent that employees pay part of the premiums for these plans, the premiums tend to be the same for everyone, regardless of expected costs. Whether in the marketplace or inside a corporation, distortions in prices produce distortions in results. People who are overcharged tend to underinsure. People who are undercharged tend to overinsure. In general, people cannot make rational choices about risk if risks are not accurately priced.

About the Author

John C. Goodman is the president of the National Center for Policy Analysis, a Dallas-based think tank. In 1988 he won the Duncan Black Award for the best article in public choice economics.

Footnotes

Arkansas and Florida tested a program, often referred to as “cash and counseling,” whereby selected Medicaid home care patients were allowed to control a portion of the funds used to pay their home care providers. The results were that providers were more attentive to the needs of patients who controlled the funds to pay for their own care. The patients also found the program to be beneficial; even those who subsequently dropped out still had positive things to say about it. For instance, see Leslie Foster et al., “Improving the Quality of Medicaid Personal Assistance Through Consumer Direction,” Health Affairs, Web Exclusive W3-162, March 26, 2003.

Carmen DeNavas-Walt, Bernadette D. Proctor, and Robert J. Mills, “Income, Poverty, and Health Insurance Coverage in the United States: 2003,” Current Population Reports P60-226, U.S. Census Bureau, August 2004.

Jack Hadley and John Holahan, “How Much Medical Care Do the Uninsured Use, and Who Pays for It?” Health Affairs, Web Exclusive, February 12, 2003. Also see Texas State Comptroller’s Office, “Texas Estimated Health Care Spending on the Uninsured,” Texas Comptroller of Public Accounts, State of Texas, 1999, online at: www.window.state.tx.us/uninsure.
