Any health care economist worth his or her salt will tell you that, from an economic standpoint, an ideal health care system is one in which patients pay directly for their medical care. In such a system, patients freely choose their own physicians, and together with their physicians make all medical decisions, mindful that any costs incurred thereby are theirs to pay. Cost controls are therefore automatic. During the 1920s and for the next few decades, this “ideal” system existed in the United States. Inasmuch as doctors at the time had very little to offer in terms of expensive (or effective) therapies, and since patients’ expectations were (appropriately) low, this system worked extremely well from an economic point of view.

The “medical” golden era

This economic equilibrium began to falter in the 1930s, and the disequilibrium rapidly accelerated in the years following World War II. The first chink in the armor of direct contracting between physicians and their patients appeared during the Great Depression, when hospitals began to suffer from patients’ inability to pay their bills. Over the initial objections of physicians, financially stressed hospitals prevailed on state legislatures to legalize the insurance schemes that became known as Blue Cross. In order to assuage the moral indignation of physicians, however, the Blues were created as non-profit, provider-oriented insurance organizations.

“Provider-oriented” meant two things. First, Blue Cross (and later, Blue Shield) did not try to tell physicians how to practice medicine. Physicians were free to practice as they saw fit, and the Blues would simply pay the bills on a fee-for-service basis. Second, the boards of trustees of local Blue Cross and Blue Shield organizations were loaded with prominent local physicians and hospital administrators.

Not only did such a system preserve the direct physician-patient relationship, it also paid the bills more reliably than did patients themselves. The system worked so well that physicians soon became willing to countenance the formation of private health insurance companies, as long as those companies followed the same general guidelines set by the Blues.

Health insurance proved to be so popular that, during the wage and price controls of World War II, companies began offering it to their employees in lieu of higher wages. After the war, American labor unions began to demand that employers provide health insurance as a benefit of employment. The government liked this idea, too, and in order to encourage it, tax laws were changed to make the provision of this benefit extremely attractive to employers.

It is important to note that this new tax policy created a fundamental change in how health care was paid for. In effect, it shifted a huge chunk of the fiscal burden for health insurance from consumers and employers to the government, where it remains to this day. Within a few years, the majority of American workers had employer-provided health insurance, heavily subsidized by the federal government.

Then in the 1960s, the federal government became directly involved in paying for American health care on a large scale with the institution of Medicare, and then Medicaid. Since that moment, the proportion of health care spending directly attributable to the government has steadily grown – from 24% of all dollars spent on health care in the 1960s, to 40% by 1990. Today, when you include tax subsidies for health insurance, fully 51% of America’s health care spending is accounted for by the government, and paid for by taxpayers.

Since politicians can tax the people only
so much, a lot of this spending has been piling up in the form of the
national debt, awaiting our children and grandchildren.

But for physicians and their patients in the second half of the 20th century, the resultant system seemed nearly perfect. While patients retained complete freedom of choice regarding which doctors and hospitals they used, and while the physician-patient relationship remained largely free of outside influence, somebody else was paying the bills. There arose an almost complete dissociation between providing (and consuming) health care and paying for it.

This economic arrangement did at least two things that would ultimately spell its own doom. First, it allowed the American health care myth to flourish – the notion that the best possible care should be provided to everybody, and that where health care is concerned, there are no limits. It created expectations that ultimately could not be met.

Second, this system fostered the development of the medical-industrial complex. Since any medical advance that seemed useful would be paid for, powerful corporations arose dedicated to meeting the bottomless demand for medical advances. The pharmaceutical companies, hospital suppliers, and medical device companies began turning out a steady stream of improved and expensive technology. Ironically (given that this whole system had evolved largely due to physicians’ attempts to shield themselves from corporate influence), these corporations used their considerable marketing clout to influence the decisions, the practice patterns, and even the demographic distribution (such as patterns of specialization) of the medical profession.

The bottomless expectations of patients and physicians, coupled with the never-ending meeting (and fanning) of those expectations by industry, created a rapidly spinning positive feedback loop. The more health care the doctors and patients got, the more they wanted. The more they wanted, the more the medical-industrial complex was happy to provide. It was inevitable that those paying the ever-mounting health care costs (i.e., employers and the government) would eventually reach the breaking point. While the system that prevailed during this “golden era” came to be regarded as the norm by (if not the birthright of) American physicians and their patients, from a broader perspective that system is clearly an unsustainable aberration. At some point the mounting costs of “no limit” health care had to generate their own backlash. The system had to implode.