The History of the Affordable Care Act

By 1968, rising health care costs had become a hot political topic and, by 1971, Sen. Edward Kennedy had begun his lifelong efforts to reform health care by backing a universal single-payer plan [source: Schmeck]. President Nixon countered with a plan that would require employers to offer a minimum level of coverage while preserving competition among private insurers [source: Goodridge and Arnquist]. The struggle between these two options dominated the debate for years to come.

Meanwhile, in December 1973, Nixon signed the Health Maintenance Organization Act to help fund the establishment of HMOs. The following year, Congress passed the still-controversial Employee Retirement Income Security Act (ERISA), which sets minimum standards for pension plans in private industry but also exempts the self-insured health plans of large corporations from state regulations [source: Goodridge and Arnquist].


Presidents and candidates continued to push for health care reform. Some progress was made in 1986, when Congress passed the Emergency Medical Treatment and Active Labor Act, which mandated that nearly all hospitals examine and stabilize all emergency room patients, regardless of citizenship, legal status or ability to pay. This act was part of the Consolidated Omnibus Budget Reconciliation Act (COBRA), which allows an unemployed worker to remain on an employer's group plan for 18 months.

In 1994, President Bill Clinton's attempt to push through universal coverage based on managed competition in a closely regulated private marketplace died in Congress. Clinton was successful in closing another insurance gap, however, when in 1997 he signed the State Children's Health Insurance Program (SCHIP, now known as CHIP) into law. CHIP helps insure children of families that make too much to qualify for Medicaid but not enough to afford private medical insurance.

Under his successor, President George W. Bush, Congress passed Medicare Part D in 2003, which partly covers prescription drugs for Medicare beneficiaries but famously left a "doughnut hole" of no coverage during which beneficiaries must pay all prescription costs themselves.

By the early 2000s, health care costs were soaring amid a bumpy economy, driving employers to offload more costs onto workers. As news coverage in 2006 looked toward the 2008 election, health care spending exceeded $2.2 trillion, or roughly $7,421 per person, and accounted for 16.2 percent of the economy [source: Goodridge and Arnquist]. And so the stage was set for presidential candidate Barack Obama in May 2007 to announce to an Iowa City audience his plan for "affordable, universal health care in America" [source: Herszenhorn and Pear].

If history was any indication, a bumpy road lay ahead.

The Down-low on HMOs

HMOs were conceived in the 1930s as community organizations, supported by monthly subscription fees, that would provide basic health services to members. Supporters believed that, since such groups would have a fiscal stake in the health of their subscribers, they would willingly invest in their preventive care [source: Schmeck]. Ironically, Republicans, who originally opposed HMOs as socialist, passed the HMO Act in 1973 as an alternative to what they saw as creeping socialized medicine.

If none of this sounds familiar, it's probably because HMOs quickly expanded from local nonprofits into gargantuan, influential, for-profit insurers. From 1970 to 1999 alone, the HMO industry grew from 3 million to more than 80 million members [source: Markovich].