A Timeline of Landmark Quality Improvement Measures That Shaped US Healthcare

In 1854, the Crimean War ravaged Istanbul and the surrounding areas, leaving healthcare facilities like Barrack Hospital overrun. Amid the chaos, Florence Nightingale was starting to make a connection that would revolutionize patient care and pave the way for quality improvement measures in healthcare for decades to come. She was beginning to address the link between poor hospital sanitation methods and the astronomical 60% mortality rate among wounded soldiers.

Nightingale was cognizant of Europe's growing awareness of the link between sanitation practices and mortality rates. Drawing on then-current research, Nightingale instituted quality improvement measures such as hand washing, the sanitation of surgical tools, and the regular replacement of bedsheets. These sanitation methods, which were revolutionary at the time, are standard practice across all healthcare facilities today.

What we now consider a bare-minimum standard of care, Nightingale was just introducing to her facility. Not only did Nightingale set the standard for future sanitation practices, but by the time she left Barrack Hospital, she had successfully reduced the mortality rate from 60% to 1%. Since then, quality improvement has been one of the chief goals of healthcare and remains an ever-evolving effort.

1700s – Colonial Times

Revolutionary Era America experiences widespread inadequacy in healthcare leading to the need for quality improvement measures.

In colonial times, everyone fancied themselves a doctor, and hardly anyone had the true medical training to back up the title of physician. Unfortunately, that didn't stop people from practicing medicine, and since no regulatory institutions were in place to enforce standards, people more or less got away with it. It's estimated that in Revolutionary times, somewhere between 3,000 and 4,000 people were practicing medicine in America. Only 400 of these so-called physicians had any formal medical training. To make matters worse, only about half of those who did have training had obtained medical degrees.

However, medical school then was a far cry from the medical school we know and respect today. At the time, attendees were only required to complete six to eight months of training along with a three-year apprenticeship. That was it. Yet, diplomas obtained from these schools were widely accepted as significant enough to practice medicine.

It became clear relatively quickly that quality improvement measures had to be implemented and, beginning with the formation of the American Medical Association (AMA) in 1847, a series of quality improvement efforts would drastically alter the national healthcare landscape.

AMA Formation 1847

American Medical Association is formed in response to the need for quality improvement measures.

The AMA was formed in 1847, largely in response to growing concern about quality improvement measures, or the lack thereof, in the medical community. Between schools that didn't seem to have many quality standards and physicians practicing without formal or adequate training, it was clear that something had to be done. In 1904, the AMA asked the Carnegie Foundation for the Advancement of Teaching to conduct a thorough investigation of existing medical schools and determine whether they were of acceptable quality. The foundation appointed Abraham Flexner, an education expert, to conduct the study.

When Flexner's study was complete, he concluded, in no uncertain terms, that very few of the existing medical training facilities in America and Canada upheld any standards whatsoever. He found that cadavers weren't being cared for properly, antiseptics weren't being used, and faculty were so busy carrying on with their own private practices that they almost never taught. He also discovered that most of the qualifications these institutions purported to require for admission would be waived rather easily for students who could pay enough.

Flexner’s report, which he published in 1910 and entitled Medical Education in the United States and Canada, was the catalyst for many of the quality improvement standards that would come soon after.

1917 – Ernest Codman’s End Result System of Hospital Standardization Program

Around the same time that Flexner was publishing his report on the dismal state of medical schools, Ernest Codman, a Harvard-educated surgeon, was applying his system, which he called the End Result System of Hospital Standardization Program, to institutions providing medical care, mainly hospitals.

By 1917, the American College of Surgeons had adopted his program as the minimum standard of hospital care. This changed the way hospitals functioned for years to come, and even today we benefit from Codman's system. In 1918, the American College of Surgeons began using these standards not only as quality improvement standards but as inspection standards. The year they began this inspection process using Codman's standards, only 89 of the existing 692 hospitals met the minimum standards. By 1950, however, the Hospital Standardization Program had approved 3,200 hospitals nationwide.

1921 – Congress Passes the Sheppard-Towner Act

The Sheppard-Towner Maternity and Infancy Act was created to provide funding to states to support expectant mothers and children.

The Act funded state programs that expanded access to maternal and child healthcare services.

1935 – Congress Passes Title V of the Social Security Act

By passing Title V, Congress equipped and financed pediatric and primary care services for hospitals, specifically focusing on underserved areas. Around the same time, the Emergency Maternity and Infant Care bill was passed which financed care for 1.5 million women and infants of US soldiers during WWII. Government intervention in quality improvement measures for healthcare began to increase and largely helped to expedite the progress of such measures.

1946 – Hill-Burton Act

The Hill-Burton Act provided funding for quality improvement measures such as hospital construction.

The Hill-Burton Act awarded grants to states to help them build hospitals. With regulations and inspection procedures in place, and healthcare facilities having built significantly on Codman's standardization program, this act allowed new hospitals to be constructed throughout the nation, serving more patients and improving national healthcare.

1993 – NCQA Published Its First Health Plan Report Card Using HEDIS

The National Committee for Quality Assurance (NCQA) had begun work in the '80s to help corporate purchasers enroll their employees in healthcare plans that would both measure their current quality and continuously improve it. In 1993, the NCQA published its first Health Plan Employer Report Card using the Healthcare Effectiveness Data and Information Set, or HEDIS. This marked the first time it was possible to compare health plans based on the effectiveness of the care plan members received.

1999 – NCQA Makes HEDIS an Official Part of Its Accreditation Program

This official incorporation significantly improved the quality of the accreditation program. To this day, HEDIS drives nearly 40% of the total accreditation score that the NCQA awards.

1999 and 2001 – IOM Publishes Groundbreaking Reports

Two groundbreaking reports lead the way for widespread quality improvement measures in the healthcare system.

The Institute of Medicine (IOM) published two groundbreaking reports, one in 1999 and one in 2001. These reports finally, after more than a decade of acknowledged inadequacy, fixed national attention on the critical quality improvement measures necessary to improve both Medicare and national healthcare as a whole. During the 1990s, a series of IOM-backed quality improvement initiatives were conducted, but these two reports carried the most weight.

The 1999 report, entitled To Err is Human: Building a Safer Health System, put the substantial safety gaps in the healthcare system under a microscope, exposing concerning statistics that strongly indicated an immediate need for quality improvement measures. For example, the report indicated that 98,000 people were dying in hospitals every year due to entirely preventable medical errors.

The 2001 report, entitled Crossing the Quality Chasm: A New Health System for the 21st Century, renewed the call for stricter quality improvement measures and interventions, stating that the system as a whole fails to provide "consistent high-quality medical care to all people." In part, the authors of the report blamed the failures of the healthcare system on the system itself, not the individual participants in it.

The report states the following:

“Mistakes can be best prevented by designing the health system at all levels to make it safer — to make it harder for people to do something wrong and easier for them to do it right.”

Conclusion

Embracing technology like AI and machine learning can pave the way for modern quality improvement measures that address the needs of the current population.

Since the 2001 report was published, vast improvements in patient and clinical care have made possible the modern advancements in quality care we enjoy today. However, we still have a long way to go. As today's healthcare system faces unique challenges, such as an aging population and a growing rate of chronic conditions, it's time for another review of our current quality control measures. It's important to keep up with a growing population and utilize technological advances in facility management and patient care, such as AI, machine learning, and predictive models. Implementing these quality improvement measures can help the healthcare system respond to modern challenges and pave the way for the next level of patient care standards.