The American Journal of Obstetrics and Gynecology will soon publish its report, “Maternal and newborn outcomes in planned homebirth vs. planned hospital births: a metaanalysis”(1), which questions the safety of planned homebirths. The advance publication was sent by press release to major news centers around the world and is causing a stir in the media. The authors state that babies born at home have a mortality rate three times higher than babies born in hospital. Their conclusion is controversial because many large and rigorous studies have concluded that homebirth and hospital birth have essentially the same safety for mother and baby. Some recent studies, including one from British Columbia, Canada, reported that planned homebirths actually have better outcomes for the mother compared to planned hospital births.

This article is drawing attention and criticism. Dr. Michael Klein, emeritus professor of family practice and pediatrics at the University of British Columbia and co-author of a study used in the report, has already written a response to the journal saying that the data was misconstrued and that this “is a politically motivated study.”(2)

The neonatal mortality rate for full-term birth is very low (a few per thousand at the most). It’s difficult to measure rare events accurately without massive numbers, because even a single event in a small group will artificially inflate the rate. A meta-analysis allows a researcher to combine many studies in order to get a larger data pool, but meta-analyses are prone to error when the groups are too small or too dissimilar. I think that’s what happened in this report.
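To see just how fragile small-sample rates are, here is a quick sketch. The counts are invented for illustration only; they are not taken from any of the studies discussed:

```python
# Illustrative, invented counts: how a single event swings a small group's rate.

def rate_per_1000(events, births):
    """Deaths per 1,000 births."""
    return 1000 * events / births

# In a large sample, one extra event barely moves the rate.
print(rate_per_1000(100, 100_000))  # 1.0 per 1,000
print(rate_per_1000(101, 100_000))  # 1.01 per 1,000

# In a small sample, each single event doubles the rate.
print(rate_per_1000(1, 500))  # 2.0 per 1,000
print(rate_per_1000(2, 500))  # 4.0 per 1,000
```

One death among 500 births already doubles the apparent rate, while the same extra death among 100,000 births changes it by one percent.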

A good analysis uses data sets that are consistent with what they are supposed to represent. In this case, a homebirth vs. hospital study should contain only the single variable of location. Other parameters should be consistent for each study included in the meta-analysis. The births should be low-risk. The births should be planned to occur with an attendant. The attendants should have similar training and emergency equipment. The locations should have similar travel times to similarly equipped hospitals. And the studies should take place during similar time periods.

A good meta-analysis comparing low-risk homebirth to low-risk hospital births would not include unattended births, unplanned homebirths or high-risk homebirths. And it would compare rural homebirths to rural hospital births, not to urban hospital births. By that definition the Wax meta-analysis is not a good one!

Many of the studies in this report compare low-risk, attended planned homebirths to low-risk hospital births. All of those studies report similar neonatal death rates. But although a few are recent, many of these studies are old and two of them are from the mid-1970s and early 1980s! Since that time there have been strong advances in the development of emergency response teams and the treatment of sick babies. It’s wildly inaccurate to use homebirth data from that time period as representative of homebirth 35 years later, even if the study had been accurately conducted.

There are two US data sets used in this report. One is a retrospective cohort from 1976(3) and should not have been included if the goal was to compare births only by location. Interestingly, the numbers conflict with the Wax report: the original reference cites 274 homebirths rather than 454. The published study examined a physician-based practice that was, as the title specifies, a rural practice. Also, this study was retrospective, a study design prone to error. The Wax study included quite a variety of studies, mixing matched cohorts with prospective cohorts and vital record reviews, and then throwing in a small retrospective study whose data were nearly 40 years old, in order to come up with a conclusion of higher neonatal mortality rates, a conclusion that contradicts the other studies used in the meta-analysis!

The old California study they used is different from the others, and it’s a basic rule of research that you shouldn’t mix the types of studies when doing a meta-analysis! Everyone knows that retrospective studies are unreliable—that’s why we generally don’t use them. And that’s why they are not usually included in meta-analyses. Even I know that, and I am just a dumb midwife!

The other study that stands out is the infamous Pang study of homebirth in Washington State, which used data from 1989–1996 and was published in 2002. There’s a nice article on this study at the BBC’s website (www.bbc.com/news/10465473), with some great quotes from midwives who point out that these numbers are too low to give any helpful information (a perinatal mortality rate of 0.07 at home vs. 0.08 in the hospital, and a stated neonatal difference of 0.20 at home vs. 0.09 in the hospital).

But those numbers are hard to interpret. What does it mean for an individual to weigh nine-tenths of a death per thousand against two deaths per thousand, when so many factors can influence those numbers? A good study would need many thousands of births to reliably detect a true difference between a group of women planning homebirth and those planning hospital birth. And the groups would have to be consistent across as many parameters as possible. The Wax study gets its numbers by combining other studies, but there are too many variables between the studies. The biggest inconsistency between these groups (planned homebirth vs. planned hospital birth) is that at least one of the studies does not identify whether a trained attendant was present at the homebirths. Most of the referenced studies used data from records submitted by trained midwives, but the two US studies were different. The retrospective Sonoma County, California, study was of a physician practice. The Washington study (Pang) used retrospective data taken from birth certificates, but the data was never adjusted for whether the birth was planned, low-risk, unassisted or attended. It was a review of homebirths that “assumed” a birth occurring at home after 37 weeks must have been planned.
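To put a rough number on “many thousands,” here is a standard sample-size sketch using the usual normal approximation for comparing two proportions. The formula and thresholds are textbook statistics, not anything taken from the Wax paper:

```python
from math import sqrt

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate births needed per group to detect a difference between
    rates p1 and p2 (two-sided alpha = 0.05, 80% power, normal approximation)."""
    pbar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pbar * (1 - pbar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Telling 1 death per 1,000 apart from 2 per 1,000 takes on the order of
# 23,000 births in each group.
print(round(n_per_group(0.001, 0.002)))
```

Roughly 23,000 births would be needed in each arm, which dwarfs most of the individual studies Wax combined.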

We can accurately conclude that planned hospital births were attended by a trained person, but a homebirth might have a trained person, or no one at all. Even a “planned” homebirth may have no trained person present. It may be a do-it-yourself birth; it may be a birth to an exotic religious group that forbids medical care; it may be a birth to a frightened teenager who has hidden her pregnancy. Yet all of these unusual homebirths would be included in a data set, which simply looked at the questions: Was the birth planned to occur at home? Or—like the Pang study—asked the question even more inaccurately: “Did the birth occur at home?”

The Washington study (Pang) looked at homebirths from a strange collection of years (1989–1996) and cobbled them together to show a high perinatal death rate. The study has been convincingly contested. (Read more about the Pang study at http://mana.org/WAHomeBirthStudy.pdf.)

There are 12 studies listed as included in the meta-analysis. Ten are of planned homebirths with trained midwives from various countries: the UK, Canada, the Netherlands and others. Two are from the US. One is from a small physician practice in Sonoma County, California. The other is a random collection of Washington homebirths attended by a wide variety of practitioners, or not attended by any practitioner at all!

The Washington (Pang) study is very different from all of the others. It does not describe “planned homebirths,” which is part of the inclusion criteria for the meta-analysis, and thus should never have been included here! It also stands out because Washington is a large rural state where some regions may require long transport times for emergency treatment. Most of these studies are quite small—fewer than 1,000 births—but the Pang study is one of the largest after the huge Netherlands study, which looked at 300,000 homebirths. The Netherlands study, by the way, showed no difference in neonatal or perinatal mortality between those planned homebirths and planned hospital births!(4) It boggles the mind why researchers would include some of these small studies, and raises the possibility that bias may have played a part in the choice of inclusion. Otherwise, why include so many old and tiny studies? One is from the UK from the 1970s and includes only 200 births, but the oddest is one that included a total of 11 women—five homebirths and six hospital births.(5) But perhaps this is a misprint.

Computing a perinatal mortality rate from such small studies, from studies conducted so many years ago, and from these specific studies seems to me a blueprint for finding an inaccurately high death rate. In the last couple of decades, many advances have been made in resuscitation techniques, emergency response training and rapid transport times (including the availability of helicopter transport). Some rural communities now have rapid emergency transport that would have been impossible even 10 years ago. Any accurate study of homebirth vs. hospital birth should be based upon current conditions and must account for the difference between urban and rural transport times.

This study is a strong attack on homebirth, but I believe that it will be used mostly to attack midwives. Already, some are pulling data from this study and claiming that it shows “unregulated midwives” are the reason for a higher neonatal mortality rate at homebirths. I think it’s important to keep the facts in mind:

Fact: The Wax study has major flaws and does not reach accurate conclusions. Extremely large studies have consistently shown there is no real difference in neonatal mortality between hospital births and planned homebirths.

Fact: The Wax study does not give any information at all about different types of midwives or higher neonatal rates associated with unlicensed midwives compared to licensed/regulated midwives. There is no such information! This report is only a few days old, but already some are claiming quite strongly that it proves this distinction. (I suspect they have not actually read it, but are basing their reaction on news briefs.)

Fact: The births in this report were all, with the exception of the study from Washington state, conducted by trained, regulated, licensed midwives or physicians. The meta-analysis contains a flaw, since it contradicts the individual conclusions of many large studies showing that these births carry a risk of neonatal death similar to hospital births. The flaw is probably the result of using old and small studies, and of incorrectly including at least one large but flawed retrospective study.

Fact: This report may fuel an attack on all types of midwives, everywhere! No matter the credential, or lack of it, it is midwives who primarily conduct homebirths. Acceptance of this report’s incorrect conclusion will hurt them all! Those who are already drawing battle lines to claim that their particular credentials are better than another’s need to look carefully at this report. It claims (erroneously) that homebirth is dangerous and that therefore any homebirth practitioner is dangerous, no matter the initials behind her name.

Fact: The best way to fight incorrect information is to counter it with the truth. Don’t buy into fear. Look for the truth!

Dowswell, T., J.G. Thornton, J. Hewison, and R.J.L. Lilford. 1996. Should there be a trial of home versus hospital delivery in the United Kingdom? BMJ 312: 753–57.

Response to Wax Meta-analysis by Judy Slome Cohain

Abstract: The recent Wax et al. meta-analysis,(1) a review of previously published homebirth research, reflects the willingness of some medical journals to publish faulty conclusions in a flimsy attempt to discredit homebirth, relying on data long ago dismissed as unreliable and on the pathetic lie that more data are needed on maternal mortality.

The meta-analysis analyzed perinatal mortality (deaths and stillbirths in the first week) and found that unplanned homebirths, planned homebirths and hospital births all have the same perinatal mortality rate: a little less than 1/1,000 babies die in the first week. In order to show a difference between homebirth and hospital birth, this study had to discard stillbirths; include statistics from unplanned, unattended, high-risk and premature homebirths after 34 weeks, gleaned from birth certificate data; and switch to neonatal mortality (deaths within the first month), while inexplicably excluding stillbirths. Only then did it manage to show a 2/1,000 rate of babies who died in the first month following every type of homebirth, while hospital birth supposedly had a 1/1,000 rate.
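The switch of yardstick matters, because the two measures count different events. A sketch with invented counts, purely for illustration and not data from the meta-analysis:

```python
# Invented counts, for illustration only.
births = 10_000
stillbirths = 5
deaths_first_week = 4
deaths_weeks_two_to_four = 3

# Perinatal mortality: stillbirths plus deaths in the first week.
perinatal = 1000 * (stillbirths + deaths_first_week) / births

# Neonatal mortality as used in the meta-analysis: deaths in the first
# month, with stillbirths excluded.
neonatal = 1000 * (deaths_first_week + deaths_weeks_two_to_four) / births

print(perinatal)  # 0.9 per 1,000
print(neonatal)   # 0.7 per 1,000
```

Because each measure includes and excludes different events, a comparison can shift simply by changing which measure is reported.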

The study blatantly excludes any reference to long-term newborn morbidity. Does hospital birth produce 1/1,000 extra healthy babies, or 1/1,000 extra babies who suffer from serious, long-term morbidity such as cerebral palsy? By omission, Wax would have you believe that the 1/1,000 newborns “saved” at planned hospital births are healthy, although the evidence is lacking. Hospital birth is associated with the risk of hospital-acquired infections such as newborn meningitis, which can have serious long-term mental sequelae. Routine vaginal exams, routine rupture of membranes and increased interventions in hospital cause higher rates of newborn infections, some of which have long-term implications. Even though 33% of newborns in the US are now delivered by cesarean, the 1/1,000 rate of cerebral palsy (CP) has not decreased in the past 100 years. Therefore, if cesareans on rare occasions prevent cerebral palsy, they must also be saving an equal number of babies who survive with CP; otherwise the rate of CP would have decreased. In fact, in about 60% of full-term CP cases, the CP was preventable, but hospital caregivers did not respond to the signs of distress as they should have.(2)

The authors of this meta-analysis incorrectly state that “more data are necessary before drawing any conclusions regarding the maternal mortality rates of planned home and planned hospital delivery.”(1) Had Wax et al. been honest, they would have had to admit that anesthesia has had an established death rate of 1/13,000 since 1980.(3) The total rate of maternal death from cesareans has been shown to vary by location, but the biggest review found it to be 1/3,000, or 333 per million cesareans.(4) Take note: there are over a million cesareans performed per year at US hospital births. Although maternal mortality is underreported, the reported 2007 California maternal mortality rate was 180 per million. Therefore, hospital birth, with its proven higher cesarean rates, involves between 200 and 300 times the maternal mortality of low-risk women at planned, attended homebirths. At homebirth, the maternal mortality rate is at most 1 per million: the woman who dies from an amniotic fluid embolism, a complication whose cause is still unknown. Even so, the only documented homebirth death from amniotic fluid embolism was of an unattended woman.(5) When the first symptom of an amniotic fluid embolism (shortness of breath) appears at an attended homebirth, even with a 30–45 minute transfer time, there is still time to save the woman with modern intensive care.
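The per-million conversions quoted above can be checked with simple arithmetic, using only the figures given in the text:

```python
# Convert the quoted risk fractions to deaths per million.
cesarean_mortality = 1 / 3000     # the biggest review's cesarean death rate
anesthesia_mortality = 1 / 13000  # the quoted anesthesia death rate

print(round(cesarean_mortality * 1_000_000))   # 333 per million, as stated
print(round(anesthesia_mortality * 1_000_000)) # about 77 per million
```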

When the media write about “homebirth,” they are referring to the kind of birth that Hollywood stars like Ricki Lake, Demi Moore and Meryl Streep have had, with fully trained attendants, not an indigent, homeless woman with no prenatal care delivering in an alley. Yet Wax includes unplanned, unassisted “homebirths” in his meta-analysis. The risk of higher neonatal mortality applies to unplanned, unassisted homebirths, not the Meryl Streep kind of homebirth. The increased neonatal mortality rate within the first month, excluding stillbirths, is evident only when the meta-analysis includes studies based on birth certificate data, which cannot differentiate between attended and unplanned, unassisted homebirths. To this day, registering a homebirth in many states requires nothing signed by the attendants that would differentiate an attended from an unattended homebirth. As a result, birth certificate data is very inaccurate. A 2010 study,(6) which used birth certificate data, found that doctors and CNMs attended 20% of unplanned homebirths. That may be what appears on the certificates, but in our litigious society, finding even one doctor or CNM willing to attend unplanned homebirths, and an insurance company willing to insure them, particularly in the US, is a stretch of the imagination. Birth certificate data is simply too unreliable to produce scientific conclusions, though it is very good for making research claims that simply reinforce the prevailing bias.(7)


About Author: Gail Hart

Gail Hart graduated from a midwifery training program as a Certified Practical Midwife in 1977. She has held a variety of certifications over the years; she was a Certified Midwife through the Oregon Midwifery Council, and an LDEM in the state of Oregon. She is now semi-retired and no longer maintains her license, but keeps active with a small community practice. Gail is strongly interested in ways to holistically incorporate evidence-based medical knowledge with traditional midwifery understanding.

About Author: Judy Slome Cohain

Judy Slome Cohain, CNM, has run All the Way Homebirth practice in Israel since 1983. She would love to hear from women who have tried to change a positive GBS culture to a negative one by using garlic. Please email with the outcomes, which will be collected for future research.