What's the deal with baby sign language? When did families start using it, and does it have positive effects?


The first research investigating baby sign language (BSL) took place in the 1980s (we’ll get to that in a minute), but BSL really became a phenomenon in the 90s. In the twenty-first century, not only is there a huge market for BSL products, it’s used all over the place – not just in individual family homes. Daycare centers sign. Pediatric offices encourage it. It’s on TV and the internet. It’s in the news. It’s everywhere. And the claims about its possibilities are equally far-reaching. BSL can teach your child to speak sooner; boost your child’s IQ and cognitive development; reduce tantrums; build self-esteem; ease parental frustration; improve bonding. Heck, even the American Academy of Pediatrics (AAP) came out in 2012 to say that BSL “helps improve communication.”[1] All of this sounds pretty wonderful, of course, but let me be clear at the outset: none of these claimed benefits of BSL has a shred of evidence behind it. So where did this idea even come from? Let’s take a closer look:

Researchers first became interested in studying baby signing in the 1980s, after they noticed that some babies with deaf parents (who signed) seemed to benefit in their speech development. That (uncertain) finding prodded scientists to explore whether the same advantages might extend to children of hearing parents. They hypothesized that a true “baby sign language” would promote and advance infant language development.[2] As it turns out, they were wrong.

Almost all the early research on baby signing emanated from two scholars: Susan Goodwyn and Linda Acredolo. The duo’s most comprehensive project, published in 2000, followed about 100 babies and reported that babies who learned signing displayed advantages “on the vast majority of language acquisition measures.” The results, Goodwyn and Acredolo concluded, “strongly support the hypothesis that symbolic gesturing facilitates the early stages of verbal language development.”[3] But besides being subject to very legitimate criticisms of their scientific procedures (more on that in a second), the advantages Goodwyn and Acredolo found were modest. At best. More importantly, any differences they documented had already disappeared by the time children were 1.5-2 years old.[4] Even the duo itself clarified: “significant positive effects [of BSL] do not appear to last.”[5] (We might harbor some concerns, too, that Goodwyn and Acredolo have strong ties to the BSL industry: they authored a popular book on BSL (Baby Signs) and run a corresponding business that trains BSL instructors. Nonetheless, their enthusiasm does appear genuine. I don’t hold it against them…that much.)

The BSL studies—conducted by Goodwyn and Acredolo as well as others—suffer many methodological weaknesses: few are randomized or controlled; all have small sample sizes (some are very small—just a handful of children); most fail to explain participant selection, procedures, and group allocation; they don’t verify the extent to which infants even learned the gestures; and they are thus subject to selection bias.[6] And virtually all the work that has come out on BSL in the last 20 years runs counter to both Goodwyn and Acredolo’s findings and the broad cultural mindset that champions BSL as a stimulus for infant development. One of the first reviews to assess BSL came out in 2005. It looked at 17 studies on signing babies published between 1980 and 2002, and concluded that “the existing research was methodologically flawed and, because of this, there was ‘no evidence to suggest that Baby Sign had any benefits for child development.’”[7] Subsequent projects suggest the same.

A really interesting study (from 2012) assessed claims made on more than 30 BSL websites. The authors performed one of my all-time favorite types of analysis: footnote-tracking. Lauri Nelson and her team followed the citations for every single claim about the benefits of BSL on these websites, and found that more than 90% of the citations were opinion pieces. Opinion articles can be useful and informative, of course, but they are not data. They are not evidence. This means that a mere 10% of BSL “benefits”—just 8 citations—had any grounding in empirical research whatsoever. Furthermore, none of the claims about fewer tantrums, improved self-esteem, and heightened parent-child bonding had any kind of evidence base, be it opinion or scientific.
Nelson’s final words to parents: “decisions about whether to teach sign language to their young children with normal hearing must be based on opinions and beliefs but not on research.”[8] Two years later, in 2014, fresh reviewers lamented that “the pace of scientific contributions to understanding [BSL] is relatively poor,” and concluded that there is no evidence substantiating that BSL improves communication development in young children.[9]

A more recent study adds to the picture. Babies were randomized to either learn to sign or not (a third group received verbal training to control for the instructive component of BSL), and researchers evaluated the babies’ language development periodically over the course of a year. “While the babies learned and used the signs (often before they could speak),” lead researcher Elizabeth Kirk reported, “doing so made no significant impact on their language development.” Babies who signed didn’t start talking sooner or faster. Parents’ efforts to teach babies to sign, Kirk concluded, “may be unnecessary.” Other projects have corroborated these results, all amounting to the same takeaway: we don’t have any evidence that BSL is beneficial or advantageous for babies’ development.[10] Anecdotally, some parents even worry that baby signing detracts from verbal language development because babies have less impetus to speak and their cognitive faculties are tied up with signing. In other words, since they can already communicate effectively with signs, and a great deal of their mental energy is devoted to signing, signing babies might be less compelled to develop spoken language skills. There’s no science to support this, but the issue comes up with parents often enough that media pieces have tried to set the record straight: “Can Baby Sign Language Delay Speech?” one article asked. (No, it concluded.)[11]

To my mind, the most interesting aspect of baby signing really has to do with parent-child interaction. (In formal research lingo: the “wider non-linguistic impact of encouraging infant gesturing upon the dyadic interaction between mothers and infants.” Whew.) In short, BSL proponents claim that signing facilitates better parent-child relationships. This strikes me as a logical argument, even if science doesn’t offer us much data to support it. Indeed, results assessing quality-of-life measures are “inconclusive,” and some even suggest that enrollment in formal signing education programs might actually instigate parental stress.[12] That said…find me a family that thinks signing wasn’t a boon to its connections, that was unhappy with signing. (I’ll wait.) Just because we don’t have any convincing evidence that BSL expedites babies’ development doesn’t necessarily mean that it isn’t useful, or even beneficial. It certainly isn’t harmful. (None of the projects assessing BSL has ever documented a detrimental outcome. As one 2014 review concluded: there is no evidence to suggest that BSL is “effective,” but nothing indicates that teaching it adversely impacts development.[13]) There’s just a lot we still don’t know (or can’t measure). Here are a couple of things we do know:

A LOT of parents love baby signing. Seriously, parents are crazy for it. I’ve never met or heard of anyone who was disappointed with it… And in fact, we actually do have some research to support parents’ devotion to baby signing: parents tend to respond and interact more with babies who gesture (this doesn’t have to be formal BSL—it applies to any babies making simple gestures, such as pointing). It stands to reason, then, that parents who teach their babies to sign might be carving out more interactive time with their children, and that’s never a bad thing.[14] Put simply: we shouldn’t overlook the fact that many families simply really enjoy signing. That’s worth something.

Babies learn by interacting with the world around them – hence the whole “Einstein-never-used-flash-cards” philosophy.[15] We also know that babies benefit from early, frequent, and varied exposure to language, and that they benefit when adults talk with them, as silly as that might seem.[16] It all comes down to interaction. Since the ability to gesture is intricately linked to language development, it isn’t unreasonable to think that teaching babies to sign contributes to overall learning (and/or communicative development).[17] Again, no evidence doesn’t necessarily mean no dice.

So where does this leave us? I think it depends on parents’ purposes and expectations for signing. Parents who see BSL as an avenue for enhancement—something that will give their child a leg up in terms of brain development or IQ or vocabulary…a means to an end—are probably going to be disappointed (at least, that’s what the data say). But parents who see BSL through a more relaxed lens—as a way to begin communicating before babies start talking, or as a fun activity to engage in together—just might love it. And when it comes down to it, even the most invested advocates of BSL express this same moderate take: the “right” reason to do any level of BSL (it’s not an all-or-nothing undertaking, of course) has nothing to do with IQ or cognitive development or language acquisition. It’s simply a “nice activity” for families to do together.[18] (And do we really need science to tell us as much?)

I'm starting a new series in honor of my all-time favorite assignment as a student: book reports. Check out my first write-up, on Carla Naumberg's Parenting in the Present Moment: How to Stay Focused on What Really Matters.

Quick Recap: Parenting in the Present Moment is a wonderful read for parents struggling to stay calm with their children or maintain their own identities. It’s full of practical strategies to engage with and appreciate your children more fully. (As a bonus, these tips also apply to interpersonal relationships more broadly.) This book is about how to practice “mindful parenting” (more) effectively; at its most basic level, this quest boils down to being non-reactive as a parent. Naumberg’s introduction is a bit repetitive, but the remainder of the book is quite succinct, besides being well-organized. (I might recommend parents skim the introduction and just jump into chapter 2.) The writing is approachable and each section contains anecdotal examples, literature-based evidence, and practical everyday strategies for implementing various components of mindfulness... Highly recommended.

Book Breakdown: Naumberg divides mindful parenting into three arenas: staying connected, staying grounded, and staying present. Although these constitute three separate chapters, in reality they are all interconnected, and presence is definitely, fittingly, the mainstay of Naumberg’s approach.

“Staying Connected” is the foundation for mindful parenting. As Naumberg explains, parents’ connectivity with their children is the root of everything in parent-child relationships. Parents’ most important job in this regard is to “show up”; presence is everything. Two of the most important areas of discussion here concern safety – both literal and metaphorical – and acknowledgement (really “attuning” to children).

“Staying Grounded” explains that parents can sustain meaningful connections with their kids even when things become challenging. In this section, Naumberg emphasizes that parental composure flows from within, and encourages parents to consider their own anxieties, treat themselves with kindness and respect, and seek support.

“Staying Present” is the most utilitarian section of the book, and focuses on daily rituals parents can reconsider with an attitude of mindfulness (such as reading and feeding) as well as habits and tips (such as implementing rules to limit phone use and rejecting multi-tasking in favor of single-tasking) to help parents tune in and re-value (appreciate) parenting moments.

There are a few noteworthy refrains in Parenting in the Present Moment. Naumberg sings the praises of meditation throughout the book, and even skeptics might be willing to give it a go after reading it. Naumberg suggests, in accordance with a growing body of scholarship, that meditation – concerted focus on the breath, over and over again – is both extremely beneficial and universally accessible. Another thread is that parents striving to be mindful need to take care of themselves – Naumberg consistently encourages parents to do so, both in the long term, through things like therapy, and in the day-to-day, through things like getting enough sleep and finding time to exercise or meditate. Lastly, Naumberg describes her strategies as examples of “North Star parenting” – methods to help parents get back on track and redirect themselves in the face of any kind of parenting “setback.” Accordingly, she advocates that parents not dwell on “mistakes” or fault themselves, but stay present.

So – I’m pregnant…again. (Needless to say – yay!) I’m due at New Year’s, and the first thing I did after taking a positive pregnancy test at the end of April was to read up on pregnancy testing basics. This is a quick primer explaining some of the basics and history on how women have “found out” they are pregnant.​​

The Basics of hCG

Home pregnancy tests are made possible by the “pregnancy hormone”: human chorionic gonadotropin (hCG). A woman’s body begins to produce hCG after a fertilized egg successfully implants into the uterine wall. Numbers rise quickly, doubling about every two to three days. Home pregnancy tests work by detecting hCG in urine, and different brands can detect different levels – for most women, the lower the detection threshold the better, because a more sensitive test means learning about a pregnancy sooner. More sensitive tests might pick up hCG at just 10-15 mIU/ml (milli-international units per milliliter), while less sensitive tests might not pick it up until levels reach 50 or even 100 mIU/ml. Urine pregnancy tests are qualitative, meaning they simply ascertain whether the body has produced any hCG; they answer a yes-or-no question. If hCG is present at any level equal to or higher than the threshold the test is capable of detecting, the test turns positive. (A faint or barely-there line is thus a definitive positive, because it indicates the test did indeed detect hCG. I learned this in my first pregnancy.) In comparison, blood tests are quantitative; they measure the body’s overall levels of hCG, giving you a number rather than a yes/no. They have the potential to give some indication of gestational age, but hCG levels vary so widely that any given number can only be nominally telling.

More important than any static hCG measurement (a one-time test on a single day) are the results from serial hCG testing (tests performed daily for two or three days in a row), since hCG should be increasing quickly in early pregnancy. In the case of any cause for concern (such as bleeding), serial hCG testing is a common approach. Numbers should climb. The classic escalating hCG patterns were first documented in the late 1930s.[1] Most studies offer similar values and ranges by week, with peak levels around weeks eight or nine, although there is a tremendously wide range of “normal.”[2] Beyond the first trimester, hCG levels plateau and then fall off. (There is evidence that women carrying multiples tend to have higher hCG levels, on average, compared to women carrying singletons, but this is just a correlation – any one hCG measurement or set of hCG measurements cannot reveal an embryo count. The only way to do that is with an ultrasound.) Because there is no exact model of hCG growth, a Google search yields many examples. Here is one template from the American Pregnancy Association. If you’re like me and want even more specifics, you might like to check out the online “Betabase” at http://www.betabase.info. It’s an interactive website where visitors can enter their hCG measurements and corresponding gestational age; the outcome is a wealth of data about average, high, and low hCG levels at any given point in early pregnancy. It’s not scientific, strictly speaking, but it contains data for almost 120,000 pregnancies. Plus it’s fun.
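To put the arithmetic above in concrete terms, here’s a minimal Python sketch – purely illustrative, not medical guidance – of how a qualitative test’s sensitivity threshold interacts with hCG doubling. The starting level, the 2.5-day doubling time, and the two thresholds are assumed values drawn loosely from the ranges mentioned above, not measurements from any study.

```python
# Minimal sketch (illustrative only, not medical guidance) of the arithmetic behind
# qualitative urine tests: hCG roughly doubles every 2-3 days in early pregnancy,
# and a test reads positive once the level crosses the test's sensitivity threshold.
# The starting level and the exact doubling time below are assumptions for illustration.

def hcg_level(start_miu_per_ml: float, days: float, doubling_days: float = 2.5) -> float:
    """Project an hCG level assuming simple exponential doubling."""
    return start_miu_per_ml * 2 ** (days / doubling_days)

def test_is_positive(level_miu_per_ml: float, sensitivity_miu_per_ml: float) -> bool:
    """A qualitative test turns positive at or above its detection threshold."""
    return level_miu_per_ml >= sensitivity_miu_per_ml

if __name__ == "__main__":
    start = 5.0  # hypothetical level (mIU/ml) shortly after implantation
    for day in range(0, 15, 2):
        level = hcg_level(start, day)
        print(f"day {day:2d}: ~{level:7.1f} mIU/ml | "
              f"10 mIU/ml test positive: {test_is_positive(level, 10)} | "
              f"50 mIU/ml test positive: {test_is_positive(level, 50)}")
```

Under these assumed numbers, the 10 mIU/ml test flips to positive several days before the 50 mIU/ml test does – which is the whole appeal of the more sensitive “early” tests.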

Home Pregnancy Tests

It is hCG that has afforded women the luxury – or the burden, depending on your point of view – of using early home pregnancy tests. Before the 1970s, when home testing kits were introduced in the U.S., pregnancy was confirmed either through a period of waiting or, after the 1920s, by laboratory analysis. The first test that could identify hCG in humans was developed in Germany; it worked by assessing the response of rats or mice injected with human urine. If the urine contained hCG, the rat went into heat. In the mid-1900s, scientists upgraded this system by replacing rodents with rabbits; later they used toads. In the 1960s, researchers devised a way to detect hCG without animal sacrifice. By adding a urine sample to a slide full of hCG antibodies, scientists could discern whether hCG was present by observing the antibodies’ response (or lack thereof). If they met hCG, the hCG antibodies reacted, resulting in a sort of muddy ring on the slide. In the next decade, scientists moved beyond qualitative hCG testing and learned how to measure precise hCG levels with a blood test.[4]

Companies scrambled to introduce the first home test to the market, and consumers could purchase the first “early pregnant test” by 1977, for around $10 (no small price – this would be equivalent to about $41 today). It was a crude version of its more convenient descendants – a far cry from the simple “pee on a stick” undertaking. The initial product “consisted of a test tube, two droppers, and a plastic tube-holder fitted with a special mirror to reveal the results from the bottom of the tube.” Remembering the ordeal, one woman explained that the test would have been simple for anyone with previous lab training and experience. Consumer Reports described: “to use the EPT, a woman must follow a nine-step procedure that permits ample opportunity for error.”[5]

These cumbersome initial home pregnancy tests carried a certain stigma. For decades, the notions that 1) a woman would need to know she was pregnant, and 2) would prefer to know she was pregnant as soon as possible, were simply implausible in a culture that condemned both single mothers and women who opted not to have children. In the twenty-first century, these ideas are (mostly) foreign. Women from all different walks of life expect to be able to learn about a pregnancy at their own behest. Home pregnancy tests offer women the chance to discover and experience a life-changing turn of events in their own bathrooms, on their own terms.[6]

And yet, as is true of so many forms of knowledge, more information can sometimes impose unforeseen suffering. The early self-knowledge furnished by home pregnancy tests has led more women to be aware of miscarriages they would not otherwise have known about. I think this phenomenon helps account for my first-trimester stress with my previous pregnancy – I found out I was pregnant so early, when miscarriage rates are highest, that it compounded my anxiety about loss. Whether a pregnancy comes to light earlier or later, miscarriage is a concern for almost every woman who becomes pregnant. There are a handful of resources that explicate miscarriage risk, and I became intimately familiar with some of them in 2015 (during my first pregnancy).
If you absolutely need to look, the “miscarriage odds reassurer” at datayze.com uses day-by-day miscarriage statistics based on the cumulative data from more than 50,000 patients in five peer-reviewed studies to spit out heartening facts about the likelihood of not miscarrying. In the interest of optimism, I tried my best to avoid this kind of thing with this pregnancy. Unfortunately, there is virtually no way to predict or prevent a miscarriage, so women’s time is better spent pondering other matters. For me, looking into hCG and pregnancy tests made for a good distraction.

When have parents started toilet training their children? How have they gone about training? What have the experts said?

This article explores the history of toilet training and explains how the timing and methods for training have changed over time.

I’ve been starting to think about toilet training as of late, and my default plan was to do whatever my mom did, but that doesn’t appear to be an option. Apparently, my siblings and I were all trained on weekend getaways with my grandma and a giant bag of M&Ms. This was an excellent strategy for my mom – send your kid off for the weekend, pick her up toilet trained – but it’s no help to me. I’m considering shipping my son off to his grandmother despite her lack of personal experience, but I’ve started to prepare for the probable inevitability that I will have to train my son, personally. Plan B: Survey the terrain. See what the historical medical literature has to say about the endeavor. When should I start? What should I do? What shouldn’t I do? Despite a paucity of evidence (and who can blame pediatricians? who wants to run toilet training studies?), I actually learned a lot. (It turns out M&Ms might be the linchpin.) To start, people talk about the pendulum swinging back and forth with toilet training in the U.S., but I see a different trend. I only see the pendulum swinging one way. In the early 1900s, mothers endeavored to train their babies very, very early. (Although, as later observers noted, “it was the mother who was trained” – I'll explain in a minute.) On the heels of that rigidity, “permissiveness” took hold, and parents in the mid-1900s started training their kids later (older than 1!).[1] Since then, the drift has continued – families are toilet training later and later. Maybe we’ll see a backlash to this in the coming decades, but the shift towards training at an older age has been stable since the 1960s – and it’s backed by current pediatric literature.

Your Great-Great-Grandma’s Method: Potting

Advice-writers in the early 1900s were serious about toilet training. Except that what they were pressing for wasn’t really toilet training in the sense that we think of it today – it was more like parental-urine-and-stool-catching training. They started early – sometimes as early as just a couple of weeks old. More often, experts recommended starting around two months. The training process was rigid. It involved trying to get babies on regular evacuation schedules, using some sort of bin or basin to catch their excrement – one author described it as “potting.”[2] Precision and routine were essential; the idea was that by placing a child over the bin at exactly consistent intervals every day, he would learn to respond to the stimulus and soon thereafter be “trained.”[3] (If things weren’t working, mothers were advised to resort to more drastic measures. Like soap stick enemas. Multiple times per day.[4]) Of course, this wasn’t really toilet training – the endgame was not independent bladder and bowel control, but much more Pavlovian: drilling a baby to void itself when prompted. This kind of approach is referred to as “parent-oriented” training because it is entirely determined and circumscribed by the parent. Through all this early training in the first half of the twentieth century, there was no expectation that a youngster would be able to reliably use the bathroom on his own until at least two years of age.[5]

Your Great-Grandma’s Method: Delayed Potting

Into the middle of the 1900s, advice literature started to project more lax ideas. In the 1940s, the Better Homes & Gardens mothers’ handbook gave the option of starting training as late as about 6 months, by which point most babies can sit up on their own, and comforted mothers that they had no need to “get worried or desperate.”[6] In 1944, Dr. Dorothy Whipple observed that most parents began training at 8-10 months. Before this point, Dr. Whipple explained, babies weren’t capable of bladder or bowel control – their muscular development hadn’t matured enough.[7] Other experts told parents that kids were incapable of voluntary control before 15-18 months of age. Indeed, it is really interesting that even while health professionals talked about the ability to “condition” babies as young as a few months to use some sort of toilet chamber, they declared that children wouldn’t be able to go to the bathroom on their own until they were two or two-and-a-half.[8] So things started to loosen up as far as starting times, but the basic potting tactics stayed in place.

Then, in the 1950s, things really started to shift. A 1953 guidebook for parents (with the amusing title Your Child and His Problems) conveyed that there was no hurry, and that parents could more or less let their kids steer the ship. Mothers should be patient instead of urgent or insistent, guiding a child to the toilet when he was ready. Advice books started to talk about waiting until children were “ready to learn.” Apparently, the low-key approach might lead to success by age 2. Ideally, parents should wait until at least about 15 months, and avoid making too big a deal of toilet training. Placing amplified pressure on a child could backfire, professionals warned. “When toilet training becomes a battle, a mother cannot win,” wrote Joseph Teicher.[9]

Your Grandma’s Method: The Child-Oriented Approach

Enter Spock and Brazelton – the renowned pediatric gurus of the 1960s. Both of them put forward a “child-centered” approach to toilet training (TT). Both men’s writing on TT is considered revolutionary, and for good reason, but it’s clear from the shifts in literature in the 1940s and 1950s that their ideas were in vogue more broadly. (We’ll actually start discussing this as “toilet training” now because when these guys wrote about toilet training, they meant it in the way we consider it today – where the intended outcome was autonomous toilet use.)

T. Berry Brazelton published “A Child-Oriented Approach to Toilet Training” in Pediatrics in 1962. It was based on more than 1,100 kids from his own practice over the previous decade (although, to be clear, it was not a formal scientific study). At its root, Brazelton’s method involved gradual, patient moves toward TT. A child’s interest in using the toilet, and his physiological and psychological readiness, were essential. The whole idea was to orient TT around the child, rather than the parent. Starting no earlier than 18 months, parents could introduce a potty chair and start talking about the toilet. Then the child could practice sitting on the chair, with his clothes on, according to some sort of daily schedule. The next step was to remove the diaper and continue with the daily potty-sitting schedule. Parents could then let their child play naked for small bursts of time with access to the chair. Throughout all of this, praise was important (but criticism was to be avoided), and parents were instructed to keep their composure. Stay Calm and Carry On. Eventually, over time, the child learned. In Brazelton’s practice, most parents started the process around 24 months, and the average age of TT completion was around 28 months.[10]

Benjamin Spock came out with a piece in Pediatrics in 1964 (“Parents’ Fear of Conflict in Toilet Training”). He recommended parents begin gearing up for TT around 18 months, and urged parents to be optimistic rather than fatalistic about TT. “Many mothers today have come to see training too exclusively as a conflict,” he said. Like Brazelton, Spock advocated incremental training. For him, part of the problem was that mothers were so stressed and worked up about TT that the whole project often snowballed into a sort of epic mother-toddler clash. If mothers could “see that the balance is in favor of training,” he wrote, “they can go at it in a more assured and effective manner.”[11]

Ultimately, both men had a similar take – start slow, follow the child’s lead, be patient, use praise (and treats!), and maintain composure. Their relaxed approach took time – many months – but was intended to minimize stress. Other writers mirrored Brazelton’s and Spock’s serene approach to TT. Training “should be easy for both parent and child,” asserted Gordon Jensen in The Well Child’s Problems. “The most important point,” he stressed, “is that the child needs help and support when he is ready.”[12] In 1971, a pair of pediatricians observed the ongoing “trend away from aggressive training . . . toward the less hurried approach.”[13] The accepted age of TT completion was shifting later and later, as pediatricians reported that most kids didn’t accomplish true daytime TT until sometime after they were 2 years old.[14]

Why did this happen? Probably for lots of reasons. Including the overwhelming trend in pediatric and parental literature towards a more tolerant, forgiving kind of care. And changing ideas and expectations about babies. But also, likely, because of diapers.

Dirty Diapers

Without doubt, the history of diapering dovetails with the history of TT. It is impossible to consider one without the other. Before you laugh off the subject, consider that to parents of young children, diapers are a big deal. The evolution of diapers matters a great deal to a great many people. (In 1987, the Children’s Museum of Holyoke, in Massachusetts, actually featured an exhibit called the “Diary of a Dirty Diaper.” It displayed all kinds of diapers throughout history.[15] I couldn’t find any records indicating its popularity.) In any event, having tried-and-failed with cloth diapers (shortest experiment ever), I know I am thankful for all the perks of my Pampers: the super-absorbent nighttime soaking mechanisms, the Velcro straps, the wicking capabilities . . .

The history of the diaper has been greatly shaped by the minutiae of everyday life. In the 1800s, for example, westerners increasingly turned toward diapers (where they hadn’t used them much previously) as owning furniture became more common – diapers protected personal property. In the late 1800s, the invention of safety pins made cloth diapers easier to use. Cleansing agents made their mark on diapers: in their initial incarnation, cloth diapers presented health challenges such as rashes and irritations in babies, and the problem worsened as harsher cleaners came onto the market. The societal changes wrought by WWII also influenced the diaper scene: in the 1940s, as women entered the workforce in droves, diaper services emerged – they exchanged soiled diapers for clean ones and did all the laundering.[16] And new scientific developments, especially paper and water technology improvements, made new things possible. In the 1940s, the first disposable diaper designs showed up. It took years before any product was ready for a mass market (Pampers didn’t hit the shelves until 1961), but disposables, primitive as they were compared to twenty-first century models, completely revolutionized diapering. Since the first prototypes, diapers have changed drastically – they’ve become more absorbent and smaller, they fit babies better, they’re better at preventing leakage, and they’re better at wicking away moisture to keep babies’ skin dry.[17]

It’s no secret that the trend towards delayed TT coincided with the mass production of disposable diapers. Most observers think disposables have played a key role in moving back the average age of TT – by their description, parents in the first half of the twentieth century who were washing tons of cloth diapers had a greater incentive to get their kids out of diapers sooner, so they started training sooner. On the flip side, parents in the 1960s and later, with access to an abundance of disposable diapers, might have had fewer motives. (Interestingly, today the onset of TT also tends to correlate with income – parents earning more money report beginning TT later (around 24 months) than parents earning less, who often begin TT earlier (by 18 months). This is just another correlation, but it could point to the significance of diapers’ cost as a driving factor in TT.[18])

Another Option in the 1970s: TT in a Day

The so-called Azrin and Foxx TT method (popularized in the book Toilet Training in Less Than a Day) was an outgrowth of an original study (run by Nathan Azrin and Richard Foxx, both behavioral psychologists) that involved mentally disabled institutionalized individuals. It is a highly structured and intensive, rapid-conditioning approach. The goal is success after just a couple of days – and it does seem to work. And stick. The Azrin and Foxx method is an accelerated TT program; it is all-inclusive and all-encompassing for a couple of days. The method itself was highly specific, but here are the basic components and ideas: ideally, training occurred in a single, distraction-free room, beginning with a parent taking a child through the motions of using the toilet (dolls were used here, too). After the initial demonstration, the child was pumped with fluids (juice!) to make him need to use the toilet as much as possible throughout the training period. The parent-trainer was 100% focused on the child at all times, constantly showing the child how to use the potty. The child was reminded of the potty every several minutes, and praised for correct usage and reprimanded or put in time out for accidents.[19] The very few studies that have evaluated this method have found that “success is relatively high and achieved soon after training” (sometimes as soon as 1 day) and that “success was maintained” months after the training.[20]

TT in the Twenty-First Century

In the 2000s, research indicates that parents are beginning, and finishing, TT much later than in previous decades. This delay, as we’ve seen, has been gradual. In the early 1900s, the onset of TT was very early; that began to shift later beginning in the mid-1900s and has continued to trend towards older ages ever since. Now, most parents don’t start TT until around 2 years and many don’t finish until 3 years or even later.[21] (This trend has also been occurring in other western developed countries, not just the U.S.[22]) The two predominant “methods” are still the Brazelton-Spock approach (aka the child-centered approach) from the 1960s and the Azrin-Foxx approach from the 1970s (aka TT in a day). (If anyone has any better ideas, we’re about due for a new method, right?)

Which One is “Better?”

The short answer is “we don’t know.” The slightly longer answer is “we don’t know, but . . .” There isn’t a ton of good data on either method. (As one team snidely noted: “Toilet training is not a subject that invokes passion among researchers”).[23] And both tactics are “equally capable of achieving toilet-training success in healthy children.”[24]

The American Academy of Pediatrics formally recommends a gradual, child-oriented approach. The child is introduced to the potty and gets accustomed to sitting on it with his clothes on. Then he sits on it without his clothes; then he’s placed there after diaper changes. The last stage involves short periods of diaper-free play with lots of encouragement to use the potty. This whole process takes place over a period of many months, and there is ample potty talk throughout. But there really isn’t much evidence that shows this method is necessarily “right.”[25]

In 2006, a team of authors working for the U.S. Agency for Healthcare Research and Quality published a review of TT methods. They considered 26 studies and 8 randomized controlled trials, some of which involved healthy children and some of which involved handicapped children. (The studies weren’t perfect, because they didn’t directly compare both methods, but this report is the best we have at the moment.) The review authors found that “for healthy children, the Azrin and Foxx method performed better than the Spock method.” To be fair, the crash method and the gradual, child-oriented approach both worked. They both “resulted in quick, successful toilet training.” But the crash course was slightly better – it yielded “rapid success rates at relatively young ages” – and maintained results.[26] As the authors summarize: “It appears from the literature that parents who want quick results should consider the Azrin and Foxx method of toilet training but must be prepared for a regimented approach and should use positive reinforcement. For parents who are not prepared to put as much focus into attaining continence, the child-oriented approach can be successful but may take somewhat longer.”[27]

Importantly, recent studies of the Azrin-Foxx method indicate that rewards or praise (verbal or culinary) help with training, while reprimands or time outs probably work against you. There are tons of spin-offs and variations of the Azrin-Foxx method out there – the best and most approachable one I’ve found (and the resource I plan to use when I give this whole thing a go) is written by Meg Collins at LuciesList – you can find it here.

When Should I Start?

The literature varies in terms of what it says about “readiness.” Some authors indicate that most kids are ready to start TT around 18 months (and virtually no one suggests that kids are ready before this), but some suggest later, noting that “mastery of the developmental skills required for toilet training occurs after 24 months of age.” Most authors reason that children can complete TT by 24-36 months of age.[28] In reality, readiness is going to vary from family to family and child to child. It’s all relative. These choices depend on your goals and priorities as a parent, not to mention your time and personal “best strategies.” Plus, there are a host of different behaviors cited as “readiness indicators” in children. (One study cited 21 signs of readiness, only to explain that “there is no consensus” on how many or which signs to rely on. That’s super helpful.) The visible TT readiness signs that consistently show up in the literature include the abilities to: walk/sit/stand up, pull underwear/pull-ups up and down, follow simple directions, signal for the potty, say no, and recognize a dirty diaper.[29] Hsi-Yang Wu, a pediatric urologist on faculty at Stanford who’s written about toilet training methodologies, explains that the multitude of readiness markers proposed occur at widely varying ages and offer “no guidance to physicians or parents on when to start TT.” “At a minimum,” he says, “the child must be able to signal to its parent that he or she needs to urinate.” For most western children, this occurs quite late – around 28 months on average for girls and 33 months on average for boys. For Wu, a “reasonable approach” is for parents to think about TT once their child can: 1) communicate the need to go to the bathroom and 2) stay dry for a couple of hours during the day.[30] (Here’s the link if you are interested in checking out Wu’s article. Word of caution: academic toilet training literature is a dense thicket, but Wu’s piece is one of the most approachable.) Based on what researchers report, it seems like if parents aren’t sure whether to start, it’s best (most efficient, anyways) to wait a little bit. “Although earlier initiation of intensive toilet training is associated with earlier completion, overall training duration increases.”[31] Studies bear this out – starting TT earlier doesn’t necessarily mean finishing TT earlier (although it can), but it almost always means longer TT. Most of all, it’s not worth it to get too hung up on the exact timing – as Wu states, “there is currently no evidence that a specific timing or method of TT is more effective.”[32]

So – all in all – parents and pediatricians don’t have a ton of good evidence they can rely on for help with TT questions. But, as usual, there are some points of continuity. Here are my takeaway nuggets:

Doctors have been increasingly stressing that children are incapable of autonomous toilet use until later than previously thought

Parents have been continuously training their kids later and later

Timing is somewhat important, but it’s not an issue worth stressing over

Whatever TT method you use will eventually work

Do you know how you were toilet trained? What’s your experience with toilet training your kids?

How have professionals advised parents to manage temper tantrums? What strategies have stood the test of time?

The historical medical literature on tantrums – scant though it is – comprises one of the most consistent bodies of writing I’ve encountered thus far in my work for this blog. Parents, pediatricians, and other professionals who work with children have mostly been singing the same tune with regard to what causes tantrums and how parents can respond to them . . . with one major exception: corporal punishment. The overwhelming message is that tantrums are simply an unavoidable parental drudgery, almost a rite of passage to suffer through. The pool of evidence here is disappointingly small; scientific medicine has barely weighed in on tantrums (although armchair psychiatrists in the first quarter of the twentieth century had some bizarre opinions). Professionals discussed tantrums in advice literature aimed toward parents, yet they rarely conversed about tantrums amongst themselves. I encountered virtually no discourse on tantrums within academic journals prior to the 1970s, and no real body of literature on tantrums existed before the 1990s. One comment in 1991 that there was “surprisingly little in the [medical] literature about temper tantrums” seems like a gross understatement.[1] (And, the handful of recent projects seeking to study tantrums are frustrating to navigate; in a 2003 analysis of 335 young children in Madison, Wisconsin, the authors converted parents’ “tantrum narratives” into impossible-to-comprehend charts they called “tantrugrams,” complete with “tantrum termination hazard plots.”(!)[2]) Pediatrics’ quietude on tantrums is not unexpected. The field is predominantly (at least in theory) devoted to the study and implementation of biomedical, evidence-based child care; it is concerned with parenting, of course, but only insofar as it demonstrably affects children’s lives. Since pediatrics is not about the scientific art of parenting, it would certainly be unfair to hold it accountable as such.

“A Fairly Common Affliction”:[3] In Search of Tantrum Epidemiology

We don’t need science to tell us that tantrums are ordinary, but it does. Sort of. Every source I found discussed how common tantrums were; almost none provided numbers, yet common sense and practical observation substantiate the claim (at least in the U.S.). (Also relying heavily on common sense, I’ve opted not to define a tantrum here; we all know one.) Tantrums are virtually a universal parental challenge – a completely “natural” phenomenon.[4] Typically, they begin around 15-18 months of age, and continue until about 3 years. For many families, coping with tantrums is a daily chore. Outbursts typically last only 2 to 5 minutes, but a few minutes can feel like an eternity, and can exert far-reaching disruptive effects. Speaking of the effects of tantrums, one physician wrote in The Medical Journal of Australia in 1972 that “parents may find that they do not like their child, a regrettably frequent phenomenon.”[5] One of my favorite items I encountered in this research was a subheading in a 1957 text for parents entitled “Annoying Characteristics of the Developing Child.” (If this wasn’t amusing enough, the first sentence began: “Any parent could say a great deal about this subject….”)[6]

Despite their ubiquity, however, tantrums have always held a certain association with deficient or inappropriate parenting (or parents).[7] This is no bygone idiosyncrasy – it persists in modern literature and cultural norms. Certain parenting styles, behaviors, or responses may “encourage” or “foster” tantrums, a 1991 piece in the American Family Physician explained.[8] Who hasn't felt judged with a bawling tyke? This is not very surprising. Indeed, many if not all the topics covered in this blog abut uncomfortable questions about whether parents play causative roles in conundrums ranging from behavioral developmental delays to nutritional deficiencies to sleep problems. Tantrums are no exception.

So why do they happen? In short – for all kinds of reasons. Tantrums are the regrettable outcome of a perfect alignment of circumstances:

1) Kids are at the “right” developmental stage. Toddlers are walking, talking developmental milestones. They are acquiring new motor, communication, and mobility skills daily. They are becoming individuals. They have pursuits. Independent pursuits. Yet their autonomy, not to mention their logical reasoning and language capabilities, is still obviously and grossly limited.[9] A physician interviewed for the 1943 Better Homes & Gardens mothers’ handbook explained it this way: “your child has an indomitable urge to exercise his newly developing function[s].”[10] Tantrums are a titanic clash of a child’s budding independent spirit with his parents’ bidding.

2) There is often some background disturbance. Background contributors are endless: hunger, tiredness, overstimulation, exhaustion, being too cold or too hot, thirst, etc.

3) A child encounters an inciting trigger. Triggers can be literally anything. (If you’re in need of a good laugh, try this: google “kids crying for ridiculous reasons.” Some of these are truly hilarious. Here are some of my favorites.) To anyone who has spent even a brief amount of time with a young child, the notion that his “being thwarted” is a major trigger for tantrums and outbursts is as obvious as the sky is blue. This truism has been in place throughout the 20th century, and I think the language is just perfect: thwarted.
Here is a representative description from the 1950s: “when a child is frustrated, prevented from doing what he wants to do, or made to do what he does not want to do, he goes into an unthinking rage.”[11]

How to Manage a Tantrum? Trick Question. Don’t.

As I mentioned, up through the mid-1900s, American society was accepting of various forms of physical punishment. Cultivating deep faith in measured discipline, psychological experts touted the significance of maternal distance, authority, and scheduling in child-rearing. Corporal punishment and spanking were commonplace, but some writers espoused alternative methods. In her 1925 advice book, Training the Toddler, Elizabeth Cleveland explained that for seemingly endless violent tantrums, parents might find the use of cold water effective – administered through a bath or shower or by “a dash of cold water in the face.” “This should be done in a common sense way,” Cleveland advised, “with no effect of violence, as a curative treatment rather than punishment.”[12]

In the 1950s, the arrival of Dr. Spock symbolized a turning of the tides. He famously told parents to “trust themselves.” Some professionals started to admonish physical discipline as “harmful.”[13] Punishment came under fire, but continued to appear in the literature as a possible last resort for parents. One 1957 book advised that “smacking should usually be avoided” – (usually!) – admitting that “sometimes smacking may help in the early stage of a tantrum.”[14] A 1962 text similarly explained that “a sharp decisive spank to indicate that the parent means business can be very effective.”[15] Spock’s own evolution mirrored the decades-long shift. In 1945, he wrote that he was not “particularly advocating spanking,” but saw it as “less poisonous than lengthy disapproval….” By 1985, he “deplored” it.[16] Sadly, corporal punishment still exists – I have neither the inclination nor the space to discuss this much here, but its prevalence has continued to motivate pediatric literature to denounce it.[17]

Best Practices

If we turn a blind eye to the topic of corporal punishment, the rest of professionals’ suggestions on handling tantrums are very stable over time. I’ve distilled things into an overarching credo and several bullet points for tantrum management. They will likely be familiar.

The Credo: “Stay Calm and Carry On”

This speaks for itself, and parents everywhere are likely trying to stay calm. I am, at least. Like it or not, professionals have been advising parents in the storm’s eye of a tantrum to practice calmness and patience for over a hundred years. In the 1950s and today, “the more casual the parent the easier it is to manage the tantrum.”[18]

Strategies 1 – 3: “the essence of treatment lies in prevention”[19]

Pediatricians point to prevention as a parent’s best defense against a tantrum. This too seems obvious, but it is such a striking point of agreement across professions and over time that it’s worth conveying. In 1925, Elizabeth Cleveland summarized that “if thwarting a child’s purposes stirs him to anger, the thing to attack is not the anger, but the purposes.”[20] A 1980 advice book explained: “there is a vast difference between keeping a child from an object and seizing the object from him once he has it.”[21] So true. Some authors offer various tricks to help prevent meltdowns: childproof thoroughly, remove tempting or dangerous items from children’s purview, employ diversion and distraction to change a child’s course (“children are tremendously suggestible,” Better Homes & Gardens noted in 1943),[22] and minimize background causes to the extent you are able.[23] One writer in 1980 recommended that parents try being “clever and resourceful” to get their child to do what he needed to. If such attempts failed, he wrote, the parent should “be more skillful next time.”[24]

Strategies 4 – 6: “the parent has to be victorious”[25]

Again, advisors across disciplines and decades warn parents not, under any circumstances, to succumb to children’s desires. If the child “wins,” this is likely to set a pattern, and, as one physician quipped in 1943, the child will be “as unhappy as the folks around him, for no child of 18 to 27 months possesses the judgment to run his own life.”[26] Ten years later, one parents’ guide explained that a child “must know that he does not get his way with tantrums. That is the absolute rule the parent must follow.”[27]

Strategy 7: “Isolate and Ignore”[28]

If strategies 1-3 (prevent!) failed, then the best way to succeed in strategies 4-6 (don’t give in!) is to not react at all to a child’s outburst. “The best way to treat a tantrum is to ignore it,” wrote an insistent author in 1957. “A display of indifference is a much more severe and effective punishment than any disciplinary method.”[29] For greatest efficacy, non-reaction should be complete, and isolating a child can help: “tantrums require an audience,” wrote Elizabeth Cleveland in 1925, “and few children will indulge in one [if] there is no one to see.”[30] Modern physicians uphold these directives. In 1991, a piece in the journal American Family Physician read: “temper tantrums are best handled by ignoring the outburst.”[31] In 2009, the American Academy of Pediatrics suggested that parents “let the tantrum end itself.”[32] Most of this advice is grounded in experiential knowledge, but the very few studies that have sought to measure and assess tantrums in any way indicate that ignoring them probably is the best way to go: intervening in a tantrum correlates (modestly) with a longer one.[33] As one piece described, “the more consternation the outburst provokes on the part of the parents, the longer it tends to continue.” (Those reports also indicate that children who tend to get their way from tantrums throw about twice as many tantrums.)[34]

And there you have it. Be calm, prevent, don’t give in, and then wait it out. Toddlers have successfully stumped generations of parents and physicians. I keep thinking of one of my family’s favorite children’s books right now, about a family who encounters myriad obstacles in search of a bear. As the family meets each stumbling block, they all sing out: we can’t go over it, we can’t go under it, we’ve got to go through it! And so it is with tantrums.

*Addendum: Tips and Tricks for Surviving Tantrums

Reconcile after a tantrum.

Minimize your requests. Professionals suggest that parents try to acquiesce to young children whenever possible. (“Very many tantrums arise as a result of a totally unreasonable insistence on something which does not matter,” wrote Ronald Illingworth in 1957.[35])

Be consistent, both in how you respond to individual episodes as well as with your partner.

Count the clock. Most tantrums are finished by five minutes.

Limit time outs to one minute per year of age.

Model, encourage, and praise verbal communication.

Follow predictable routines to the extent you are able.

Offer choices as much as possible to help attend to a toddler’s growing sense of autonomy.

Not too surprisingly, the relationship between breastfeeding and weight loss has not been a huge source of medical investigation. I had questions about this while I was nursing, and found some excellent pieces explaining that the scientific evidence that breastfeeding aids with weight loss is dubious. Check out Amy Kiefer’s take on this at her wonderful blog, expectingscience.com, here. As Kiefer demonstrates, it’s quite clear that women cannot depend on nursing to help them lose weight after giving birth.

So where did this idea even come from? It appears the notion was promulgated in the 1970s, when breastfeeding numbers took off after decades of low rates.[1] (I discuss this trend briefly in my post about feeding babies milk.) Treating breastfeeding as a weight loss aid was likely a nice selling point for health advocates and feminists alike who were invested in promoting breastfeeding. The rhetoric still serves the same purpose in the unsettled public discourse about breastfeeding today: besides being a boon for your baby, nursing will help you regain your femininity, your figure, your pre-pregnancy wardrobe, yourself. The idea was – erroneously – “generally accepted” by about 1980, and still is today.[2]

Prior to the 1980s, there was scant professional medical writing on how nursing affects postpartum weight (and almost none on humans). The couple of studies that addressed the topic indicated nursing probably had little, if any, effect on weight loss. One 1957 study of how reproduction influences women’s overall body weight was published in The Journal of Endocrinology. It announced that “lactation had little influence on mean weight.” Breastfeeding consistently “resulted in a small loss during the period of lactation, but its effect was almost eliminated at 24 months after delivery.”[3] A 1975 study that looked at 42 women in either a breastfeeding or a control group indicated that both groups lost “considerable” weight in the six months after they gave birth. (Breastfeeding did promote the break-down of fat, however.)[4] For the most part, serious academic scrutiny of this relationship was confined to the dairy industry; in humans, the issue was “poorly described.”[5] For example, some studies looked at how lactation influenced immediate postpartum weight loss – as in the first week or so after childbirth.[6] (This may be an interesting medical query, but most women I know aren’t necessarily interested in this – they want to know whether nursing will help them return to their pre-pregnancy selves any more efficiently.)

More researchers started to explore the relationship between breastfeeding and postpartum weight loss beginning in the 1980s, probably due to a combination of factors. Breastfeeding was much more prevalent, as was the notion that it could aid with weight loss; the science of breastfeeding was expanding; and American physicians were starting to explore the seeds of the nation’s growing weight problems more aggressively. A 1983 study of 31 women in Connecticut concluded – go figure – that calories consumed were the most telling indicator of postpartum weight loss. Because they ate fewer calories, the women who breastfed the least actually lost the most weight! The authors argued that postpartum weight loss boiled down to calories more than anything else, and announced that breastfeeding “does not promote weight loss in well-nourished women.” “Apparently,” the authors surmised, “the state of lactation leads women to consume more calories so that the body weight loss is small.” They were so convinced of their position that they even suggested the recommended dietary allowances (RDAs) of food for nursing women be lowered.[7] A separate project in 1988 indicated that a woman’s pre-pregnancy weight was an important determinant of postpartum weight loss. It was unable to weigh in on how lactation influenced postpartum weight change, since it did not measure lactation duration at all, but it nevertheless mentioned that “previous research” suggested breastfeeding for at least 2 months increased weight loss. The previous studies cited, however (one of which was the 1957 study described above, another of which only measured weight loss eight days after birth, and the other two of which hardly even addressed lactation), were inconclusive.[8]

By the 1990s more thorough investigations were underway, although most still pointed out how difficult the topic was to study. There were some mixed results, but in hindsight things appear clearer. Put simply, none of the work convincingly demonstrated that breastfeeding could reliably help women lose more weight after giving birth. A 1991 research project looked at over 400 women when they were 6 weeks and 12 months postpartum. The results “showed that there was no consistent relationship between weight loss following pregnancy and method of infant feeding.” What mattered most? Weight gain during pregnancy. “It is concluded,” the authors assessed, “that the method of infant feeding, bottle or breast, does not influence weight loss following pregnancy.” However, the authors classified women as breastfeeding based only on whether they nursed for the first six weeks. Thus, if a woman nursed for six weeks, that “counted” to put her in the breastfeeding group, along with women who nursed for six months, nine months, or the full year.[9] I love this approach because it mimics many women’s maternity leave, but it also has some obvious shortcomings. Another extended study in the 1990s followed women for 2 years after delivery and divided them into a breastfeeding group (who nursed at least 12 months) and a control group (who nursed less than 3 months). The project authors determined that the breastfeeding group experienced greater weight loss, especially from 3 to 6 months postpartum. But what was the overall difference? At one year, the average was 2 kg (4.5 pounds); the women who breastfed for one year had lost about 4.5 pounds more than formula-feeding mothers, on average (some had gained weight).[10] Hmmm. A 1998 review concluded that weight loss rates didn’t differ for breastfeeding and non-breastfeeding women after birth.[11]

The most recent research puts things into better perspective. A study in 2014 looked at breastfeeding women vs. controls at 6, 9, and 12 months postpartum. The breastfeeding women (who breastfed exclusively for at least 3 months) had lost more weight than their counterparts at all checkpoints – but look at the numbers: at 6 months, they had lost 1.3 pounds more; at 9 months, they had lost 3.7 pounds more, and at 12 months they had lost 3.2 pounds more.[12] Three pounds. That’s it. This strikes me as uninspiring. It’s interesting that this study’s authors chose to highlight that breastfeeding could influence postpartum weight loss, while the authors of a review analysis in the UK used studies just like this to emphasize instead that “there is currently insufficient evidence to suggest that BF [breastfeeding] is directly associated with postpartum weight change.”[13] The review explained that the “methodological rigour of many of the studies [looking at breastfeeding and postpartum weight loss] is questionable.” This may sound unremarkable, but it is a major point of criticism in the scientific community. Ultimately, the project concluded that nursing may help some women lose weight, but definitely not all women; its findings “undoubtedly challenge[d] the common belief portrayed across scientific literature that BF [breastfeeding] promotes weight loss.”[14]

To me, the picture is uncomplicated: for as long as researchers have been studying this question, the evidence has indicated that breastfeeding does not have a substantial influence on postpartum weight loss.

So why does the notion persist? In part, inaccurate ideas about breastfeeding and weight loss probably stem from simple logic. Breastfeeding burns calories. It demands more from your body. But it absolutely does not guarantee weight loss. The caloric expenditures that breastfeeding offers come with a major caveat – women tend to (want to) eat more while nursing. I was ravenous when I was nursing – in a way I never experienced during my pregnancy. I was nearly always hungry and rarely felt satiated; food was never far from my mind. (Although there aren’t formal scientific studies on this, anecdotal evidence and testimony from obstetricians and women also indicates that the last ten or so pounds of pregnancy weight can be very difficult to shed for some breastfeeding women who store fat to help them produce milk.[15] I experienced this myself.) Unfortunately, several outlets – including professional medicine – continue to claim that breastfeeding mothers universally lose more weight, and faster, than their non-breastfeeding counterparts.[16] Others, more admirably, confront the nuances. For example, one ABC piece reported on a 2004 study in The American Journal of Clinical Nutrition concluding that non-breastfeeding women lose body fat more quickly than breastfeeding women. To be fair, this is a real challenge for researchers – it’s difficult to cleanly define breastfeeding vs. formula-feeding groups, for one thing. And, there are numerous confounders; women who breastfeed “are systematically different from those women who do not choose to breastfeed.” This, of course, makes it difficult, if not impossible, to tell whether any recorded variations are attributable to breastfeeding. Plus, many of these studies are based on self-reporting; other projects that actually measure women’s weight employ their own scales, and their devices can vary within 0.5 pounds in terms of accuracy. When you are only talking about 1-5 pounds, 0.5 is noteworthy. Not to mention that individual weight can easily vary 2-3 pounds on any given day.

Although the research could change at any time, in this case a reversal seems highly unlikely. The historical trajectory of the work on breastfeeding and postpartum weight loss is quite stable. If breastfeeding does exert any influence on postpartum weight loss, it is very small – this has been true as long as researchers have been studying it. As Amy Kiefer explains, the variations are “so tiny as to be trivial.” They are at most a couple of pounds. “Despite burning a considerable number of calories,” Kiefer says, “breastfeeding has a negligible effect on body fat and total body weight for most well-nourished women.”[17] There are lots of great reasons to breastfeed. There are also lots of great reasons not to. Based on the work that’s been done thus far, postpartum weight loss shouldn’t factor into women’s decisions about infant feeding.

Where and when did the notion that pregnant women should “eat for two” originate? How much weight has medicine advised pregnant women to gain over time?

Heads Up: This is a lengthy piece (even for me).

I thought the concept of “eating for two” probably originated in the 1960s or 1970s as a cushy “what-to-expect” backlash against previously stringent weight-gain recommendations. I was wrong. I couldn’t even find the phrase’s beginnings. Apparently, women and doctors have been trying to rebut the “eating for two” myth since the 1800s.

Check out these nuggets:

In an 1866 advice manual, E.G. Cook stated: “the idea is erroneous that it is well ‘to eat for two people.’”[1]

In an 1891 book titled Parturition without Pain: Code of Directions for Escaping from the Primal Curse (nineteenth-century authors had a gift for composing undisguised, precise titles, didn’t they?), M.L. Holbrook described the notion of “eating for two” as a “common error” and a “thorough delusion,” besides being an absurd idea.[2]

In her 1901 handbook, What a Young Wife Ought to Know, Emma F. Angell Drake urged that “the false notion that the pregnant woman ‘must eat for two,’ and so proceed to indulge her appetite to the utmost, should be corrected.”[3]

In a 1919 medical text on fetal nutrition, Morris J. Slemons explained that “popular opinion holds that during pregnancy the mother ‘should eat for two.’ This doctrine is erroneous.”[4]

“It may surprise you,” shared Carolyn C. Van Blarcom in her 1922 guidebook, Getting Ready to be a Mother: A Little Book of Information and Advice for the Young Woman Who is Looking Forward to Motherhood, “to learn that you need not ‘eat for two,’ in quantity, as is so commonly believed necessary . . . .” [5]

In a 1935 maternity care book, Claude Edwin Heaton remarked that “there is no foundation for the old belief that the pregnant mother must eat for two.”[6]

In the Pennsylvania Medical Journal in 1949, Robert Willson explained that “the ancient belief that the pregnant patient must ‘eat for two’ has no basis in fact since the additional dietary needs during pregnancy are only slightly greater than for the non-pregnant woman.”[7]

In a 1962 piece in The Lancet, Albert Bauer lamented that “unfortunately, the old axiom that the expectant mother should ‘eat for two’ is still widely accepted by the public and even by some of our profession.”[8]

I could go on, but the point is clear: Americans have been working to expose the notion that pregnant women should “eat for two” as a misconception for more than 150 years.

So, if medicine has never endorsed the “eating for two” meal plan, what did it recommend in terms of women’s weight gain?

Medical practitioners may have been united in their historical disavowal of eating for two, but they agreed on little else about weight gain during pregnancy. From the late 1800s onward, advice to pregnant women about how, when, what, and how much to eat changed like the wind. This piece focuses (almost) exclusively on one of those components: the question of how much weight women should gain, ideally, during a normal, healthy pregnancy. The science of nutrition is one of the most complex, least understood components in human health, and thus in some ways it is no surprise that medicine’s understanding of nutrition during pregnancy has always been lacking. And yet the extent of this void in information is still remarkable.

This piece explores how the ideas about weight gain during pregnancy have changed since the late 1800s. We’ll go on a chronological tour of advice about eating and weight during pregnancy, discuss the major debates over time, review some of the historical constants, and finally, try to draw some sense from this story.

Since at least the mid/late-1800s, commentators have been concerned with pregnant women’s diets. Around the turn of the century, they generally opined that healthy eating was significant to a healthy pregnancy, but that a woman did not need to eat much more (if at all) while she was pregnant, except perhaps towards the very end of her pregnancy. “There is no diet specifically adapted to the state of pregnancy,” one textbook explained. “A diet which has previously been ample will likewise be sufficient throughout pregnancy.”[9] Within this broad framework, advice literature provided detailed directives regarding pregnant women’s food consumption. Most centered on strategies for minimizing it. In lieu of summarizing these tips, here are:

The Top Five Commandments of Eating While Pregnant, around 1900:

1. Thou Shalt Eat as Thou Normally Does

The idea that pregnant women should continue eating as they typically ate abounded. “In normal pregnancy there is no indication for a special diet. You should eat the foods you are accustomed to and enjoy, provided a well-balanced diet is taken,” wrote one commentator.[10] As another put it, “the woman who is eating correctly anyways, will not have to vary her diet during pregnancy.”[11]

2. Thou Shalt Masticate Thoroughly

Writers were very adamant that women chew their food very fully.[12]

3. Thou Shalt Not Foolishly Gratify Thy Whims and Longings (or, Thou Shalt Constantly Guard Against Overeating)[13]

Women were warned not to give in to culinary temptation or cravings.[14] They were not to “be persuaded to humor and feed [their appetite’s] waywardness.”[15] Instead, women learned that their appetites “should be kept under in pregnancy as carefully as at any other time, rather than otherwise.”[16]

The first commandment informed every other: at the heart of turn-of-the-century ideas about weight gain during pregnancy was the broad notion that pregnant women did not need to eat much more or differently than anyone else, especially given the prevailing sentiment that most Americans ate too much to begin with.

“The fact is more likely,” wrote M.L. Holbrook, “to be the seeming paradox that enough for one is too much for two.”[19]

Into the early 1900s, as the origins of modern prenatal care emerged in the U.S., actual numbers began to figure into conversations about pregnancy and nutrition. Recording weight quickly became part of the routine prenatal visit, and keeping numbers low became increasingly important. By mid-century, it was standard to measure a pregnant woman’s weight at every visit, at least partly as a strategy to limit gain.[20] One doctor reflected that in the 1950s, prenatal exams were centered on the weigh-in.[21] Already by the 1920s, doctors endeavored to limit maternal weight gain to just 15 pounds – mostly in an attempt to facilitate smoother labors and deliveries, but also to “preserve the woman’s figure after birth.”[22] The lower the gain, the better.

The Mid-1900s: Watch Your Weight, Pregnant Ladies

For the most part, medicine maintained, if not intensified, this strict advice throughout the mid-1900s. “The need for limiting weight gain during pregnancy is almost universally accepted. A strict diet regimen is essential in most cases,” expressed one research group in its article “Control of Weight Gain in Pregnancy.”[23] The sentiment that pregnant women did not need any extra food proliferated in this “era of stringent dietary restriction.”[24] As the 1950 edition of Williams Obstetrics stated: “in normal pregnancy the diet should be no more or less than that to which the patient has been accustomed.”[25]

The standard guidance was that women should only gain between 15 and 18 pounds (some even thought 15 pounds should be the upper limit) – just enough to account for the “matter” of pregnancy (a fetus, the placenta, extra blood volume, etc.).[26] “The individual who begins pregnancy at her normal weight need gain no more than the amount she will eventually lose following delivery and the completion of lactation,” explained physician Robert Willson.[27]

This was urgent advice, to the extent that physicians regularly prescribed medications to assist with the goal of minimal weight gain. (They doled out diuretics and various amphetamines, including Dexedrine – now used to treat ADHD symptoms and narcolepsy – to limit fluid retention and curb appetites.)[28] Whether they resorted to pharmaceuticals or not, doctors tended to be insistent about this. “I suspect a fair number of our patients are made quite miserable throughout their pregnancy,” observed one obstetrician, “by our zeal in stressing this phase of prenatal care [weight gain].”[29]

Importantly, some outlying physicians did take issue with these limitations, and advised their patients differently. Some “let” women gain 30 pounds. Others simply thought the imposed limits were unjustified. One physician at the University of Pittsburgh remarked that “the routine, rigid restriction of a pregnant woman’s diet is inappropriate and unwise, and places an unfair emotional strain on the mother-to-be. Doctors should base diet advice on the specific nutritional needs of the individual patient, not arbitrary weight scales and charts.”[30] Perhaps it was from this initial seed of dispute that things began to change in the 1970s.

From Willpower to Starvation: Changing Ideas in the 1970s and 80s

Up until 1970, the overwhelming practice among U.S. obstetricians was to curb weight gain during pregnancy as much as possible. In 1970 the National Academy of Sciences National Research Council (NAS) released a set of guidelines that initiated a new trend: “liberalizing weight guidelines and viewing [dietary] restriction as a form of relative starvation.”[31] The NAS report encouraged pregnant women to “eat to appetite” and aim for the “normal” weight gain of 24 pounds. This was a major turning point, a clear break from the harsh limits set in the past.[32]

The fresh, relaxed perspective stemmed from concerns about underweight babies (and, tangentially, the nation’s atrocious infant mortality rates).[33] The NAS expressed that ungrounded recommendations limiting women’s weight gain were “in effect contributing to the large number of low-birth-weight infants and to the high perinatal and infant-mortality rates” nationwide. Instead of protecting women and babies, in other words, the NAS report implied that doctors’ widespread habit of preventing pregnant women from gaining weight was harming them.[34] Soon after, U.S. medicine started forwarding a more liberal idea of the ideal weight gain range: 20-25 pounds.[35] Some texts permitted 30 or even 35 pounds.[36] New laws mandated that diuretic labels indicate that pregnant women should not take them.[37] Williams Obstetrics began warning practitioners about the dangers – rather than the merits – of severe dietary limitations.[38]

By the 1980s things were very different: New York Times health correspondent Jane Brody reported that the “widespread practice among obstetricians” was to encourage “maximum weight gain during pregnancy” – a far cry from decades prior.[39] (That was not exactly true – the broad recommendation was to gain between 20 and 30 pounds, but it understandably appeared unbridled compared to the previous limitations.[40]) Furthermore, doctors in the 1980s were much more anxious about pregnant women gaining too little weight.[41]

The 1990s: Controversy Sowed

In 1990 new guidelines advised individualized recommendations based on women’s pre-pregnancy weight. Instead of a blanket weight gain range, the new advice laid out various ranges depending on a woman’s body mass index (BMI) when she became pregnant.[42] For women with a normal BMI, a gain of between 25 and 35 pounds was considered appropriate. Thus, a weight gain of 30 pounds was “normal,” where it would have been regarded as “excessive” or “dangerous” a few decades earlier.[43] (As a further note here, the 1990 recommendations were a milestone – it was the first time that recommendations varied by pre-pregnancy weight. But doctors have been doling out different advice about weight gain to different women, depending on their weight, for some time. Since at least the 1930s, medical articles and public health publications have consistently expressed that underweight women might need to gain more during pregnancy and overweight patients might need to gain less.[44]) The even more tolerant 1990 guidelines met a wave of criticism. Many researchers thought the new recommendations were far too broad and needed more evidence.[45] One article in The Lancet encapsulated many critics’ fears. “The evidence for a population-wide strategy of liberal weight gain during gestation is weak in industrial nations,” the authors wrote, “and . . . this potential harmful policy represents an inappropriate response to problems rooted in inadequate preconception/prenatal care and social deprivation.” They noted that expecting mothers should probably gain at least around 15 pounds, but warned that the new 25-35 pound allowance fell “in the opposite direction.” Although the directive was intended to improve infant welfare, the study explained, “a logical policy response is surely not to encourage overnourishment in all pregnancies, but rather to promote preconceptual nutritional counseling and close monitoring of third-trimester weight gains for thin women.” Essentially, these authors considered the liberalized weight gain guidelines a “misguided” attempt to solve the problem of low birth weight. “We are at a loss,” they concluded, “to understand the logic of encouraging millions of women to overeat during pregnancy . . . .”[46]

The 21st Century: Controversy Continued

In the 2000s and beyond, the back-and-forth about “ideal weight gain” during pregnancy continues. Many researchers are especially concerned about implications for the obesity epidemic. More and more physicians are questioning what constitutes “excess” weight gain during pregnancy, and how it might be linked to long-term weight as well as public health.[47] Some of them see pregnancy as a unique window to combat the obesity epidemic: “putting the brakes on weight gain during pregnancy may be an opportunity . . . to break the cycle of obesity,” one reporter summarized. “Similar to smoking-cessation programs, pregnancy provides a unique opportunity for behavior modification given high motivation and enhanced access to medical supervision . . . Pregnancy is an optimal time for health care providers to offer their resources to decrease maternal obesity and comorbidities, thus affecting current and future generations.”[48] The American College of Obstetricians and Gynecologists (ACOG) issued its most recent committee opinion on weight gain in pregnancy in 2013 – it reflects the NAS guidelines, and is stratified by BMI. (If you don’t know your BMI, you can check it using this calculator from the CDC.)
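If you’d rather skip the calculator, the arithmetic is simple. Here is a minimal sketch in Python, purely for illustration: the piece above only quotes the 25-35 pound range for normal-BMI women, so treat the other category ranges – the commonly cited 2009 figures – as a hedged approximation rather than a quotation of ACOG’s table.

```python
def bmi(weight_lb, height_in):
    # CDC formula for BMI from pounds and inches: 703 * weight / height^2
    return 703.0 * weight_lb / (height_in ** 2)

# Gestational weight gain ranges (pounds) by pre-pregnancy BMI category.
# Only the normal-BMI row (25-35 lb) is quoted in the text above; the other
# rows follow the commonly cited 2009 guidelines and are illustrative only.
GAIN_RANGES = [
    (18.5, "underweight", (28, 40)),
    (25.0, "normal weight", (25, 35)),
    (30.0, "overweight", (15, 25)),
    (float("inf"), "obese", (11, 20)),
]

def suggested_gain(weight_lb, height_in):
    b = bmi(weight_lb, height_in)
    for upper_bmi, label, (low, high) in GAIN_RANGES:
        if b < upper_bmi:
            return b, label, low, high

# Example: 5'5" (65 inches) and 140 pounds before pregnancy
b, label, low, high = suggested_gain(140, 65)
print(f"BMI {b:.1f} ({label}): suggested gain {low}-{high} lb")
# BMI 23.3 (normal weight): suggested gain 25-35 lb
```

The exact cutoffs matter less than the structure: unlike the single 20-30 pound window of earlier decades, the advice now shifts depending on where a woman starts.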

ACOG’s own statement acknowledges that many critics regard these guidelines as excessive (particularly for the higher BMI categories), but encourages physicians to rely on these recommendations as a “basis for practice.”[50]

Central Threads of Conversation Over Time

Digging deeper, a handful of issues stand out as predominant concerns for why any of this has mattered in the first place. In other words, society and medicine have considered pregnant women’s weight important (and sought to better understand it) for different reasons at different times (clearly). In identifying key points of discussion, there is a constant tension between mom and baby – does weight gain matter more for a mother’s health, or for a baby’s? The answer, of course, is both, but doctors and mothers alike have expressed variable priorities over time. Here are some of the central, consistent threads of conversation coloring the history of “eating for two.”

1. Fetal Size

Research has gone back and forth on the question of whether maternal weight gain is a determining factor in fetal size since the nineteenth century. In the 1800s, doctors thought that women who gained less weight while they were pregnant gave birth to smaller babies, and they considered this desirable because they judged that birth would be easier (and safer) with smaller fetuses. (It didn’t hurt that this also aligned with prevailing ideals of feminine beauty – i.e., that women be slender mothers). Later, obstetricians who could rely on much safer birthing rooms and C-sections produced evidence indicating that smaller babies were, generally, less healthy than heavier babies. Thus, growing a smaller fetus was desirable in the late 1800s but exposed as disadvantageous (because it was associated with less healthy babies) in the twentieth century.

But, the actual empirical evidence about the connection between maternal prenatal weight gain and fetal size has been all over the map, with some researchers attesting to their close (probably causal) correlation while others contended that the association was illusory. In the decades around 1900, the correlation was widely accepted. Then researchers challenged it. In 1935 one writer asserted that “there is not the least bit of scientific evidence to show that you can have a smaller baby by restricting your diet.”[51] In a 1945 scholarly article, two authors agreed that there was absolutely no correlation between maternal weight gain and a baby’s birthweight.[52] Then researchers reconsidered, disagreed with one another, and forwarded more nuanced opinions. In the 1960s, for example, one article suggested that there was a small correlation: “only about 2.5% of the maternal weight gained beyond 10 lb. can be demonstrated as additional fetal weight,” the authors proclaimed in Obstetrics and Gynecology.[53] The next year, in the same journal, a separate research team concluded that there was a “strong association between weight gain during pregnancy, prepregnancy weight of the mother, and the birth weight of the baby.”[54] Put simply, studies have investigated the relationship between maternal weight gain during pregnancy and birthweight multiple times over in every decade since the late 1800s, and they have all reached different conclusions.

Recent evidence does indicate that weight gain during pregnancy is at least related to birth weight, and babies that are either too big or too small for their gestational age can both be problematic. In a 2009 publication reevaluating guidelines on weight gain during pregnancy, a group of expert authors explained that “many epidemiologic studies are consistent in showing a linear, direct relationship between GWG [gestational weight gain] and birth weight for gestational age.” Although, as Emily Oster comments, the effects are small on an individual basis.[55]

2. Complications – for mother and baby

A second major point of continuity in this literature is the question of whether maternal weight gain during pregnancy has any bearing on maternal or fetal complications, ranging from toxemia/preeclampsia to delivery problems. Evidence substantiates, for example, that “excessive” gestational weight gain raises a woman’s chances of having a C-section. (Toxemia is the former nomenclature for preeclampsia, a late-pregnancy condition whose symptoms include hypertension, excessive swelling, severe headaches, and vision problems. It can be serious; for more on the condition, check out this patient information pamphlet from the U.K.)[56]

3. Postpartum Weight Retention and Obesity

There is a long record of concerns about how prenatal weight gain could lead to overall weight gain beyond childbearing, but around the mid-1900s that escalated to encompass broad social anxieties about obesity. The rhetoric from the mid-twentieth century indicates that contemporaries took obesity seriously. It was a grave concern. Again, though, it’s unclear whether prenatal weight gain is necessarily a determining factor in women’s weight beyond their childbearing years. In one early study (from 1969), just as an example, the authors determined permanent weight gain after pregnancy based on women’s six-week postpartum checkups.[57] I can’t speak for everyone, but it took me months to drop the 30 pounds I put on while I was pregnant; measuring longitudinal weight gain just six weeks after delivery could obviously lead to unsound conclusions. In the twenty-first century, investigators are using more appropriate postpartum measuring points, ranging from 30ish to 50ish weeks after delivery, but the extent to which gestational weight gain influences a woman’s weight overall is still uncertain.[58]

Recently, this issue has broadened further, as physicians and researchers are using epigenetics to link “excessive” prenatal weight gain not just to obesity in women, but also to their children and the societal obesity epidemic. Some studies indicate that obese women are nearly two times more likely to have a stillbirth than women of normal weight, and that their babies are almost three times more likely to die in the first month of life.[59] But these links are still tenuous. The evidence is much clearer that women who gain more weight while they are pregnant bear children who are more likely to be overweight and have higher blood pressure in childhood, as well as more likely to develop diabetes, heart disease and cancer as adults.[60]

Recommendations Vs. Reality

Underlying all this changing advice and these shifting ideologies is the basic question of what constitutes “normal” weight gain during pregnancy. What is “ideal”? What is “excessive”?

In 1990, Roy Pitkin, an obstetrician at UCLA, reflected that the more recent, “loose” guidelines of the late-twentieth century “in a sense . . . bring health recommendations in line with what is actually happening.”[61] Looking at available historical evidence, he was exactly right. It is fascinating that throughout this record, there has always been some level of disconnect between what medicine and society were promoting as the “ideal” weight gain and what pregnant women typically gained. Even when doctors were adamant that women keep their weight gain in the range of 10-15 pounds, they were cognizant of a different reality – medical articles regularly acknowledged a much higher typical weight gain range than the recommended limits. To illustrate this difference, I put together a timeline with some available evidence indicating how much weight women were actually gaining (as documented in studies or witnessed by practicing obstetricians), rather than what they ought to be gaining. To a certain extent, these figures – which document “real” weight gain during pregnancy – were more consistent across time than were prescriptions advising women how much to gain (. . . until the 2000s, that is). Still, you can see that there is a general trend towards higher gain, and numbers shifted notably upwards after the 1970s guidelines were released. Take a look:

1920, textbook: “The mother’s gradual but consistent gain in weight amounts finally to about 30 pounds; exceptionally, it is as little as 10 to 15 pounds, and at the other extreme as much as 40 to 50 pounds.”[62]

1935, advice book: Women’s “average gain is around twenty pounds.” (The author notes that the maximum should be 25 pounds for a woman of normal weight, and that “further gain . . . is not desirable.”)[63]

1944, medical article: a frequently-cited review of almost 12,000 patients across 19 studies determined that the average weight gain for normal pregnancies was 24 pounds.[64]

1945, medical article: determined that average maternal weight gain was 21 pounds but that the range was much wider, from -3 pounds to +48 pounds.[65]

1963, clinical article: “It is generally agreed that a weight gain of 20 to 24 pounds during pregnancy can be regarded as normal . . . About 20 per cent of obstetric patients will exceed these limits . . . .”[67]

1968, medical article: a study of more than 12,000 women revealed an average weight gain of 22 pounds.[68]

1969, medical study: Women’s average weight gain was 24 pounds, although the authors made note of wide variation. (They recommended an “ideal” gain of 17 pounds and regarded gain higher than 30 pounds as in the “danger level.”)[69]

1970, medical article: most women’s average weight gain was 20 to 24 pounds.[70]

After this, women have been gaining more weight on average during their pregnancies. One study published in 2000 put together a nice chart illustrating this – check it out here. “Crude data clearly show,” the authors stated, “that after weight-gain recommendations were liberalized, there was an increase in the means [averages] of both pregnancy weight gain and infant birth weight.”[71] From 1990 to 2005, a clear majority of pregnant women in the U.S. gained between 16 and 40 pounds; the mere fact that the window has expanded to incorporate 40-pound gains – which would have been far outside the standard range of deviation for most of the 1900s – is telling.

Medicine has always acknowledged that diet is crucially important to a healthy pregnancy, but it has never been sure how to advise pregnant women about their weight.

Doctors see all kinds of different patients and outcomes, and clearly, research has failed to provide them with comprehensive answers. Take this suggestion from a physician in 1949: he encouraged his colleagues to remember “the basic principle that we are treating individuals rather than conditions in our maternity practice,” and reminded them that “wide fluctuations from so-called normal patterns can often be observed with complacency.”[72] In 1975, one piece observed that “the wide range of weight gain in pregnancy compatible with a normal outcome is astonishing – from a loss of more than 5 pounds to a gain of more than 50 pounds . . . .”[73]

Indeed, physicians have been trying – for decades – to contend with the fact that they don’t know what “normal” is. In 1962, physician P. Rhodes wrote in The Lancet that “weight gain in pregnancy is still a source of much confusion.”[74] Reflecting on their evidence from a study in the 1960s, authors wrote that “uncomplicated patients may gain a wide variety of weight with impunity.” Thus, they said, “a clinically useful concept of ‘normal weight gain’ could not be determined.” (They did, however, “venture to conclude that the ideal weight gain during pregnancy is somewhere between 16 and 20 lb., most likely about 18 lb.”[75]) A separate group of researchers observed, in 1970, that “weight gain in pregnancy has been studied for over 100 years but there is still no agreement as to the amount that can be regarded as normal.”[76] Five years later, another team explained how doctors’ knowledge regarding nutrition generally is “deficient because formal instruction in nutritional principles in most medical schools is absent or cursory.”[77]

If this science has always had a murky quality to it, why consider it? As usual, the past cannot supply us with answers, but it can give us some things to think about. Sweeping through the unresolved history of “eating for two,” three useful threads of continuity stood out to me:

1. Pregnant women should focus on eating healthfully.

Eating well while pregnant is beneficial for women and babies. Since the data and numbers about exactly how much weight to gain have been so irregular, maybe it’s best not to get too bogged down in them and instead spend your energy focusing on eating well. That’s going to mean different things for different people. (Michael Pollan, anyone? Here’s a list of his food rules.) Furthermore, medicine has expressed increasing appreciation for the importance of achieving a healthy pre-pregnancy weight. Quite simply, more and more studies have come out indicating that having a normal weight before pregnancy matters a great deal, for a great many reasons.[78] Indeed, the return towards a more restricted, conservative perspective on weight gain during pregnancy – one characterized by minimalism rather than laxity – stems in large part from concerns about society’s weight gain in general, not necessarily from concerns about weight gain during pregnancy, per se.

2. Moderation has never been controversial.

It sounds obvious, but it is clear and consistent in the historical record that the most contentious lines of debate about weight gain during pregnancy have fallen on the edges of the bell curve, where severe dietary restriction and excessive weight gain lie. It’s a subjective determination, and it feels hollow, but it stands the test of time. (Note that weight gain and malnutrition are not the same – weight gain, be it high or low, is not necessarily indicative of nutrition.)

3. There is no indication to “eat for two” while pregnant.

This has, as I suggested earlier, been one of the only unanimous points of agreement about weight gain in pregnancy over time. Pregnant women do not need to eat much more. (“Eating for two?” health economist Emily Oster asks – “you wish.”)[79] Science has been “trending” towards the “less is more” approach for 150 years (at least).

The concept of “eating for two” has misled and confused pregnant women for centuries. Some authors are working to reclaim the old saying as an impetus for dietary improvements instead of dietary exemptions (quality over quantity), but the very fact that such a mistaken phrase has survived for so long does not bode well for its future. In the end, I couldn’t pin down the origins of the idea to “eat for two,” but I did learn that at no point in time has anyone knowledgeable ever thought it was a good idea.

When did "screen time" become a thing? What has the American Academy of Pediatrics (AAP) had to say about screens and babies? What are the key issues and findings in screen time research? I share my top 3 "lessons learned" about using screens with and around babies, and my top 3 reasons why - despite debatable findings - I still think minimizing and delaying screen time are worthy parenting pursuits.

I’ve never been that caught up in the whole “screen time” debate, mostly because it hasn’t been a debate in my family. We have a television, but we’ve made it inconvenient to watch things on it. The TV is up in an attic, where it’s annoying to get to, and it’s not hooked up to cable. We don’t even have Netflix – we use our local public library for DVDs. When my husband and I turn on the television, it’s a conscious decision – we choose to watch, to deviate from our default. We’re not entirely opposed to TV or movies (I love re-watching Mad Men as much as the next person), but in general we prefer to read or talk or listen to podcasts – and we make it a point to use the TV screen purposefully. TV simply is not a part of our daily lives. Given the way our home is set up, the matter of television for our son has essentially been a non-issue. But screen time is no longer synonymous with television. And while our television use might be way below average, it’s still something. And overall screen time in our home is up there. Rare are the stretches of time devoid of computer screens or cell phones. If I wasn’t initially curious about whether and how much TV my son should watch (though I quickly became curious – the landscape is fascinating), I was curious about whether and how much my own typing, web surfing, online reading, or exercise-video watching affects my son. What exactly is the problem with screen time, anyways? Is it all visual? Or is there an audio component? (And then, do podcasts or music have any associated issues?) What’s the difference between different kinds of screens? My questions abounded.

My first inclination was to find out when all the hubbub about screen time started. The answer: very recently. The American Academy of Pediatrics (AAP) published its first policy statement on screen media in 1999. “Screen time” wasn’t even “a thing” yet. This issue is new. Which means its “history” is all but nonexistent. Prior to the 1990s, although there were tons of cultural and scientific explorations about children and TV, there were almost no cultural or scientific explorations considering babies or screen time. When I started reviewing this, I was less concerned with the historical debates regarding older children’s access to inappropriate programming (measured in terms of violence, aggression, drugs, sex, etc.) or lifestyle correlations (obesity, hours of sedentary activity, school performance, behavior problems, eating disorders, etc.). My purpose was to inquire about the actual screen technology and its effects on babies’ brains – not screen content or long-term health issues. But I quickly realized that it’s not so easy to parse all this out. Everything is too interconnected. After spending several weeks researching this, here is a synopsis of what I found most important and applicable. For anyone interested in reading more, pick up Lisa Guernsey’s book Screen Time, which is an incredible and approachable resource for parents with lots of questions about screens.

Magic Picture Tubes: TVs to Tablets

Screens made their way into Americans’ homes (and later, their pockets) beginning in the mid-1900s. This history is a separate story, but the briefest overview suggests a sweeping pattern: screen technology proliferation met with (often simultaneous) optimistic, creative anticipation and critical, concerned agitation. It happened with movies. It happened with television. It happened with video games. And now it’s happening with the exponentially-growing amount of screen media on computers, phones, and tablets. Television took off in America around 1950. As of 1948 there were 100,000 TV sets nationwide. The next year there were a million. The next decade, there were 50 million; 7 out of 8 U.S. homes had a television. Furthermore, families with kids were more likely – twice as likely – to own televisions.[1] It was in the 1950s, then, that kids began watching the “magic picture tube.”[2] TV was America’s pastime. Babies grew up with televisions on, and typically had their “first direct experience” with TV (meaning they used it themselves) at around 2 years of age.[3] Within a couple of years, observers were bemoaning the fact that American children spent more time each year watching television than in school. (The school vs. TV comparison stuck – it’s still frequently reported.) Early critics in the 1960s talked about TV as the “opiate of the masses,” and posed poignant questions still relevant today: “do [children] learn more from [TV] than they would learn without it?”[4] Between the expansion of television and the present day – a mere six decades – other screens infiltrated American life. Where the U.S. Census once asked about TVs, it now asks about computers. Per Pew research, in 2015, almost 70% of U.S. adults owned a smartphone (for all cell phones, 92%). More than 70% owned a laptop or desktop computer. Forty-five percent owned some sort of tablet. These screens are visible to babies. In 2012, almost one-third of American babies had a TV in their bedroom, and almost half watched TV or movies for nearly 2 hours daily.[5]

When it comes to the next generation, there exists a sort of intrinsically negative perspective about technology’s permeating effects. I am certainly biased in this direction. So are pediatricians, apparently – according to a 2004 study, they “almost universally believe that children’s media use negatively affects children in many different areas, including children’s aggressive behavior, eating habits, physical activity levels, risk for obesity, high-risk behaviors, and school performance . . . .”[6] There are legitimate reasons to be wary of technology, but some of our (my) woes might be misguided. As researcher Alison Gopnik observes, “innovative technologies always seem distracting and disturbing to the adults attempting to master them, and transparent and obvious—not really technology at all—to those . . . who encounter them as children.”[7] New technology is often overwhelming before it becomes ordinary. Case in point: reflecting on TV’s time demands of Americans, in a 1961 book called Television in the Lives of Our Children, authors observed that “if any of us were now compelled to find two or three hours every day for a new activity, we should probably resent that requirement as an intolerable intrusion on our scheduled lives . . . .” But, they stated, this is exactly what TV did. (And before the TV it was the radio.)[8] Sound familiar? In 2017, these “intolerable intrusions” are Snapchat, Twitter, blogs, Facebook, Instagram – everything at our fingertips pressing us to “keep up.” Technology can be a paradox: it’s freeing but also confining; it’s helpful but also a nuisance; it connects us but also divides us.

The First AAP Statements: 1985 & 1999

Distilling the entire history of pediatric screen woes would be a monumental project. Let’s just cover the milestones. Broadly speaking, the biggest concerns involved the effects of inappropriate content and physical health. (Remember, almost none of this focused specifically on babies – just older children.) After decades of worrying about how excessive violence, sexuality, and substance use on television influenced child viewers, doctors in the 1980s discovered TV’s associations with obesity. It was a big deal. The landmark 1985 article in Pediatrics asked: “Do We Fatten Our Children at the Television Set?” The study looked at older children’s TV usage (the youngest children were 6) between 1966 and 1970 and found “significant associations” between time watching TV and the incidence of obesity. In fact, the association was so strong that it qualified as a dose-response relationship, meaning that the more TV kids watched, the higher their risk for obesity. Every extra hour kids spent in front of the television per day was correlated with a 2% increase in obesity rates. The effects were so strong that they met the criteria for a causal association – television could (indirectly) be a cause for obesity.[9] (In brief, these associations have (mostly) held firm over time. TV remains a convoluted culprit in the childhood obesity epidemic for multiple reasons.[10]) The same year (1985), the AAP released a task force statement expressing concerns about the amount of television American children were watching, and specified problematic content – violence, sex, and drugs – as well as the newly-documented elevated risk of obesity as key areas of concern.[11] Over the next 10-15 years, concerns about media content escalated. Studies exploring the ramifications of television (especially television depicting violence, sex, and drugs) abounded. Then, investigators also had to contend with video games. And computer games. And younger viewers. In 1997, Baby Einstein came out. By 1999 the digital media landscape was totally different than in 1985, and the AAP issued a statement on “media education.” It encouraged doctors to start asking parents about media use and suggested that parents carefully select what kinds of programs their children watched (or played). It also advised parents to: “co-view” media with their children, talk with their kids about shows watched or games played, role model “responsible media use,” and work to cultivate their children’s interests in non-media activities. The statement also urged parents to create “electronic media-free environments” in bedrooms and refrain from relying on television as an “electronic babysitter.” This was the first piece of formal advice I could find that specifically referenced babies. Babies two years or younger should “avoid television viewing.” The AAP called out programs advertising to parents with babies (such as Baby Einstein), saying that their promises of early development and learning were a farce. Instead, babies needed “direct interactions” with humans for “healthy brain growth and the development of appropriate social, emotional, and cognitive skills.”[12]

The Next AAP Statements (2011 & 2016) and the Spread of Second-Hand Screens

Then things got really interesting. For the first decade of the 2000s – the 10 years that saw the development of the iPod, iPad, and iPhone, not to mention GPS devices, TiVo, Nintendo Wii, Kindle, and digital photo frames – the AAP stuck with its 1999 recommendations for families: limit screen time to less than 2 hours per day, limit all screen time for babies, keep bedrooms screen free, and watch and talk about screen media content with kids.[13] Even though pediatricians agreed with these directives, they were “least likely to have discouraged TV viewing for children <2 years of age.”[14] The average 0-2 year-old watched television for 1-2 hours daily. (Side note – by this time the number of AAP policy statements was snowballing in general, to the point that one study in 2006 announced that pediatricians trying to implement policy statement directives were “drowning in a sea of advice.” Study investigators noted that media use comprised 12% of the existing advice.[15]) In 2011 the AAP came out with a new statement reaffirming its 1999 policy statement. For babies 0-2, the Academy still “discouraged media use,” noting that there were essentially no benefits associated with babies’ screen time, but there were potential negative effects. Again challenging companies that marketed to families with babies, the AAP asserted that “the educational merit of media for children younger than 2 years remains unproven despite the fact that three-quarters of the top-selling infant videos make explicit or implicit educational claims.” (To explain this a little further, babies don’t understand what they see on a screen – it’s not impossible for them to learn from or interpret something on a screen, but screens are finicky, unreliable teachers. Somewhere around 2 years, children experience developmental shifts that help enable them to follow televised programs differently. This happens at different times, and to a different degree, for every child.)[16] Most of this reiterated the 1999 policy statement, but the 2011 AAP statement differed from its 1999 predecessor in that it distinguished – for the first time – between “foreground” and “background” media use, calling background media use “second-hand television.” Second-hand TV distracts. It distracts babies from creative play and parents from their babies. For babies, television watching came directly out of their time spent interacting with family members or playing independently. Overall, the 2011 AAP policy statement upheld the existing recommendation to “discourage media use” among 0-2 year olds, explaining that “media – both foreground and background – have potentially negative effects and no known positive effects for children younger than 2 years.”[17] “Unstructured play time,” the statement read, “is more valuable for the developing brain than any electronic media exposure.”[18]

There was a serious backlash against this. The AAP caught major flak from parents, researchers, and some of its own, who said the organization’s stance on screens (1) was unreasonably strict, (2) was unrealistic and therefore unhelpful for parents, and (3) ignored (the lack of) evidence about how screens affect babies. Critics said there wasn’t enough evidence available to justify a rigid screen time limit of “zero” for babies. (Remember, the actual suggestion was to “discourage media use.”) The way I see it, these criticisms were right and wrong. There was a dearth of information linking television to specific negative health effects for babies (compared to older children, for whom there is an abundance of evidence showing just that). And parents could benefit from more nuanced advice about TV. At the same time, though, I don’t necessarily think the AAP was off its rocker. It had reason to believe babies who watch more television are delayed in language development, although the jury is still out. It had plenty of evidence about sleep disruption. (In children under 3, TV-watching is clearly associated with irregular sleep schedules. This alone is enough to convince me to keep my baby away from screens as much as possible.) And it “discouraged” media for babies – it didn’t “ban” screens, as some media outlets suggested.

Parents, pediatricians, and scientists applauded the AAP’s 2016 updated screen media statement. They praised the new policy statement for being more reasonable, evidence-based, and offering a more “nuanced” take on screen time.[19] So, what was different? First, instead of referring to babies’ and kids’ screen time, the AAP discusses “family media use.” For babies younger than 18 months, it still recommends no screens, but makes allowance for video chatting. Beyond 18 months, the AAP continues to advise against screen time but offers a little guidance for parents interested in introducing digital media anyways: “choose high-quality programming/apps and use them together with children, because this is how toddlers learn best. Letting children use media by themselves should be avoided.”[20] (What kinds of programs are appropriate? It’s best to “avoid fast-paced programs . . . apps with lots of distracting content, and any violent content”[21]; for little people, “slower, quieter, less, is more.”[22]) Herein lies a major point of stress in 2016: co-viewing. “Screen time in question has to be parent time.”[23] It is technically possible for older babies to learn from screens, but that learning is highly conditional, more challenging, and less likely to “stick.” Parent participation in screen media – watching together, talking about it – is a key component facilitating any of that learning.[24] (Isn’t this just learning from adults, though? And is this actually how families are using or will use screen media? The way I hear parents talk about screen time is almost like nap time – a sort of hallowed window for parents to get something done – not as “together time” with their babies.) Some other noteworthy bits from the AAP’s 2016 screen media statement include recommendations to turn screens off during meals and for at least an hour prior to bedtime, avoid using media as a calming tool, and turn screens off when they are not being used. Instead of screens, promote as much unstructured play time and social interaction with adults as possible.[25]

Follow Up: Some Foils in Some Arguments

There are several “go-to” lines of thinking that defenders and critics of screen time rely on. Some of them have some issues – let’s look.

1. “Delaying exposure and access to screens puts kids at a disadvantage.” Not so. There is something to be said for role modeling responsible use of technology for children, but delaying introducing screens will not put kids behind. “It is often assumed,” explains psychologist Aric Sigman, “that if children do not ‘get used to’ screen technology, early on, they will in some way be intimidated by it, or be less competent at using it later. However, research has found that even Rhesus monkeys are comfortable with, and capable of using, the same screen technology that children are exposed to.”[26] Plus – this just makes sense. Kids are learning machines. Every second of a child’s day is about learning. Just because I might struggle with new technology does not mean my son will. Kids are growing up in a tech world – it’s impossible to change this. In the 1950s, when Americans were buying televisions in droves, even kids who grew up in towns without access to television never “live[d] in a pretelevision era.” Adults and kids in places without televisions (yet) were “very conscious of living in a world of television.”[27] So it is with digital media today.

2. “Scientific evidence proves screens are terrible for babies.” Not exactly. The studies are really hard to run. There are tons of caveats and subtleties. (Again, read Lisa Guernsey’s book Screen Time if you are up for learning about them all.) Instead, scientific evidence indicates that screen time can be a problem for babies. Furthermore, every child responds to screens differently – some become placated, others hyper. Mine displays classic symptoms of overstimulation, becoming restless and cranky.

3. “Trading out screen time is ALWAYS the best choice.” Not necessarily. As researcher Alexandra Samuel describes, there’s “a tendency to portray time spent away from screens as idyllic, and time spent in front of them as something to panic about.”[28] It’s probably almost always better for babies and kids to be doing something else – and by this I mean actually doing something else, or otherwise figuring out how to not be bored – than to be watching a screen. TV displaces other activities, after all. But perhaps that’s not always true for parents, who might benefit from the “x” number of minutes that screen time provides them to exercise, make dinner, read, or generally do anything necessary to maintain sanity.[29] (Although, now we’re wading back into murky water because “using” screens in this way goes against all evidence indicating parents should be participants in their kids’ screen time.)

Summing Screens Up

This research is still in its infancy. There is so much we’ve yet to learn. Here are my top 3 “lessons learned” about using screens with and around babies, and my top 3 reasons why I still think minimizing and delaying screen time are worthy parenting pursuits.

1. “All screen time may not be equal.”[30] I love Lisa Guernsey’s wonderful take on screen time for young children. She emphasizes the “Three C’s”: context, content, and child. “What media means to children at these very young ages almost entirely depends on context—on how it is being used and talked about by the adults and siblings around them,” she explains.[31] If and when screens bring parents and kids together, maybe we can think of them as “good.” But no screen is going to be always good or always bad – screens could be “baby occupiers at one point in the day and conversation starters another.”[32] And “children, even at very young ages, can benefit from using media when it catalyzes conversation and is designed for learning.”[33] For me, Skype and FaceTime are reminders of this. But even television and movies can potentially be a source of bonding. Which brings me to #2. . .

2. Parent interaction with screen time is essential. The “chief factor that facilitates toddlers’ learning from commercial media (starting around 15 months of age) is parents watching with them and reteaching the content.”[34] Unless you are watching something baby-appropriate (i.e., designed for babies) WITH your child and interacting with him while doing so, screen time is probably – best case scenario – confusing to him.

3. Be a screen time mentor. Keeping kids’ lives screen free might work temporarily, but we have a responsibility to help teach our kids how to utilize technologies as a source of growth and learning – not distraction and mindlessness. “Just as abstinence-only sex education doesn’t prevent teen pregnancy,” writes researcher Alexandra Samuel, “it seems that keeping kids away from the digital world just makes them more likely to make bad choices once they do get online.”[35] I’m striving to be more mindful with my own reliance on screens, to set a better example for my son as he grows up.

All of that said, here are my three reasons for avoiding screens as much as possible:

1. To establish the habit of minimal or no screen time. Here’s the thing: screen time isn’t great for adults. It’s “bad” for humans. It’s tied up with obesity, heart disease, and diabetes. It screws with our eating patterns and our sleep.[36] As one researcher summarizes: “numerous well designed prospective cohort studies continue to find a highly significant dose-response association between ST [screen time] and risk of type 2 diabetes, cardiovascular disease and all-cause mortality among adults.” One study “recently reported that every 1 h/day increase in television viewing was associated with a 6% increased hazard for total fatal or non-fatal CVD [cardiovascular disease], and an 8% increased hazard for coronary heart disease.”[37] It makes no difference what you are watching: these risks are not merely from being sedentary. It seems as though screen time “may be somewhat distinct from other forms of sedentary behavior . . . The education value of screen material being viewed does not preclude the significant associations reported above between ST and morbidity, mortality and associated biomarkers.”[38] Steven Gortmaker, a professor of the Practice of Health Sociology at Harvard’s T.H. Chan School of Public Health, advises parents on how to “Limit the Dose” when it comes to screen time.[39] One strategy is to start early. It’s easier to maintain the pattern of lower levels of screen time than it is to cut back.

2. To minimize second-hand screens (and noise) and begin teaching mindful screen media use. For all the questions and uncertainties about how watching TV influences babies, there is a great deal of evidence indicating that having TVs on in the background is demonstrably problematic for babies. “Background television,” says Lisa Guernsey, “which . . . gets very little attention, has been shown in recent scientific studies to have the potential to do harm to very young children.” Background TV interrupts babies’ and toddlers’ creative play, interferes with baby/toddler-parent interaction, and impedes babies’ and toddlers’ language learning. These effects can also apply to distracting background sound in general – yes, that means podcasts or loud music or the radio or exercise videos might be a problem. When there’s too much noise, says one expert researcher, a baby’s attempts to learn language become “‘devastatingly impaired.’” A separate pair of researchers likens the problem to trying to learn a foreign language while the television is on in the background – sounds frustrating, right? I'm not going cold turkey on all sound-emitting devices and programs, but every little bit helps. Shutting off screens, being selective about podcasts or radio shows, and turning things off when they’re not being used can help instill a pattern of purposeful, mindful screen time and provide my son with a better learning environment.[40]

3. To get sleep.[41] I’ve already touched on the extent to which screens mess up sleep (again, for people of all ages). The blue light from screens is particularly disruptive.[42] Parents with babies literally dream about sleep. It’s the holy grail. I’d do anything that might help my son – and therefore me – sleep more soundly. Of course, screens aren’t the only culprit obstructing children’s sleep – as Michael Rich, the director of the Center on Media and Child Health at Boston Children’s Hospital, notes, “‘I’ve also had kids who have been found staring wide-eyed and bloodshot at Harry Potter [books] at two in the morning.’”[43] We still have a way to go before the Harry Potter series is keeping my son up. In the meantime, I’m relying on Lisa Guernsey to keep me up to speed and I’m sticking with a low screen time mantra: “limit the dose.”

What does evidence tell us about the safety of swaddling? (Spoiler Alert: It “works.” And parents should feel pretty good about wrapping babies up as long as they are sleeping on their backs.)

I’ve spent the last five years researching the history of Sudden Infant Death Syndrome (SIDS) for my PhD dissertation. Not surprisingly, no amount of research spared me from fear. Just like every other new parent, I was terrified at the prospect of SIDS. Even when my son was well past the peak age range for SIDS, I still worried. My work brought me no comfort; it just reminded me that there was precious little I could do to minimize the risk of SIDS. My forthcoming book, which details the history of the SIDS diagnosis, explains that one of the very few unanimously-accepted measures with incontrovertible powers to reduce a baby’s risk of SIDS is supine sleeping (back sleeping). (Another is not smoking. There are also lots of steps that can help parents establish a safe sleep environment, but their effects in terms of reducing SIDS are more difficult to calculate precisely. Besides that, some experts are not in agreement about what constitutes a “safe sleep environment.”) In terms of SIDS prevention, back sleeping’s effectiveness is undisputed.

There was quite a hullaballoo over the 2016 meta-analysis on swaddling and SIDS published in Pediatrics. Initial news coverage reported the over-simplistic message that swaddling increased the risk for SIDS. Even reliable sources posed leading questions about whether swaddling was in fact a safe parenting habit. But the truth is, as more measured media analyses conveyed with witty taglines such as “About that Scary Swaddling Study” or “About that Alarming Study . . .,” the study’s actual, nuanced findings were hardly groundbreaking. The review was only “new” in the sense that it was recently published – rather than presenting unprecedented discoveries, it reiterated what others have concluded for decades: all recent research on swaddling underscores the importance of supine sleeping.

Swaddling Past and Present

Swaddling is a challenging practice to study because it is so variably practiced across different cultures. But it does have a long history in human society – parents have been swaddling babies for thousands of years. Ancient Greeks and Romans swaddled their babies (though the Spartans did not). European peasants swaddled their babies in the Middle Ages. Native Americans swaddled babies. Up through the 1700s, most babies were swaddled, and not just for sleep.[1] Many societies harbored different ideas about why to swaddle babies. Some cultures did so for warmth, some for “shaping of the child’s body,” and others for its calming effects.[2] (One relied on swaddling to impede masturbation.[3]) Swaddling started to fall out of favor in western societies in the late 1700s. Borrowing rhetoric from the French and American Revolutions, early critics – philosophers more than physicians – rejected swaddling as an infringement upon infants’ freedom. Swaddling was confining. It was backward. It directly conflicted with the revolutionary principle of emancipation.[4] The case against swaddling swelled over the next hundred years to incorporate other concerns: it could cause physical damage, it was unhygienic, it was unkind, it was primitive. Over time, the once-ubiquitous practice of swaddling became obsolete in western, industrialized societies.[5] It stayed that way until the late 1900s. In the mid-twentieth century, most US parents saw swaddling as “antiquated,” cruel, and unnecessary. According to one report, in 1965, amid the Cold War, American (and Russian) parents expressed that “restraining an infant is . . . a significant step toward suppressing its freedom!” Even when advised to, many parents were simply unwilling to use swaddling to calm their babies, and those who did were often uncomfortable with it.[6] In medical and lay literature, swaddling was so uncommon as to be infrequently discussed.

Make No Mistake: Swaddling Works

Very few investigators studied swaddling before 2000. Before the mid-1960s, medical studies that did look at swaddling used animal subjects. Investigators in the early 1900s found that “immobilization was quite effective in inducing sleep in many species,” including frogs, guinea pigs, puppies, and kittens.[7] Some early studies to explore swaddling in humans reported that the restraint had a “quieting effect.” Swaddled babies slept more and cried less.[8] Scientists have corroborated those conclusions again and again. Study after study shows that swaddling has calming and sleep-inducing effects for most babies. In 1965, researchers noted that swaddling “produced a tranquil, ‘co-operative’ state” akin to “pacification.”[9] Swaddling’s effectiveness was so clear that these investigators saw it as “useful clinically”: it curtailed babies’ “‘un-cooperativeness’” in the laboratory. “Swaddling clearly facilitates neonatal experimentation,” they wrote, “by maintaining a state of subdued physiologic activity by a procedure which is culturally acceptable.”[10] In other words, swaddling was an appropriate way to drug babies, and then study them, without literally drugging them. Again, studies continue to substantiate that swaddling calms babies and promotes sleep.[11] In a 2007 review that included “all known studies of swaddling,” authors explained that “overall, it is clear that swaddling stimulates sleep continuity.” It also decreases babies’ crying time “significantly.”[12] Swaddled babies wake less frequently, sleep for longer durations, and cry less.

What Doctors Know: Babies Shouldn’t Sleep Prone

The new Pediatrics study acknowledges that swaddling confers some broad benefits for families: it promotes sleep and reduces crying. But the analysis raised some concerns about whether swaddling is indeed safe. It incorporated four studies, and although the language may have come across as alarming, the findings were consistent with previous analyses. Where this article contributed a (somewhat) fresh perspective was on swaddling with regard to age and developmental milestones – specifically, it brought out the evidence that swaddling becomes less safe as babies reach about six months of age, or whenever they start learning to roll over. All of this goes back to prone sleeping. Swaddling becomes “risky” at six months or when babies start rolling because swaddled babies who are just becoming capable of rolling prone may struggle to maneuver their bodies back to the supine position. In short, swaddling is a problem when combined with prone sleeping.[13]

We already knew this.

Consider these previous assessments about swaddling and sleep positioning:

In a 2007 review, authors concluded that advice about swaddling “should address the difference in SIDS risk associated with supine and prone sleeping.” They stated that “the combination of swaddling with prone position increases the risk of sudden infant death syndrome, which makes it necessary to warn parents to stop swaddling if infants attempt to turn.” In contrast, “swaddled infants in the supine position have a lower risk for SIDS,” they said – “evidence clearly shows that being supine and swaddled decreases the SIDS risk more than being supine without swaddling.” For infants placed to sleep on their backs, the investigators summarized: “swaddling seems to be protective. . . Up to a certain age, swaddling hinders turning prone, but on the other hand, when an infant is prone, his or her risk of SIDS significantly increases.”[14]

(Side note: In some studies these discrepancies were drastic. In one, swaddled prone babies had a twelve times greater SIDS risk than nonswaddled prone babies.[15])

A 2009 study suggested that swaddling might increase the risk for SIDS because it so clearly depressed babies’ arousal thresholds, but still only concluded that swaddling potentially posed a risk for babies who were unaccustomed to swaddling. SIDS researcher Bradley Thach reflected that swaddling was only problematic when applied to prone babies, and that swaddling combined with supine sleeping had benefits for infants.[16]

A 2014 piece looking at swaddling-related injury and death incidents reported to the Consumer Product Safety Commission between 2004 and 2012 (there were 36) restated that sleeping prone and swaddled carried a ten times greater risk of death than sleeping prone and unswaddled. The authors emphasized that parents should discontinue swaddling when a baby begins learning to roll over, and that swaddling should only be employed in safe sleep environments.[17]

A separate study in 2016 also emphasized that swaddled babies are much more likely to die from SIDS when sleeping prone.[18]

So: the 2016 meta-analysis published in Pediatrics was important for adding to our pool of knowledge, but its conclusions were far from revolutionary.

O.K., But . . .

All that being said, some skeptics still question whether swaddling’s effects are necessarily good for babies. Swaddling “works” in part by reducing startle responses during sleep. Parents can perceive swaddling’s effects as either “sleep promoting” or as “interfering with arousal,” because swaddling does both. For most (including me), extending sleep is a very desirable outcome, but not everyone thinks so. Critics argue that suppressing reflexes and extending sleep is detrimental for babies; swaddling “works like a drug,” writes Ralph Frenken. “It switches off the baby,” he says, and “‘works’ . . . because it forces the baby to sleep.”[19] Opponents of swaddling also point to other potential health hazards associated with the practice, but all those risks are only valid if swaddling is incorrectly employed. For example, the possibility of hip dysplasia “is related to the misapplied use” of swaddling.[20] Hyperthermia is a risk of swaddling only “when misapplied.”[21] Similarly, swaddling that is not routine but only occasional might carry more of a risk, much like bed-sharing (which is particularly dangerous when practiced sporadically).[22] Experts agree: swaddling is only a risk when it’s done improperly. (Some observers have raised concerns that swaddling interferes with breastfeeding, but I don’t find this compelling, again unless swaddling is completely misused. Their logic is that with routine swaddling, especially early on when feeding and milk supply are being established, babies might engage in less skin-to-skin contact, communicate less with their mothers, and therefore be less likely to feed. The recommendation, though, is only to swaddle babies for sleep and to calm them down – not all the time.[23]) Proponents, most notably Harvey Karp, present a compelling rebuttal. Swaddling facilitates supine sleeping and may further prevent infants from maneuvering into dangerous positions while they sleep.[24] It also helps that swaddling still so evidently works: swaddled babies sleep longer and cry less. Studies note that swaddling is thus associated with a “significant reduction of maternal anxiety, and an increase in parental satisfaction.”[25] While this might seem elementary, it could be much more important than it appears on the surface: parents who are better rested and less stressed out tend to be happier, and thus more engaged and satisfied with child care, and these types of benefits could have enormous effects down the line. Think “ripple effect”: because it is a relief to parents, swaddling could have the potential to minimize the chances for a whole cascade of health problems, ranging from postpartum depression to marital discord and unsafe sleep practices to smoking onset. (The AAP even advises swaddling “to reduce shaken baby syndrome.”)[26] Harvey Karp’s advice to parents: “keep on swaddling!”

Swaddling is a tool for parents. It is not safe because it is “natural” or because people have been doing it for centuries. It is not safe because it “works.” It is safe because medical study indicates it is (right now). Medicine is moving at such a fast pace these days that our perspective on swaddling could change at a moment’s notice – in theory, one study could wipe the slate clean. But the 2016 Pediatrics meta-analysis absolutely did not do this. It complemented what we already know: put babies to sleep on their backs, and start transitioning off swaddling as soon as babies can roll prone.

Why do we start feeding babies cow’s milk when they turn one? Where did this practice come from and why, if at all, is it so important?

Milk is a controversial beverage. Food is extremely personal, of course, but milk seems especially highly charged these days. It often stands in as a touchstone for deep-seated beliefs about diet and nutrition, and milk can even be an indicator of people’s views about the environment, animals, government regulation, or business. Americans tend to take milk’s place at the table for granted, but it was not always a staple of human consumption. Scholars on both sides of the so-called “milk wars” debate milk’s role in the human diet, and it’s very easy to find a nutritionist or physician guru to back up almost any individual inclinations about milk. Regardless of personal opinion, it is undeniable that a spate of research in the past five or ten years has raised questions about whether milk is a helpful or harmful substance, and whether daily milk-intake recommendations actually align with available evidence – for children and for adults. As tends to be the case with nutrition, to say that the answers to various questions about milk are elusive is a gross understatement. Yet amid the muck, there does seem to be a consensus that milk is not, contrary to existing advice, an “essential” food. In fact, anthropologists, geneticists, and archeologists are befuddled as to the reasons why humans developed the evolutionary capacity to consume milk at all.[1] “Humans have no nutritional requirement for animal milk,” write physicians David Ludwig and Walter Willett.[2]

But what about babies? Do they have nutritional needs that only milk can meet? Why do we herald the transition to cow’s milk at the age of one? Where did this practice come from and why, if at all, is it so important?

When Did Babies Start Drinking Cow’s Milk, Anyways?

To begin to understand the place of cow’s milk in babies’ diets, it is helpful to step back and take a look at when and why babies began drinking cow’s milk in the first place. In the US, most mothers breastfed their babies until the turn of the twentieth century, when mothers started feeding their babies alternative food from bottles. For the most part, that alternative food was cow’s milk. There are a host of fascinating reasons why women became less inclined to breastfeed, but it is abundantly clear that mothers stopped breastfeeding en masse around the turn of the twentieth century. (For anyone interested in more details about how and why breastfeeding declined around 1900, check out these books: Don’t Kill Your Baby, by Jacqueline Wolf, and Mothers and Medicine: A Social History of Infant Feeding, by Rima Apple.) By the mid-1900s, breastfeeding had become quite uncommon and bottle-feeding was the undisputed default norm in infant feeding. This exchange – bottle for breast – occurred rapidly, and it ushered in a concurrent, and equally sweeping, shift in Americans’ milk-consumption habits. Cow’s milk appeared the sensible replacement for breastmilk for babies, and it was certainly the most widely-used replacement. “The use of less human milk meant, simply, that more babies consumed cow’s milk.”[3] But here’s the thing: in its initial heyday as the foodstuff for babies, fresh milk was actually a very dangerous substance – nutrition aside.[4] Prior to pasteurization, refrigeration technology, antibiotics, and government regulation, milk could be lethal for babies. Indeed, milk consumption was the underlying cause for a vast number of infant deaths. The milk supply was frequently riddled with bacteria, diluted with water, damaged by dubious additives such as chalk to make it appear palatable, or spoiled.[5] At the time, physicians responded to the crisis in part by promoting breastfeeding, but also by helping instigate major public health reforms to sanitize the nation’s milk supply. The result was that cow’s milk became much safer for human consumption.[6]

But “safe milk” was only the surface of the issue – on a deeper level, there remained questions about cow’s milk’s adequacy for human infants. As Jacqueline Wolf explains, “even a pristine and perpetually refrigerated milk supply would have continued to pose a problem for some infants.” The actual makeup of milk varies from one mammal to another, and “at best,” Wolf describes, “the milk of one species is not easy for the newborn of another to digest.” Doctors teamed up with allies to play with the structure of cow’s milk and make it more similar to human milk. Their product was infant formula.[7] The campaigns to purify milk had dramatic benefits – infant mortality plummeted as mothers gained access to safe, clean milk and milk-based formulas. But in the process, the significance of cow’s milk as a nutritional beverage was exaggerated.[8] In the first two decades of the twentieth century, milk was “elevated . . . to an almost untouchable status as the perfect food for both infants and children.”[9] By 1920, the “nutritional perfection” of milk was written into government policy and scientific gospel, and the stuff was omnipresent in American homes.[10] Milk’s importance only deepened over the coming decades. By the mid-1900s, more mothers were feeding their babies milk earlier and earlier. Although the data is limited, what is available suggests that more than half of American mothers from the 1940s to 1960s were offering their infants whole milk by the time they were just four months old. By 1970, about 70% of infants had consumed whole milk by six months of age.[11] Then, over the course of the next ten or fifteen years, this practice nearly reversed – beginning in 1971, mothers moved away from feeding their babies milk in a “steady and impressive” trend. By 1985, just 10% of babies had tasted whole milk at six months.[12] As of that year, pediatricians were advising that parents could start feeding their babies whole milk only after the six-month mark, and only if their solid food diets were otherwise sufficient.[13] Why the about-face? What happened?

Pass the Milk

There appear to be two major reasons for the abrupt shift in milk-drinking habits among babies in the late-1900s. First, in the 1970s, after decades of dropping rates, breastfeeding skyrocketed – and not just in the United States. Women across the globe breastfed in greater numbers, and longer. Along with higher rates of breastfeeding came higher rates of formula feeding, and women were consequently less likely to feed their babies plain old milk.[14] Secondly, a series of articles in the 1960s detailed some potential problems for babies drinking cow’s milk. Physicians found that infants who drank whole milk had higher rates of iron-deficiency anemia, both because milk itself did not offer them much iron and because what iron it did offer was not very “bioavailable,” meaning that babies were less able to actually process and utilize it.[15] (Breastmilk also does not contain much iron, but it is highly bioavailable – babies can use about 40% of iron in breastmilk, compared to just about 10% of iron in whole milk.[16]) Physicians also learned that cow’s milk could potentially lead to gastrointestinal bleeding in babies with iron-deficiency anemia.[17] Other concerns were budding, too, including milk’s possible ties to obesity and diabetes, as well as allergy and intolerance. These reports emerged at the same time that the field of Pediatrics was stepping into nutrition. In 1954, the American Academy of Pediatrics (AAP) formed the Committee on Nutrition (CON); the group issued its first report in 1958, emphasizing developmental maturity, instead of age, as the best gauge for timing the introduction of solid foods. This inaugural report described that historically, perceptions regarding the ideal timing for introducing solid foods had swung back and forth, and noted that “present knowledge of nutrition is admittedly incomplete.”[18] In the 1960s, the FDA tapped the CON to help establish standards in infant feeding and childhood nutrition. Especially after this, explained Samuel J. Fomon, a pioneer in the field of pediatric nutrition, the CON “exerted an enormous influence on childhood nutrition, most notably on aspects of infant feeding.”[19]

In looking at the history of the AAP’s Committee on Nutrition reports regarding milk consumption, a prominent theme is the concept of uncertainty. The Committee repeatedly asserted that it was operating with a major handicap: broad gaps in knowledge about the ideal nutritional profile for babies and children. In 1974 the CON released a report in response to “public concerns about milk”: “Should Milk Drinking by Children Be Discouraged?” The piece was indecisive – the CON answered its own question with a tentative “probably not.” In regard to a number of issues, the article conveyed that the evidence was insufficient to be conclusive. Reviewing milk’s potential associations with obesity, for example, committee members explained that “there is no evidence that milk consumption per se makes any specific contribution to the development of obesity,” and determined that “a general recommendation to restrict milk fat is difficult to justify scientifically and may promote unnecessary anxiety in the general population.” In reference to milk as a calcium-delivery mechanism, the committee members reflected that “the concept that children and teen-agers should drink plenty of milk as a source of calcium (and phosphorous) to ensure ‘healthy bones and teeth’ is a tenet of North American health culture that is rarely questioned and one to which many physicians, dentists, and nutritionists subscribe.” But they pointed out that pediatric calcium intake requirements were undetermined, citing a World Health Organization report from 1961 that was “unable to define minimum or optimum calcium requirements for infants and children based on any data then available.” The CON estimated that existing daily milk recommendations probably exceeded babies’ and children’s calcium needs. Ultimately, the committee took a noncommittal, pragmatic stance: milk “supplies a large proportion of essential nutrients and calories,” it said, “but it is not an essential component of the diet for anyone whose diet is otherwise adequate.” “The amount of milk that should be consumed by healthy older infants and children,” it continued, “cannot be stated with convincing accuracy . . . [and] the Committee believes that the sum of pertinent, current knowledge does not permit a more dogmatic position.”[20]

“Few Facts and Many Opinions”

A few years later, in 1979, leading experts reiterated that they did not have enough information: “definitive studies to guide us in our recommendations regarding many aspects of nutritional management are not yet available. Therefore, our reasoning in some areas is speculative and the recommendations must be considered tentative.” Furthermore, they stated, “our current recommendations for infant feeding are based on relatively few facts and many opinions.” The milk recommendations were tentative; once a baby was six months old and eating solid food, he could consume homogenized, vitamin-D fortified milk.[21] In 1984, Frank Oski – an outspoken critic of milk-drinking in general – delivered a provocative address that was subsequently published in the AAP’s journal, Pediatrics. His lecture detailed the health hazards of milk for babies and children. Contradicting the current pediatric teaching that babies could drink milk at six months, Oski pleaded that babies not be fed whole milk at all. He advised that even after children turned one, parents should avoid feeding them cow’s milk. He cited iron-deficiency anemia, gastrointestinal bleeding, allergies, and long-term health problems such as obesity, atherosclerosis, and coronary artery disease as compelling reasons for Americans to quit milk. He was resolute about the issue for children: “If pediatricians sincerely believe that adult dietary patterns may be established in early childhood, then efforts should be made for dietary guidance that is consistent with the recommendations of the AHA [American Heart Association], NRC [National Research Council], and the US Select Senate Committee on Nutrition . . . The elimination of whole bovine milk from the diet would be a desirable first step in childhood,” he said. Oski thought the existence of so many unanswered questions about cow’s milk consumption was a big problem. “The Committee [on Nutrition] provides us with no cogent reasons, actually no reasons at all, why bovine milk should be introduced before the first birthday yet recommends that if ‘infants are consuming one third of their calories as supplemental foods . . . whole cow’s milk may be introduced.’” But “why give it at all – then or ever?” he asked.[22] Oski was an East Coast, Ivy League-educated physician; in 1985 he became the Chair of Pediatrics at the prestigious Johns Hopkins University Hospital. But his questions were largely treated as the crazed rantings of an outsider. Writers expressed that his recommendations were radical, ungrounded, and overstated.[23] Oski held his ground, even publishing a titillating book – Don’t Drink Your Milk! – explicating his viewpoints and exposing American biases about the “naturalness” of milk. “Being against cow milk is equated with being un-American,” he said. Oski was resolute that breastfeeding was the best option for babies, but that formula should be used (even past one year) instead of whole cow’s milk, and he urged Americans of all ages to relinquish milk.[24] (This is especially interesting given that European scientists are beginning to play around with the development of “young-child formula” for toddlers.[25]) Even if Oski’s perspective was unorthodox, in the 1980s, the question of when it was safe and sensible to begin feeding cow’s milk to babies was prominent in pediatric literature. New concerns and ambiguities emerged, and the “milk question” blossomed into an intriguing medical debate.
A particularly vexing finding was that milk appeared to have a possible correlation with type 1 diabetes. One review article surveying twenty studies concluded that “there was a modest, but statistically significant association between the early introduction of cow’s milk and the development of IDDM [insulin-dependent diabetes mellitus] in childhood.”[26] Researchers also found that babies who consumed milk earlier tended to eat more “adult” foods, therefore consuming a “diet not well suited to the needs of the developing infant.”[27] The CON revisited the milk question in 1983, and again assessed that it was unsure about the place of cow’s milk in the ideal infant diet: “the appropriate age at which unheated, whole cow’s milk (WCM) can be safely introduced into the infant diet is unknown and remains an area of controversy.”[28] The percolating debate about using whole-fat, reduced-fat, or skim milk in infants’ diets could make for a separate piece in itself. Pediatricians discouraged parents from feeding their babies reduced-fat milk because of its potential to interfere with babies’ ability to consume enough calories and grow sufficiently. Reduced-fat milk threw off babies’ gauges for self-regulating food intake, and babies fed skim milk gained weight at slow rates. Plus, in several studies, infants who drank reduced-fat milk exhibited a “rapid decrease in skinfold thickness,” indicating that they were burning through their bodies’ fat stores just to meet their daily energy needs.[29]

Medicine and Milk: It’s Cloudy

In 1992, the CON recommended that babies be fed either breastmilk or formula for at least one year, and that milk should be introduced after the first birthday.[30] In 1998 it made a change allowing for reduced-fat milk during a child’s second year: “low-fat cow milk is appropriate during the second year if growth and weight are appropriate,” it said.[31]

But the reasons why parents should feed their babies whole milk are still difficult to pin down. The history of pediatric recommendations for feeding babies cow’s milk does not offer decisive answers about the place of milk in the human infant’s diet. It does, however, convey three key items of import:

The overwhelming trend in pediatric advice in the second half of the twentieth century has been to delay the introduction of cow’s milk to babies – pediatricians have continually pushed back the “ideal” age for its introduction.

Historically, the issue of cow’s milk consumption for babies has been characterized by uncertainty and questions. Although the prevailing societal ideology is that babies absolutely must drink milk, there is an apparent disconnect between the dominant medical literature and the cultural credo to offer babies cow’s milk.

As a practice, feeding babies and children cow’s milk was not something that originated within medicine – quite the opposite. Women began feeding their babies cow’s milk in the early 1900s to replace breastmilk, and since then the pediatric community has juggled questions and tried to sort out the best practice when it comes to milk. But consuming milk was a complex, socially-instigated custom – not a medical revolution.

So where does this leave parents in the twenty-first century? Probably, wherever they are inclined. This historical overview corroborates journalist Tia Ghose’s observation that “it turns out the case for milk is fairly weak.”[32] So is the case against it. Melinda Moyer, a wonderful science writer I’ve referenced elsewhere, wrote in 2015 that the “milk controversy” is all in parents’ heads – it’s “pointless, for several reasons,” she says. Her piece navigates the current science, assessing that “cow’s milk isn’t perfect, and some of its claims may be overtouted, [but] many of the scary claims made about it are overblown.” Confused parents stressed about milk should definitely read her article. After reviewing the nuanced benefits and detriments associated with feeding young children milk from various sources, Moyer advises parents not to “waste your precious parenting capital” worrying about milk. “And really,” she writes, “there’s no need to stress about milk anyway: The idea that toddlers and older kids need milk and are going to suffer without the right kind is silly. Milk provides important nutrients, but if your kid eats a balanced diet and stays hydrated, she doesn’t need it at all.”[33]

It is unlikely that parents’ questions about cow’s milk will be definitively answered anytime soon. What’s more, the legacy of milk-drinking for babies cuts to the core of fundamental aspects of pediatrics, nutrition, and parenting. At the base of uncertainties about milk are not just doubts about the beverage itself (“is milk healthy?”), but about children’s nutritional needs more broadly (“should children eat what is healthy for adults, or something else?”), and answering these questions isn’t any easier. As for me, I'm incorporating milk as a component of my son's whole-foods diet, but I hold no delusions about its wonders. It's just milk, right?

In the early and mid-1900s, alcohol was casually recommended to nursing women as a remedy. Beer was touted as a galactagogue, a substance that can increase milk supply; cocktails promoted napping. During midcentury decades, doctors and women were relatively relaxed about drinking during breastfeeding, especially given that so few women (less than 25%) were actually breastfeeding at the time. Plus, even drinking during pregnancy had yet to become a no-no. The combination of alcohol and breastfeeding really wasn’t a very controversial issue. Nurses advised women to indulge in a drink or two as a relaxation aid, extolling alcohol as a stress-buster and a facilitator for the letdown reflex. In The Nursing Mother: A Guide to Successful Breast Feeding, in 1953, Frank Howard Richardson answered a woman’s question about whether having a cocktail while nursing could harm her baby by saying that there was “no unanimity of opinion among doctors” on the matter (a response he also supplied in regard to smoking), and “obviously, large amounts ought by all means to be avoided.” But, he quipped, “you don’t need a doctor’s opinion for that.”[1] In a 1965 review covering drug excretion through lactation, John A. Knowles summarized that “the amount of barbiturates excreted in breast milk has never been found to affect the nursing infant, nor does alcohol taken in moderation by the mother produce an untoward effect on her infant.”[2] In 1973, a book sanctioned by La Leche League read: “‘alcohol has special virtues for the nursing mother… this is the one time in life when the therapeutic qualities of alcohol are a blessing… Dr. Kimball’s succinct rule for nursing success is booze and snooze.’”[3]

“Booze and snooze” sounds pretty great, but underlying this easygoing perspective was almost no medical literature. Research on alcohol consumption during breastfeeding was practically nonexistent prior to the 1970s, when the “discovery” of Fetal Alcohol Syndrome (FAS) instigated mounting concerns about the effects of drugs in utero. (For a really wonderful exploration of how the FAS diagnosis came into existence, check out Janet Golden’s book Message in a Bottle.) In the wake of FAS research in the 70s, medicine started to think a little more critically about substance exposure through breastmilk. But prevailing medical opinion still held that drinking (within reason) while breastfeeding was just fine. Jay Arena, a prominent pediatrician, conveyed the general sentiment circa 1980: “in regard to alcohol, moderate amounts (say, one or two cocktails a day) are not harmful to the nursing mother or infant.”[4]

Sound Advice?: Taking a Closer Look

Yet doctors were working with virtually no real academic work on alcohol’s effects on nursing itself or nursing babies. Writing in 1981 in the Journal of Advanced Nursing, authors Sheena Davidson, Lynn Alden, and Park Davidson remarked that “not a single human study has looked at postnatal maternal drinking and its effects.”[5] (For obvious reasons, human studies on this are difficult to conduct.) In 1985, Margaret Lawton, a chemistry Ph.D. in New Zealand, expressed dismay that “in particular, the data on ethanol [and nursing], an almost universally accepted drug taken in some form by the majority of most social groups is conspicuous by its virtual absence.”[6]

Writers like these were starting to worry that the standard line (“‘there are no dangers and [alcohol] is good for you!’”) was beginning to sound eerily reminiscent of the previous message to pregnant women that alcohol posed no problems.[7] There was little doubt in anyone’s mind that babies might be ingesting minuscule amounts of alcohol, but the big questions were 1) does alcohol help or hinder breastfeeding? and 2) do even minuscule amounts of alcohol have the potential to harm nursing infants? What animal studies were available in the 80s (which were few) indicated that rather than helping with lactation, alcohol probably did just the opposite – it actually inhibited letdown by slowing the release of oxytocin. Furthermore, contrary to the widespread assumption that babies exposed to alcohol would not exhibit any effects unless their mothers consumed “enormous” quantities, animal studies thus far showed that baby rats exposed to alcohol consumed less, and therefore weighed less (15% less), than their counterparts.[8] Researchers needed to take a closer look.

One of the first published human investigations into alcohol and breastfeeding was an absolutely shocking (and hilarious) study conducted in New Zealand in the 1980s: eight nursing mothers (volunteers) were “asked to drink as much alcohol as they could manage in the form they preferred in as short a time as possible.” Yes, that’s right – the lead investigator asked these women to binge drink, however they “preferred.” These women weren’t messing around. About half of them consumed between 3.5 and 4 drinks, and the remaining half drank approximately 5 to 7 drinks – within an hour. After measuring alcohol levels in blood and milk samples from the women, the author concluded that alcohol moved into breastmilk quickly, with breastmilk alcohol levels typically equaling or surpassing blood alcohol levels. The study corroborated earlier work demonstrating that alcohol is not “stored” in breastmilk in the traditional sense; it cannot be eliminated simply by removal (“pump and dump”). Instead, alcohol in breastmilk is metabolized just as it is in the body’s circulatory system, and only dissipates with time (just as achieving 100% sobriety is a matter of time).[9]

A brief aside, since an understanding of the basic mechanisms here is helpful: the science of drug transfer and lactation is actually exceedingly complicated and confusing; not all substances are emitted into breastmilk in the same way, and every woman’s body is unique. Scientists know, however, that alcohol registers in breastmilk at the same levels as a mother’s blood alcohol concentration (BAC). When a woman takes a drink, alcohol moves into her bloodstream quickly, and her BAC peaks after about an hour. BAC itself rises commensurate with alcohol consumption, and in most states, driving with a BAC of 0.08 or higher is illegal, a level at which a person is considered too impaired to drive. (Based on generic descriptions at WebMD, a person with a BAC of 0.2 – more than double the legal driving limit – may have trouble walking, experience blurry or double vision, or potentially vomit. At 0.3, someone could pass out, experience tremors, memory loss, and depressed body temperatures. At BAC levels of 0.4 and 0.5, a person would be in serious trouble – potentially facing death.) A woman’s BAC reflects the amount of alcohol in her blood; that percentage is transmitted to breastmilk, not directly to a baby. For example, if a nursing woman has a BAC of 0.06, her breastmilk would be 0.06% alcohol (a normal beer is about 5% alcohol). If she nursed her baby, the baby would consume milk that was legally non-alcoholic, and the baby’s own BAC would barely register as positive – it would not correspond to the mother’s BAC. This means that exposure to alcohol via breastmilk is fundamentally different from fetal exposure to alcohol in utero. Even though alcohol does move into breastmilk, the total exposure is very small, almost trivial, especially compared to a mother’s consumption. Thus, compared to the “drastic consequences of prenatal exposure seen in infants with fetal alcohol syndrome, the long-term effects of exposure to alcohol in the mother’s milk, if any, are subtle.”[10]
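To make the arithmetic in that aside concrete, here is a minimal back-of-the-envelope sketch (in Python). It leans only on the relationship described above – breastmilk alcohol content (grams per 100 mL) roughly mirrors maternal BAC – plus a few illustrative assumptions of my own: a 6-ounce feed, a standard 12-ounce beer at 5% ABV, and an ethanol density of about 0.79 g/mL. None of these particular figures come from the studies discussed here.

```python
# Back-of-the-envelope sketch: ethanol in one breastmilk feed vs. one beer.
# Assumptions (illustrative, not from the cited studies): milk alcohol in
# g/100 mL mirrors maternal BAC; 6 oz feed; 12 oz beer at 5% ABV.

ETHANOL_DENSITY_G_PER_ML = 0.789  # approximate density of ethanol

def ethanol_in_feed_grams(maternal_bac: float, feed_ml: float = 177.0) -> float:
    """Grams of ethanol in a feed, assuming milk alcohol (g/100 mL) == maternal BAC."""
    return maternal_bac / 100.0 * feed_ml

def ethanol_in_beer_grams(beer_ml: float = 355.0, abv: float = 0.05) -> float:
    """Grams of ethanol in a beer of the given volume and alcohol-by-volume."""
    return beer_ml * abv * ETHANOL_DENSITY_G_PER_ML

if __name__ == "__main__":
    feed = ethanol_in_feed_grams(maternal_bac=0.06)  # mother at a BAC of 0.06
    beer = ethanol_in_beer_grams()                   # one 12 oz beer at 5% ABV
    print(f"Ethanol in a 6 oz feed: ~{feed:.2f} g")  # roughly 0.11 g
    print(f"Ethanol in one beer:    ~{beer:.0f} g")  # roughly 14 g
    print(f"Beer-to-feed ratio:     ~{beer / feed:.0f}x")
```

The exact numbers matter less than the orders of magnitude: at a maternal BAC of 0.06, a full feed carries roughly a tenth of a gram of ethanol, while a single beer contains about fourteen grams.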

OK – back to the 1980s study in New Zealand: having demonstrated that alcohol is present in breastmilk, the author wondered whether it had any effects on a nursing baby. Musing, she calculated the “maximum” BAC for one of her subject’s 6-month-old babies. If the child consumed 6 ounces of breast milk – a hefty serving – when the mother’s own BAC was at its peak of 0.119 (well over the legal driving limit in the U.S.), the child would still only have a BAC of 0.006, she reported. (For perspective, this would be comparable to an adult drinking less than two ounces of 5% ABV beer.) “It is improbable,” Margaret Lawton reasoned, “that occasional exposure to alcohol of that quantity would affect the child. In conclusion it would therefore appear that the age-old advice given to nursing mothers regarding alcohol intake is sound.”[11]

Research into the 1990s was muddled overall, but a couple of things began to shift into focus. First, alcohol was no breastfeeding elixir. By every measurement, alcohol handicapped breastfeeding. Numerous studies clarified that alcohol inhibited lactation by messing with the letdown reflex and milk production.[12] Second, it became evident that ascertaining any possible effects on infants was going to be very challenging. Most work produced ambiguous results or else was subsequently called into question.

For example, in 1989, a piece came out in The New England Journal of Medicine: “Maternal Alcohol Use During Breast-Feeding and Infant Mental and Motor Development at One Year.” The study, which included some 400 babies, found that at one year, babies exposed to alcohol through breastmilk were just as well off in mental development but notably worse off in motor development. The differences were slight enough that the authors noted they were “not meaningful” for any given individual child, but they were there. These results seemed clear enough – but when the lead investigator set out to replicate them in 18-month-olds, her team turned out different results; motor development was not significantly different in babies exposed to alcohol via breastmilk than in controls.[13] So, alcohol might influence motor development? A 1991 piece in the New England Journal (a much smaller study, of just 12 infants and moms) reported that alcohol influenced breastmilk’s smell. “The consumption of ethanol,” researchers said, “significantly altered the intensity of the odor of [mothers’] milk as perceived by a panel of adults.” (Just imagine it: 17 adults on the “sensory panel” sitting in a lab, sniffing women’s breastmilk. A majority of them determined that the “alcoholic” breastmilk “smelled like alcohol” or “smelled sweet.”) Their observations have held stable over time. But their other finding – that in the three hours after their mothers drank alcohol, babies drank less milk – has a wrinkle.[14] Later research concluded that babies exposed to alcohol via breastmilk do drink less milk initially but make up for it and actually consume more milk over the course of the next half a day.[15] Alcohol also does seem to subtly impact babies’ sleeping patterns, correlating with reduced time in active sleep right after exposure.[16]

Where Things Stand

Additional research since then has continued to refine medicine’s understanding of the transfer and elimination of alcohol through breastmilk. Breastmilk alcohol levels rise and fall with blood alcohol levels; the popular “pump and dump!” slogan is misdirected at best. (Nursing women may still benefit from pumping and dumping as a strategy to mimic feedings and maintain milk supply, but it does not rid a woman’s body of alcoholic breastmilk.) Alcohol does tinker with milk production and letdown. But doctors are the first to admit that scientific research documenting whether, and how, trace amounts of alcohol in breastmilk may or may not influence infants is lacking.

Contemporary researchers are all in agreement as to three items: 1) alcohol, at least to a certain extent, inhibits breastfeeding; 2) the physiological transfer of alcohol into breastmilk works such that breastmilk alcohol levels roughly mimic blood alcohol levels; and 3) the amount of alcohol that actually reaches children is minuscule. Beyond this, things are more open to interpretation.

The most recent review articles indicate that, on the whole, alcohol exposure via breastmilk does not currently appear to have demonstrably negative effects for nursing babies, although it may have subtle, temporary effects (such as reduced time in active sleep). In 2014, reviewers Roslyn Giglia and Colin Binns assessed that it seemed “biologically implausible” that occasional exposure to alcohol, even if it was through binge drinking episodes, was clinically pertinent. “Minute behavioural changes in infants exposed to alcohol-containing milk have been reported,” they noted, “but the literature is contradictory.” “The effect of occasional alcohol consumption on milk production,” Giglia and Binns said, “is small, temporary, and unlikely to be of clinical relevance. Generally, there is little clinical evidence to suggest that breastfed children are adversely affected in spite of the fact that almost half of all lactating women in Western countries ingest alcohol occasionally.”[17]

So, where does all of this leave us? We know that alcohol is definitely no help to breastfeeding, and that it can reach nursing babies in tiny doses. But the outcomes and risks associated with those tiny doses are still uncertain, with some researchers believing them unsafe and others gauging them negligible. Giglia and Binns reflected that the “evidence available to give advice to lactating mothers is less than ideal….”[18] Less than ideal, indeed.

Applications

Given all of this, there are essentially three different reigning philosophies; each has backing and support from intelligent, thoughtful medical professionals and mothers:

1. Don’t Drink Any Alcohol.

Obviously, this is the “safest-bet” option. It is also the simplest. The notion is that since scientists don’t entirely know the possible long-term effects, the safest thing for nursing women to do is to refrain from drinking alcohol: “a zero level of alcohol in milk is the safest for a nursing baby.”[19] This mentality is depicted on the Mayo Clinic’s website: “there’s no level of alcohol in breast milk that’s considered safe for a baby.”[20] I don’t find this logic particularly compelling, especially given what we do know about alcohol and breastfeeding, but this is admittedly the most secure path. (Since alcohol in breastmilk is a temporary condition, almost no experts or societies demand complete abstinence, as they do during pregnancy.)

2. When You Drink Alcohol, Follow These Rules. Heed the Chart.

The most widespread recommendations fall into this category, and they basically suggest that women can drink small amounts of alcohol occasionally but should wait until they are completely sober, with a BAC of 0, before nursing.[21] In its 2012 policy statement on breastfeeding, the American Academy of Pediatrics advises that mothers really shouldn’t drink alcohol while nursing, but that if they do they should do so only occasionally, limit their consumption to no more than two drinks, and separate nursing episodes from drinking by at least two hours.[22] The UK’s National Health Service advises essentially the same.[23] Advocates in this camp mostly adopt the view that “until a safe level of alcohol in breast milk is established, no alcohol in breast milk is safe,” but they work to offer strategies that might help women who do wish to consume alcohol do so safely.[24]

Chief among these strategies are charts – easily available on the web – designed to simplify the matter by indicating how long a mother would, on average, need to wait after consuming “X” number of drinks before she would be 100% sober and able to nurse her baby. One of the most widely cited is the Motherisk algorithm. I know women who have found these extremely helpful and a great relief. Yet others may find them stressful or inaccessible. They are devised for the average woman, standing 5’4” tall – what about women who are taller or shorter? What about women who metabolize alcohol at different rates? Furthermore, the recommended wait times come across as inordinately long. For my own weight range, I would need to wait almost five hours in between nursing sessions if I had just two drinks. If I had three drinks, I’d need to wait more than seven hours. (La Leche League notes that because of personal differences, it may take any given woman up to thirteen hours to eliminate the effects of a single drink.[25]) What breastfeeding mother can bank on these kinds of time spans? My son ate very often in the afternoons and evenings, even up until he was six months old. The prospect that I might reliably be able to wait two and a half hours to nurse (the time recommended I wait after consuming just one alcoholic drink) on a Friday evening seemed almost laughable. I turned to charts and tables like these for help, but I found them vexing and near-impossible to apply in my own life.[26]
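For the curious, here is a rough sketch of the kind of calculation that sits behind charts like these. To be clear, this is not the Motherisk algorithm itself – just a generic, Widmark-style approximation with parameters I am assuming for illustration (about 14 g of ethanol per standard drink, a distribution ratio of roughly 0.55 for women, and an average elimination rate of about 0.015 g/100 mL per hour). It is enough to show why the published wait times stretch so long.

```python
# Rough, illustrative estimate of hours until BAC returns to zero after drinking.
# NOT the Motherisk algorithm: a generic Widmark-style approximation with assumed
# parameters, just to show why chart-based wait times are as long as they are.

GRAMS_PER_STANDARD_DRINK = 14.0        # assumed US standard drink
WIDMARK_R_FEMALE = 0.55                # assumed distribution ratio for women
ELIMINATION_G100ML_PER_HOUR = 0.015    # assumed average elimination rate

def peak_bac(drinks: float, weight_kg: float) -> float:
    """Estimated peak BAC (g/100 mL) for a given number of drinks and body weight."""
    grams_of_ethanol = drinks * GRAMS_PER_STANDARD_DRINK
    return grams_of_ethanol / (weight_kg * 1000 * WIDMARK_R_FEMALE) * 100

def hours_until_sober(drinks: float, weight_kg: float) -> float:
    """Hours until the estimated BAC falls back to zero at a constant elimination rate."""
    return peak_bac(drinks, weight_kg) / ELIMINATION_G100ML_PER_HOUR

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"{n} drink(s), 60 kg: ~{hours_until_sober(n, 60):.1f} h to zero BAC")
    # Prints roughly 2.8, 5.7, and 8.5 hours - the same ballpark as the charts.
```

Under these generic assumptions, two drinks already imply a five-to-six-hour wait for a 60 kg woman, which is in line with the chart-derived figures I mention above.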

3. Go Ahead and Have a Drink.

Many breastfeeding experts lament the blanket message that nursing women should abstain from alcohol, and many of them have further criticized the two-hour-per-drink waiting period between drinking and nursing. The basic premise behind this third philosophy is that without clear evidence to back it up, breastfeeding mothers hardly need another restriction like this on their lives. Dr. Jack Newman, a renowned breastfeeding expert, advises women to go ahead and have a drink. “Nursing mothers have enough unnecessary artificial constraints on their lives,” he explains. “Ethanol is not so bad,” he writes, “that mothers should avoid nursing their babies for two hours after a single drink.”[27] Spanish breastfeeding expert Dr. Carlos Gonzalez goes even further: “The legal driving limit in the UK is 0.08 per cent. If your alcohol level is higher than 0.15 per cent you are unmistakably drunk. If it goes above 0.55 per cent you simply drop dead. Therefore, it’s absolutely impossible for breastmilk to contain more than 0.55 per cent alcohol.” Gonzalez stresses that heavy drinking does not mix well with breastfeeding, but mostly for the mother, not the baby – “even if the mother drinks three glasses of wine a day,” he says, “breastfeeding is still better for her baby than bottle feeding.”[28] (As a side note, this dictum, that the benefits of breastmilk outweigh its possible contamination with alcohol – which the World Health Organization also declares – has its own problems.[29] Even though it’s intended to assuage breastfeeding women, what does this message convey to formula-feeding mothers? You would be better off serving your child a little alcohol than formula? This seems implicitly judgmental.)

I’ve come across a driving analogy that encapsulates the more casual approach to alcohol and breastfeeding: if you are sober enough that you would get behind the wheel of a car, go ahead and nurse. (Of course, as Gonzalez more provocatively explains, plenty of research also indicates that even if you couldn’t drive, nursing would still be fine. According to Dr. Thomas Hale, maternal BAC levels have to hit about 0.3 – more than three times the legal driving limit – before “significant side effects are reported” in a baby.[30]) A little less dramatically, the popular website kellymom.com, a resource that lactation consultants at my hospital as well as the Cleveland Clinic recommended, nonchalantly explains that a drink or two with food is fine, and that less than 2% of the alcohol a mother drinks actually reaches her nursing baby.[31] This perspective seems to be gaining traction, as far as I can tell. One of my favorite lay pieces broadcasting a balanced approach is one by Melinda Wenner Moyer, who argues that breastfeeding moms can afford to relax a little about alcohol consumption.

Others who subscribe to this perspective suggest that if women want to consume more than a couple of drinks, they plan ahead. One option is to pump milk in advance.[32] (No one suggests mixing in a formula feeding.)

At the end of the day, I find myself wishing there were more research. As it is, each of these outlooks stands on reasonably firm ground. Personally, I fell in line with the third take, and chose to imbibe, but I would be lying if I said I never hesitated. Even though some quality research has explored this issue, I still think the casual moderation championed by 1970s physicians rings true. I took up the driving rule, and although it’s somewhat contrived, I liked the planning mantra. I tried to anticipate when I might want to have drinks, and pumped to store extra breast milk for those occasions. In hindsight, I might have forgone pumping and simply used formula instead. Maybe next time around. But hopefully we’ll all have some better answers by then.

A recap of the history of medical recommendations about exercising during pregnancy.

(*This piece is in reference to healthy, normal pregnancies without contraindications for exercise.)

Exercise Misgivings

In a 2016 piece in The Atlantic, writer Julie Beck explained that her “gut reaction” to the prospect of exercise during pregnancy was that it “seems risky.”[1] She is definitely not alone in her intuition – the idea that exercise might be hazardous for pregnant women is pervasive in American society. There appears to be a good deal of confusion about the issue. As of 2014, only about 16% of pregnant women in the U.S. were exercising in accordance with the American College of Obstetricians and Gynecologists (ACOG) guidelines (which suggest engaging in “moderate-intensity exercise for at least 20-30 minutes per day on most or all days of the week”).[2] In 2016, a majority of rural women surveyed for a study published in the International Journal of Childbirth Education conveyed inaccurate or confused understandings about exercising during pregnancy.[3] In a 2010 survey of physicians and certified nurse-midwives in Michigan, a surprisingly high number of practitioners conveyed outdated information to their patients, with 64% of respondents urging pregnant women to “limit their activity intensity based on a suggestion that has not been applicable for several years.”[4] So pregnant women who continue to exercise hear – from medical professionals and from friends and family – that they should “be careful,” or “take it easy.” Women like Julie Beck think instinctively about exercise as “risky.” Where did this kind of apprehension originate? Why are Americans so nervous about the habit of exercising during pregnancy?

American women were taught to be cautious about physical exertion during pregnancy for much of the twentieth century.

Historically, recommendations regarding exercise during pregnancy stemmed from socio-cultural ideas and norms more than they were grounded in science.[5] Generally speaking, pregnant women were “treated as though they had an illness.” They were instructed to “relax, avoid strenuous exertion and even bending or stretching, for fear they would strangle or squash the baby.”[6] In 1949, the U.S. Children’s Bureau advised that pregnant women could maintain housework and gardening activities, could go for walks up to one mile at a time, and could swim from time to time.[7] Most books in the 40s proclaimed something similar, allowing that pregnant women could continue their housework but by and large discouraging sports activities.[8]

Medical textbooks up through the 1960s and even into the 1970s maintained that “’pregnancy is not a good time to exercise’” (although they still permitted walking).[9] Meanwhile, scientific medicine was beginning to take “a more serious interest” in exercise as a component of health. Exercise science blossomed in the 1960s, and in the 1970s medical organizations like the National Institutes of Health, the American Heart Association, and the Centers for Disease Control began weighing in. The most rigorous medical journals, such as the Journal of the American Medical Association and The New England Journal of Medicine, started publishing articles on the importance of exercise and physical activity. While exercise grew more popular among the general population, exercising throughout pregnancy was relatively unusual – enough so that JAMA ran a story in 1981 about a woman who ran 4 miles daily throughout her seventh pregnancy. She was the subject of a formal academic presentation at the annual meeting of the American College of Sports Medicine.[10] The fact that this woman’s experience warranted formal comment among medical professionals indicates that practitioners were not accustomed to seeing women continue to exercise while they were pregnant. (By comparison, it is difficult to imagine that professionals today would consider this situation worthy of special discussion.)

By the end of the 1980s, exercise was not only cemented as a legitimate component of medical health and wellbeing, it was a public fad. Exercise routines for every segment of society – including pregnant women – abounded.[11] Prenatal exercise regimens proliferated so widely, in fact, that doctors grew concerned about them. Most were developed by non-physicians and made untenable claims about how physical fitness would ease labor and delivery.[12] Amid this climate, medicine and society started asking questions about the safety of exercise during pregnancy. According to one analysis, two “schools of thought” emerged: one conservative and the other liberal. Conservatives, predominantly physicians, were more likely to recommend “a restrictive, cautious approach” to pregnant women. On the liberal side, active pregnant women saw no detriments to their activity and argued that exercise was a boon to their pregnancies.[13]

For those who harbored any concerns, there were two broad questions. The first was acute – did exercise cause any immediate problems for a mother or a fetus during the activity itself? The second was chronic – did exercising have any broad negative effects on the course of a pregnancy and/or birth?

Some of the earliest studies were conducted on animals, and they seemed to point to some potential risks, such as reduced maternal blood flow (which could be problematic if it caused a reduction in adequate blood supply to the uterus) and lower birth weight.[14] But human research indicated otherwise. Small-scale studies in the 1980s concluded that exercise did not impact fetal weight and that any effects on maternal blood flow or fetal heart rate were limited to exercise sessions, were minor, and had no visible consequences.[15] So, in the short term, exercise did elicit subtle acute changes, but they were not evidently harmful. And in the long term, exercise appeared to offer benefits.

Then people started asking more specific questions – how hard could women work out? For how long? In what kinds of exercises and activities could they partake? As researchers dug into these questions, they increasingly determined that vigorous or strenuous exercise posed no problems, and might in fact be advantageous. One 1987 study, for example, looked at nearly 850 women split into four groups – a control group, a low-exercise group, a medium-exercise group, and a high-exercise group. Across all three exercising groups, physical activity did not have any negative outcomes for mothers or babies. C-section rates were actually inversely correlated with exercise intensity, and the high-exercisers gave birth to babies with the highest Apgar scores. “Pregnancy outcomes,” the researchers concluded, “were more favorable in the exercise groups, particularly the high-exercise group.”[16] With this (albeit minimal) research in hand, ACOG issued a formal opinion.

In 1985, ACOG released its first ever guidelines on exercising during pregnancy, formally approving of exercise in pregnant women. Given the paucity of research, its recommendations were relatively cautious. Pregnant women were advised to limit their maximum heart rates to 140 beats per minute, restrict strenuous activity to no more than 15 minutes, and keep their core body temperatures below 100.4 degrees Fahrenheit.[17] In 1994, ACOG amended its guidelines, doing away with heart rate restrictions and time limitations. In 2002, another revision asserted that “pregnant women are now encouraged to follow general adult recommendations for PA [physical activity].”[18]

At the end of 2015, ACOG released its current guidelines, which stress that exercising during pregnancy carries very little risk. They suggest that all pregnant women without complications should partake in aerobic and strength training exercises for the duration of their pregnancies. Guidelines elsewhere in the world offer similar advice. Breaking with its previous guidelines, ACOG also began to encourage women to start exercising during pregnancy, even if they hadn’t been active beforehand.[19] Women who were exercising pretty vigorously before they became pregnant also gained the A-OK to “keep doing the same workouts” if they felt up for it.[20] (One piece of the original guidelines that research has continually substantiated is the advisement that pregnant women keep their core body temperatures under about 102 degrees Fahrenheit, since elevated core body temperatures (especially in the first trimester) have been correlated with neural tube defects.[21])

Allowing that the pregnant body often demands numerous modifications for exercising women, ACOG’s general message is that exercise prescriptions for pregnant women should essentially be “the same…as those recommended for the general population.”[22] When it comes to exercise, pregnant women can work out just like everybody else. (If they want to.)

The Scope of Things

In other words, science clearly demonstrated that the age-old customs and “social wisdom” supposing that pregnant women should avoid physical activity were misguided. At present, the evidence shows that exercising during pregnancy carries no increased risks for the fetus or for the mother.[23] It also has no documented associations with miscarriage or premature labor.[24] In some ways, the benefits of exercising during pregnancy are modest. As Emily Oster assesses in her wonderful book Expecting Better, many studies suggest that “exercise doesn’t seem to have much of an impact on anything…[and] the same randomized studies that show no clear benefits of exercise also show no downsides.”[25] But more and more work seems to indicate that exercising during pregnancy does in fact carry potential benefits, and that on the whole, it is likely beneficial for both mothers and babies.

Exercise offers many of the same overall benefits for pregnant women as it does for the general population – improving physical fitness, helping with weight management, and boosting mood. (Regarding weight, exercising while pregnant only helps a little – just as outside of pregnancy. A 2014 meta-analysis showed that exercising while pregnant lowered prenatal weight gain by about 2.2 pounds on average.[26]) It also has some possible benefits particular to pregnant women, ranging from decreasing the risk of gestational diabetes and hypertension to helping with typical maladies such as back pain, constipation, bladder control, varicose veins, and heartburn. Some women report that a quick bout of exercise (just 3-5 minutes) relieves morning sickness symptoms.[27] New York Times contributor Gretchen Reynolds recently wrote about a fascinating study in which the offspring of mice that ran while they were pregnant ran more themselves when they grew up – they “had literally been born to run,” she quipped. The study, as Reynolds describes, “hint[s] at the possibility that to some extent our will to work out may be influenced by a mother’s exercise habits during pregnancy, and begin as early as in the womb.”[28]

Even more provocatively, recent research suggests that being physically active during pregnancy might reduce a woman’s chances of having a C-section. In a systematic review of randomized controlled trials (published in 2014 in the American Journal of Obstetrics and Gynecology) encompassing more than 3300 women in 16 studies, the authors found that “women in exercise groups had a significantly lower risk of cesarean delivery.”[29] Importantly, only one of the studies included in the review actually offered any indication about the circumstances of the C-sections, meaning the rest did not record or explain why C-sections occurred. This is definitely a huge factor, since many women elect to have C-sections ahead of time. Still, the numbers are hard to ignore – really hard to ignore, actually. Exercise programs had the potential to lower the risk of C-section by about 15%. As the authors note, this possibility “is of huge clinical significance…and likely [is] difficult to obtain by any other single intervention.”[30]

Popular websites and other common resources working to transmit these findings to pregnant women mostly do a nice job. Sometimes they send mixed messages, or patronize women by belittling the manifold obstacles to exercise that pregnancy imposes. But they are working against decades of cultural and ideological conventions relegating pregnant women to inactivity or domestic work. There are, of course, exercise modifications women should make, as well as some activities pregnant women really should avoid altogether (such as scuba diving or contact sports), but the ongoing scholarship should reassure women who wish to remain active during their pregnancies. The recommendations I heard when I was pregnant were to simply do what felt comfortable, safe, and good, based on my own intuition. Of course, this was welcome advice, but I liked knowing more of the full story.