On a chilly day last spring, a few dozen developers of children’s apps for phones and tablets gathered at an old beach resort in Monterey, California, to show off their games. One developer, a self-described “visionary for puzzles” who looked like a skateboarder-recently-turned-dad, displayed a jacked-up, interactive game called Puzzingo, intended for toddlers and inspired by his own son’s desire to build and smash. Two 30‑something women were eagerly seeking feedback for an app called Knock Knock Family, aimed at 1-to-4-year-olds. “We want to make sure it’s easy enough for babies to understand,” one explained.

The gathering was organized by Warren Buckleitner, a longtime reviewer of interactive children’s media who likes to bring together developers, researchers, and interest groups—and often plenty of kids, some still in diapers. It went by the Harry Potter–ish name Dust or Magic, and was held in a drafty old stone-and-wood hall barely a mile from the sea, the kind of place where Bathilda Bagshot might retire after packing up her wand. Buckleitner spent the breaks testing whether his own remote-control helicopter could reach the hall’s second story, while various children who had come with their parents looked up in awe and delight. But mostly they looked down, at the iPads and other tablets displayed around the hall like so many open boxes of candy. I walked around and talked with developers, and several paraphrased a famous saying of Maria Montessori’s, a quote imported to ennoble a touch-screen age when very young kids, who once could be counted on only to chew on a square of aluminum, are now engaging with it in increasingly sophisticated ways: “The hands are the instruments of man’s intelligence.”

What, really, would Maria Montessori have made of this scene? The 30 or so children here were not down at the shore poking their fingers in the sand or running them along mossy stones or digging for hermit crabs. Instead they were all inside, alone or in groups of two or three, their faces a few inches from a screen, their hands doing things Montessori surely did not imagine. A couple of 3-year-old girls were leaning against a pair of French doors, reading an interactive story called Ten Giggly Gorillas and fighting over which ape to tickle next. A boy in a nearby corner had turned his fingertip into a red marker to draw an ugly picture of his older brother. On an old oak table at the front of the room, a giant stuffed Angry Bird beckoned the children to come and test out tablets loaded with dozens of new apps. Some of the chairs had pillows strapped to them, since an 18-month-old might not otherwise be able to reach the table, though she’d know how to swipe once she did.

Not that long ago, there was only the television, which theoretically could be kept in the parents’ bedroom or locked behind a cabinet. Now there are smartphones and iPads, which wash up in the domestic clutter alongside keys and gum and stray hair ties. “Mom, everyone has technology but me!” my 4-year-old son sometimes wails. And why shouldn’t he feel entitled? In the same span of time it took him to learn how to say that sentence, thousands of kids’ apps have been developed—the majority aimed at preschoolers like him. To us (his parents, I mean), American childhood has undergone a somewhat alarming transformation in a very short time. But to him, it has always been possible to do so many things with the swipe of a finger, to have hundreds of games packed into a gadget the same size as Goodnight Moon.

In 2011, the American Academy of Pediatrics updated its policy on very young children and media. In 1999, the group had discouraged television viewing for children younger than 2, citing research on brain development that showed this age group’s critical need for “direct interactions with parents and other significant care givers.” The updated report began by acknowledging that things had changed significantly since then. In 2006, 90 percent of parents said that their children younger than 2 consumed some form of electronic media. Nonetheless, the group took largely the same approach it did in 1999, uniformly discouraging passive media use, on any type of screen, for these kids. (For older children, the academy noted, “high-quality programs” could have “educational benefits.”) The 2011 report mentioned “smart cell phone” and “new screen” technologies, but did not address interactive apps. Nor did it broach the possibility that has likely occurred to those 90 percent of American parents, queasy though they might be: that some good might come from those little swiping fingers.

I had come to the developers’ conference partly because I hoped that this particular set of parents, enthusiastic as they were about interactive media, might help me out of this conundrum, that they might offer some guiding principle for American parents who are clearly never going to meet the academy’s ideals, and at some level do not want to. Perhaps this group would be able to articulate some benefits of the new technology that the more cautious pediatricians weren’t ready to address. I nurtured this hope until about lunchtime, when the developers gathering in the dining hall ceased being visionaries and reverted to being ordinary parents, trying to settle their toddlers in high chairs and get them to eat something besides bread.

I fell into conversation with a woman who had helped develop Montessori Letter Sounds, an app that teaches preschoolers the Montessori methods of spelling. She was a former Montessori teacher and a mother of four. I myself have three children who are all fans of the touch screen. What games did her kids like to play? I asked, hoping for suggestions I could take home.

“They don’t play all that much.”

Really? Why not?

“Because I don’t allow it. We have a rule of no screen time during the week,” unless it’s clearly educational.

No screen time? None at all? That seems at the outer edge of restrictive, even by the standards of my overcontrolling parenting set.

“On the weekends, they can play. I give them a limit of half an hour and then stop. Enough. It can be too addictive, too stimulating for the brain.”

Her answer so surprised me that I decided to ask some of the other developers who were also parents what their domestic ground rules for screen time were. One said only on airplanes and long car rides. Another said Wednesdays and weekends, for half an hour. The most permissive said half an hour a day, which was about my rule at home. At one point I sat with one of the biggest developers of e-book apps for kids, and his family. The toddler was starting to fuss in her high chair, so the mom did what many of us have done at that moment—stuck an iPad in front of her and played a short movie so everyone else could enjoy their lunch. When she saw me watching, she gave me the universal tense look of mothers who feel they are being judged. “At home,” she assured me, “I only let her watch movies in Spanish.”

By their pinched reactions, these parents illuminated for me the neurosis of our age: as technology becomes ubiquitous in our lives, American parents are becoming more, not less, wary of what it might be doing to their children. Technological competence and sophistication have not, for parents, translated into comfort and ease. They have merely created yet another sphere that parents feel they have to navigate in exactly the right way. On the one hand, parents want their children to swim expertly in the digital stream that they will have to navigate all their lives; on the other hand, they fear that too much digital media, too early, will sink them. Parents end up treating tablets like precision surgical instruments, gadgets that might perform miracles for their child’s IQ and help him win some nifty robotics competition—but only if they are used just so. Otherwise, their child could end up one of those sad, pale creatures who can’t make eye contact and has an avatar for a girlfriend.

Norman Rockwell never painted Boy Swiping Finger on Screen, and our own vision of a perfect childhood has never adjusted to accommodate that now-common tableau. Add to that our modern fear that every parenting decision may have lasting consequences—that every minute of enrichment lost or mindless entertainment indulged will add up to some permanent handicap in the future—and you have deep guilt and confusion. To date, no body of research has definitively proved that the iPad will make your preschooler smarter or teach her to speak Chinese, or alternatively that it will rust her neural circuitry—the device has been out for only three years, not much more than the time it takes some academics to find funding and gather research subjects. So what’s a parent to do?

In 2001, the education and technology writer Marc Prensky popularized the term digital natives to describe the first generations of children growing up fluent in the language of computers, video games, and other technologies. (The rest of us are digital immigrants, struggling to understand.) This term took on a whole new significance in April 2010, when the iPad was released. iPhones had already been tempting young children, but the screens were a little small for pudgy toddler hands to navigate with ease and accuracy. Plus, parents tended to be more possessive of their phones, hiding them in pockets or purses. The iPad was big and bright, and a case could be made that it belonged to the family. Researchers who study children’s media immediately recognized it as a game changer.

Previously, young children had to be shown by their parents how to use a mouse or a remote, and the connection between what they were doing with their hand and what was happening on the screen took some time to grasp. But with the iPad, the connection is obvious, even to toddlers. Touch technology follows the same logic as shaking a rattle or knocking down a pile of blocks: the child swipes, and something immediately happens. A “rattle on steroids” is what Buckleitner calls it. “All of a sudden a finger could move a bus or smush an insect or turn into a big wet gloopy paintbrush.” To a toddler, this is less magic than intuition. At a very young age, children become capable of what the psychologist Jerome Bruner called “enactive representation”; they classify objects in the world not by using words or symbols but by making gestures—say, holding an imaginary cup to their lips to signify that they want a drink. Their hands are a natural extension of their thoughts.

I have two older children who fit the early idea of a digital native—they learned how to use a mouse or a keyboard with some help from their parents and were well into school before they felt comfortable with a device in their lap. (Now, of course, at ages 9 and 12, they can create a Web site in the time it takes me to slice an onion.) My youngest child is a whole different story. He was not yet 2 when the iPad was released. As soon as he got his hands on it, he located the Talking Baby Hippo app that one of my older children had downloaded. The little purple hippo repeats whatever you say in his own squeaky voice, and responds to other cues. My son said his name (“Giddy!”); Baby Hippo repeated it back. Gideon poked Baby Hippo; Baby Hippo laughed. Over and over, it was funny every time. Pretty soon he discovered other apps. Old MacDonald, by Duck Duck Moose, was a favorite. At first he would get frustrated trying to zoom between screens, or not knowing what to do when a message popped up. But after about two weeks, he figured all that out. I must admit, it was eerie to see a child still in diapers so competent and intent, as if he were forecasting his own adulthood. Technically I was the owner of the iPad, but in some ontological way it felt much more his than mine.

Without seeming to think much about it or resolve how they felt, parents began giving their devices over to their children to mollify, pacify, or otherwise entertain them. By 2010, two-thirds of children ages 4 to 7 had used an iPhone, according to the Joan Ganz Cooney Center, which studies children’s media. The vast majority of those phones had been lent by a family member; the center’s researchers labeled this the “pass-back effect,” a name that captures well the reluctant zone between denying and giving.

The market immediately picked up on the pass-back effect, and the opportunities it presented. In 2008, when Apple opened up its App Store, the games started arriving at the rate of dozens a day, thousands a year. For the first 23 years of his career, Buckleitner had tried to be comprehensive and cover every children’s game in his publication, Children’s Technology Review. Now, by Buckleitner’s loose count, more than 40,000 kids’ games are available on iTunes, plus thousands more on Google Play. In the iTunes “Education” category, the majority of the top-selling apps target preschool or elementary-age children. By age 3, Gideon would go to preschool and tune in to what was cool in toddler world, then come home, locate the iPad, drop it in my lap, and ask for certain games by their approximate description: “Tea? Spill?” (That’s Toca Tea Party.)

As these delights and diversions for young children have proliferated, the pass-back has become more uncomfortable, even unsustainable, for many parents:

He’d gone to this state where you’d call his name and he wouldn’t respond to it, or you could snap your fingers in front of his face …

But, you know, we ended up actually taking the iPad away for—from him largely because, you know, this example, this thing we were talking about, about zoning out. Now, he would do that, and my wife and I would stare at him and think, Oh my God, his brain is going to turn to mush and come oozing out of his ears. And it concerned us a bit.

This is Ben Worthen, a Wall Street Journal reporter, explaining recently to NPR’s Diane Rehm why he took the iPad away from his son, even though it was the only thing that could hold the boy’s attention for long periods, and it seemed to be sparking an interest in numbers and letters. Most parents can sympathize with the disturbing sight of a toddler, who five minutes earlier had been jumping off the couch, now subdued and staring at a screen, seemingly hypnotized. In the somewhat alarmist Endangered Minds: Why Children Don’t Think—and What We Can Do About It, author Jane Healy even gives the phenomenon a name, the “zombie” effect, and raises the possibility that television might “suppress mental activity by putting viewers in a trance.”

Ever since viewing screens entered the home, many observers have worried that they put our brains into a stupor. An early strain of research claimed that when we watch television, our brains mostly exhibit slow alpha waves—indicating a low level of arousal, similar to when we are daydreaming. These findings have been largely discarded by the scientific community, but the myth persists that watching television is the mental equivalent of, as one Web site put it, “staring at a blank wall.” These common metaphors are misleading, argues Heather Kirkorian, who studies media and attention at the University of Wisconsin at Madison. A more accurate point of comparison for a TV viewer’s physiological state would be that of someone deep in a book, says Kirkorian, because during both activities we are still, undistracted, and mentally active.

Because interactive media are so new, most of the existing research looks at children and television. By now, “there is universal agreement that by at least age 2 and a half, children are very cognitively active when they are watching TV,” says Dan Anderson, a children’s-media expert at the University of Massachusetts at Amherst. In the 1980s, Anderson put the zombie theory to the test, by subjecting roughly 100 children to a form of TV hell. He showed a group of children ages 2 to 5 a scrambled version of Sesame Street: he pieced together scenes in random order, and had the characters speak backwards or in Greek. Then he spliced the doctored segments with unedited ones and noted how well the kids paid attention. The children looked away much more frequently during the scrambled parts of the show, and some complained that the TV was broken. Anderson later repeated the experiment with babies ages 6 months to 24 months, using Teletubbies. Once again he had the characters speak backwards and chopped the action sequences into a nonsensical order—showing, say, one of the Teletubbies catching a ball and then, after that, another one throwing it. The 6- and 12-month-olds seemed unable to tell the difference, but by 18 months the babies started looking away, and by 24 months they were turned off by programming that did not make sense.

Anderson’s series of experiments provided the first clue that even very young children can be discriminating viewers—that they are not in fact brain-dead, but rather work hard to make sense of what they see and turn it into a coherent narrative that reflects what they already know of the world. Now, 30 years later, we understand that children “can make a lot of inferences and process the information,” says Anderson. “And they can learn a lot, both positive and negative.” Researchers never abandoned the idea that parental interaction is critical for the development of very young children. But they started to see TV watching in shades of gray. If a child never interacts with adults and always watches TV, well, that is a problem. But if a child is watching TV instead of, say, playing with toys, then that is a tougher comparison, because TV, in the right circumstances, has something to offer.

How do small children actually experience electronic media, and what does that experience do to their development? Since the ’80s, researchers have spent more and more time consulting with television programmers to study and shape TV content. By tracking children’s reactions, they have identified certain rules that promote engagement: stories have to be linear and easy to follow, cuts and time lapses have to be used very sparingly, and language has to be pared down and repeated. A perfect example of a well-engineered show is Nick Jr.’s Blue’s Clues, which aired from 1996 to 2006. Each episode features Steve (or Joe, in later seasons) and Blue, a cartoon puppy, solving a mystery. Steve talks slowly and simply; he repeats words and then writes them down in his handy-dandy notebook. There are almost no cuts or unexplained gaps in time. The great innovation of Blue’s Clues is something called the “pause.” Steve asks a question and then pauses for about five seconds to let the viewer shout out an answer. Small children feel much more engaged and invested when they think they have a role to play, when they believe they are actually helping Steve and Blue piece together the clues. A longitudinal study of children older than 2 and a half showed that the ones who watched Blue’s Clues made measurably larger gains in flexible thinking and problem solving over two years of watching the show.

For toddlers, however, the situation seems slightly different. Children younger than 2 and a half exhibit what researchers call a “video deficit.” This means that they have a much easier time processing information delivered by a real person than by a person on videotape. In one series of studies, conducted by Georgene Troseth, a developmental psychologist at Vanderbilt University, children watched on a live video monitor as a person in the next room hid a stuffed dog. Others watched the exact same scene unfold directly, through a window between the rooms. The children were then unleashed into the room to find the toy. Almost all the kids who viewed the hiding through the window found the toy, but the ones who watched on the monitor had a much harder time.

A natural assumption is that toddlers are not yet cognitively equipped to handle symbolic representation. (I remember my older son, when he was 3, asking me if he could go into the TV and pet Blue.) But there is another way to interpret this particular phase of development. Toddlers are skilled at seeking out what researchers call “socially relevant information.” They tune in to people and situations that help them make a coherent narrative of the world around them. In the real world, fresh grass smells and popcorn tumbles and grown-ups smile at you or say something back when you ask them a question. On TV, nothing like that happens. A TV is static and lacks one of the most important things to toddlers, which is a “two-way exchange of information,” argues Troseth.

A few years after the original puppy-hiding experiment, in 2004, Troseth reran it, only she changed a few things. She turned the puppy into a stuffed Piglet (from the Winnie the Pooh stories). More important, she made the video demonstration explicitly interactive. Toddlers and their parents came into a room where they could see a person—the researcher—on a monitor. The researcher was in the room where Piglet would be hidden, and could in turn see the children on a monitor. Before hiding Piglet, the researcher effectively engaged the children in a form of media training. She asked them questions about their siblings, pets, and toys. She played Simon Says with them and invited them to sing popular songs with her. She told them to look for a sticker under a chair in their room. She gave them the distinct impression that she—this person on the screen—could interact with them, and that what she had to say was relevant to the world they lived in. Then the researcher told the children she was going to hide the toy and, after she did so, came back on the screen to instruct them where to find it. That exchange was enough to nearly erase the video deficit. The majority of the toddlers who participated in the live video demonstration found the toy.

Blue’s Clues was on the right track. The pause could trick children into thinking that Steve was responsive to them. But the holy grail would be creating a scenario in which the guy on the screen did actually respond—in which the toddler did something and the character reliably jumped or laughed or started to dance or talk back.

Like, for example, when Gideon said “Giddy” and Talking Baby Hippo said “Giddy” back, without fail, every time. That kind of contingent interaction (I do something, you respond) is what captivates a toddler and can be a significant source of learning for even very young children—learning that researchers hope the children can carry into the real world. It’s not exactly the ideal social partner the American Academy of Pediatrics craves. It’s certainly not a parent or caregiver. But it’s as good an approximation as we’ve ever come up with on a screen, and it’s why children’s-media researchers are so excited about the iPad’s potential.

A couple of researchers from the Children’s Media Center at Georgetown University show up at my house, carrying an iPad wrapped in a bright-orange case, the better to tempt Gideon with. They are here at the behest of Sandra Calvert, the center’s director, to conduct one of several ongoing studies on toddlers and iPads. Gideon is one of their research subjects. This study is designed to test whether a child is more likely to learn when the information he hears comes from a beloved and trusted source. The researchers put the iPad on a kitchen chair; Gideon immediately notices it, turns it on, and looks for his favorite app. They point him to the one they have invented for the experiment, and he dutifully opens it with his finger.

Onto the screen comes a floppy kangaroo-like puppet, introduced as “DoDo.” He is a nobody in the child universe, the puppet equivalent of some random guy on late-night public-access TV. Gideon barely acknowledges him. Then the narrator introduces Elmo. “Hi,” says Elmo, waving. Gideon says hi and waves back.

An image pops up on the screen, and the narrator asks, “What is this?” (It’s a banana.)

“This is a banana,” says DoDo.

“This is a grape,” says Elmo.

I smile with the inner glow of a mother who knows her child is about to impress a couple of strangers. My little darling knows what a banana is. Of course he does! Gideon presses on Elmo. (The narrator says, “No, not Elmo. Try again.”) As far as I know, he’s never watched Sesame Street, never loved an Elmo doll or even coveted one at the toy store. Nonetheless, he is tuned in to the signals of toddler world and, apparently, has somehow figured out that Elmo is a supreme moral authority. His relationship with Elmo is more important to him than what he knows to be the truth. On and on the game goes, and sometimes Gideon picks Elmo even when Elmo says an orange is a pear. Later, when the characters both give made-up names for exotic fruits that few children would know by their real name, Gideon keeps doubling down on Elmo, even though DoDo has been more reliable.

As it happens, Gideon was not in the majority. This summer, Calvert and her team will release the results of their study, which show that most of the time, children around age 32 months go with the character who is telling the truth, whether it’s Elmo or DoDo—and quickly come to trust the one who’s been more accurate when the children don’t already know the answer. But Calvert says this merely suggests that toddlers have become even more savvy users of technology than we had imagined. She had been working off attachment theory, and thought toddlers might value an emotional bond over the correct answer. But her guess is that something about tapping the screen, about getting feedback and being corrected in real time, is itself instructive, and enables the toddlers to absorb information accurately, regardless of its source.

Calvert takes a balanced view of technology: she works in an office surrounded by hardcover books, and she sometimes edits her drafts with pen and paper. But she is very interested in how the iPad can reach children even before they’re old enough to access these traditional media.

“People say we are experimenting with our children,” she told me. “But from my perspective, it’s already happened, and there’s no way to turn it back. Children’s lives are filled with media at younger and younger ages, and we need to take advantage of what these technologies have to offer. I’m not a Pollyanna. I’m pretty much a realist. I look at what kids are doing and try to figure out how to make the best of it.”

Despite the participation of Elmo, Calvert’s research is designed to answer a series of very responsible, high-minded questions: Can toddlers learn from iPads? Can they transfer what they learn to the real world? What effect does interactivity have on learning? What role do familiar characters play in children’s learning from iPads? All worthy questions, and important, but also all considered entirely from an adult’s point of view. The reason many kids’ apps are grouped under “Education” in the iTunes store, I suspect, is to assuage parents’ guilt (though I also suspect that in the long run, all those “educational” apps merely perpetuate our neurotic relationship with technology, by reinforcing the idea that they must be sorted vigilantly into “good” or “bad”). If small children had more input, many “Education” apps would logically fall under a category called “Kids” or “Kids’ Games.” And many more of the games would probably look something like the apps designed by a Swedish game studio named Toca Boca.

The founders, Emil Ovemar and Björn Jeffery, work for Bonnier, a Swedish media company. Ovemar, an interactive-design expert, describes himself as someone who never grew up. He is still interested in superheroes, Legos, and animated movies, and says he would rather play stuck-on-an-island with his two kids and their cousins than talk to almost any adult. Jeffery is the company’s strategist and front man; I first met him at the conference in California, where he was handing out little temporary tattoos of the Toca Boca logo, a mouth open and grinning, showing off rainbow-colored teeth.

In late 2010, Ovemar and Jeffery began working on a new digital project for Bonnier, and they came up with the idea of entering the app market for kids. Ovemar began by looking into the apps available at the time. Most of them were disappointingly “instructive,” he found—“drag the butterfly into the net, that sort of thing. They were missing creativity and imagination.” Hunting for inspiration, he came upon Frank and Theresa Caplan’s 1973 book The Power of Play, a quote from which he later e-mailed to me:

What is it that often puts the B student ahead of the A student in adult life, especially in business and creative professions? Certainly it is more than verbal skill. To create, one must have a sense of adventure and playfulness. One needs toughness to experiment and hazard the risk of failure. One has to be strong enough to start all over again if need be and alert enough to learn from whatever happens. One needs a strong ego to be propelled forward in one’s drive toward an untried goal. Above all, one has to possess the ability to play!

Ovemar and Jeffery hunted down toy catalogs from as early as the 1950s, before the age of exploding brand tie-ins. They made a list of the blockbusters over the decades—the first Tonka trucks, the Frisbee, the Hula-Hoop, the Rubik’s Cube. Then they made a list of what these toys had in common: None really involved winning or losing against an opponent. None were part of an effort to create a separate child world that adults were excluded from, and probably hostile toward; they were designed more for family fun. Also, they were not really meant to teach you something specific—they existed mostly in the service of having fun.

In 2011 the two developers launched Toca Tea Party. The game is not all that different from a real tea party. The iPad functions almost like a tea table without legs, and the kids have to invent the rest by, for example, seating their own plushies or dolls, one on each side, and then setting the theater in motion. First, choose one of three tablecloths. Then choose plates, cups, and treats. The treats are not what your mom would feed you. They are chocolate cakes, frosted doughnuts, cookies. It’s very easy to spill the tea when you pour or take a sip, a feature added based on kids’ suggestions during a test play (kids love spills, but spilling is something you can’t do all that often at a real tea party, or you’ll get yelled at). At the end, a sink filled with soapy suds appears, and you wash the dishes, which is also part of the fun, and then start again. That’s it. The game is either very boring or terrifically exciting, depending on what you make of it. Ovemar and Jeffery knew that some parents wouldn’t get it, but for kids, the game would be fun every time, because it’s dependent entirely on imagination. Maybe today the stuffed bear will be naughty and do the spilling, while naked Barbie will pile her plate high with sweets. The child can take on the voice of a character or a scolding parent, or both. There’s no winning, and there’s no reward. Like a game of stuck-on-an-island, it can go on for five minutes or forever.

Soon after the release of Toca Tea Party, the pair introduced Toca Hair Salon, which is still to my mind the most fun game out there. The salon is no Fifth Avenue spa. It’s a rundown-looking place with cracks in the wall. The aim is not beauty but subversion. Cutting off hair, like spilling, is on the list of things kids are not supposed to do. You choose one of the odd-looking people or creatures and have your way with its hair, trimming it or dyeing it or growing it out. The blow-dryer is genius; it achieves the same effect as Tadao Cern’s Blow Job portraits, which depict people’s faces getting wildly distorted by high winds. In August 2011, Toca Boca gave away Hair Salon for free for nearly two weeks. It was downloaded more than 1 million times in the first week, and the company took off. Today, many Toca Boca games show up on lists of the most popular education apps.

Are they educational? “That’s the perspective of the parents,” Jeffery told me at the back of the grand hall in Monterey. “Is running around on the lawn educational? Every part of a child’s life can’t be held up to that standard.” As we talked, two girls were playing Toca Tea Party on the floor nearby. One had her stuffed dragon at a plate, and he was being especially naughty, grabbing all the chocolate cake and spilling everything. Her friend had taken a little Lego construction man and made him the good guy who ate neatly and helped do the dishes. Should they have been outside at the beach? Maybe, but the day would be long, and they could go outside later.

The more I talked with the developers, the more elusive and unhelpful the “Education” category seemed. (Is Where the Wild Things Are educational? Would you make your child read a textbook at bedtime? Do you watch only educational television? And why don’t children deserve high-quality fun?) Buckleitner calls his conference Dust or Magic to teach app developers a more subtle concept than pedagogy. By magic, Buckleitner has in mind an app that makes children’s fingers move and their eyes light up. By dust, he means something that was obviously (and ploddingly) designed by an adult. Some educational apps, I wouldn’t wish on the naughtiest toddler. Take, for example, Counting With the Very Hungry Caterpillar, which turns a perfectly cute book into a tedious app that asks you to “please eat 1 piece of chocolate cake” so you can count to one.

Before the conference, Buckleitner had turned me on to Noodle Words, an app created by the California designer and children’s-book writer Mark Schlichting. The app is explicitly educational. It teaches you about active verbs—spin, sparkle, stretch. It also happens to be fabulous. You tap a box, and a verb pops up and gets acted out by two insect friends who have the slapstick sensibility of the Three Stooges. If the word is shake, they shake until their eyeballs rattle. I tracked down Schlichting at the conference, and he turned out to be a little like Maurice Sendak—like many good children’s writers, that is: ruled by id and not quite tamed into adulthood. The app, he told me, was inspired by a dream he’d had in which he saw the word and floating in the air and sticking to other words like a magnet. He woke up and thought, What if words were toys?

During the course of reporting this story, I downloaded dozens of apps and let my children test them out. They didn’t much care whether the apps were marketed as educational or not, as long as they were fun. Without my prompting, Gideon fixated on a game called LetterSchool, which teaches you how to write letters more effectively and with more imagination than any penmanship textbooks I’ve ever encountered. He loves the Toca Boca games, the Duck Duck Moose games, and random games like Bugs and Buttons. My older kids love The Numberlys, a dark fantasy creation of illustrators who have worked with Pixar that happens to teach the alphabet. And all my kids, including Gideon, play Cut the Rope a lot, which is not exclusively marketed as a kids’ game. I could convince myself that the game is teaching them certain principles of physics—it’s not easy to know the exact right place to slice the rope. But do I really need that extra convincing? I like playing the game; why shouldn’t they?

Every new medium has, within a short time of its introduction, been condemned as a threat to young people. Pulp novels would destroy their morals, TV would wreck their eyesight, video games would make them violent. Each one has been accused of seducing kids into wasting time that would otherwise be spent learning about the presidents, playing with friends, or digging their toes into the sand. In our generation, the worries focus on kids’ brainpower: unused synapses withering as children stare at the screen. People fret about television and ADHD, although that concern is largely based on a single study that has been roundly criticized and doesn’t jibe with anything we know about the disorder.

There are legitimate broader questions about how American children spend their time, but all you can do is keep them in mind as you decide what rules to set down for your own child. The statement from the American Academy of Pediatrics assumes a zero-sum game: an hour spent watching TV is an hour not spent with a parent. But parents know this is not how life works. There are enough hours in a day to go to school, play a game, and spend time with a parent, and generally these are different hours. Some people can get so drawn into screens that they want to do nothing else but play games. Experts say excessive video gaming is a real problem, but they debate whether it can be called an addiction and, if so, whether the term can be used for anything but a small portion of the population. If your child shows signs of having an addictive personality, you will probably know it. One of my kids is like that; I set stricter limits for him than for the others, and he seems to understand why.

In her excellent book Screen Time, the journalist Lisa Guernsey lays out a useful framework—what she calls the three C’s—for thinking about media consumption: content, context, and your child. She poses a series of questions—Do you think the content is appropriate? Is screen time a “relatively small part of your child’s interaction with you and the real world?”—and suggests tailoring your rules to the answers, child by child. One of the most interesting points Guernsey makes is about the importance of parents’ attitudes toward media. If they treat screen time like junk food, or “like a magazine at the hair salon”—good for passing the time in a frivolous way but nothing more—then the child will fully absorb that attitude, and the neurosis will be passed to the next generation.

“The war is over. The natives won.” So says Marc Prensky, the education and technology writer, who has the most extreme parenting philosophy of anyone I encountered in my reporting. Prensky’s 7-year-old son has access to books, TV, Legos, Wii—and Prensky treats them all the same. He does not limit access to any of them. Sometimes his son plays with a new app for hours, but then, Prensky told me, he gets tired of it. He lets his son watch TV even when he personally thinks it’s a “stupid waste.” SpongeBob SquarePants, for example, seems like an annoying, pointless show, but Prensky says he used the relationship between SpongeBob and Patrick, his starfish sidekick, to teach his son a lesson about friendship. “We live in a screen age, and to say to a kid, ‘I’d love for you to look at a book but I hate it when you look at the screen’ is just bizarre. It reflects our own prejudices and comfort zone. It’s nothing but fear of change, of being left out.”

Prensky’s worldview really stuck with me. Are books always, in every situation, inherently better than screens? My daughter, after all, often uses books as a way to avoid social interaction, while my son uses the Wii to bond with friends. I have to admit, I had the exact same experience with SpongeBob. For a long time I couldn’t stand the show; then one day I got past how loud and frenetic it was, paid more attention to the story line, and realized that I too could use it to talk with my son about friendship. After I first interviewed Prensky, I decided to conduct an experiment. For six months, I would let my toddler live by the Prensky rules. I would put the iPad in the toy basket, along with the remote-control car and the Legos. Whenever he wanted to play with it, I would let him.

Gideon tested me the very first day. He saw the iPad in his space and asked if he could play. It was 8 a.m. and we had to get ready for school. I said yes. For 45 minutes he sat on a chair and played as I got him dressed, got his backpack ready, and failed to feed him breakfast. This was extremely annoying and obviously untenable. The week went on like this—Gideon grabbing the iPad for two-hour stretches, in the morning, after school, at bedtime. Then, after about 10 days, the iPad fell out of his rotation, just like every other toy does. He dropped it under the bed and never looked for it. It was completely forgotten for about six weeks.

Now he picks it up every once in a while, but not all that often. He has just started learning letters in school, so he’s back to playing LetterSchool. A few weeks ago his older brother played with him, helping him get all the way through the uppercase and then lowercase letters. It did not seem beyond the range of possibility that if Norman Rockwell were alive, he would paint the two curly-haired boys bent over the screen, one small finger guiding a smaller one across, down, and across again to make, in their triumphant finale, the small z.
