I first became acquainted with the phenomenon during my year-and-a-half stint in a child development lab: hearing parents would sign while speaking to their hearing children, and their children would sign back. I originally attributed the curious habit to the presence of deaf family members in the home. But as I have since come to understand, these people were engaging in baby sign.

Infants can produce recognizable gestures earlier than they can produce recognizable speech. And indeed, from a young age, gestures are a critical component of the communication arsenal, complete with their own developmental trajectory, one that begins with pointing and ends with references to things unseen. So why not put that natural affinity for gesture to good use?

Such is the logic behind courses in baby sign, where parents learn gestures—sometimes borrowed from sign language, sometimes not—for simple words like milk and more. The gestures are then taught to infants, who, in lieu of throwing tantrums to acquire more milk, can go about the process with some dignity, making for happier babies, happier parents, and stronger baby-parent bonds.

Baby sign also, it’s been argued, makes for smarter babies, or at least more linguistically advanced ones. There’s something about gesture, some believe, that prepares children for language learning. One telling piece of evidence: bilingual children learning both a spoken language and a sign language have been shown to hit language milestones in their spoken language earlier than monolingual children learning only that spoken language.

This is all the more striking because bilingual babies have it tough. They have to learn two labels for milk, two labels for more, and two ways of structuring milk and more before they can master the art of more milk. Whatever advantages these children may experience later in life—and they may well be legion—bilingual babies often fall ever-so-slightly behind monolingual babies when it comes to reaching milestones for a given language, in part because the bilinguals, on average, hear less of it.

Given that bilinguals who both sign and speak seem to have an advantage, however, many parents consider a course in baby sign a prudent investment. But is it? In the latest issue of Child Development, researchers Elizabeth Kirk, Neil Howlett, Karen Pine, and Ben Fletcher report results from the largest, best-controlled study of baby sign yet.

The researchers recruited 80 mothers with eight-month-old infants, whom they promptly (and randomly) split into four groups of 20. The first group was instructed to use a set of 20 words as often as possible, accompanying each word with its equivalent sign in British Sign Language. The second group was instructed similarly, only instead of borrowing from British Sign Language, they paired words with symbolic gestures—things like extending your arms (for airplane) or pulling on an imaginary shoe (for shoe). In the third group, parents were simply asked to speak the words often. The final group served as a control and was not given the word set.

Periodically, over the course of the next year, researchers visited the families. They discovered that children in the two gesture conditions had indeed learned several of the gestures. But importantly, there were no group differences in how often children produced the related spoken words. Indeed, there were no group differences in any of the measures of language development the researchers examined: whatever satisfaction baby sign brought mother and child in the short term, it appeared to do absolutely nothing for the child’s long-term language development. (The researchers did, however, note upticks in a few measures of maternal responsiveness, suggesting a modest benefit to the mother, if not to her child.)

Why the null result? The researchers admit that their sample—high socioeconomic status (SES) infants—may have already been “beyond the threshold of improvement.” In other words, children who are read to, engaged with, simply spoken to often and at length just don’t need whatever benefits gesture training might provide.

The researchers did not set out to recruit high-SES families. But as they point out, the highly educated, motivated parents likely to enroll in academic research studies are the same parents likely to enroll in baby sign classes. Alas, for the families who might actually benefit from baby sign, the classes likely aren’t on their radar.

Years after my first and only art history class, I am insufferable at museums. “That’s definitely a Matisse,” I say. “You can tell because of the brushwork and the vibrant use of color.” Sometimes it is not a Matisse. Sometimes it is a Manet, and I am quiet for a while. But usually it is a Matisse, and I am smug as a picnic.

It is unsettling to learn, then, that for all of my carefully won art appreciation, I am in danger of being surpassed by an insect. In a recent study led by the University of Queensland’s Wen Wu, honeybees—whose brains, mind you, are the size of grass seeds—were shown Picassos and Monets paired side by side. Below the prints were two small chambers, one containing sugar water and the other nothing at all.

Which to enter? Bees couldn’t see or smell whether a given chamber held the succulent treat until they’d already flown inside it. But they could let the masterpieces guide them: for some bees, the reward was always under the Picasso, while for the rest it was under the Monet. Over the course of many trials, the bees learned to fly straight for the correct chamber. Indeed, they even performed slightly better than chance when faced with pairs of paintings they’d never seen before. The bees had learned to discriminate, however modestly, between the two artists’ styles.

To be sure, humans still have the edge. Last year a team of researchers at the University of British Columbia led by Liane Gabora found that art students were perfectly capable of identifying which well-known artist was behind which obscure painting. Creative writing students were similarly excellent at spotting little-read passages by Hemingway or Dickinson—a skill I can only assume no honeybee has yet demonstrated.

Even more impressively, though, the students could recognize as-yet-unseen samples of each other’s work, including work in entirely different mediums. Creative writers could identify their fellow writers’ paintings and sketches; painters had a pretty good idea who’d brought which poem or clay pot.

It’s clear what the bees were doing: picking up and categorizing complex visual patterns in the pairs of images. But recognizing differences across mediums is altogether different. Whether we’re writing sonnets or building sculptures, Gabora and her colleagues argue, we’re doing so with the same mind: one that structures information in the same way, has been shaped by the same experiences, and yearns to express the same ideas. It should come as no surprise that our techniques and preoccupations in one domain should “out” us in another.

But still I wonder: Just what about these techniques and preoccupations did the trick? The researchers did their best to keep subject matter from ruling the day by instructing, for instance, artists who happened to be surfers not to bring in art that depicted surfing. But what of less obvious subject matter—violent relationships, or Western landscapes? And what of the tics and obsessions that seep into our work unawares? A correlational study like this one, though a fine starting point, will not answer these questions.

Perhaps my biggest question has to do with people who don’t identify as artists, and haven’t settled—or at least would claim so—on a personal style. Are their creations also a reflection of their worldview? It seems likely that, at least to some extent, bad art is all alike, while only good art is good in its own way.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Oct. 24, 2013.

Nobody wonders, in the hurricane’s aftermath, whether her home will be in better shape than it was before: stacks of books straightened; skewed pictures righted; clothing back up on its hangers; dust bunnies shepherded into tidy, sweep-able piles; the contents of the neighbor’s dilapidated barn reconfigured into a lovely second bathroom. No, with nature or just the passing of time, energy undoes, disperses, homogenizes, until there’s nothing left to do or undo because little quantities of everything are everywhere. This is entropy in its favorite state, one of absolutely perfect disarray.
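The physicist’s shorthand for all this is Boltzmann’s famous formula:

\[
S = k_B \ln W
\]

where \(W\) counts the number of microscopic arrangements consistent with what we see. A ransacked room corresponds to vastly more arrangements than a tidy one, which is why disarray, left alone, always wins.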

Our modern understanding of entropy is less than 200 years old. (And by our I mean physicists who are not me.) But an intuitive understanding of the phenomenon—in Henry Adams’s phrase, “chaos [is] the rule of nature; order [is] the dream of man”—comes part and parcel with navigating this world. When does this understanding emerge? About as early, it would seem, as psychologists know how to test.

In 1981, researchers Thomas Shultz and Marilyn Coddington had children draw pictures of what a small box of marbles arranged by color might look like after the box was shaken. Nine-year-olds accurately predicted that the arrangement of marbles would become more random with each shake, but a group of six-year-olds showed no such understanding. (Nor, according to the researchers, did the younger children’s facial expressions appear sufficiently “puzzled” if the marbles remained neatly in place.) Huge leaps in obtaining an intuitive understanding of entropy occur between the ages of six and nine, Shultz and Coddington concluded.

Then two decades later, William Friedman revisited the question. Instead of asking children to predict the outcome of an event, he asked them to identify, using simple yes-no responses, whether a given outcome was possible. In one study, for instance, children were shown “before” and “after” pictures of scenes. One of the pictures might show table settings arranged neatly for a picnic, while the other showed the plates and silverware strewn haphazardly. Children were asked whether an act of nature could have turned “before” into “after”: That looks different, doesn’t it? Somebody said that maybe the wind did it. Do you think the wind can make this happen? Friedman found that even four-year-olds were likelier to say that an act of nature could disrupt a neatly arranged scene than to agree that a gust of wind could clean up and organize an untidy one.

In recent years, new studies have shown that even infants possess an intuitive understanding of entropy. Take this one, published earlier this year by researchers Lili Ma and Fei Xu, which uses an even simpler task: watch.

Under the infants’ attentive gaze, a group of red and yellow balls was arranged—either randomly or in a regular pattern. The infants were then allowed to see who, exactly, was responsible for the arrangement. Sometimes, they saw a human hand; other times a claw. At just nine months old, the infants looked longer at the claw than the human hand, but only if the balls appeared in a regular pattern. In other words, the babies were intrigued when a claw—but not a human—was responsible for the nifty patterns that were highly unlikely to occur by chance.

Humans are agents, these infants understood, capable of controlling their environments. Of course they’d want the balls just so. But the claw contraption threw them for a loop—at least until they understood how it worked. Babies weren’t surprised in the least when the curtain was pulled back to reveal a claw so long as, back in the waiting room, they’d witnessed the claw being manipulated by a human.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Oct. 17, 2013.

A mouse in her room woke Miss Doud,
Who was frightened and screamed very loud.
Then a happy thought hit her:
To scare off the critter
She sat up in bed and just … purred?

Ah yes, the limerick, a poetic form associated not with grand literary tradition so much as bawdy old men from distinctive-sounding cities. But silly as these ditties are, we’ve come to expect a great deal from them—the strict AABBA rhyme scheme, the juxtaposition of short and long lines, the heavy dollop of head-bopping anapests (da da DA da da DA da da DA)—and when a poem doesn’t come through, we notice.

We don’t need psychology to tell us this, of course. But a new paper from one of my favorite labs proposes that our responses to poems that defy our formal expectations may also have an emotional component.

University of Glasgow’s Christoph Scheepers and his colleagues prerecorded a few dozen (remarkably clean) limericks to play to participants. The catch is that each poem was recorded with five different endings. One was the original or baseline ending (She sat up in bed and just meowed), while the other four endings violated, in turn: the poem’s expected rhyme scheme (She sat up in bed and just purred); the poem’s expected rhythm (She sat up in bed and loudly meowed); the syntactic rules of English (She sat up in bed to just meowed); and the poem’s coherence (She sat up in bed and just ploughed).

In one study, participants then rated the poems on a scale from “highly anomalous” to “perfectly ok.” Baseline versions were deemed reliably more “ok” than any of the others, suggesting that participants recognized and were affected by all of the violations. But in another study, researchers measured participants’ pupils as they listened to the poems. Here, they found that the rhyme violations—and only the rhyme violations—caused the pupils to dilate.

Now, as behavioral measures go, pupil dilation is pretty new, and nobody can say with any confidence how it should be interpreted. But an earlier study pegged it to mental effort: the faster the mind spins, the more the pupils react. At first the researchers reasoned that rhyme violations in limericks simply elicit more surprise or confusion (and thus require more mental work to overcome) than other sorts of violations. But no: a closer look at responses to individual poems revealed that even those rhyme violations that received relatively “ok” ratings still caused pronounced dilation, while other types of “highly anomalous” violations did not. This led the researchers to their preferred interpretation: above and beyond evoking surprise, rhyme violations in limericks pack a singularly emotional wallop.

Really—emotions are involved? Limericks are not exactly the stuff of heart-wrenching drama. But then again, more so than in other poetic forms, exact rhymes carry limericks. They give the form its momentum and its punch line; they’re just about all the form has going for it. Perhaps it really is a betrayal of sorts should the final line not deliver.

Ashley Anna McHugh, a friend and accomplished formalist poet, found the idea that we might respond viscerally to the disruption of a rhyme scheme highly plausible. “When I first taught ‘Leda and the Swan’ by Yeats,” she told me, “my students asked, out of the blue, why the last line didn’t rhyme since the entire poem is otherwise written in true rhyme. They seemed very frustrated, confused—almost confrontational, to tell the truth! Of course, that’s exactly how Yeats wanted that poem to end—in uncertainty and frustration, with a sense that the event is unfinished and that we are lacking in understanding and hopeless to change that.”

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Aug. 29, 2013.

By all accounts, most of us talk very differently to computers—Google Search, Siri, the automated phone system tasked with extracting our pizza orders—than to fellow humans. We are inclined to speak to machines loudly, slowly: not unlike talking to a stupid child, as one friend puts it. We are careful to curb our regional accents. We e-nun-ci-ate. We do not squander our best jokes.

Still, human-computer conversations hold some rather surprising resemblances to conversations of the human-human ilk. One such resemblance is, well, resemblance. As I’ve written about in the past, when we chat with other people, we tend to adopt some of their language patterns, and they adopt some of ours. Should our friend call a couch a sofa, we too (at least in his presence) call it a sofa. When he asks us to help carry the sofa home, we are likelier to agree to carry the sofa home than to carry home the sofa. If our friend is a fast talker, we may even converge upon his speech rate.

Our conversations with computers are also characterized by this subtle, often unconscious linguistic mimicry, according to work by the University of Edinburgh’s Holly Branigan and her colleagues. Some studies have even found that we are more likely to pattern our language after a computer than a human. Why? Branigan has suggested that it’s because we believe machines to be less linguistically savvy than people. In other words, we want to help a computa out: If it understands carry the sofa home, why take a chance with carry home the sofa? Indeed, in another study, Branigan and her colleagues report that people were more likely to replicate the linguistic patterns of basic, out-of-date computers than advanced, up-to-speed ones.

And yet. Our responsiveness to language produced by computer algorithms goes far beyond what could plausibly be expected to aid communication. For instance, implicitly wary of hurt “feelings,” we offer computers more generous performance reviews when questioned directly by them than by other computers, or via pen and paper. We also respond to social cues: we are delighted when our virtual conversants look us in the eyes as if to say, “Go on, I am listening.” We’re won over when they indicate on their handsome, pixelated faces that they’d like to speak next.

It seems clear, then, that many of the behaviors that we’ve internalized over a lifetime of human conversations are unlikely to change just because our conversation partner has a microprocessor instead of a brain. But this has me wondering: What then happens when our daily conversations are as likely to take place with computers as with humans—something futurists have long predicted but that recently has seemed more real and urgent?

Just this month, the New York Times’s Ian Urbina reported on the increasing ubiquity of socialbots: robotic programs designed to lure actual humans into virtual conversations, and then, more often than not, convince them to do something: buy stock, adopt a political stance, even fall in love. (And who better to fall for than a Nigerian Prince?) “Within two years,” Urbina writes, “about 10 percent of the activity occurring on social online networks will be masquerading bots, according to technology researchers.”

Make no mistake: these bots will get good. As will Siri and Google Search and any number of algorithms programmed—for reasons insidious or otherwise—to behave as humanly as possible. I find it very probable that, in my lifetime, I’ll be able to have entire conversations without ever quite knowing whom or what I’m talking to.

But here’s the thing: given that the mechanisms underlying computers’ humanly behavior remain very different from those underlying ours—which is to say, given that computers largely stay computers and we largely stay human—it seems likely that there will always be some circumstances under which machines have an easier time “passing” than others. There will always be some conversation styles or linguistic structures that computers will find easier, less error-prone—and will therefore prioritize.

As these features become more prominent in our linguistic environment, we too are likely to embrace them. I can therefore picture not only a world in which computers have adapted to human language but also one in which, ever so slightly, human language itself has shifted to make communication easier for computers.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on March 27, 2014.

There are plenty of places to turn if your goal is to speak Spanish, or Mandarin, or Arabic. But what if your goal is to talk Shakespeare—to ape the playwright, howsoever hard, and play the part of deft immortal bard?

James Knapp is a damn fine midfielder on my recreational soccer team. He’s also a professor of English at Loyola University Chicago and an advisor to Chicago’s own Improvised Shakespeare Company, known for uproariously weaving entire Elizabethan-inspired tragedies, comedies, and tragicomedies into place on the basis of a single, audience-provided title—“The Merchant of Penis” at the first show I attended. The premise is not as crazy as it sounds. In Shakespeare’s time, new plays would often run for a day or two, or a couple of weeks at most. This relentless pace, and the edit-on-the-spot ethos it encouraged, bears a stronger resemblance to today’s improv shows than you might imagine.

Knapp recently shared with me some tips for convincingly channeling Shakespeare on the fly. Given the undeniable appeal of such a skill—whether you’re looking to create your own company or simply keen to dazzle colleagues at the employee talent show—I’d be a jerk to keep his insights to myself.

1. OK, you’re not actually “talking Shakespeare.” We call him the bard, but he was by no means the only show in town. Among his 45 or so contemporaries, only a few others (Christopher Marlowe and Ben Jonson, maybe Thomas Middleton and Thomas Dekker) are still read much today. But the whole gang of playwrights reveled in similar language, themes, and genres. Forget learning to talk Shakespeare—your best bet is to learn to talk “Age of Shakespeare.”

2. Recruit some friends, but not too many. Small-troupe acting was the name of the game in the Renaissance. But in order to perform large, intricate plays with so few men (yes, always men), actors had to take on multiple roles. Ever wonder why Shakespeare’s plays have so many subplots? Subplots offer complexity without manpower: it’s hard to play two characters, after all, if they’re both on stage at the same time.

3. Know how to stall. If you start a line, and you’re not sure where it’s headed, throw in a few ayes or nays. (“Nay nay is sort of like saying umm umm,” Knapp assured me.) This rule will be very important as you attempt rules 4-7.

4. Nail your blank verse. Re-MEM-ber THIS from ENG-lish CLASS: the UN-/ der-LY-ing BEATS that MIM-ic NAT-ural SPEECH? Get it in your head.

5. Invert your syntax. Shakespeare’s characters can as plausibly not know as know not. But shuffling your word order every now and then is one of the best ways of marking yourself as fluent in Age of Shakespeare. The fluid syntax also frees up a lot of potential rhyme words, convenient should you wish to…

6. Punctuate big moments with heroic couplets. Ending a powerful scene with a tidy rhyme, while somewhat comical to modern ears, is the ultimate classy move—this is poetry, dammit. But the technique may also have served as a memory aid. Rather than receiving a copy of the entire script, actors were handed only their own lines, as well as a cue line that told them when to chime in. Heroic couplets, highly memorable, could have been a particularly effective way of signaling that the next scene was about to start.

7. Imagery and metaphors and metonyms—oh my. No way around this one. Shakespeare and his contemporaries were masters of figurative language. But be kind to yourself. The occasional “the pot boileth as the volcano runs its course” will take you further than you think.

8. And finally, don’t hold back. Shakespeare sure didn’t. Want to woo a widow at her own husband’s funeral? Fine. Awaken next to the beheaded corpse of a man you believe to be your lover but who is actually your evil stepbrother dressed in your lover’s clothes? Perfect. So long as your characters appeal to our humanity—Hamlet’s delay, Macbeth’s ambition, Lear’s irrationality, Othello’s jealousy—there’s no plot too demented to throw them into.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Nov. 7, 2013.

I’d like to write a book some day, a nonfiction book, probably about cognitive science. This book will have a single-word title like Outliers or Quiet or Flow, followed by a long subtitle that will teach you what the title means. The more instructive my subtitle, the shorter my title can be. Perhaps I will be the one to write !: The astonishing psychology of surprise. Or : Spaces between words and other linguistic innovations.

Still, my favorite titles are the ones that span an entire sentence. These titles are not about something so much as they are something: hard-won truths, arranged neatly down a skinny spine. From Flannery O’Connor I learn that A Good Man Is Hard to Find and Everything That Rises Must Converge; from Ray Bradbury that Something Wicked This Way Comes; and from Lee K. Abbott that The Heart Never Fits Its Wanting. In By the River Piedra I Sat Down and Wept, Paulo Coelho pushes the straight-faced limits of the convention: if the title went on any longer we’d have When You Look Like Your Passport Photo, It’s Time to Go Home.

Titling trends—the good, the bad, and the maddening—have been on my mind of late thanks to a new player in the online publishing business: Upworthy, a website that slaps new headlines on feel-good stories available elsewhere and does its damnedest to make them go viral. The site takes an almost scientific approach to virality: headlines are tested exhaustively, 25 versions of each pitted against one another in a grand battle for clicks. The best titles, Upworthy has learned, rely on a “curiosity gap”: they’re not too vague, but they’re not going to give away the punch line either—or even, really, much of the joke. We’re left to click on titles like Mitt Romney Accidentally Confronts A Gay Veteran; Awesomeness Ensues and 9 Out of 10 Americans Are Completely Wrong About This Mind-Blowing Fact.

And click we do. This past August Upworthy was the fifth-biggest publisher on Facebook—one place ahead of The New York Times. Not surprisingly, other outlets, even prestigious ones, have taken notice. During the government shutdown, The New Republic ran Watch This for 22 Seconds and You’ll See Why Obama Can’t Give an Inch.

What’s odd about the “curiosity gap” is that it represents an about-face on the last tenet of good titling: SEO (Search Engine Optimization). Nobody, needless to say, is ever going to enter “What to watch to see why Obama can’t give an inch” into a search box. And of course SEO is what not long ago dealt a near-death blow to the genre of “clever” titles favored by many daily newspapers. “The only really creative opportunity copy editors had was writing headlines, and they took it seriously,” as Washington Post columnist Gene Weingarten put it. (Though the New York Post, home of the most pun-tastic headline of all time, Headless Body in Topless Bar, remains committed. Earlier this year, Weiner’s Second Coming! graced its front page.)

But the search for the perfect, which is to say the most marketable, title is a long-running experiment. Perhaps the earliest “clickbait” took the form of pamphlets, often sold by printers to fund the publication of more prestigious, and expensive, books. Titles rarely shied away from sensationalism—or, refreshingly, punch lines. Consider this 16th-century title:

A true and most dreadfull discourse of a woman possessed with the Deuill who in the likenesse of a headlesse beare fetched her out of her bedd, and in the presence of seuen persons, most straungely roulled her thorow three chambers, and doune a high paire of staiers, on the fower and twentie of May last. 1584. At Dichet in Sommersetshire. A matter as miraculous as euer was seen in our time.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on May 22, 2014.

Life and especially literature are filled with incongruous wordplay: the moon scours the sea, the dead refuse to be blessed, a tide waits for someone to enter.

In these moments there is tension between the types of nouns a verb usually appears with—animate ones, perhaps, or people—and the noun it actually appears with. Reconciling the incompatible meanings can leave us uneasy, like we’ve squeezed a skirt over our shoulders. But we twist and yank and make the meaning work. Something always gives.

Interestingly enough, it’s often the verb. As researchers Dedre Gentner and Ilene France observed more than two decades ago, when a verb accompanies an ill-fitting noun—The lizard worshipped, The car softened, and Bill owned luck are examples from their study—we’re likelier to tweak our interpretation of the verb to befit the noun than to shift our interpretation of the noun to be compatible with the verb. Perhaps the lizard is bowing its head, or basking in the sunshine? Perhaps the car is collapsing under a heavy weight, and Bill is simply a very lucky guy? Only rarely do we consider that lizard might refer to an unethical preacher, or luck a four-leafed clover.

So what makes verb meanings so much more malleable than noun meanings? Nouns often describe concepts and (especially) objects with meanings that are directly tethered to the world around us. But verbs tend to describe relationships between concepts or objects: an owner and his possessions, a worshipper and a thing worshipped. “Relationships have to adjust to their participants, so words that name relationships need to adjust,” Anja Jamrozik, a PhD student who studies relational language at Northwestern University, told me.

In other words, precisely what it means to own is always somewhat context dependent. We own pets differently than emotions or property; corporations and cities own them differently than we do. So, given that even conventional meanings of verbs must shift to fit their nouns, it should come as no surprise that, at least in English, verbs are likelier than nouns to bend under semantic strain. (And with enough use, some of these newfangled senses may become conventional meanings in their own right. For this reason, verbs tend to have a greater number of conventional meanings than nouns—which in turn helps to explain why, in laboratory experiments, our memory for verbs is so poor: encoding one sense of a verb, but attempting to retrieve a second, is not ideal.)

Still, nouns aren’t all stodge and ennui. Synecdoche, metonymy, and plain old symbolism can push us to extend a noun’s meaning. Furthermore, some nouns are relational, or at least have relational senses. Mother can describe an ancestral or custodial relationship, bridge a connective one. As such, they are excellent candidates for sense-creep (and, not coincidentally, they can also be used as verbs). And via explicitly marked metaphors and similes, relational senses become apparent even in nouns like circus—The people! The props! The pageantry! The circus dined earns us quizzical looks; dinner was a circus, not so much.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Feb. 13, 2013.

In our post-Babel (and pre-Cyborg) world, nobody expects to hear a word spoken in an unfamiliar tongue and be able to intuit its meaning. But what about nonverbal vocalizations: the wordless screams and sobs, sighs and exultations that seem to emerge from us as reflexively as thoughts? Might a rolling laugh or a breathy moan universally indicate amusement or pleasure?

A few years ago, a group of Western researchers led by Disa Sauter headed to Namibia to find out. Members of a group called the Himba—a population of traditional, seminomadic people who’ve had little contact with other cultures over the years—were invited to participate in an experiment: listening to brief stories that evoked a strong emotion and deciding which of two prerecorded human vocalizations best expressed that emotion.

The resulting 2010 study reports that when the vocalizations were elicited by fellow Himba (as opposed to Europeans), they were likelier to make the intended match. Native European-English speakers showed the same increased in-group sensitivity: we’re all better, it seems, at recognizing the cries and sighs of our own people. But crucially, the Himba participants were better than chance at selecting the intended European vocalization, and vice versa. Culture matters, the researchers determined, but some basic vocalizations really do seem universal.

Then just last week another research group, led by Maria Gendron, revisited the topic. These researchers also journeyed to remote parts of Namibia to play prerecorded Western vocalizations to Himba participants. This time, though, instead of picking which of two vocalizations expressed an emotion, the Himba simply listened to a recording and named the intended emotion. Cross-group recognition faltered. Himba participants reliably and distinctly identified only one Western emotion: amusement, as signaled by laughing.

Their responses did, however, match the intended Western emotions in terms of valence: whether the emotion is negative (like fear and disgust) or positive (like amusement and triumph). The Himba weren’t quite certain what to make of an “ewww,” in other words, but they did discern it meant nothing good. Perhaps, the researchers posit, “valence perception, rather than discrete-emotion perception per se, is robust across cultures, such that valence comes closer to being a core human capacity.”

We’ll need more data—a whole lot more, and collected from many different populations—to come close to settling questions about the universality of human vocalizations. (And, for that matter, how they fit in with other forms of nonverbal communication, such as facial expressions and gestures.) Likewise, the data we get won’t be easy to interpret, given the inherent difficulty of comparing groups that differ along so many dimensions. But what we now know seems to suggest that the trappings of culture cut straight into our rawest selves—deeper even than words dare to go.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. We are re-posting several of our favorite columns, including this one, published on Apr. 24, 2014.

Most of us have strong intuitions about how adjectives should be strung together. More concrete, intrinsic descriptors—purple, wooden—should appear close to the noun, with more subjective, relative descriptors—stupid, nice—appearing further away. Size descriptors such as big—more situation-specific than purple, less so than stupid—ought to fall somewhere in between. Hence a nice big purple wooden spoon, but never a purple wooden nice big one.

What’s fun about these preferences (other than their curious near-universality across languages) is that they tend to be “squishier” than other syntactic rules. We can all think of exceptions that sound perfectly okay to our ears. And we can purposefully break the preferred order to create a “strange-but-not-estranging” effect. Consider, for instance, the slightly unsettling title of Raymond Carver’s short story “A Small, Good Thing.”

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. This month we are re-posting several of our favorite columns, including this one, published on Jan. 9, 2014.

This year, in lieu of driving, I’ll be taking the train to work. This is my New Year’s resolution, and, in the way of New Year’s resolutions, it is above reproach. Good on you, I tell myself, stocking my closet with long underwear and committing the train schedule to memory. But just a few days into 2014, I have already accepted rides home nearly every evening. “It seems,” said a coworker, not unkindly, “that your New Year’s resolution is to be driven around by other people.”

And so I am reminded, once again: in a complex system, who’s to say how things will play out? Jennifer Berman captures the sentiment well in a recent essay in The New York Times. After a lifetime of filling her cart with “nonprocessed, gluten-free, non-G.M.O., heirloom, grass-fed, free-range and artisanal goods,” Berman was rewarded with a diagnosis of hypothyroidism, and told to avoid nearly every good-for-you food on her shopping list. To make matters worse, the fluoride she was newly prescribed by her dentist—all that carrot juice and lemon water wasn’t great on the teeth—has been linked to hypothyroidism. “Which should I choose?” she asks. “My thyroid or my teeth?”

A German term for this—zugzwang—is used in chess to describe a situation where every move is a bad one: Damned if you do, but damned if you do that other thing instead. These situations can be frustrating in part because there are genuine tradeoffs to be considered. Reasonable people can look at the same evidence and come to different conclusions. Even the same person could look at the evidence twice and come to different conclusions. When these tradeoffs are difficult to quantify, or impossible to compare—is my not driving rougher on my coworkers than my driving would be on the environment?—reasonable people just shudder and give up.

Which brings us to another reason why these decisions are so disconcerting: We’re not used to making them. We’re lazy thinkers. Not all the time, but most of the time. In his book Thinking, Fast and Slow, the psychologist Daniel Kahneman describes mental shortcut after mental shortcut that we rely on to make sense of things—which is to say, to avoid truly making sense of things.

Take the availability heuristic, in which our ability to judge the probability of something happening is related to how easily we can recall specific instances of it in the past. Like all heuristics, it serves us well much of the time. But it can also explain why we might worry more about some rare disease that has afflicted a friend than some common disease that hasn’t. Or take the affect heuristic, in which we use how positively or negatively we feel about something as a quick gauge of how safe or risky it actually is. “The affect heuristic simplifies our lives by creating a world that is much tidier than reality,” writes Kahneman. “Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy.” Sounds good to me.

Perhaps few people are confronted so bluntly with cost-benefit tradeoffs as parents, who experience all of the usual quandaries, but with much higher stakes. Kahneman describes a study in which parents were asked how much they’d be willing to pay for safer products. An insecticide is known to cause 15 child poisonings per 10,000 bottles. How much would they pay for an insecticide that reduced this risk to 5 poisonings per 10,000 bottles? Parents on average agreed to pay an extra $2.38 a bottle for the substantially safer product. But when asked how discounted a slightly more dangerous product would have to be—one causing 16 poisonings per 10,000 bottles instead of 15—before they would purchase it, two-thirds refused to buy the product at all. This is, of course, ridiculous. Paying top dollar to avoid accepting even minuscule risks is a great way to run out of money each month before you’ve paid your heating bill. “In fact,” says Kahneman, “the resistance may be motivated by a selfish fear of regret more than by a wish to optimize the child’s safety.”
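A bit of back-of-envelope arithmetic shows just how lopsided that refusal is (the per-poisoning price below is my own inference from the study’s numbers, not a figure Kahneman reports):

\[
\frac{\$2.38 \text{ per bottle}}{(15 - 5) \text{ poisonings per } 10{,}000 \text{ bottles}} \approx \$0.24 \text{ per bottle, per poisoning}
\]

By their own implied valuation, then, parents should have accepted the riskier 16-poisoning bottle for a discount of roughly a quarter. Instead, two-thirds wouldn’t take it at any price.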

But what has motivated my mother, after years of gung-ho advocacy, to become newly skeptical of sunscreens? “Avoid those containing oxybenzone and/or retinyl palmitate,” she instructed me in an email. Instead, I’ve been directed to use zinc oxide, the pasty white stuff that leaves your face looking like a mime. Right. That’s what I’ll put on my face before I walk to and from the train station. The only thing to do, it seems, is drive.

Jessica Love’s last Psycho Babble column appeared on Oct. 2, 2014. This month we are re-posting several of our favorite columns, including this one, published on Aug. 22, 2013.

About five years ago, I stopped eating anything smarter than my cat. This, I decided, spared all non-rodent mammals, as well as cephalopods—the latter thanks to an 11th-hour magazine piece on octopus ingenuity. Everything else was on the menu.

Friends have quibbled with my sorting. There’s no way a cow is smarter than a cat. Even your cat. I have no grand defense. Mine is a stance honed not by reason but by gut feel—more Citizen Kane vs. The Godfather than Pythagorean theorem. How the hell do I know whether my cat is smarter than a cow? How could anyone?

The problem is that though some animals laze in Chicago apartments, others dwell in rural pastures or factory farms or rainforest canopies or 1,000 feet underwater. Some animals live in small groups, others in solitude, and still others in flocks thousands strong. Just last week I learned that a limiting factor for tool use—the smoking gun of animal intelligence—may well be physical dexterity: the dumb, lucky ability to clamp or poke or push things around with some precision. Ranking the intelligence of animals born into such different environments, family units, and bodies is as futile as it is irresistible.

Nor is it unproblematic that we humans have a complete monopoly on IQ test design and implementation. As the Emory University primatologist Frans de Waal recently described, this has led to a number of anthropocentric mishaps: testing whether elephants could identify their own reflections inside human-sized mirrors, or investigating facial recognition in primates using human faces rather than primate ones. Whoops.

But really, we needn’t look much beyond our own species to determine the absurdity of pitting fruit bats against iguanas. Raven’s Progressive Matrices is an IQ test designed to be culturally neutral: participants complete geometric puzzles—no culturally encrusted words required. But the very quality that makes the test culturally universal also limits its interpretability, Sarah Judkins, a clinical psychologist at Gonzaga University, tells me: Just which aspects of intelligence are being tested with a completely nonverbal test? And what, moreover, is culturally neutral about sitting across the room from a psychologist, solving puzzle after puzzle?

Or forget cross-cultural comparisons. Consider comparing children to adults, or young adults to the elderly. Ohio State psychologist Roger Ratcliff has demonstrated again and again that different age groups’ performance on cognitive tasks—even simple things like deciding whether a visual array has few or many asterisks—cannot be easily compared. Over the course of our lives, we have different sensory capabilities, expectations, and strategies.

The truth is, whenever two populations approach a task differently, comparisons should be cautious. When these populations come from two drastically different species, caution may not be enough.

Does this mean we shouldn’t try? No. I see no reason why we shouldn’t embrace the opportunity to understand what animals are capable of—generosity in rats? Cultural transmission among damselfish?—and allow what we learn to shape our own behavior. But we should recognize that whatever ranking system we impose—to live with ourselves and with the havoc we are wreaking on the rest of the animal kingdom—is pure folly. And then we should probably go get something to eat.

Jessica Love’s last Psycho Babble column was published last week. This month we are re-posting several of our favorite columns.

Chat with someone who speaks another dialect of English, and it’s the differences you notice. Differences in your vowels, sure, but also differences in everything you’ve come to associate with those vowels: where you’ve lived, how you’ve lived, who you are. Over time, should you get close to this person, her dialect may become unremarkable. It may become just another thing about her. Rarely, though, will it go entirely unheard.

That’s why it’s so intriguing to learn that babies, as they become accustomed to accented speech, go through a developmental period where they seem to stop perceiving it as accented.

A study by researchers Christine Kitamura, Robin Panneton, and Catherine Best found that six-month-old infants paid more attention to speech in their native Australian English than to speech in an unfamiliar dialect—South African English. This makes sense. Infants like hearing language that sounds familiar to them. They’re drawn to it, a fortunate thing because a native language takes a lot of listening to unpack.

By nine months, however, infants were equally attentive to both their native dialect and South African English. A follow-up study suggests that this is because the nine-month-olds could no longer discriminate between the two dialects. (American English, somewhat familiar to Australian infants thanks to the pervasiveness of American media, is lumped indiscriminately with Australian English at just six months old.)

Why the temporary deaf ear for accents? It helps to remember that we make sense of sounds by categorizing them, and categories are as much about similarities as they are about differences. Being able to distinguish t’s from d’s or pets from pits is critical for an English speaker. But so is ignoring the differences between one t and another, no two of which are ever the same. Likewise, we must de-emphasize the differences between individual voices to recognize that my pits should be categorized with your pits, even though I am female and you’re male, or I tend to speak quickly and you more slowly.

All this categorization actually changes our perception of these sounds. Meaningful differences between t’s and d’s are perceived in an exaggerated way, while less meaningful differences between two t’s or two d’s (even if the differences are acoustically equivalent in size) are collapsed. What these infants appear to be doing is lumping dialects together—dialects they’ve previously perceived as distinct—once they detect the underlying patterns that South African English or American English share with their native Australian English.

“For adults,” the researchers reiterate, “regional accent functions as an important social marker enabling the listener to identify the speaker’s geographic and ethnic origin, and social status.” Even five-year-olds recognize that an unfamiliar accent marks a speaker as somehow different, and are less willing to learn things—like how to play with a toy—from him. But for infants, to whom these social categories are somewhat inscrutable, accented English is, for a while, just another idiosyncratic difference to ignore, like a lisp or the hoarseness of my voice after a late night.

I often hear my researcher friends complain about popular media coverage of science. Journalists are keen on the shiny and the new, the expected-with-a-twist, the stuff that makes sense only if you think about it the right way, the stuff that’s fun to think about the right way. Ignored entirely is everything else—though this is its own blessing, because the only thing worse than no coverage is botched coverage.

The assessment isn’t fair; there’s plenty of great science writing out there, and on niche topics too, especially if you know where to look. But with publications desperate for page views and underpaid freelancers crunched for time (and just about everyone writes freelance these days), the assessment isn’t entirely unfair either.

So, researchers: What are you going to do about it? Because if you want research conveyed to the public the way you want it—and to be clear, I think this is a fine thing to want—it is going to take resources.

Sure, more academics are blogging these days, and from what I’ve seen, university press offices are picking up their game too. But think bigger. What if major funding agencies such as the National Institutes of Health or the National Science Foundation took a more active role in research dissemination? Say grants were administered with the expectation that a minute portion of the funds would go toward producing accessible summaries of any resulting research. Or perhaps money might be funneled directly to academic journals, whose editors would then select a portion of articles from each issue to be featured. Either way, researchers could partner directly with professional science communicators, and the resulting summaries—whether in written, audio, or video form—compiled in a national, freely accessible database: a PubMed that actually includes results.

I have no idea how much an endeavor like this would cost. But if public access to research in a digestible form boosted science literacy—the first step, in my mind, to raising funding levels for basic research—a proposal like this could easily pay for itself in the long run. And don’t underestimate how many researchers would also find these summaries helpful. One of the biggest hurdles to interdisciplinary collaboration is the volume of background reading necessary to simply not sound stupid asking questions about another field.

I’m taking the moment to offer up this unlikely (but to my mind obvious) idea because this will be my last post. For three years—157 weeks—I’ve used this space to follow my curiosity down some wonderful and unexpected paths: What happens to our speech when we drink? What might language look like if it were composed entirely of pictures? How do our vocabularies grow? What makes conversation seamless?

Writing Psycho Babble has been a terrific, even life-changing experience for me, and I will always be grateful for the opportunity. But it is time to tackle some longer, messier projects, the sort that can’t be wrapped up on a weekly deadline. Writing projects, yes, and life projects too: my husband and I are expecting a daughter in early 2015. (“What a rich source of material that’s going to be,” says Sudip Bose, my editor here at the Scholar. Indeed, we can’t wait to welcome our own little research participant into the family.)

Thankfully there are plenty of other places you can go for your regular linguistics fix—more proof that science writing is flourishing in many corners of the Internet. Stan Carey’s blog Sentence First is a must-read. Sometimes I suspect that Carey starts linguistic trends just so he can discover them before anyone else. Gretchen McCulloch’s All Things Linguistic is another tremendous resource; McCulloch also edits Lexicon Valley. No list would be complete without mention of Language Log: the posts can be intimidating for those without a background in linguistics, but you simply won’t find better language commentary (or smarter commenters) anywhere else. I’d also like to plug Schwa Fire, a new magazine publishing long-form language journalism. You can always follow me on Twitter @LoveOnLanguage, and peruse the complete Psycho Babble archives for as long as the Scholar is gracious enough to host them. Thank you for reading.

It’s a cliché of anxious, middle-class parenting: if you want your baby to have a real shot in life, there’d better be a violin in her hands before she’s three. And like many clichés, there appears to be some truth to the sentiment. Plenty of researchers are convinced that early musical training can pay dividends later in life—and language is often singled out as one of the biggest beneficiaries of these musical forays.

I’d always assumed that the relationship between music and language stemmed from a shared dependence on hierarchical structure: like language, music is governed by a complex set of rules, and rules within rules, to be internalized, mastered, and eventually, perhaps, (playfully) broken. Learning a language while learning an instrument, then, ought to help with mastery of both—cross-training for the mind.

But there’s another, less obvious similarity between language and music: rhythm. Even before they’re born, babies are sensitive to rhythm, spending their days in utero familiarizing themselves with the telltale beats of their native language. The sensitivity to rhythm continues throughout early childhood, as children use patterns in stressed and unstressed syllables to help determine, for instance, where one word ends and another begins.

But some children have better rhythm than others, with interesting consequences. In one recent study, preschoolers who were able to match an experimenter’s steady beat on a conga drum were also found to have better language skills than non-synchronizers: they named objects faster, had better auditory memories, and scored more highly on tests of phonological awareness (which require you to manipulate sounds: saying frog without pronouncing the r, for instance). In another recent study, the ability to discriminate rhythmic patterns in kindergarten was tied to better phonological awareness skills in second grade.

Why? In many ways, timing is as important in language as it is in music. I mentioned before that patterns of stressed and unstressed syllables can indicate word boundaries; more fine-grained timing (on the order of milliseconds) can help us determine whether we’ve heard a b or a d. Music and language, then, may well tap into “shared neural resources for temporal precision,” according to the researchers.

And sensitivity to rhythmic patterns on a larger scale may even aid our grammar. Yet another recent study had six-year-olds listen to sequences of beats—bam ba da bam … da bam bam ba da!—and decide whether they were the same as, or different than, earlier sequences. Then the children answered a variety of questions designed to test their grammatical knowledge: what the past tense of bring is, for instance, or when him versus himself should be used in a sentence.

Rhythmic ability was positively related to grammatical knowledge—even when other factors, including socioeconomic status, IQ, and prior musical training were taken into account. “Children who have stronger musical rhythm discrimination skills may also be more sensitive in general to speech rhythm variations that mark grammatical events,” the researchers write.

This helps explain our (or at least my) response to the occasional unwieldy sentence. We repeat it aloud, again and again. How should it sound? Am I missing something?