Humans have a tendency to see faces where there are none. So do computers. Are they more like us in their flaws?


This rocky hill in Ebihens, France, is, well, just that -- a rocky hill in Ebihens, France. But to pretty much any human observer, the assemblage of meaningless angles takes on a familiar appearance, that of a human face in profile. It has a distinct nose, eyes, lips, and chin, capped off with some foliage as hair. From the perspective pictured above, it's impossible not to see a man in a mountain.

This is an example of a phenomenon known as pareidolia, the human tendency to read significance into random or vague stimuli (both visual and auditory). The term comes from the Greek words "para" (παρά), meaning beside or beyond, and "eidolon" (εἴδωλον), meaning form or image. Though animals or plants can "appear" in clouds and human speech can do the same in static noise, the appearance of a face where there is none is perhaps the most common variant of pareidolia (this includes the subgenre of spotting Jesus or Mary in anything from toast to a crab).


Pareidolia was once thought of as a symptom of psychosis, but is now recognized as a normal, human tendency. Carl Sagan theorized that hyper facial perception stems from an evolutionary need to recognize -- often quickly -- faces. He wrote in his 1995 book, The Demon-Haunted World, "As soon as the infant can see, it recognizes faces, and we now know that this skill is hardwired in our brains. Those infants who a million years ago were unable to recognize a face smiled back less, were less likely to win the hearts of their parents, and less likely to prosper."

Humans are not alone in their quest to "see" human faces in the sea of visual cues that surrounds them. For decades, scientists have been training computers to do the same. And, like humans, computers display pareidolia.

Though there is something fundamentally human about the tendency to see faces in the non-human shapes around us, to anthropomorphize odd pieces of hardware or rocks on a hillside, the fact that computers see faces where there are none should not be too surprising. Facial-recognition software is a tough technological feat, and in the process, computers are bound to come up with false positives. Does this make the computers more like us? Have they taken on our most human cognitive errors? In a superficial sense, yes, computers do make errors that are similar to pareidolia, and this seems very human. But as you look into these computer false positives a bit more, you find a different story.

In an awesome little creative trick, New York University researcher Greg Borenstein applied the open-source software FaceTracker to a Flickr pool of examples called Hello Little Fella. In some instances, FaceTracker found a face just where you or I would:

Like a human, the computer has found a false positive. That humans and computers share some instances of pareidolia seems to underscore the human-like nature of those computers, brought about by their human-led training. In that sense, a computer's errors make it seem somehow more human.
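The shape of Borenstein's experiment is easy to sketch in code. Below is a minimal, hypothetical version that runs OpenCV's stock Haar-cascade face detector -- a stand-in I'm substituting for FaceTracker, which works differently -- over a folder of pool images and counts how many trigger a detection. The folder path and detector parameters are illustrative, not Borenstein's.

```python
# Sketch: run an off-the-shelf face detector over a folder of "hello little fella"
# style photos and count the machine's false positives. Uses OpenCV's Haar cascade
# as a stand-in for FaceTracker; the path and parameters are illustrative only.
import glob
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

paths = glob.glob("little_fella_pool/*.jpg")  # hypothetical local copy of the Flickr pool
hits = 0
for path in paths:
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        hits += 1
        print(f"{path}: detector reports {len(faces)} face(s)")

if paths:
    print(f"Detector fired on {hits} of {len(paths)} images "
          f"({100 * hits / len(paths):.0f}%)")
```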

But maybe the reason a computer "sees" a face in that key is very simple: Things around us do sometimes actually have the shapes that constitute a face. How can we say this is pareidolia, a strange phenomenon that is supposedly the byproduct of millions of years of evolution, and not just the basic truth that sometimes shapes do look like things they are not?

A project from Phil McCarthy called Pareidoloop pushes us to think about these questions. By combining random-polygon-generation software and facial-recognition software, McCarthy's program builds its own series of randomly generated faces. Out of layers upon layers of mish-mashed shapes, the software "recognizes" the faces, and then fine-tunes them into human likenesses. (McCarthy notes that a lot of them kind of resemble old pictures of Einstein.)
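Pareidoloop itself is a JavaScript project that runs in the browser; purely to illustrate the underlying idea, here is a minimal hill-climbing sketch in Python. It draws random triangles, scores the canvas with OpenCV's stock face cascade, and keeps only mutations that raise the score. The detector, the triangle mutation, and the scoring heuristic are stand-ins of my own, not McCarthy's implementation, and it takes many iterations before anything even vaguely face-like emerges.

```python
# Sketch of the Pareidoloop idea (not McCarthy's code): mutate a canvas with random
# polygons and keep only the changes that make a face detector more confident.
import random
import numpy as np
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
SIZE = 128  # small grayscale canvas

def face_score(img):
    """Crude confidence: the strongest detection weight the cascade reports."""
    _, _, weights = cascade.detectMultiScale3(
        img, scaleFactor=1.1, minNeighbors=1, outputRejectLevels=True)
    return max(weights) if len(weights) else 0.0

def mutate(img):
    """Return a copy of img with one random gray triangle drawn on it."""
    out = img.copy()
    pts = np.random.randint(0, SIZE, size=(3, 2), dtype=np.int32)
    cv2.fillPoly(out, [pts], color=random.randint(0, 255))
    return out

canvas = np.full((SIZE, SIZE), 128, dtype=np.uint8)
best = face_score(canvas)
for _ in range(20000):            # hill climbing: keep only improvements
    candidate = mutate(canvas)
    score = face_score(candidate)
    if score > best:
        canvas, best = candidate, score

cv2.imwrite("pareidoloop_sketch.png", canvas)
print("final detector score:", best)
```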

The computer is "seeing" faces where there are just random shapes! But wouldn't anyone? The results are clearly faces, so much so that recognizing them as such cannot be labeled pareidolia any more than recognizing the face in a painting of a face is pareidolia. Where is that line? If it's pareidolia to see a face in the two windows and door of a house, why not in a sketch of two eyes and a nose? Faces are, after all, just a series of well-arranged polygons. We'll see them in the world around us because sometimes, inevitably, shapes will be arranged in the formation of two eyes, a nose, and a mouth. How can we identify pareidolia in a way that is distinct from the "accurate" identification of an artistic representation of a face? How can we say pareidolia is a phenomenon of the human mind at all?

Borenstein's work with computers provides a way out of this, answering a most human question by looking at the idiosyncrasies of algorithms. He writes:

Facial recognition techniques give computers their own flavor of pareidolia. In addition to responding to actual human faces, facial recognition systems, just like the human vision system, sometimes produce false positives, latching onto some set of features in the image as matching their model of a face. Rather than the millions of years of evolution that shapes human vision, their pareidolia is based on the details of their algorithms and the vicissitudes of the training data they've been exposed to.

Their pareidolia is different from ours. Different things trigger it.

In Borenstein's sample, FaceTracker found faces in only seven percent of the images -- so even though the program did display this human tendency, it did so at a rate far lower than the human judges who built the Flickr pool. That said, we do not know how many false positives the program would spot in scenes that humans never thought to include in the pool, though we get a hint from the "mistakes" the program made, sometimes overlooking the obvious "face" and latching onto another. Such mistakes are useful for seeing just how particularly human pareidolia is in the first place. Here's an example:

The computer's false positive is, as any human could tell you, wrong -- the wrong wrong answer: it selects B where a human would say A, when the real answer is D, none of the above. The mistakes of a computer are so other, so less-than-human, that we can see that pareidolia is not the recognition of just any old assemblage of eyes, nose, and a mouth, but of specific ones, ones that must come from within the human observer, that are not inherently available in the shapes as they appear in the world.

And it shows us something more. Although a computer may, like a human, find false positives in the world around it, its sensibility for what makes a set of polygons a face is still, somehow, off. On its surface, a computer's tendency toward pareidolia, this very human phenomenon, seems human-like. In a strange echo of the tendency to see human faces in random shapes, we see our reflection in a machine's cognition -- a sort of pareidolia of the mind. We look at a computer's pareidolia and think, We make those very same mistakes!

But, in fact, we don't. The mistakes are different. A computer's flaws are still very machine -- and ours are very human.
