Legal Writing for Legal Reading!

Archive for the tag “revolution”

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in Church Life Journal which, I thought, was pretty insightful. Be edified.

_______________

In my theological writings over the past twenty years, I have often (some might say tediously often) returned to two episodes from the gospels that never quite lose their power to startle me: that of Peter weeping in the early light of dawn over the realization that, contrary to his fervent protestations of the night just past, he has denied Christ before the world; and that of Christ’s confrontation with Pilate (especially as recounted in John’s gospel). After so much time, one might reasonably expect that the fascination would wane, or at least cease to have the quality of surprise. But nothing of the sort. Recently, as I was preparing my own translation of the New Testament for Yale University Press, I found myself drawn to both episodes yet again, with the same old familiar feeling that they contain something at once momentous and uncanny, something somehow out of place and out of time. Something is happening in these passages, homely as they may seem, that never happened before.

We speak today very easily, if not always sincerely, of the intrinsic dignity of every human person. For us, this is merely a received piety, and one of immemorial authority. And yet, if we take the time to wonder just how old a moral intuition it is, there is a good chance that our historical imagination will carry us only as far back as the “Age of Enlightenment” and the epoch of the “Rights of Man.” But our modern notion that there is such a thing as innate human worth, residing in every individual of every class and culture, is at best the very late consequence of a cultural, conceptual, and moral revolution that erupted many centuries earlier, and in the middle of a world that was anything but hospitable to its principles. And I am tempted to think that the nature of that revolution became visible for the first time only in the tale of Peter’s tears. We cannot quite see it, of course. For us, it does not stand out as an extraordinary moment in the larger narrative. We expect Peter to weep; more to the point, we expect the narrator to record the fact. After all, Peter’s humanity is our own, and so we do not hesitate to recognize his grief as ours also. It is all quite obvious to us: Peter’s shattering realization of the immensity of his failure, his hopeless devotion to his beloved master, the certain knowledge that he will never have a chance to retract his words or seek Christ’s forgiveness for his cowardice. To us, the story would probably seem incomplete if this detail were missing. But that is not how things would have seemed to most of the contemporaries of the evangelists. At least, among the literate classes of late antiquity, to call attention to Peter’s grief would more likely have seemed an aesthetic mistake; for Peter, as a rustic, could not possibly have been a worthy object of a well-bred man’s sympathy, nor could his sorrow possibly have possessed the sort of tragic dignity necessary to make it a suitable subject for either a poet or a historian. If a peasant’s weeping possessed any interest at all, it might be as an occasion for cruel mirth. Tragic dignity was the exclusive property of the nobly born. It was the great literary critic Erich Auerbach, many decades ago, who perhaps most powerfully called attention to the singularity of the story in the context of late antique literature. According to his Mimesis: The Representation of Reality in Western Literature, when one compares this scene to the sort of emotional portraiture one finds in great Roman writers, comic or serious, one discovers that only in Peter can one glimpse “the image of man in the highest and deepest and most tragic sense.” Yet Peter is a peasant from Galilee, a rural backwater in an obscure and barbarous colonial territory. This was not merely a lapse of good taste; it was an act of rebellion.

Not that the evangelists necessarily intended to be especially provocative. Still, in this story we see something beginning to emerge from darkness into full visibility, arguably for the first time in our history: the human person as such, invested with an intrinsic and inviolable worth, an infinite value. Actually, even our blithe willingness to assign personhood in the fullest sense to everyone who comes our way is the consequence of that ancient revolt. Originally, at least in many very crucial contexts, “persons” were something of a rarity in nature. In ancient Roman legal usage, at least, one’s person was the status one held before the law, and this was anything but an invariable property among all individuals. The original and primary meaning of the Latin word “persona” was “mask,” and the word may well originally have indicated the special distinction of belonging to one of those patrician families entitled to preserve and display wax funerary effigies of their ancestors. To “have a person”—habere personam—was to have a face before the eyes of the law, to possess the rights of a free and propertied citizen, to be entrusted to offer testimony on the strength of one’s own word, to be capable before a magistrate of appeal to higher authority. At the far opposite end of the social scale, however, was that far greater number of individuals who could be classed as “non habentes personas,” “not having persons”—not, as it were, having faces before the law or, for that matter, before society. The principal occupants of this category were, of course, slaves. These could call on no privileges or rights before the law, apart from a few meager protections; they were not even usually trusted to offer testimony before a court apart from a judicious application of torture. And, as part of the peasantry of a subject people, Peter would have possessed scarcely any greater “countenance” in Roman eyes.

It is practically impossible for us today to appreciate the magnitude of the scandal that many pagans naturally felt at the bizarre prodigality with which the early Christians were willing to grant full humanity to persons of every class and condition. But we can certainly hear the tone of alarm in reading the anti-Christian polemics of Celsus, or Eunapius of Sardis, or Porphyry, or the Emperor Julian. Perched as they were at the vertiginous summit of the social hierarchy of their time, the church’s pagan critics could only look down on the Christian movement, and could see it not as the liberation of deep but hitherto unexpressed human longings, but only as something monstrous and degenerate, threatening the very order of the world. In his Against the Galilaeans, Julian lavished his contempt on the vicious, disreputable, contemptible individuals that the Christians had from the earliest days invited into their ranks, and admitted with no more than a ritual bath—as if, he huffed, water could cleanse the soul. Eunapius confessed his revulsion at the “base gods” venerated by the church, by which he meant the saints whose relics the Christians preserved and honored: all of them, he said, men and women of the most deplorable sort, justly tortured, condemned, and executed for their crimes, but glorified after death as martyrs of the faith, and accorded the devotion once reserved for the divine spirits reigning from on high. Not even the most morally admirable of the pagan philosophical schools, Stoicism, made so great a spiritual virtue of indifference to social station as did the followers of Jesus. Indeed, such was the sheer perversity of the Christian movement that the inversion of “natural” rank—this insistence that the last be first and the first last—became something like its chief moral value. We see this, for instance, in the Didascalia, that very early manual of Christian life, which requires a bishop never to interrupt his service to greet a person of high degree who might enter the basilica, and yet, on seeing a pauper enter the assembly, to do everything in his power to make room for the new arrival, even if it should mean giving up his own seat and sitting instead on the floor.

One should not, admittedly, exaggerate the virtues of the early Christians here. Perfection is not to be found in any human institution, and the church has certainly always been that. Even in the early days of the Church, certain social distinctions proved far too redoubtable to exterminate; a Christian slaveholder’s Christian slaves were still slaves, even if they were also their master’s brothers in Christ. And, after Constantine, as the Church became that most lamentable of things—a pillar of respectable society—it learned all too easily to tolerate many of the injustices it supposedly condemned. But neither should we underestimate how extraordinary the religious ethos of the earliest Christians was with regard to social order, or fail to give them credit for the attempts they did make to erase the distinctions in social dignity that separated persons of different rank from one another, but that they believed Christ had abolished. In truth, the pagan critics of the early church were quite perceptive—perhaps more than the Christians themselves—in seeing this new faith as something deeply and disturbingly subversive. Christianity may never have been a revolution in the political sense, attempting to replace the general social order with something altogether different; but, for just that reason, the change it brought about was not merely a local and transient flirtation with enchanting impossibilities (as most revolutionary movements are). The Christian vision of reality was nothing less than (to borrow a phrase from Friedrich Nietzsche’s The Antichrist) a “transvaluation of all values,” a profound revision of the moral and conceptual categories by which human beings understand themselves and one another and their places within the world, one that took root and grew principally in consciences rather than in political arrangements. And it was most definitely (again to use Nietzsche’s words, this time from The Genealogy of Morals) a “revolt of the slaves in morality,” but paradoxically a slave revolt “from above.” As Paul said in Philippians 2:6-10, it had been accomplished by one who had willingly exchanged the “form of God” for the “form of a slave,” and only in that form had overthrown the powers that reigned on high.

Just as striking as the tale of Peter’s tears, again, is that of Christ’s arraignment before Pilate, especially in the fourth gospel’s telling. And once again we are separated from the age in which the story was written down by an immense historical abyss. Nothing makes us more insensible to the utter oddity of this story in its own time and place, and to the metaphysical and moral implications of that oddity, than our own habitual sympathies. To many of its earliest readers, the entire episode would have seemed perversely out of joint. On one side of the tableau (so to speak) there stands a man of noble birth, one moreover invested with the full authority of the Roman Empire, endowed with the sacred duty of imposing the pax Romana on a barbarous people far too prone to religious fanaticism. On the other side there stands a poor and quite probably demented colonial of obscure origins, professing unintelligible beliefs and answering the charge that he thinks himself “King of the Jews” with only enigmatic invocations of some “kingdom not of this world” and of some mysterious “truth” to which he feels called to bear witness. No sane and educated person of late antiquity could have failed to grasp the ridiculous imbalance in this scene, or to recognize which side of the picture represented the “truth” of all things. In the great cosmic hierarchy of rational powers—descending from the Highest Divinity down to the lowliest of slaves—Pilate’s is a particularly exalted place, a little nearer to heaven than to earth, and illumined with something of the splendor of the gods. Christ, by contrast, is no one at all; he has no natural claim on Pilate’s clemency, and certainly no rights; simply said, he has no “person” before the law. The one figure, then, commands total sway over life and death, while the other no longer belongs even to himself. And this wild asymmetry becomes even starker (and perhaps even more absurd) when Jesus is brought before Pilate for the second time, having been scourged, wrapped in a soldier’s cloak, and crowned with thorns. To the ears of any educated person of the late antique world, Pilate’s question to his prisoner now—“Where do you come from?”—would probably have sounded like a sardonic reminder to Christ of his lack of pedigree and of Pilate’s patrician origins; and this difference in status would only have been confirmed by Pilate’s still harsher reminder to Christ that “I have the power to crucify you.” Christ’s riposte, however, that Pilate possesses no powers not given him from above would have sounded, at most, like comical impudence on the part of a lunatic. Could any ancient witness to this scene, seeing how fate had apportioned to its principals their respective places in the order of things, have doubted on which side the full “truth” of things was to be found? After all, what greater measure of reality is there, in a world sustained by immutable hierarchies of social privilege, than the power to judge and kill another person? This may, as it happens, have been the deepest import in Pilate’s earlier, tersely rhetorical question to Christ: “What is truth?”

We, however, are not ancient men and women. Something far vaster and more indomitable than a mere span of centuries separates us from their vision of the world. We simply cannot see Christ’s broken, humiliated, and doomed humanity as something self-evidently contemptible and ridiculous; in a very real sense, we are destined to see it as embracing the whole mystery of our own humanity in its deepest fathoms: a sublime fragility, tragic and magnificent, pitiable and wonderful. Even the worst of us, who are capable of looking upon the sufferings of others with indifference or even contempt, have arrived at their callousness only through a prior violence to their own consciences. Raised in the shadow of the Christian world, inheritors of its moral grammar and imagination, we no longer enjoy the luxury of a capacity for innocent cruelty. Living as we do in the long aftermath of a revolution whose effects linger deep in our souls and natures, we cannot guilelessly look away from the abasement of the victim and fix our eyes in admiration upon his persecutor, no matter how grand the latter might be. Hence, we lack any immediate awareness of the radical inversion of perspective unfolded in this tale. Seen from within a certain pre-Christian vision of reality, Pilate’s verdict is perfectly just, not because it imposes a penalty “proportionate” to the “crime,” but because it reaffirms the natural and divine order of reality. In consigning a worthless man to an appropriately undignified death, and in restoring order through the destruction of the agent of disorder, it proclaims once again that the order of the state and of the hierarchy of social power is nothing less than the order of the gods transcribed into its appropriate terrestrial expression. The Gospel of John, however, takes an entirely contrary approach to the confrontation between Christ and Pilate. It sees the whole scene entirely in the light of the risen Christ and from the vantage of the empty tomb. This alters everything. God, it would appear, not only refuses to approve the verdict of his earthly “representatives”—whether gentiles or Jews, whether emperors, kings, generals, or judges—but goes so far as to reverse their judgment. Further, indeed: he vindicates and restores to life the very man those eminent authorities have “justly” condemned in the interest of public tranquility. This is an astonishing realignment of every perspective, a reversal of all the ancient values, a rebellion against “reality.”

In any event, the new world being brought into being in the gospels is a world in which the grand cosmic architecture of prerogative and power has been superseded by a new and positively “anarchic” order: one in which the glory of God can reveal itself in a crucified slave, and in which, therefore, we are forced to see the face of God in the forsaken of the world. In this shocking and ludicrously disordered order, everything is cast in a radically transforming light, and comes to mean something entirely new and perhaps unsettling. We do not laugh at “the man of sorrows” draped in a mock robe and pierced with a mock crown and jocosely hailed as a king by his persecutors. For us, this figure possesses a grandeur that would have been quite invisible to our more distant ancestors, an ironic beauty that entirely and irrevocably reverses the mockery. It is not he who is absurd, but rather all those kings and emperors who preposterously celebrate their pedigrees, and who rejoice in their power to command and to kill, and who are therefore unaware that the pompous symbols of greatness in which they drape themselves are nothing more than rags and thorns. In a sense, the figure of Christ being mocked, and yet somehow impregnable to every indignity, is the perfect emblem of what can only be called a “total humanism.” In him, we are afforded a vision of humanity in its widest and deepest scope, one in which the full nobility and mystery and beauty of the human countenance—the human person—wholly resides in each unique instance of our common nature. Seen thus, Christ’s descent from the “form of God” into the “form of a slave” is not a paradox at all, but an altogether apt confirmation of the indwelling of the divine image in each soul. And, once the world has been seen in this way, it can never again be what it once had been.

Editorial Note: Throughout the month of October, Church Life Journal will explore the sanctity of life and the hospitable imagination. What we mean by the hospitable imagination is the ecclesial formation of a way of seeing the world that is more spacious and welcoming. It is a way of seeing that recognizes the inherent sanctity of life and seeks to heal the perceived division between life issues and social justice issues. Catholic Social Teaching teaches us that a radical hospitality for life at all its stages and solidarity with the weak are cruciform. As our authors explore the various dimensions of the hospitable imagination (please click the link for a list of the posts), we invite you to think along with us.

By David Bentley Hart and originally published on October 26, 2017 in Church Life Journal which can be found here.

The Washington Times reports on a religious discrimination lawsuit filed last week in Idaho federal district court by a former player on the Idaho State University tennis team. The suit also alleges negligence, infliction of emotional distress and other causes of action growing out of harassment of plaintiff Orin Duffin by his teammates and his coaches. The complaint (full text) in Duffin v. Idaho State University, (D ID, filed 5/20/2016) alleges that when the team learned that Duffin was a Mormon, his coaches began to harass him, in part through inappropriate questions about sexual practices and his religious beliefs. The harassment peaked after he told the team that he would be on his mission call in Taiwan the following school year. While the team was staying in Las Vegas, one of the coaches arranged a trip to a strip club, provided the team with alcoholic beverages, and sent two prostitutes to Duffin’s room to tempt him. Duffin became the butt of jokes and comments after the Las Vegas trip.

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in The Atlantic which, I thought, was pretty insightful. Be edified.

_______________

One day last summer, around noon, I called Athena, a 13-year-old who lives in Houston, Texas. She answered her phone—she’s had an iPhone since she was 11—sounding as if she’d just woken up. We chatted about her favorite songs and TV shows, and I asked her what she likes to do with her friends. “We go to the mall,” she said. “Do your parents drop you off?,” I asked, recalling my own middle-school days, in the 1980s, when I’d enjoy a few parent-free hours shopping with my friends. “No—I go with my family,” she replied. “We’ll go with my mom and brothers and walk a little behind them. I just have to tell my mom where we’re going. I have to check in every hour or every 30 minutes.”

Those mall trips are infrequent—about once a month. More often, Athena and her friends spend time together on their phones, unchaperoned. Unlike the teens of my generation, who might have spent an evening tying up the family landline with gossip, they talk on Snapchat, the smartphone app that allows users to send pictures and videos that quickly disappear. They make sure to keep up their Snapstreaks, which show how many days in a row they have Snapchatted with each other. Sometimes they save screenshots of particularly ridiculous pictures of friends. “It’s good blackmail,” Athena said. (Because she’s a minor, I’m not using her real name.) She told me she’d spent most of the summer hanging out alone in her room with her phone. That’s just the way her generation is, she said. “We didn’t have a choice to know any life without iPads or iPhones. I think we like our phones more than we like actual people.”

I’ve been researching generational differences for 25 years, starting when I was a 22-year-old doctoral student in psychology. Typically, the characteristics that come to define a generation appear gradually, and along a continuum. Beliefs and behaviors that were already rising simply continue to do so. Millennials, for instance, are a highly individualistic generation, but individualism had been increasing since the Baby Boomers turned on, tuned in, and dropped out. I had grown accustomed to line graphs of trends that looked like modest hills and valleys. Then I began studying Athena’s generation.

Around 2012, I noticed abrupt shifts in teen behaviors and emotional states. The gentle slopes of the line graphs became steep mountains and sheer cliffs, and many of the distinctive characteristics of the Millennial generation began to disappear. In all my analyses of generational data—some reaching back to the 1930s—I had never seen anything like it.

At first I presumed these might be blips, but the trends persisted, across several years and a series of national surveys. The changes weren’t just in degree, but in kind. The biggest difference between the Millennials and their predecessors was in how they viewed the world; teens today differ from the Millennials not just in their views but in how they spend their time. The experiences they have every day are radically different from those of the generation that came of age just a few years before them.

What happened in 2012 to cause such dramatic shifts in behavior? It was after the Great Recession, which officially lasted from 2007 to 2009 and had a starker effect on Millennials trying to find a place in a sputtering economy. But it was exactly the moment when the proportion of Americans who owned a smartphone surpassed 50 percent.

The more I pored over yearly surveys of teen attitudes and behaviors, and the more I talked with young people like Athena, the clearer it became that theirs is a generation shaped by the smartphone and by the concomitant rise of social media. I call them iGen. Born between 1995 and 2012, members of this generation are growing up with smartphones, have an Instagram account before they start high school, and do not remember a time before the internet. The Millennials grew up with the web as well, but it wasn’t ever-present in their lives, at hand at all times, day and night. iGen’s oldest members were early adolescents when the iPhone was introduced, in 2007, and high-school students when the iPad entered the scene, in 2010. A 2017 survey of more than 5,000 American teens found that three out of four owned an iPhone.

The advent of the smartphone and its cousin the tablet was followed quickly by hand-wringing about the deleterious effects of “screen time.” But the impact of these devices has not been fully appreciated, and goes far beyond the usual concerns about curtailed attention spans. The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household. The trends appear among teens poor and rich; of every ethnic background; in cities, suburbs, and small towns. Where there are cell towers, there are teens living their lives on their smartphone.

To those of us who fondly recall a more analog adolescence, this may seem foreign and troubling. The aim of generational study, however, is not to succumb to nostalgia for the way things used to be; it’s to understand how they are now. Some generational changes are positive, some are negative, and many are both. More comfortable in their bedrooms than in a car or at a party, today’s teens are physically safer than teens have ever been. They’re markedly less likely to get into a car accident and, having less of a taste for alcohol than their predecessors, are less susceptible to drinking’s attendant ills.

Psychologically, however, they are more vulnerable than Millennials were: Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.

Even when a seismic event—a war, a technological leap, a free concert in the mud—plays an outsize role in shaping a group of young people, no single factor ever defines a generation. Parenting styles continue to change, as do school curricula and culture, and these things matter. But the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.

In the early 1970s, the photographer Bill Yates shot a series of portraits at the Sweetheart Roller Skating Rink in Tampa, Florida. In one, a shirtless teen stands with a large bottle of peppermint schnapps stuck in the waistband of his jeans. In another, a boy who looks no older than 12 poses with a cigarette in his mouth. The rink was a place where kids could get away from their parents and inhabit a world of their own, a world where they could drink, smoke, and make out in the backs of their cars. In stark black-and-white, the adolescent Boomers gaze at Yates’s camera with the self-confidence born of making your own choices—even if, perhaps especially if, your parents wouldn’t think they were the right ones.

Fifteen years later, during my own teenage years as a member of Generation X, smoking had lost some of its romance, but independence was definitely still in. My friends and I plotted to get our driver’s license as soon as we could, making DMV appointments for the day we turned 16 and using our newfound freedom to escape the confines of our suburban neighborhood. Asked by our parents, “When will you be home?,” we replied, “When do I have to be?”

But the allure of independence, so powerful to previous generations, holds less sway over today’s teens, who are less likely to leave the house without their parents. The shift is stunning: 12th-graders in 2015 were going out less often than eighth-graders did as recently as 2009.

Today’s teens are also less likely to date. The initial stage of courtship, which Gen Xers called “liking” (as in “Ooh, he likes you!”), kids now call “talking”—an ironic choice for a generation that prefers texting to actual conversation. After two teens have “talked” for a while, they might start dating. But only about 56 percent of high-school seniors in 2015 went out on dates; for Boomers and Gen Xers, the number was about 85 percent.

The decline in dating tracks with a decline in sexual activity. The drop is the sharpest for ninth-graders, among whom the number of sexually active teens has been cut by almost 40 percent since 1991. The average teen now has had sex for the first time by the spring of 11th grade, a full year later than the average Gen Xer. Fewer teens having sex has contributed to what many see as one of the most positive youth trends in recent years: The teen birth rate hit an all-time low in 2016, down 67 percent since its modern peak, in 1991.

Even driving, a symbol of adolescent freedom inscribed in American popular culture, from Rebel Without a Cause to Ferris Bueller’s Day Off, has lost its appeal for today’s teens. Nearly all Boomer high-school students had their driver’s license by the spring of their senior year; more than one in four teens today still lack one at the end of high school. For some, Mom and Dad are such good chauffeurs that there’s no urgent need to drive. “My parents drove me everywhere and never complained, so I always had rides,” a 21-year-old student in San Diego told me. “I didn’t get my license until my mom told me I had to because she could not keep driving me to school.” She finally got her license six months after her 18th birthday. In conversation after conversation, teens described getting their license as something to be nagged into by their parents—a notion that would have been unthinkable to previous generations.

Independence isn’t free—you need some money in your pocket to pay for gas, or for that bottle of schnapps. In earlier eras, kids worked in great numbers, eager to finance their freedom or prodded by their parents to learn the value of a dollar. But iGen teens aren’t working (or managing their own money) as much. In the late 1970s, 77 percent of high-school seniors worked for pay during the school year; by the mid-2010s, only 55 percent did. The number of eighth-graders who work for pay has been cut in half. These declines accelerated during the Great Recession, but teen employment has not bounced back, even though job availability has.

Of course, putting off the responsibilities of adulthood is not an iGen innovation. Gen Xers, in the 1990s, were the first to postpone the traditional markers of adulthood. Young Gen Xers were just about as likely to drive, drink alcohol, and date as young Boomers had been, and more likely to have sex and get pregnant as teens. But as they left their teenage years behind, Gen Xers married and started careers later than their Boomer predecessors had.

Gen X managed to stretch adolescence beyond all previous limits: Its members started becoming adults earlier and finished becoming adults later. Beginning with Millennials and continuing with iGen, adolescence is contracting again—but only because its onset is being delayed. Across a range of behaviors—drinking, dating, spending time unsupervised— 18-year-olds now act more like 15-year-olds used to, and 15-year-olds more like 13-year-olds. Childhood now stretches well into high school.

Why are today’s teens waiting longer to take on both the responsibilities and the pleasures of adulthood? Shifts in the economy, and parenting, certainly play a role. In an information economy that rewards higher education more than early work history, parents may be inclined to encourage their kids to stay home and study rather than to get a part-time job. Teens, in turn, seem to be content with this homebody arrangement—not because they’re so studious, but because their social life is lived on their phone. They don’t need to leave home to spend time with their friends.

If today’s teens were a generation of grinds, we’d see that in the data. But eighth-, 10th-, and 12th-graders in the 2010s actually spend less time on homework than Gen X teens did in the early 1990s. (High-school seniors headed for four-year colleges spend about the same amount of time on homework as their predecessors did.) The time that seniors spend on activities such as student clubs and sports and exercise has changed little in recent years. Combined with the decline in working for pay, this means iGen teens have more leisure time than Gen X teens did, not less.

So what are they doing with all that time? They are on their phone, in their room, alone and often distressed.

One of the ironies of iGen life is that despite spending far more time under the same roof as their parents, today’s teens can hardly be said to be closer to their mothers and fathers than their predecessors were. “I’ve seen my friends with their families—they don’t talk to them,” Athena told me. “They just say ‘Okay, okay, whatever’ while they’re on their phones. They don’t pay attention to their family.” Like her peers, Athena is an expert at tuning out her parents so she can focus on her phone. She spent much of her summer keeping up with friends, but nearly all of it was over text or Snapchat. “I’ve been on my phone more than I’ve been with actual people,” she said. “My bed has, like, an imprint of my body.”

In this, too, she is typical. The number of teens who get together with their friends nearly every day dropped by more than 40 percent from 2000 to 2015; the decline has been especially steep recently. It’s not only a matter of fewer kids partying; fewer kids are spending time simply hanging out. That’s something most teens used to do: nerds and jocks, poor kids and rich kids, C students and A students. The roller rink, the basketball court, the town pool, the local necking spot—they’ve all been replaced by virtual spaces accessed through apps and the web.

You might expect that teens spend so much time in these new spaces because it makes them happy, but most data suggest that it does not. The Monitoring the Future survey, funded by the National Institute on Drug Abuse and designed to be nationally representative, has asked 12th-graders more than 1,000 questions every year since 1975 and queried eighth- and 10th-graders since 1991. The survey asks teens how happy they are and also how much of their leisure time they spend on various activities, including nonscreen activities such as in-person social interaction and exercise, and, in recent years, screen activities such as using social media, texting, and browsing the web. The results could not be clearer: Teens who spend more time than average on screen activities are more likely to be unhappy, and those who spend more time than average on nonscreen activities are more likely to be happy.

There’s not a single exception. All screen activities are linked to less happiness, and all nonscreen activities are linked to more happiness. Eighth-graders who spend 10 or more hours a week on social media are 56 percent more likely to say they’re unhappy than those who devote less time to social media. Admittedly, 10 hours a week is a lot. But those who spend six to nine hours a week on social media are still 47 percent more likely to say they are unhappy than those who use social media even less. The opposite is true of in-person interactions. Those who spend an above-average amount of time with their friends in person are 20 percent less likely to say they’re unhappy than those who hang out for a below-average amount of time.

If you were going to give advice for a happy adolescence based on this survey, it would be straightforward: Put down the phone, turn off the laptop, and do something—anything—that does not involve a screen. Of course, these analyses don’t unequivocally prove that screen time causes unhappiness; it’s possible that unhappy teens spend more time online. But recent research suggests that screen time, in particular social-media use, does indeed cause unhappiness. One study asked college students with a Facebook page to complete short surveys on their phone over the course of two weeks. They’d get a text message with a link five times a day, and report on their mood and how much they’d used Facebook. The more they’d used Facebook, the unhappier they felt, but feeling unhappy did not subsequently lead to more Facebook use.

Social-networking sites like Facebook promise to connect us to friends. But the portrait of iGen teens emerging from the data is one of a lonely, dislocated generation. Teens who visit social-networking sites every day but see their friends in person less frequently are the most likely to agree with the statements “A lot of times I feel lonely,” “I often feel left out of things,” and “I often wish I had more good friends.” Teens’ feelings of loneliness spiked in 2013 and have remained high since.

This doesn’t always mean that, on an individual level, kids who spend more time online are lonelier than kids who spend less time online. Teens who spend more time on social media also spend more time with their friends in person, on average—highly social teens are more social in both venues, and less social teens are less so. But at the generational level, when teens spend more time on smartphones and less time on in-person social interactions, loneliness is more common.

So is depression. Once again, the effect of screen activities is unmistakable: The more time teens spend looking at screens, the more likely they are to report symptoms of depression. Eighth-graders who are heavy users of social media increase their risk of depression by 27 percent, while those who play sports, go to religious services, or even do homework more than the average teen cut their risk significantly.

Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan. (That’s much more than the risk related to, say, watching TV.) One piece of data that indirectly but stunningly captures kids’ growing isolation, for good and for bad: Since 2007, the homicide rate among teens has declined, but the suicide rate has increased. As teens have started spending less time together, they have become less likely to kill one another, and more likely to kill themselves. In 2011, for the first time in 24 years, the teen suicide rate was higher than the teen homicide rate.

Depression and suicide have many causes; too much technology is clearly not the only one. And the teen suicide rate was even higher in the 1990s, long before smartphones existed. Then again, about four times as many Americans now take antidepressants, which are often effective in treating severe depression, the type most strongly linked to suicide.

What’s the connection between smartphones and the apparent psychological distress this generation is experiencing? For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out. Today’s teens may go to fewer parties and spend less time together in person, but when they do congregate, they document their hangouts relentlessly—on Snapchat, Instagram, Facebook. Those not invited to come along are keenly aware of it. Accordingly, the number of teens who feel left out has reached all-time highs across age groups. Like the increase in loneliness, the upswing in feeling left out has been swift and significant.

This trend has been especially steep among girls. Forty-eight percent more girls said they often felt left out in 2015 than in 2010, compared with 27 percent more boys. Girls use social media more often, giving them additional opportunities to feel excluded and lonely when they see their friends or classmates getting together without them. Social media levy a psychic tax on the teen doing the posting as well, as she anxiously awaits the affirmation of comments and likes. When Athena posts pictures to Instagram, she told me, “I’m nervous about what people think and are going to say. It sometimes bugs me when I don’t get a certain amount of likes on a picture.”

Girls have also borne the brunt of the rise in depressive symptoms among today’s teens. Boys’ depressive symptoms increased by 21 percent from 2012 to 2015, while girls’ increased by 50 percent—more than twice as much. The rise in suicide, too, is more pronounced among girls. Although the rate increased for both sexes, three times as many 12-to-14-year-old girls killed themselves in 2015 as in 2007, compared with twice as many boys. The suicide rate is still higher for boys, in part because they use more-lethal methods, but girls are beginning to close the gap.

These more dire consequences for teenage girls could also be rooted in the fact that they’re more likely to experience cyberbullying. Boys tend to bully one another physically, while girls are more likely to do so by undermining a victim’s social status or relationships. Social media give middle- and high-school girls a platform on which to carry out the style of aggression they favor, ostracizing and excluding other girls around the clock.

Social-media companies are of course aware of these problems, and to one degree or another have endeavored to prevent cyberbullying. But their various motivations are, to say the least, complex. A recently leaked Facebook document indicated that the company had been touting to advertisers its ability to determine teens’ emotional state based on their on-site behavior, and even to pinpoint “moments when young people need a confidence boost.” Facebook acknowledged that the document was real, but denied that it offers “tools to target people based on their emotional state.”

In July 2014, a 13-year-old girl in North Texas woke to the smell of something burning. Her phone had overheated and melted into the sheets. National news outlets picked up the story, stoking readers’ fears that their cellphone might spontaneously combust. To me, however, the flaming cellphone wasn’t the only surprising aspect of the story. Why, I wondered, would anyone sleep with her phone beside her in bed? It’s not as though you can surf the web while you’re sleeping. And who could slumber deeply inches from a buzzing phone?

Curious, I asked my undergraduate students at San Diego State University what they do with their phone while they sleep. Their answers were a profile in obsession. Nearly all slept with their phone, putting it under their pillow, on the mattress, or at the very least within arm’s reach of the bed. They checked social media right before they went to sleep, and reached for their phone as soon as they woke up in the morning (they had to—all of them used it as their alarm clock). Their phone was the last thing they saw before they went to sleep and the first thing they saw when they woke up. If they woke in the middle of the night, they often ended up looking at their phone. Some used the language of addiction. “I know I shouldn’t, but I just can’t help it,” one said about looking at her phone while in bed. Others saw their phone as an extension of their body—or even like a lover: “Having my phone closer to me while I’m sleeping is a comfort.”

It may be a comfort, but the smartphone is cutting into teens’ sleep: Many now sleep less than seven hours most nights. Sleep experts say that teens should get about nine hours of sleep a night; a teen who is getting less than seven hours a night is significantly sleep deprived. Fifty-seven percent more teens were sleep deprived in 2015 than in 1991. In just the four years from 2012 to 2015, 22 percent more teens failed to get seven hours of sleep.

The increase is suspiciously timed, once again starting around when most teens got a smartphone. Two national surveys show that teens who spend three or more hours a day on electronic devices are 28 percent more likely to get less than seven hours of sleep than those who spend fewer than three hours, and teens who visit social-media sites every day are 19 percent more likely to be sleep deprived. A meta-analysis of studies on electronic-device use among children found similar results: Children who use a media device right before bed are more likely to sleep less than they should, more likely to sleep poorly, and more than twice as likely to be sleepy during the day.

Electronic devices and social media seem to have an especially strong ability to disrupt sleep. Teens who read books and magazines more often than the average are actually slightly less likely to be sleep deprived—either reading lulls them to sleep, or they can put the book down at bedtime. Watching TV for several hours a day is only weakly linked to sleeping less. But the allure of the smartphone is often too much to resist.

Sleep deprivation is linked to myriad issues, including compromised thinking and reasoning, susceptibility to illness, weight gain, and high blood pressure. It also affects mood: People who don’t sleep enough are prone to depression and anxiety. Again, it’s difficult to trace the precise paths of causation. Smartphones could be causing lack of sleep, which leads to depression, or the phones could be causing depression, which leads to lack of sleep. Or some other factor could be causing both depression and sleep deprivation to rise. But the smartphone, its blue light glowing in the dark, is likely playing a nefarious role.

The correlations between depression and smartphone use are strong enough to suggest that more parents should be telling their kids to put down their phone. As the technology writer Nick Bilton has reported, it’s a policy some Silicon Valley executives follow. Even Steve Jobs limited his kids’ use of the devices he brought into the world.

What’s at stake isn’t just how kids experience adolescence. The constant presence of smartphones is likely to affect them well into adulthood. Among people who suffer an episode of depression, at least half become depressed again later in life. Adolescence is a key time for developing social skills; as teens spend less time with their friends face-to-face, they have fewer opportunities to practice them. In the next decade, we may see more adults who know just the right emoji for a situation, but not the right facial expression.

I realize that restricting technology might be an unrealistic demand to impose on a generation of kids so accustomed to being wired at all times. My three daughters were born in 2006, 2009, and 2012. They’re not yet old enough to display the traits of iGen teens, but I have already witnessed firsthand just how ingrained new media are in their young lives. I’ve observed my toddler, barely old enough to walk, confidently swiping her way through an iPad. I’ve experienced my 6-year-old asking for her own cellphone. I’ve overheard my 9-year-old discussing the latest app to sweep the fourth grade. Prying the phone out of our kids’ hands will be difficult, even more so than the quixotic efforts of my parents’ generation to get their kids to turn off MTV and get some fresh air. But more seems to be at stake in urging teens to use their phone responsibly, and there are benefits to be gained even if all we instill in our children is the importance of moderation. Significant effects on both mental health and sleep time appear after two or more hours a day on electronic devices. The average teen spends about two and a half hours a day on electronic devices. Some mild boundary-setting could keep kids from falling into harmful habits.

In my conversations with teens, I saw hopeful signs that kids themselves are beginning to link some of their troubles to their ever-present phone. Athena told me that when she does spend time with her friends in person, they are often looking at their device instead of at her. “I’m trying to talk to them about something, and they don’t actually look at my face,” she said. “They’re looking at their phone, or they’re looking at their Apple Watch.” “What does that feel like, when you’re trying to talk to somebody face-to-face and they’re not looking at you?,” I asked. “It kind of hurts,” she said. “It hurts. I know my parents’ generation didn’t do that. I could be talking about something super important to me, and they wouldn’t even be listening.”

Once, she told me, she was hanging out with a friend who was texting her boyfriend. “I was trying to talk to her about my family, and what was going on, and she was like, ‘Uh-huh, yeah, whatever.’ So I took her phone out of her hands and I threw it at my wall.”

I couldn’t help laughing. “You play volleyball,” I said. “Do you have a pretty good arm?” “Yep,” she replied.

By Jean M. Twenge and originally published in The Atlantic in September 2017 and can be found here.

“Yesterday two members of the U.S. House of Representatives, Joe Kennedy III and Bobby Scott, announced the introduction of the Do No Harm Act (full text). The bill would amend the Religious Freedom Restoration Act to preclude its use in ways that result in discrimination or harm to third parties or impose one person’s religious views on another. More specifically, the bill would preclude using RFRA to create religious exemptions from various civil rights laws or labor laws, or accommodations which limit access to health care, or receipt of goods or services from the government or from government contractors or grantees.”

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in Aeon which, I thought, was pretty insightful. Be edified.

_______________

In 1966, just over 50 years ago, the distinguished Canadian-born anthropologist Anthony Wallace confidently predicted the global demise of religion at the hands of an advancing science: ‘belief in supernatural powers is doomed to die out, all over the world, as a result of the increasing adequacy and diffusion of scientific knowledge’. Wallace’s vision was not exceptional. On the contrary, the modern social sciences, which took shape in 19th-century western Europe, took their own recent historical experience of secularisation as a universal model. At the core of the social sciences lay the assumption, sometimes tacit and sometimes stated as an explicit prediction, that all cultures would eventually converge on something roughly approximating secular, Western, liberal democracy. Then something closer to the opposite happened.

Not only has secularism failed to continue its steady global march but countries as varied as Iran, India, Israel, Algeria and Turkey have either had their secular governments replaced by religious ones, or have seen the rise of influential religious nationalist movements. Secularisation, as predicted by the social sciences, has failed.

To be sure, this failure is not unqualified. Many Western countries continue to witness decline in religious belief and practice. The most recent census data released in Australia, for example, shows that 30 per cent of the population identify as having ‘no religion’, and that this percentage is increasing. International surveys confirm comparatively low levels of religious commitment in western Europe and Australasia. Even the United States, a long-time source of embarrassment for the secularisation thesis, has seen a rise in unbelief. The percentage of atheists in the US now sits at an all-time high (if ‘high’ is the right word) of around 3 per cent. Yet, for all that, globally, the total number of people who consider themselves to be religious remains high, and demographic trends suggest that the overall pattern for the immediate future will be one of religious growth. But this isn’t the only failure of the secularisation thesis.

Scientists, intellectuals and social scientists expected that the spread of modern science would drive secularisation – that science would be a secularising force. But that simply hasn’t been the case. If we look at those societies where religion remains vibrant, their key common features are less to do with science, and more to do with feelings of existential security and protection from some of the basic uncertainties of life in the form of public goods. A social safety net might be correlated with scientific advances but only loosely, and again the case of the US is instructive. The US is arguably the most scientifically and technologically advanced society in the world, and yet at the same time the most religious of Western societies. As the British sociologist David Martin concluded in The Future of Christianity (2011): ‘There is no consistent relation between the degree of scientific advance and a reduced profile of religious influence, belief and practice.’

The story of science and secularisation becomes even more intriguing when we consider those societies that have witnessed significant reactions against secularist agendas. India’s first prime minister Jawaharlal Nehru championed secular and scientific ideals, and enlisted scientific education in the project of modernisation. Nehru was confident that Hindu visions of a Vedic past and Muslim dreams of an Islamic theocracy would both succumb to the inexorable historical march of secularisation. ‘There is only one-way traffic in Time,’ he declared. But as the subsequent rise of Hindu and Islamic fundamentalism adequately attests, Nehru was wrong. Moreover, the association of science with a secularising agenda has backfired, with science becoming a collateral casualty of resistance to secularism.

Turkey provides an even more revealing case. Like most pioneering nationalists, Mustafa Kemal Atatürk, the founder of the Turkish republic, was a committed secularist. Atatürk believed that science was destined to displace religion. In order to make sure that Turkey was on the right side of history, he gave science, in particular evolutionary biology, a central place in the state education system of the fledgling Turkish republic. As a result, evolution came to be associated with Atatürk’s entire political programme, including secularism. Islamist parties in Turkey, seeking to counter the secularist ideals of the nation’s founders, have also attacked the teaching of evolution. For them, evolution is associated with secular materialism. This sentiment culminated in the decision this June to remove the teaching of evolution from the high-school classroom. Again, science has become a victim of guilt by association.

The US represents a different cultural context, where it might seem that the key issue is a conflict between literal readings of Genesis and key features of evolutionary history. But in fact, much of the creationist discourse centres on moral values. In the US case too, we see anti-evolutionism motivated at least in part by the assumption that evolutionary theory is a stalking horse for secular materialism and its attendant moral commitments. As in India and Turkey, secularism is actually hurting science.

In brief, global secularisation is not inevitable and, when it does happen, it is not caused by science. Further, when the attempt is made to use science to advance secularism, the results can damage science. The thesis that ‘science causes secularisation’ simply fails the empirical test, and enlisting science as an instrument of secularisation turns out to be poor strategy. The science and secularism pairing is so awkward that it raises the question: why did anyone think otherwise?

Historically, two related sources advanced the idea that science would displace religion. First, 19th-century progressivist conceptions of history, particularly associated with the French philosopher Auguste Comte, held to a theory of history in which societies pass through three stages – religious, metaphysical and scientific (or ‘positive’). Comte coined the term ‘sociology’ and he wanted to diminish the social influence of religion and replace it with a new science of society. Comte’s influence extended to the ‘Young Turks’ and Atatürk.

The 19th century also witnessed the inception of the ‘conflict model’ of science and religion. This was the view that history can be understood in terms of a ‘conflict between two epochs in the evolution of human thought – the theological and the scientific’. This description comes from Andrew Dickson White’s influential A History of the Warfare of Science with Theology in Christendom (1896), the title of which nicely encapsulates its author’s general theory. White’s work, as well as John William Draper’s earlier History of the Conflict Between Religion and Science (1874), firmly established the conflict thesis as the default way of thinking about the historical relations between science and religion. Both works were translated into multiple languages. Draper’s History went through more than 50 printings in the US alone, was translated into 20 languages and, notably, became a bestseller in the late Ottoman empire, where it informed Atatürk’s understanding that progress meant science superseding religion.

Today, people are less confident that history moves through a series of set stages toward a single destination. Nor, despite its popular persistence, do most historians of science support the idea of an enduring conflict between science and religion. Renowned collisions, such as the Galileo affair, turned on politics and personalities, not just science and religion. Darwin had significant religious supporters and scientific detractors, as well as vice versa. Many other alleged instances of science-religion conflict have now been exposed as pure inventions. In fact, contrary to conflict, the historical norm has more often been one of mutual support between science and religion. In its formative years in the 17th century, modern science relied on religious legitimation. During the 18th and 19th centuries, natural theology helped to popularise science.

The conflict model of science and religion offered a mistaken view of the past and, when combined with expectations of secularisation, led to a flawed vision of the future. Secularisation theory failed at both description and prediction. The real question is why we continue to encounter proponents of science-religion conflict. Many are prominent scientists. It would be superfluous to rehearse Richard Dawkins’s musings on this topic, but he is by no means a solitary voice. Stephen Hawking thinks that ‘science will win because it works’; Sam Harris has declared that ‘science must destroy religion’; Steven Weinberg thinks that science has weakened religious certitude; Colin Blakemore predicts that science will eventually make religion unnecessary. Historical evidence simply does not support such contentions. Indeed, it suggests that they are misguided.

So why do they persist? The answers are political. Leaving aside any lingering fondness for quaint 19th-century understandings of history, we must look to the fear of Islamic fundamentalism, exasperation with creationism, an aversion to alliances between the religious Right and climate-change denial, and worries about the erosion of scientific authority. While we might be sympathetic to these concerns, there is no disguising the fact that they arise out of an unhelpful intrusion of normative commitments into the discussion. Wishful thinking – hoping that science will vanquish religion – is no substitute for a sober assessment of present realities. Continuing with this advocacy is likely to have an effect opposite to that intended.

Religion is not going away any time soon, and science will not destroy it. If anything, it is science that is subject to increasing threats to its authority and social legitimacy. Given this, science needs all the friends it can get. Its advocates would be well advised to stop fabricating an enemy out of religion, or insisting that the only path to a secure future lies in a marriage of science and secularism.

By Peter Harrison. This article was originally published in Aeon on September 7, 2017 and can be found here.

In Tree of Life Christian Schools v. City of Upper Arlington, (6th Cir., May 18, 2016), the U.S. 6th Circuit Court of Appeals, in a 2-1 decision, reversed and remanded in a RLUIPA land use case, finding that genuine issues of material fact remain as to the application of RLUIPA’s “equal terms” provision. At issue is an Ohio city’s refusal to rezone a large office building for use as a religious school. The office building is in an area zoned as an “Office and Research District” — an area designed for uses that would maximize the city’s tax revenues. The majority said in part:

The religious land use that TOL Christian Schools proposes is, we assume without deciding, deleterious to the purpose of the regulation at issue (which we assume to be increasing income-tax revenue). But the nonreligious uses that the government concedes it would allow seem to be similarly situated to the regulation…. [T]he government suggested at oral argument that it would prefer that [the property] be used for an ambulatory care center or outpatient surgery center. But we cannot assume as a fact… that an ambulatory care center (or an outpatient surgery center, or a data and call center, or office space for a not-for-profit organization, or a daycare) would employ higher-income workers than TOL Christian Schools would….

In In re St. Thomas High School, (TX App., May 1, 2016), a Texas state appellate court held that the ecclesiastical abstention doctrine requires dismissal of a breach of contract lawsuit against a Catholic high school brought by a 16-year-old student who was expelled and by his parents. The expulsion came after the parents sent the school a letter about the handling of a grade dispute. The letter complained that the teacher involved had not called the parents as they had requested. It alleged that when the teacher told the student the reason for failing to call (he was too busy preparing for a romantic night with his wife to celebrate their wedding anniversary), this amounted to engaging in a discussion with the student “in a sexually harassing fashion.”

The school concluded that the false accusations of sexual harassment against the teacher made it impossible for other teachers to teach the student without fear of similar charges. The court said in part:

we conclude that St. Thomas’s status as a Catholic high school does not place it outside the ecclesiastical abstention doctrine’s reach. No less than a Catholic church, St. Thomas is a religious institution enjoying First Amendment protection for the free exercise of religion….

This record belies any contention that spiritual standards and religious doctrine play no role in the parties’ dispute. Plaintiffs expressly relied on the Catholic nature of a St. Thomas education to justify their demands…. In addition … this record also demonstrates impermissible interference with St. Thomas’s management of its internal affairs and encroachment upon its internal governance.

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in First Things which, I thought, was pretty insightful. Be edified.

_______________

My friends who work in scientific fields were aghast when they saw that the organizers of a planned “March for Science” had tweeted that “colonization, racism, immigration, native rights, sexism, ableism, queer-, trans-, intersex-phobia, & econ justice are scientific issues [black power emoji][rainbow emoji].” Who can blame them for their horror? The impartial search for truth is having enough problems these days, what with the discovery that many prominent scientific results, over a broad swath of fields, are non-replicable and likely false. It seems altogether the wrong time to inject a dose of partiality.

My correspondents always hasten to add that, of course, they’re in favor of racial and gender-based outreach that seeks to increase the relatively low proportion of working scientists who are women or who belong to certain ethnic groups. They inform me that the institutions and practices of science are still shaped by covert and overt misogyny and racism, and I have no reason to doubt them. What makes them wary, however, is the even more illiberal desire to inject the views and interests of progressive social causes into the methodology of science itself (hypothesis formation, experiment, analysis) and perhaps even into its conclusions. This, in their eyes, would represent an overstepping of ideological bounds and a transgression against the most sacred ideals of the scientific enterprise (empiricism, objectivity, impartiality). It would transform science into a different activity, one which they do not recognize and of which they do not wish to be a part.

This is a naive view. In fact, the purported objectivity of scientific inquiry is a damaging myth, and the illiberal instincts of the Marchers for Science represent a corrective, though not a cure. Science has been ideologically captured since its birth, and “value-laden inquiry” is not a recent deviation but is rather fundamental to its successful practice. The successful conquest of the institutions of science by overtly politicized forces would change little on the ground, but it would help to update society’s perceptions so that they match the underlying reality. We should welcome the March for Science as it sets out to destroy the academy’s undeserved reputation for neutrality and to reveal science for what it has always been.

According to the popular understanding, science is simply the comparing and ordering of sense data originating from experiment or from the observation of natural phenomena. If we are lucky, patterns or other forms of order emerge from these data. Scientists can then build theories that describe and abstract these regularities, and perhaps even use them to make predictions about as-yet-unobserved phenomena. Finally, these theories are put to the test by new observations and discarded if they contradict the best available new data. This is a process of induction, whereby simple, raw observations are grouped together in such a way that the law that connects them becomes evident. The higher-level relations and associations are grouped in turn, such that the meta-law which underlies them all comes into focus, and so on higher and higher up the chain of abstraction, toward theories ever more rarefied and powerful. Yet in principle, even the most complex theory does nothing more than tie together a vast number of simple observations, each of which is pure, objective, and incontestable.

A crucial feature of this story, and the source of a great deal of its attraction, is the freedom it offers from the oppressive legacies of ideology, privilege, and prejudice that taint every human institution. If science is nothing more than the cataloging and systematization of information directly accessible to our senses, then it could be a source of knowledge that is objective, neutral, and accepted by all. Moreover, if each step of this process is solely determined by the data—that is, if at no point does a theorist have a free choice between alternative interpretations or generalizations—then we can be sure that no lingering taint in the scientist’s mind will impress itself upon the completed theory. The scientist is like an automaton, albeit a clever and subtle one, transforming inputs into outputs, discovering rather than inventing, performing a mechanical rather than an artistic task.

This is why those most invested in science as a way of knowing the world react with such horror to the proposal that values, even the progressive values they overwhelmingly share, should inform the scientific method. The threat is not so much that such a program would have grave consequences if carried out, as that the assumptions behind it threaten to undercut what they believe makes science unique. If such a thing as “feminist science” or “XYZ science” were even possible, then it would mean that science as it currently exists might not be perfectly neutral and value-free. It would imply that there are many possible ways of doing science, and that those different ways might reach different answers. Worst of all, it would make who does science a relevant question—a sort of scientific Donatism—opening up the field to further suspicion from its ideological enemies.

The trouble is that this idealized view is wrong. The political, moral, and religious views of a scientist really do affect the results that he gets. Consider the process of theory formation. A theorist is struck by inspiration: Something innocuous, like a passing remark by a stranger at the grocery store, suddenly triggers the realization that two unrelated phenomena can be linked, or an existing body of theory can be simplified or unified through a new form of explanation. The scientist then goes looking for evidence to bolster his theory (the precise opposite, it’s worth noting, of Karl Popper’s rather idealistic conception of the scientific method). Given the messiness and flexibility of all real-world datasets, he will invariably be able to find it. Partisans of the old theory remain unmoved and argue, convincingly, that looked at in a different way, the data support their interpretation instead. Often the ensuing scholarly battle stimulates the development of new experimental techniques, and sometimes these new methods are able to settle the matter decisively. Other times the battle can rage for years, or even decades. Even when questions are settled, it usually isn’t because either the old guard or the upstarts won their rivals over, but because one party failed to make the case to the next generation of students and eventually died off.

Scientists who are caught in the raptures of a new theory will often stick with it for a time even when all available evidence counts against it. Sometimes, such a theory even wins in the end. A dramatic, and perhaps surprising, example comes from one of the most famous scientific theories of the twentieth century: Albert Einstein’s special theory of relativity. A year after Einstein proposed it, the theory suffered a devastating blow from the famous experimentalist Walter Kaufmann, who published an empirical result that appeared to disprove the new theory. We now know that Kaufmann’s equipment was insufficiently sensitive to detect the effect Einstein predicted, and moreover that it was miscalibrated, but it took a decade before this became clear. In the meantime, Einstein brushed aside the criticism and continued propounding his theory, winning an increasing number of converts over time, despite the fact that the best experimental evidence had “refuted” it.

The experience evidently had a profound effect on Einstein. He began his career as a dedicated positivist and empiricist, only losing the faith when it failed him again and again. Rigorous attempts to inductively postulate laws from data brought him only years of stagnation and failure while he searched for the field equations of general relativity, and nearly cost him priority for the discovery. In desperation, Einstein searched for the mathematically simplest explanation, embracing prior philosophical criteria as a constraint on the space of possible theories, and then found his answer almost immediately. He ultimately concluded that, as he put it in his Autobiographical Notes, “no collection of empirical facts however comprehensive can ever lead to the formulation of such complicated equations. A theory can be tested by experience, but there is no way from experience to the construction of a theory.” In other words, the inductive approach to theory-building on which so many of science’s claims to neutrality hang is not only a poor description of science as it exists, but is, because of the limited powers of the human mind, not a way that science even could be done. The consequence of this, as Einstein said in an interview at the end of his life, is that “every true theorist is a kind of tamed metaphysicist, no matter how pure a ‘positivist’ he may fancy himself.”

Einstein’s claim is essentially a practical one: It is far too hard for human beings to reason backward from a mass of complex and entropic data to the compact and simple law that gave rise to it. Yet this argument is not as devastating to the inductivist story of science as it may at first sound. Yes, one might concede, the actual practice of the scientific method may be messy, or even the complete opposite of the inductive approach, but the fact remains that there is a law out there that is generating the data of our experience. So long as we continue to be guided by the data, we will gradually approach closer and closer to the true laws of nature, even if not by inductive means.

But the trouble is that there is never just one such law. Theory is almost always underdetermined by data. It’s simple enough to construct artificial examples of different laws that make identical predictions, but most can be dispatched by Occam’s razor (though note that this is a sneaky application of metaphysics if there ever was one!). History, however, offers something altogether more disturbing: countless examples where data could be explained by two fundamentally different types of theory, trafficking in different approaches, different causal mechanisms, even different ontologies.

Consider, for instance, the astonishing accuracy with which both Newtonian mechanics and general relativity predict the motions of the various bodies in the solar system. This may seem like an odd example—isn’t it a case in which a flawed theory explained the evidence for some time, and was eventually replaced by a better theory? Yes, but as Einstein put it in his Herbert Spencer lecture, On the Method of Theoretical Physics: “We can point to two essentially different principles, both of which correspond with experience to a large extent; this proves at the same time that every attempt at a logical deduction of the basic concepts and postulates of mechanics from elementary experiences is doomed to failure.” If two theories barely inhabiting the same conceptual universe can both explain our observations with such accuracy, what if there’s another? What if there are ten more? What if they give identical predictions beyond the accuracy of any instruments we will build for ten thousand years? When forced to choose between two such radically different theories, parlor tricks like Occam’s razor win us nothing. The choice is philosophical and metaphysical: It can be informed by experience, but can never be settled by science.
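
To give one concrete sense of how two such different theories can nonetheless “correspond with experience to a large extent,” here is the standard weak-field correspondence, added purely as an illustration and not drawn from the essay itself: in the limit of weak gravity and slow motion, the curved-spacetime description collapses back into the Newtonian one.

\[ ds^2 \approx -\Big(1 + \frac{2\Phi}{c^2}\Big)c^2\,dt^2 + \Big(1 - \frac{2\Phi}{c^2}\Big)\big(dx^2 + dy^2 + dz^2\big), \qquad \Phi = -\frac{GM}{r} \]

\[ \frac{d^2\mathbf{x}}{dt^2} \approx -\nabla\Phi \quad\text{(geodesic motion, to leading order)}, \qquad \nabla^2\Phi = 4\pi G\rho \]

Within the solar system the corrections to this limit are minute (the famous 43 arcseconds per century in Mercury’s perihelion being the best-known residue), which is why two ontologically incompatible theories can both fit the data so well.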

In practice, scientists are rarely paralyzed by indecision when faced with situations of this sort, which implies that they must have prescientific metaphysical beliefs to help them to make the choice, even if those beliefs go unstated. Scientific theories compete with one another to explain a given body of evidence while also exhibiting the greatest simplicity, elegance, scope, consonance with other theories, and internal harmony. But they do more than that; they also make claims, implicitly or explicitly, about what evidence needs explaining and what would constitute a satisfactory explanation.

In the official story, evidence inspires us to create theories, or sometimes refutes existing theories. But in reality, theories can also create and destroy evidence by highlighting some sorts of the elementary data of experience as significant while dismissing others. A superficial example of this might be the evidentiary standards of many of the social sciences, where studies achieving a significance value of p < 0.05 are arbitrarily considered to be results that a theory must explain or at least accommodate. There is nothing in nature that recommends a sharp cutoff. It is purely a social and indeed ideological consensus to make p < 0.05 the standard. This is a free parameter of the metatheory which could be varied, and which, given the limited power of most studies, if varied, might very well lead to a different body of “facts” and hence different forms of explanation achieving dominance.
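
To see how much hangs on that free parameter, here is a minimal simulation, offered as my own illustrative sketch rather than anything from the essay, with all sample sizes and effect sizes hypothetical: generate a batch of small, low-powered studies, half containing a real effect and half containing none, and count how many results clear the bar as the threshold moves from 0.05 to 0.01 to 0.005.

# Illustrative sketch only: how the choice of significance threshold shapes
# which results count as "facts". Sample sizes and effect sizes are hypothetical.
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
n_studies, n_per_group, true_effect = 200, 20, 0.4   # small samples => low power
p_values = []
for i in range(n_studies):
    effect = true_effect if i % 2 == 0 else 0.0      # half real effects, half null
    treated = rng.normal(effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    p_values.append(stats.ttest_ind(treated, control).pvalue)
p_values = np.array(p_values)
for alpha in (0.05, 0.01, 0.005):
    print(f"alpha = {alpha}: {(p_values < alpha).sum()} 'significant' findings")

Under these made-up settings the roster of “established” findings, and hence what any candidate theory is obliged to accommodate, shrinks sharply as the cutoff tightens; nothing in the data itself dictates where the line should fall.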

But there are deeper cases of theory affecting the kinds of evidence by which theories are judged. Take the behaviorist school of psychology. According to behaviorism, all human and animal behaviors are merely reactions to external stimuli and previous conditioning. In particular, behaviorists believe the internal states of individuals have no causal effects on their actions, regardless of what those individuals may claim. Now imagine that a behaviorist and a non-behaviorist come up with an identical hypothesis explaining some form of activity, but every individual in their study explains, “Actually, the reason I did it was that I believed it would be wrong to do otherwise.” The non-behaviorist might take this as strong evidence that the hypothesis was incorrect. However, the behaviorist, already committed to a theory of human activity that rejects the causal effects of internal states, might rule out these protestations and refuse to consider them as evidence. Whose methodology is correct? Science cannot tell us the answer. Our beliefs about what even constitutes empirical data with which our science must reckon cannot be self-justifying. Indeed, they can be influenced by whatever theory is currently in vogue.

As with evidence, so with what counts as a satisfactory explanation for a given body of evidence. Taking again our example from psychology, suppose a behaviorist and a non-behaviorist are trying to explain why an individual did something apparently irrational. When asked, the subject replies, “Because I thought that if I did it, I would receive a million dollars.” The non-behaviorist might find this belief to be curious, and might inquire further to discover a reason for the belief, but he would almost certainly consider the belief itself to be a sufficient explanation for the action. The behaviorist, on the other hand, would consider the act of speaking, and perhaps even the act of holding a belief, to be nothing more than another behavior, and therefore not sufficient as an explanation for the observed action, since only external stimuli and conditioning can cause behaviors. So if the non-behaviorist formulated a theory that said “individuals will do strange things if they believe that doing so will result in a million dollars,” the behaviorist wouldn’t even consider this theory to be wrong. Rather, it would be not-a-theory, a category error, something as unscientific as saying that fairies did it.

Behaviorism is not just a pathological case; nor can these issues be dodged by avoiding sciences dependent upon the unobservable inner life of conscious beings. Every theory makes claims about which phenomena demand an explanation in simpler or more fundamental terms, and which are just brute facts about reality that neither need nor permit explanation. For example: Newton’s theory of mechanics had great and immediate predictive success, but it was assailed as unscientific at its birth because, unlike Descartes’s vortices and hooked atoms, it did not offer a causal chain of influences whereby one body affected another.

Many scientists, when pressed, will say that our theories progress precisely by becoming more reductive and demanding explanations for things previously seen as brute facts. But it’s often quite unclear what “more reductive” even means. Consider the introduction of variational methods into mechanics by Jean d’Alembert, Joseph-Louis Lagrange, and William Rowan Hamilton. The use of an extremal principle to compute the behavior of a system was seen as unacceptably teleological and non-mechanistic (and continues to be resisted by each new generation of undergraduates). I can’t count the number of times I’ve explained the Lagrangian approach to a non-physicist scientist, only to be met with a dropped jaw and a “that isn’t science!” Perhaps the only reason physicists are comfortable with the approach is that they refuse to think too hard about what they’re doing.

Well, and perhaps also because it works astonishingly well. Most scientific fields today are conceptualized as the handmaidens of technology. Consequently, the forms of explanation which are accepted as scientific tend to be those that give humanity greater powers of prediction and control. This was not an inevitable development, however, and different fields of science have succumbed to the Promethean temptation to varying degrees. Is it possible that a science which valued different qualities in an explanation could have evolved along different lines and given rise to alien theories that traffic in fundamentally different concepts? One of the greatest tragedies of the globalization and homogenization of scientific inquiry is precisely that we are now far less likely to discover these, and other roads not taken.

Science is not simply the answering of questions; it is also the choosing of which questions to ask. Contrary to the inductivist account, facts and data do not just present themselves to us. Experimental and observational studies must be formulated and conducted, often at great cost, to gather them. This is commonly done in the service of one or more research programs—broad efforts to answer a question or to understand a phenomenon. But these programs grow out of an extended dialogue within a community of scientists, or due to funding pressures, and either way are the product of the norms, values, and interests of broader society. Thus these norms and values shape not only what qualifies as evidence, but what evidence is even available to be considered in need of explanation.

For instance, imagine two studies on gift-giving, one conducted by a neuro-economist and the other by a sociologist. Were we to have only the former’s data, we might conclude that people give gifts in response to an activation of the anterior cingulate gyrus. Were we to have only the latter’s, we might conclude that people engage in gift-giving in order to consolidate their social status. Both accounts might be accurate and useful answers to the narrow question that they sought to address, while at the same time being utterly impoverished accounts of human behavior. The trouble comes when we confuse the mere fact that a theory explains some empirical data with the notion that a theory tells the “whole truth” about a facet of the world. Often, the data were gathered in response to the theory, and no theory can be successfully falsified by data that nobody looked for.

Another way in which our metaphysical beliefs construct the body of evidence that is available for theory to address lies in the ways we classify and categorize the world. Every theory makes choices about what elements it considers to be the primitive constituents of the world, what groupings of those elements make for interesting objects of study, and what makes objects more closely or more distantly related. One could imagine a social science that instead of treating individuals as its fundamental units of analysis instead chose families, or neighborhoods, or athletic clubs. Such a science doubtless would come up with very different empirical “laws” governing the behaviors and dynamics of human institutions. Indeed, one of the great triumphs of feminist thought was precisely that it constructed “women” as a separate category and subject of inquiry, thereby turning “how does this policy affect women?” into an interesting scientific question, unlike, say, “how does this policy affect red-haired people?” All such competing schemes for carving the world at its joints represent the enactment of a particular ontological and metaphysical vision.

Again, this all remains true when one moves to “harder” sciences. In fact, the disciplinary boundaries themselves are contingent choices about how to chop up the universe that end up influencing the kinds of questions that are asked and answered. But lay that aside and contemplate a question like “how should we classify forms of cancer?” By the organ affected? By genetic similarity? By typical biological course in the absence of treatment? All of these have been tried, and all give very different answers for when one cancer is “like” another.

To all this the pragmatists have a partial answer: “Pick the divisions that are the most fruitful! The ones that result in useful regularities!” The trouble with this answer is that the world is absolutely rotten with order. Much of it is real, and much more is conjured into being when fallible, order-seeking minds go hunting for it. A great many schema for organizing the world, such as the classification of beetles by visual appearance rather than by genetic similarity, generalize gracefully beyond the examples that inspired their development, despite presumably not tracking the deep cleavages that underlie reality and that science seeks to map.

None of this is meant as a counsel of despair, or a suggestion that the world is so inaccessible to our reason that we should speak only about measurements without reference to underlying reality, or any other of the rather silly views that have sprung from the revelation that science is a contingent, underdetermined social phenomenon. The point, rather, is just that science is not unique, and that it can never be self-justifying. Questions like “which science?” and “why this science?” are often useful ones. Scratch a scientist, find a metaphysicist, even if he doesn’t realize it.

Einstein, acutely sensitive to these issues, was in favor of bringing the oft-unstated prescientific beliefs of scientists out into the light and making them explicit. So, in a very similar way, are feminist philosophers of science like Helen Longino, Lynn Nelson, and Elizabeth Anderson. The difference is that where Einstein’s nonempirical, metaphysical criteria for selecting between theories tended to be “internal” qualities of a theory, like mathematical simplicity or aesthetic balance, these later critics are willing to bring political and moral considerations to bear in the selection of a science.

A wonderful case study of “feminist science” is offered by Anderson in a 2004 paper analyzing a book-length treatment of divorce outcomes by Abigail Stewart and colleagues. Anderson breaks down the process of investigating a scientific question into eight steps—orienting to the background of the field, framing a question, articulating a conception of the object of inquiry, deciding what types of data to collect, establishing and carrying out data-gathering procedures, analyzing the data, deciding when to stop analyzing data, drawing conclusions—and then shows how Stewart’s feminist beliefs influence and inform her methodology at each of these steps.

To take just one example: Stewart and her team consciously chose to reject the “traditionalist” interpretation of divorce as a traumatic and negative event, and searched carefully for ways in which the divorces they studied had produced opportunities for personal growth and maturation on the part of both parents and children. Sure enough, they found them where previous researchers had not. One might object that cancer and broken legs also provide opportunities for personal growth, and that a study which focused on them without mentioning the pain and harm that they cause is a study that lies by omission, or by misplaced emphasis, just like the neuro-economist’s account of gift-giving. But this is precisely the feminists’ point! One need not posit data manipulation or academic dishonesty to see that a researcher’s prior beliefs about the desirability of divorce will shape the results of a study. Merely by changing the questions that are asked, by shifting the background conception of the subjects of study, and by seeking out and collecting a new type of evidence, “feminist science” is able to reach a new and different conclusion.

And none of this—none of the norms, values, and agendas guiding the outcomes of scientific research—touches on the way science is made up of fallible institutions and fallible individuals. Yet the mechanisms of peer review, grant-making and funding, access to laboratory resources, and so on make it all too easy for a dedicated cabal to deliberately (or even accidentally) freeze out research that does not conform to their vision of the world. The ease with which accidental or deliberate error can enter data analysis provides yet another mechanism for the views of a scientist to leach into his or her results. Given the degree of esteem and respect still paid to assertions bearing the imprimatur of a study, it would be madness for the partisans of any faction not to try to ensure that as many of their own as possible occupy positions related to science production.

Which is why my progressive scientist friends are deluded if they think that those genuinely concerned about “colonization, racism, immigration, native rights, sexism, ableism, queer-, trans-, intersex-phobia, & econ justice” can be dissuaded from attempting to capture not just the institutions of science, but its methods and research programs as well. Every instance of scientific inquiry, every study, rests on a vast submerged set of political, moral, and ultimately metaphysical assumptions. As the great quantum theorist Max Planck put it:

It is said that science has no preconceived ideas: there is no saying that has been more thoroughly or more disastrously misunderstood. It is true that every branch of science must have an empirical foundation: but it is equally true that the essence of science does not consist in this raw material but in the manner in which it is used. The material always is incomplete . . . [and] must therefore be completed, and this must be done by filling the gaps; and this in turn is done by means of associations of ideas. And associations of ideas are not the work of the understanding but the offspring of the investigator’s imagination—an activity which may be described as faith, or more cautiously, as a working hypothesis. The essential point is that its content in one way or another goes beyond the data of experience. The chaos of individual masses cannot be wrought into a cosmos without some harmonizing force and, similarly, the disjointed data of experience can never furnish a veritable science without the intelligent interference of a spirit actuated by faith.

But faith in what? It is entirely rational for people of all persuasions to seek to ensure that it is their faith that is doing the work. The rhetoric of the organizers of the March for Science does not reflect a temporary aberration, a momentary bit of enthusiasm, a fruitless revolt against a coldly rational age. It is the future. A future in which the politicization of science stops being implicit and starts being aware of itself. To face this future with intellectual sophistication rather than sloganeering, we need metaphysical reflection. Scientists would do well to start with a frank acknowledgment that they do not really know the deeper sources of their own dearly held scientific truths.

By William A. Wilson. This article was originally published in First Things in November 2017 and can be seen here.

On Tuesday, the U.S. 4th Circuit Court of Appeals heard oral arguments in American Humanist Association v. Greenville County School District. (Audio of full oral arguments.) At issue was the graduation ceremony prayer policy of the Greenville County, South Carolina school district, as well as its practice of holding some graduation ceremonies at a religious chapel on a local college campus. (See prior posting.) Greenville News reports on the oral arguments.

Every now and again I come across a fantastic article that warrants posting here; I recently came across one in First Things which, I thought, was pretty insightful. Be edified.

_______________

Something strange is going on in America’s bedrooms. In a recent issue of Archives of Sexual Behavior, researchers reported that on average, Americans have sex about nine fewer times a year than they did in the late 1990s. The trend is most pronounced among the young. Controlling for age and time period, people born in the 1930s had the most sex, whereas those born in the 1990s are reporting the least. Fifty years on from the advent of the sexual revolution, we are witnessing the demise of eros.

Despite all the talk of the “hookup culture,” the vast majority of sex happens within long-term, well-defined relationships. Yet Americans are having more trouble forming these relationships than ever before. Want to understand the decline of sex? Look to the decline in marriage. As recently as 2000, a majority—55 percent—of Americans between the ages of twenty-five and thirty-four were married, compared with only 34 percent who had never been married (see Figure 1). Since then, the two groups have swapped places. By 2014, 52 percent of Americans in that age group had never been married, while only 41 percent were married. Young Americans are now more apt to experience and express passion for some activity, cause, or topic than for another person.

Figure 1.

A decline in commitment isn’t the only reason for the sexual recession. Today one in eight adult Americans is taking antidepressant medication, one of the common side effects of which is reduced libido. Social media use also seems to play a part. The ping of an incoming text message or new Facebook post delivers a bit of a dopamine hit—a smaller one than sex delivers, to be sure, but without all the difficulties of managing a relationship. In a study of married eighteen- to thirty-nine-year-old Americans, social media use predicted poorer marriage quality, lower marital happiness, and increased marital trouble—not exactly a recipe for an active love life.

If these were the only causes, the solution would be straightforward: a little more commitment, a little less screen time, a few more dates over dinner, more time with a therapist, and voilà. But if we follow the data, we will find that the problem goes much deeper, down to one of the foundational tenets of enlightened opinion: the idea that men and women must be equal in every domain. Social science cannot tell us if this is true, but it can tell us what happens if we act as though it is. Today, the results are in. Equality between the sexes is leading to the demise of sex.

To understand why this is, we need to turn to Gary Becker, an economist who won a Nobel Prize for his study of the economic principles behind human interactions. He documented how the benefits of marriage receded as women’s earning power rose relative to that of men. The years between 1973 and 1983 were decisive. In that decade, young women’s wages climbed steadily while men’s actually fell, never to recover. Women had less reason to marry, and they had less attractive mates should they nonetheless decide to. Though women had often entered marriages for financial reasons, many nonfinancial benefits followed, including the formation of a stable, intimate relationship with a spouse and the sense of purpose that comes with raising a family. These are things that no job—however lucrative—can deliver.

The introduction of the Pill has not changed what men and women value most, but it has transformed how they relate. The marriage market before the Pill was populated by roughly equal numbers of men and women, whose bargaining positions were comparable and predictable. Men valued attractiveness more than women, and women valued economic prospects more than men. Knowing that men wanted sex, but realizing that sex was risky without a corresponding commitment, women often demanded a ring—a clear sign of his sacrifice and commitment.

Not anymore. Artificial contraception has made it so that people seldom mention marriage in the negotiations over sex. Ideals of chastity that shored up these practical necessities have been replaced with paeans to free love and autonomy. As one twenty-nine-year-old woman demonstrated when my research team asked her whether men should have to “work” for sex: “Yes. Sometimes. Not always. I mean, I don’t think it should necessarily be given out by women, but I do think it’s okay if a woman does just give it out. Just not all the time.” The mating market no longer leads to marriage, which is still “expensive”—costly in terms of fidelity, time, and finances—while sex has become comparatively “cheap.”

For every one hundred women under forty who want to marry, there are only eighty-two men who want the same. Though the difference may sound small, it allows men to be more selective, fickle, and cautious. If it seems to you that young men are getting pickier about their prospective spouses, you’re right. It’s a result of the new power imbalance in the marriage market. In an era of accessible sex, the median age at marriage rises. It now stands at an all-time high of twenty-seven for women and twenty-nine for men, and is continuing to inch upward. In this environment, women increasingly have to choose between marrying Mr. Not Quite Right or no one at all.

For the typical American woman, the route to the altar is becoming littered with failed relationships and wasted years. Take Nina, a twenty-five-year-old woman my team interviewed in Denver. Petite, attractive, and faring well professionally in her position with an insurance company, Nina was nevertheless struggling when it came to relationships. She had a history of putting men she valued as confidantes in the “friend zone.” With these men, a sexual relationship seemed too risky. If it went awry, she’d lose not only a potential mate but also a valued friend. On the other hand, if she didn’t know the man well, she was willing to have casual sex while hoping for something more.

After several years, this approach had taken its toll: an abortion, depression, and a string of failed relationships. Nina now believed that a marriage ought to begin as a friendship, and for the first time in years, she had someone in particular—David—in mind. Though she had been raised by liberal parents to be open-minded about sex and wary of traditional household roles, she had come to see things differently. She was blunt: “I’m dead serious. . . . I would marry him, I would raise his kids, raise a family.”

In her 2013 book Hard to Get, Leslie Bell, a sociologist and psychotherapist, tries to understand the lives of women like Nina. She laments that the skills they developed “in getting ahead educationally and professionally have not translated well into getting what they want and need in sex and relationships.” When it comes to relationships, their “unprecedented sexual, educational, and professional freedoms” have led to “contradictory and paradoxical consequences.”

Nonsense, I say. The only contradictory and paradoxical thing here is the unrealistic expectation of so many that the financial independence of women would have wholly positive effects on the dance of the sexes. Women and men still want each other, but the old necessities that once brought them together have disappeared. Many are going it alone, apparently. Since 1992, the share of men who masturbate at least weekly has grown by 100 percent, and the share of women by nearly 275 percent.

Even those who marry are having trouble in the bedroom. According to the Archives of Sexual Behavior study cited above, the frequency with which married couples had sex fell 19 percent between 2000 and 2014. An even steeper decline is evident in the just-released 2016 data. It’s not just married couples, either; cohabiting Americans are also reporting a drop in sexual activity. In their 1994 landmark sex study, University of Chicago sociologist Edward Laumann and his colleagues reported that 1.3 percent of married men and 2.6 percent of married women between the ages of eighteen and fifty-nine had not had sex within the past year. Twenty years later, 4.9 percent of married men and 6.5 percent of married women in the same age range report that it has been more than a year since they have had sex with their spouses. How do we account for this?

Here, too, equality is the enemy of eros. Differences between men’s work and women’s work—between breadwinner and homemaker, father and mother—are increasingly viewed as arbitrary and oppressive. And yet this loss of everyday oppositions between men and women has made Americans less, not more, attractive to each other. It was not supposed to be this way. Some sociologists have guessed—or perhaps hoped—that men who are willing to take on traditionally female household tasks might enjoy more active sexual lives with their wives—quid in the kitchen for quo in the bedroom. The authors of a recent analysis of the National Survey of Families and Households conjectured that women would use the promise of sex to convince men to do more domestic tasks. Despite the transactional way of framing the problem, the researchers harbored a fond hope: that more equal relationships would also be more erotic ones. So, do men who do a greater share of the housework enjoy more sex? No. In fact, they’re penalized in the bedroom. Husbands who do little or no housework had sex with their wives nearly two more times per month than did husbands who do all of it. Meanwhile, doing a greater share of traditionally male work around the house—mowing the lawn, fixing things—correlates with more sex. Men and women are not attracted to sameness, but to difference. We long for what is missing in ourselves. Needing each other makes us want each other.

Recognizing this doesn’t mend everything between men and women, however. The cheap sex that was made possible by the Pill, further discounted by pornography, and made more efficient by Tinder has proven to be a bad bargain for women, leaving them (and, in turn, men) lonelier and less connected than they once were. I see it in the statistics and I hear it in their stories.

“Equality,” Israeli sociologist Eva Illouz writes in her 2011 book, Why Love Hurts, “demands a redefinition of eroticism and romantic desire that has yet to be accomplished.” Indeed. Egalitarianism promised the flourishing of eros, but by abolishing the difference between the sexes, it has made sexual acts self-referential—even those that are not performed alone. Men and women are not interchangeable, and our effort to make them so has only increased the loneliness and disaffection of American life. We cannot have both eros and strict equality between the sexes. Saving one requires sacrificing the other.

By Mark Regnerus. This article was originally published in First Things in October 2017 and can be found here.