There were also people who thought that because I was Jewish, I had no right to create these preppy clothes. Harvard, Yale, Princeton. Ralph Lauren

The novels of F. Scott Fitzgerald, for example, are peopled with earnest heroes who hailed from the Midwest but who came to play in the racy world of New York via Princeton or Yale. Cooke

Not only will I not say that again, but I’ll be more thoughtful going forward in the way that I talk about our marriage, and also the way in which I acknowledge the truth of the criticism that I have enjoyed white privilege. So yes, I think the criticism is right on. My ham-handed attempt to try to highlight the fact that Amy has the lion’s share of the burden in our family — that she actually works but is the primary parent in our family, especially when I served in Congress, especially when I was on the campaign trail — should have also been a moment for me to acknowledge that that is far too often the case, not just in politics, but just in life in general. I hope as I have been in some instances part of the problem, I can also be part of the solution. Beto O’Rourke

It was the second apology O’Rourke made during the podcast. The first was for his writings as a teenager when he was a member of a group of activist hackers. Those writings, which came under the pseudonym « Psychedelic Warlord » and included a piece of fiction from a killer’s point of view, were revealed in a Reuters report. He said he was « mortified to read it now, incredibly embarrassed … whatever my intention was as a teenager doesn’t matter. » « I have to look long and hard at my actions, at the language I have used, and I have to constantly try to do better, » he said. The comments came as O’Rourke responded to a question about how he would combat white supremacy. O’Rourke criticized President Donald Trump, saying that Mexican and Muslim children « internalize it » when the President attacks them with a broad brush. He also criticized Trump’s response to the violence at a white supremacists’ rally in Charlottesville, Virginia, in 2017. « We also have to confront this racism, this xenophobia, this nativism and this hatred, or else I’m confident it will consume us. And so calling it out is part of it, and then setting an example of how we want to treat each other, » he said. CNN

The Vanity Fair cover photo of Beto O’Rourke, taken by Annie Leibovitz, is an apparent homage to the famous Time magazine portrait of Ronald Reagan when he was chosen as Man of the Year in 1980. (…) Reagan was shown in a blue shirt and jeans with a brown leather belt and his hands on his hips. (…) O’Rourke, a former Democratic Texas congressman, was photographed with a light-blue shirt, tucked into a pair of jeans and a leather belt. He is standing next to his truck on a dirt road and has his hands on his hips. (…) O’Rourke is entering a crowded field of candidates for the Democratic nomination. The latest Real Clear Politics average puts O’Rourke a distant 6th place with 5.3 percent. Former Vice President Joe Biden, who has not yet announced a bid, leads the pack in the high-20’s with Sen. Bernie Sanders, I-Vt., sitting in second place. O’Rourke exemplifies a new normal. None of the other major white progressive candidates—Bernie Sanders, Elizabeth Warren, or Kirsten Gillibrand—invoked God in their presidential announcements either. (Amy Klobuchar, who is running as a comparative moderate, did.) Today’s white liberals don’t only talk about faith less than their predecessors did. They talk about it in a strikingly different way. Earlier Democrats invoked religion as a source of national unity. (…) The implication was that religious observance was something Americans of both parties shared. Today, by contrast, progressive white candidates more often cite religion as a source of division. In his announcement video, O’Rourke boasted that during his Senate campaign in Texas, “people allowed no difference, however great or however small, to stand between them and divide us. Whether it was religion or gender or geography or income, we put our labels and our differences aside.” The only reference to faith in Warren’s announcement speech was an acknowledgment that “we come from different backgrounds. 
Different religions.” The lone reference in Sanders’s was a call for “ending religious bigotry.” While white progressives once described religion as something that brought Americans together, they’re now more likely to describe it as something that drives them apart. It’s not hard to understand why. For starters, the percentage of white Democrats who express no religious affiliation has skyrocketed. According to unpublished data tabulated for me last year by the Public Religion Research Institute (PRRI), 8 percent of white Democrats expressed no religious affiliation in 1990. By 2016, the figure was 33 percent. In 1990, white self-described liberals were 39 points more likely to describe themselves as Protestant than as religiously unaffiliated. By 2016, religiously unaffiliated beat Protestant by nine points. Secular Democrats haven’t only grown more numerous. They’ve also become some of the party’s most motivated activists. As The Atlantic’s Emma Green has noted, a PRRI poll taken last August and September found that Democrats who shun organized religion were more than twice as likely to have attended a political rally in the previous year than Democrats who identify with a religious group. The other reason liberal candidates more often describe religion as a source of division is the rise of Islamophobia and anti-Semitism. Before Donald Trump, Republican religious discourse was more ecumenical. The 2000 Republican convention featured a Muslim prayer, and George W. Bush regularly spoke about Americans who attended a “church, synagogue, or mosque.” In such an environment, it was easier for Democrats to depict an America divided by race, class, and gender but unified by religious faith, even if different Americans expressed that faith in different ways. Today, by contrast, since more Americans don’t practice a religion, and the president demonizes some of those who do, it’s more natural to describe religion as a rift to be overcome. 
But while there are legitimate reasons to talk about religion less (America has become a less religious country) and to describe it more negatively (religious bigotry has risen sharply), doing so could hurt Democrats such as O’Rourke in their efforts to defeat Trump. According to a 2016 Pew Research Center poll, while a small plurality of Democrats thinks politicians talk about religion too much, Republicans overwhelmingly think politicians talk about it too little. Among those Republicans are devout Christians who agree with Trump on abortion but consider him a detestable human being, and might be lured into voting against him by a Democrat who both spoke compellingly about a guiding faith and appeared to live by it. Democratic candidates might be tempted to pursue an opposite strategy: employing secular rhetoric to rouse their secular base. But the Democratic base isn’t overwhelmingly secular; it’s partly secular and partly religious. Republicans, by contrast, are overwhelmingly religious. Which may explain why, according to a 2017 study in the Journal for the Scientific Study of Religion, candidates who were perceived as secular experienced a “drop in Republican support that … was not balanced by an increase in Democratic support.” That’s partly because of African Americans. While many white Democrats want politicians to speak about religion less, black Democrats overwhelmingly want them to speak about it more. When asked in 2016 whether political leaders were talking about “their faith and prayer” too much or too little, black Protestants said “too little” by a larger margin than even Republicans. While only 41 percent of Democrats said it was very or somewhat important that a president shared their religious views, among black Protestants, the figure was 72 percent, again even higher than among Republicans. (…) For Harris and Booker, whose path to the Democratic nomination requires winning the black vote, religious language is a necessity. 
And the same religious language that helps them win over African Americans in the primary may help them win over Republicans in the general election. In their appetite for public professions of faith, black Democrats and white Republicans are similar. It’s white liberals who stand out. White progressives such as O’Rourke, Sanders, and Warren tacitly recognize that religion is no longer the force for national unity it once was. For Harris and Booker, the intriguing possibility is that it’s still unifying enough to propel them to the White House. The Atlantic

Let us count the ways in which college admissions are corrupt. They are corrupted by the reserving of spots for ‘legacy’ applicants. To qualify for one of these highly selective non-competitive places, you need to be born with forebears who attended your choice of college, and to be able to sit straight without drooling out of either corner of your mouth. Legacy places are essentially affirmative action for the wealthier sort of white people. They should not be confused with a more recent form of corruption, affirmative action for the wealthier sort of non-white people. Reserving a certain number of spots on the basis of race was originally intended to assist the upward mobility of black people, many of whose ancestors were owned by the ancestors of the people who still monopolize legacy admissions. But these days, affirmative action effectively preserves the class advantages of any non-white applicant with good-enough SAT scores, at the expense of a poorer non-white applicant. The exceptions to this rule are American applicants of East Asian and Indian background. These hard-working children of hard-working immigrants are penalized for their hard work and family values, and have to get higher SAT scores than other racial groups, especially African Americans. It is an inarguable fact that if America’s top colleges admitted students solely by academic merit and potential, their entire intake would be of Chinese and Indian extraction, with a sprinkling of Jews to make the jokes. All colleges rig the racial profile of their intake by explicitly racist measures. The Ivy League adds an extra layer of racial screening by insisting on ‘character’, which means impersonating the manners of white people. This is an elaborately cruel form of corruption which has grown out of the corruption of affirmative action, itself a corrective to the earlier corruption of college admissions by race and class.
As William ‘Rick’ Singer is alleged to know, college admissions are openly corrupted by sporting ability. I’ve taught in what are laughably sold as top liberal arts colleges. Almost all the students on sports scholarships are semi-literate. They sleep through their lectures, which is understandable, given their rigorous training schedules. They pay their less athletic fellow students to write their papers for them, which is also understandable, given their selfless donation of their sporting talent to the community. They just sit there like sleepy bears, giving off a faint whiff of locker rooms and vanilla protein shake as they twiddle with their cellphones. College admissions are also corrupted by admitting foreign students who can’t speak or write English, but whose parents are willing to pay top dollar. It’s an open secret that many mainland Chinese and South Korean applicants to ‘top liberal arts colleges’ don’t write their application essays; either that, or their English goes into reverse after sending off the essays. But, just as you can’t fire an athlete, you can’t send the foreign students home. Finally, colleges are begging to be corrupted by donations. The more colleges replace merit with profiling on the basis of racial background, family connections, economic origin, or sporting ability, the greater the squeeze on the remaining places. This creates an incentive for bribery by ‘donation’. When colleges claim that they’re not swayed by donations, they’re lying. If they were serious about reducing the scope for bribery, they’d refuse to accept donations from families with applications active or imminent. William Deresiewicz, one of the few people to have taught at an American university and spoken honestly about the hollowing of the system, wrote a book in 2014 called Excellent Sheep. Deresiewicz believes that the risk-averse selection strategies of elite colleges have created a narrow and risk-averse elite. 
It now turns out that elite colleges do admit a wide and risk-embracing pool of applicants with low SAT scores — providing their parents pay a bit extra, or a lot. Everything is for sale in the American university except a decent liberal education. Money talks, and merit comes last. Huffman, Loughlin and the other parents are in court not just because they seem to have been blessed with children of inordinate stupidity, but because they grasped the rules of college corruption perfectly, and played the game the wrong way, and perhaps too well. William ‘Rick’ Singer knew the system so well that he created a simulacrum of the admissions process. He invented a fake charity, which is what most private colleges are. He paid competent students to sit entry exams, which happens all the time. He cut deals with sports coaches, rather than the coaches and the scouts cutting deals with the family. He obtained sports scholarships for students who didn’t lift a finger or a bat once they were in. And, like the elite schools, he extracted a fortune from suckers. When he gets out of prison, a brilliant career awaits, possibly as dean of a liberal arts college in Vermont. Dominic Green

New England is home to four of the colleges that comprise the Ivy League athletic conference: Harvard, Yale, Brown and Dartmouth. The other colleges – Princeton, Cornell, University of Pennsylvania and Columbia – are also in the Northeast. In the conformist 1950s, students at these colleges popularized the Ivy League look, which had its roots in the conservative styles of New England. For men, the Ivy League look consisted of a suit with a narrow-shouldered unfitted jacket, worn with a button-down shirt, skinny tie, and penny loafers (preferably Bass Weejuns). Charcoal gray and olive were the preferred colors. Chinos and tweed blazers offered a casual alternative. The look spread beyond campuses to young men in all parts of suburban America where details such as buckle straps from Ivy trousers were transplanted to caps, shirts, and shoes. High school students wore a more extreme four-button jacket bearing the name « Jivey Ivy. » By 1960, most men sported modified Ivy models that incorporated unpadded shoulders, narrow lapels, and tapered trousers. Brooks Brothers, a citadel of conservatism, came to the forefront as the Ivy League style became popular. When the young John Fitzgerald Kennedy, a senator from Massachusetts, became the president of the United States, the Ivy League look reached the White House. Ivy League women wore cashmere twin sets, Shetland sweaters, or blazers with kilts or tweed skirts. In the summer, blouses with Peter Pan collars were worn with Bermuda shorts. A pearl necklace set off any outfit. The Ivy look is well bred, understated, but not fussy. Many New England men and women held to the conservative, classic styles that comprised the Ivy League look during the sartorial upheavals of the 1960s and 1970s. In the late 1970s, conservative styles once again seemed right for the times, and the Ivy League look resurfaced as the preppy look.
The essential ingredients for the male preppy wardrobe included a conservative gray flannel suit, preferably made by Brooks Brothers, a long-time favorite label of New Englanders. For less formal wear, button-down oxford shirts or Lacoste polo shirts worn with khakis or corduroys sufficed. Other favorites included Harris Tweed jackets, down vests, Burberry trench coats, L.L. Bean field coats, and camel hair Polo coats. Preppy women wore female versions of masculine styles: khaki, flannel, or corduroy slacks; a kilt or plaid skirt, a blazer or tweed jacket; and a Shetland or Fair-Isle sweater over a ruffle-necked white blouse or cotton turtleneck. Preppy styles for women were rather androgynous: female versions of the men’s styles produced by the same companies. Both genders wore clothes of Indian madras, a cotton plaid fabric that had first become popular in the early 1960s. Shoes common to both men and women were loafers or Sperry Top-Siders (boat shoes). Socks were optional. Men donned wing tips for dressy affairs while women wore simple pumps. Like the Ivy League look before it, the preppy look emphasized the wearing of classic fabrics from natural fibers. The only departure from conservative dressing was the bright pink and green color combinations seen in preppy ensembles. Preppy clothes were well made, with attention to detail. Brand names were important. The American designer Ralph Lauren has built a financial empire on fashions inspired by this old money New England look. Michael Sletcher

The clothes look good in magazines, but look older in stores. I would never buy Polo at full price. Christina

Sometimes, I hear designers from older generations saying, ‘Oh, fashion needs to make women dream.’ I feel that this is really difficult today. I think it’s dated. Fashion shouldn’t make you dream in 2016. It should just be there, for us to wear. Gvasalia

Lauren built a career by brazenly positioning himself as the quintessential interpreter of the American zeitgeist. More than any designer, he has used America’s mythology — our secular religion — for profit. In doing so, he has displayed a keen understanding of our cultural symbols. He can parse the difference between a pair of blue jeans worn with cowboy boots and those worn with a black leather jacket. He sees the romance in a prairie skirt or a well-worn Native American blanket. He knows what it means in our racially conflicted society to photograph a dark-skinned, athletic black man in his preppiest, old-money brand. And he knows how a bright-eyed blonde feeds our vision of Mayflower blue bloods. And as consumers, we have bought into those symbols and made Lauren an extremely wealthy man. The Washington Post

Entering the Rhinelander Mansion on New York’s Upper East Side is like quietly opening a window into Ralph Lauren’s mind. Many describe Lauren’s superpower as his ability to turn his wildest dreams into reality, and inside that mansion, Ralph Lauren’s original flagship location, his dreams are made real in every nook and cranny of the place. Each room presents one lavish scene after the next, and it’s not hard to imagine Lauren himself toiling at the displays to make sure everything sits just right. Spaces are small and illuminated with candles and the softest of lighting, beckoning shoppers to linger. A glass of water arrives on a small silver platter, garnished with a single slice of lemon, just for you. It’s stunningly clear here, walking slowly up a staircase lined with oil paintings from the company’s collection, that Ralph Lauren is a lifestyle. No detail is left to chance: Ralph Lauren ties are fanned out on a table in front of a bar stacked high with Ralph Lauren shirts, next to a case of monogrammed Ralph Lauren cufflinks. Ralph Lauren briefcases are placed next to Ralph Lauren paperweights on a Ralph Lauren desk topped with Ralph Lauren stationery, positioned underneath a giant, glittering chandelier that can’t possibly — but maybe? — be branded Ralph Lauren. Everything, right down to the 82,000 square feet of mahogany hauled in for the mansion’s renovation in the 1980s, reeks of style and status and money. Old money. [but] Once you leave the giant department stores of New York City and head to the malls of suburbia, Ralph Lauren becomes a few racks of Oxfords, polos, and pleated pants. Reliably found in your local Dillard’s, and just as reliably found on sale. (…) Most shoppers haven’t encountered the totality of Ralph Lauren’s world. How could they? Since the early 2000s, Ralph Lauren Corporation has owned and operated at least 25 different brands. 
It’s a staggering list: Polo Ralph Lauren, Polo Jeans, Polo Golf, Pink Pony, Purple Label, Blue Label, Black Label, Ralph by Ralph Lauren, Lauren Ralph Lauren, Lauren for Men, Women’s Collection, RRL, RLX, Rugby, Denim & Supply, Club Monaco, Chaps, Ralph Lauren Childrenswear, Ralph Lauren Watches, Ralph Lauren Fine Jewelry, American Living, Ralph Lauren Home, Lauren Home, Ralph Lauren Paint, and Lauren Spa. Not all are still in operation. For the shoppers who actually are familiar with the company’s multitude of lines, it’s still exhausting. « The identity of the brand gets lost, » laments Efney Hall, who has been shopping Ralph Lauren for over a decade. Lauren has stepped aside to make way for a new CEO, Stefan Larsson — the first person besides Lauren to ever hold that title in the company’s 50-year history. The company has been in the process of whittling down the brand list and there are plans to refocus on just three main lines: Ralph Lauren (the new umbrella label for Women’s Collection and Purple Label), Polo Ralph Lauren, and Lauren Ralph Lauren. At the same time that Ralph Lauren is reevaluating its structure and bringing in fresh leadership, it also has to contend with the fact that the specific style of Americana that’s so deeply embedded in every inch of the brand isn’t something shoppers are clamoring to align themselves with now. If the privileged, preppy aesthetic that Lauren built his company around is no longer the height of aspiration, what will the future of Ralph Lauren look like? Ralph Lauren did not grow up living the lifestyle that would later make him a billionaire. No, Ralph Lauren was born Ralph Lifshitz, a shy Jewish kid who lived in a small, two-bedroom apartment in the Bronx with his parents and three siblings. In Genuine Authentic: The Real Life of Ralph Lauren, writer Michael Gross paints a picture of young Ralph as a dreamer, never one to run with the crowd. 
« If white bucks were in fashion, he wore saddle shoes, » a former classmate told Gross. « When we wore crew necks, he wore V-necks. He was always a step ahead. » Lauren’s perception of taste and class was constructed by what he saw around him, according to Gross. His richer friends’ parents drove convertibles, went on European vacations, and had country club memberships. In films, he watched Gary Cooper, Cary Grant, and Fred Astaire glide across the screen, wearing beautiful suits and getting the girls every time. (…) However, Lauren’s mother had set a strict path for Ralph: he was to be a rabbi. (…) At 19, he and his brother Jerry changed their last name from Lifshitz to Lauren. (As Gross reports it, Ralph polled friends on two alternatives, London or Lauren; he was personally partial to London.) In the official document filed for the name change, the reason listed was confusion over people, both at school and at work, who shared the same last name. In reality, Lifshitz had the word « shit » in it and Ralph’s plans for himself did not include dealing with that for the rest of his life. (…) Lauren had no professional training in design, but he believed so deeply in his wild ties that other people did too. He caught the attention of Norman Hilton, one of the biggest names in the menswear industry at the time, who eventually became the first investor in Lauren’s business. Polo Fashions, Inc., named after the posh sport (not the shirts Lauren would later become famous for), launched in 1968 and, as Hilton’s son Nick remembers it, his father poured $75,000 into the startup. By the end of his first year running Polo Fashions, Lauren had expanded from ties into full suits that the Daily News Record (a menswear trade publication that was later folded into WWD) featured alongside heavyweights like Bill Blass and Oleg Cassini.
(…) It was then that he decided to change the name on his labels from Polo Fashions to Polo by Ralph Lauren, in part to imitate how other designers were using their own names on their womenswear labels. And then, for the launch of women’s button-down shirts, the company added a new design element: a small embroidered polo player. It was an overnight success. (…) As Lauren’s business grew, buoyed in large part by the ‘80s prep revival, the polo player became an integral part of the women’s and men’s lines, including on the polo shirts that became a signature of the Ralph Lauren look. Chaps was the first of many extensions that Ralph Lauren would experiment with. Chaps was Lauren’s answer to Polo knockoffs that were flooding the market. He couldn’t stop the knockoffs from being produced, so he created a cheaper line to compete with them. The company also expanded quickly through a number of licensing partnerships, a relatively easy way to put the Ralph Lauren name on a variety of products without having to deal with manufacturing any of it. (…) Lauren’s vision of America drew heavily from the world of Ivy League preps, but the brand appealed far beyond the country club crowd. (…) Ralph Lauren went public in 1997 and continued to thrive throughout the early 2000s, opening new lines seemingly on a whim. (…) By 2012, Ralph Lauren stock was trading at more than $170 per share, having shot up by $100 in five years. There was so much faith in the success of the company. (…) The company employed approximately 25,000 people in 2012, and was reporting $6.8 billion in sales and net profits of $681 million. Then came the slide and Ralph Lauren’s literal and metaphorical stock began to tumble. Shares fell nearly 50 percent from a high point of $192 in May 2013 to $82 in February 2016. Sales were still holding steady, but profits slid drastically. 
(…) Ralph Lauren is going through operational struggles during not only a tumultuous period in the retail industry, but also a time that’s seeing a cultural shift away from what the brand stands for. The prep aesthetic has always smacked of privilege, something accessible primarily to white people with trust funds and monogrammed shirtsleeves. Now, the WASP lifestyle that completely captivated Lauren as a young entrepreneur is considered out of touch at best, offensive and oppressive at worst. Take, for instance, the media’s reaction to the company’s Olympic uniform designs this year. Headlines announcing the kits included: « Ralph Lauren’s Olympic Uniforms Are Straight Out of Prep School Hell »; « USA’s Olympic Uniforms Are WASPy Bullshit »; « Team USA’s Official Olympic Uniforms are Peak Vanilla »; and Racked’s own contribution, « I Need More From Team USA’s Olympic Uniforms ». The Daily Mail rounded up the best tweets from the debacle. (…) Today’s shoppers are interested in more democratic clothing options — options that are casual, practical, and mass. Athleisure is a $97 billion business in the US, accounting for nearly one-third of the entire apparel, footwear, and accessories market. Vetements, the French design collective led by Demna Gvasalia that no one can stop talking about, is making a killing off of what can best be described as incredibly ordinary clothing. (…) The counterculture revolution of the late ‘60s and ‘70s ushered in an era of long hair and bell bottoms as a response to the conservative style of the ‘50s. Then, in the ‘80s, Lauren led a massive preppy revival that other traditional menswear retailers like Brooks Brothers and J.Press also felt the effects of. This aligned with the Reagan era, a time when conservative politics replaced the freewheeling ideals of the previous two decades. 
When Lisa Birnbach published The Official Preppy Handbook in 1980, it was meant to satirize the prep scene that was reemerging, but ended up being regarded as a literal handbook. The Financial Times described Ralph Lauren as the greatest fashion beneficiary of the book, saying he « cashed in as the preppy wannabe’s clothier. » Then the pendulum swung back away from prepsters in the ‘90s, when grunge became the go-to cool kid look. But in the early aughts, prep was popular yet again. Birnbach published a sequel to the Handbook called True Prep. Lauren’s business was on an upswing. Abercrombie & Fitch had infiltrated every high school in America. (…) And now, here we are again, back at a place where anti-establishment sentiment runs deep. How does a company like Ralph Lauren react to these cultural ebbs and flows? By giving its take on whatever the look of the moment is. (…) Patricia Mears, the deputy director of the museum at the Fashion Institute of Technology, remembers observing how Lauren’s merchandise morphed to speak to different generations when she was conducting research for a book and exhibition on Ivy style at FIT in 2012. Racked

The iconic brand is struggling. How did we get here, and what happens next?

Erika Adams

Racked

Jul 26, 2016

Twenty blocks away, inside the Lord & Taylor on Fifth Avenue, the dream gets a little murkier. Lauren Ralph Lauren dominates one of the women’s floors, and while the gold-plated signage is shiny and the tan leather couches comfy, the endless sea of khaki dresses belted at the waist are not so much impressive as they are predictable. There are no nooks or crannies filled with odds and ends from Ralph’s archives; nothing begs a pause. Jammed up in between racks of floral fit ‘n’ flare dresses and rows of athleisure, it’s harder to see Ralph Lauren’s appeal. A similar scene unfolds on the sales floor at the Herald Square Macy’s, a short 10-minute walk away.

« The clothes look good in magazines, but look older in stores, » says Christina, a 31-year-old from Long Island, flipping through a rack of button-down shirts at Macy’s. She likens the brand to Michael Kors — oversaturated and devalued. « I would never buy Polo at full price. »

Jan Freemantle, a tourist visiting New York from Sydney, Australia, recalled how her husband used to bring her back Polo shirts picked up on business trips to California before she could find the brand in Sydney. Polo was all she knew about Ralph Lauren until recently, when on a trip to Aspen, she came across a Ralph Lauren store that carried the Purple Label and Collection lines. « It was so nice, but so expensive, » she says.


For the shoppers who actually are familiar with the company’s multitude of lines, it’s still exhausting. “The identity of the brand gets lost,” laments Efney Hall, who has been shopping Ralph Lauren for over a decade. She likes it for its classic, elegant appeal, but she’s noticed that lately, the fit of the pants has changed. She finds herself skimming over the brand’s Lauren Ralph Lauren racks. She’s over it.

Ralph Lauren is clearly a man who knows how to build an empire, but right now, the empire is in turmoil. Layoffs have struck the company two years in a row, eliminating 750 jobs in 2015 and another 1,000 this summer. (One former Ralph Lauren designer commented to a colleague on Instagram in June: “Glad you survived the RL Hunger Games this week!”)

Lauren has stepped aside to make way for a new CEO, Stefan Larsson — the first person besides Lauren to ever hold that title in the company’s 50-year history. The company has been in the process of whittling down the brand list and there are plans to refocus on just three main lines: Ralph Lauren (the new umbrella label for Women’s Collection and Purple Label), Polo Ralph Lauren, and Lauren Ralph Lauren.

At the same time that Ralph Lauren is reevaluating its structure and bringing in fresh leadership, it also has to contend with the fact that the specific style of Americana that’s so deeply embedded in every inch of the brand isn’t something shoppers are clamoring to align themselves with now. If the privileged, preppy aesthetic that Lauren built his company around is no longer the height of aspiration, what will the future of Ralph Lauren look like?

Ralph Lauren did not grow up living the lifestyle that would later make him a billionaire. No, Ralph Lauren was born Ralph Lifshitz, a shy Jewish kid who lived in a small, two-bedroom apartment in the Bronx with his parents and three siblings. In Genuine Authentic: The Real Life of Ralph Lauren, writer Michael Gross paints a picture of young Ralph as a dreamer, never one to run with the crowd. “If white bucks were in fashion, he wore saddle shoes,” a former classmate told Gross. “When we wore crew necks, he wore V-necks. He was always a step ahead.”

Lauren’s perception of taste and class was constructed by what he saw around him, according to Gross. His richer friends’ parents drove convertibles, went on European vacations, and had country club memberships. In films, he watched Gary Cooper, Cary Grant, and Fred Astaire glide across the screen, wearing beautiful suits and getting the girls every time.

“I grew up playing a lot of basketball, reading, and living at the movies,” Lauren said in an old interview that Gross unearthed for the book. “I guess they influenced my taste level. I liked the good things and the good life. I did not want to be a phony. I just wanted more than I had.”

However, Lauren’s mother had set a strict path for Ralph: he was to be a rabbi. He shuttled between secular public schools and Jewish yeshivas during his youth, eventually convincing his mother to allow him to transfer from Manhattan Talmudical Academy, where he was on the Hebrew teacher-in-training track, to DeWitt Clinton High School, an all-boys public school. In his senior yearbook, listed right below his extracurricular participation in “Lunch Room Squad” and “Health Ed. Squad,” Lauren declared what he wanted to be when he grew up: a millionaire.

At 19, he and his brother Jerry changed their last name from Lifshitz to Lauren. (As Gross reports it, Ralph polled friends on two alternatives, London or Lauren; he was personally partial to London.) In the official document filed for the name change, the reason listed was confusion over people, both at school and at work, who shared the same last name. In reality, Lifshitz had the word “shit” in it, and Ralph’s plans for himself did not include dealing with that for the rest of his life.

College was never a big draw for Lauren, who dropped out of the City College of New York school system after three years. He was drafted into the Army and served for two years, but the military, with all its rules and regulations, wasn’t a good fit either. After the Army, he kicked off his career as a salesman, first for glove companies. Then he got into ties.

“I liked the good things and the good life. I did not want to be a phony. I just wanted more than I had.”

Lauren got his first shot at professional tie design at Rivetz & Co., a high-end neckwear company. It didn’t go over well. “Rivetz was a traditional firm,” David Price, whose father used to own the Rivetz & Co. business, explains. “They were doing all sorts of crazy pinks and oranges and all the Ralph colors, and the industry and the customer base at Rivetz thought it was just atrocious.”

But instead of backing down, Lauren went from Rivetz to Beau Brummell Cravats, where his boss, Ned Brower, let him sell his own ties — colorful, wide, and expensive — out of a drawer in the showroom. Lauren had no professional training in design, but he believed so deeply in his wild ties that other people did too. He caught the attention of Norman Hilton, one of the biggest names in the menswear industry at the time, who eventually became the first investor in Lauren’s business. Polo Fashions, Inc., named after the posh sport (not the shirts Lauren would later become famous for), launched in 1968 and, as Hilton’s son Nick remembers it, his father poured $75,000 into the startup. By the end of his first year running Polo Fashions, Lauren had expanded from ties into full suits that the Daily News Record (a menswear trade publication that was later folded into WWD) featured alongside heavyweights like Bill Blass and Oleg Cassini.

The company was a critical success from the beginning, although, according to Nick Hilton, it hovered on the edge of bankruptcy in its first few years. In 1970, Lauren won his first Coty Award (the predecessor to the CFDA Awards) for menswear, and he launched womenswear after that. In Ralph Lauren: The Man Behind the Mystique, author Jeffrey Trachtenberg describes how the move into womenswear transformed Lauren’s business. It was then that he decided to change the name on his labels from Polo Fashions to Polo by Ralph Lauren, in part to imitate how other designers were using their own names on their womenswear labels. And then, for the launch of women’s button-down shirts, the company added a new design element: a small embroidered polo player. It was an overnight success.

“The polo player became the new status symbol for women,” Raleigh Glassberg, the buyer who purchased Ralph’s first women’s shirts for Bloomingdale’s, told Trachtenberg. The shirts were as pricey as Lauren’s ties, but it didn’t matter. Everybody wanted one. As Lauren’s business grew, buoyed in large part by the ‘80s prep revival, the polo player became an integral part of the women’s and men’s lines, including on the polo shirts that became a signature of the Ralph Lauren look.

Chaps, the first of many extensions Ralph Lauren would experiment with, was his answer to the Polo knockoffs that were flooding the market. He couldn’t stop the knockoffs from being produced, so he created a cheaper line to compete with them.

The company also expanded quickly through a number of licensing partnerships, a relatively easy way to put the Ralph Lauren name on a variety of products without having to deal with manufacturing any of it.

“The bulk of the company’s profits come from royalties on its extremely lucrative licensing agreements, which lend the Ralph Lauren name to manufacturers of eyewear, fragrance, furniture, and a range of apparel,” the New York Times’ Stephanie Strom reported in the mid-’90s. “Polo Ralph Lauren only manufactures its men’s sportswear, coats, and furnishing lines; all other Ralph Lauren products, ranging from towels and sheets to shoes and sunglasses, are manufactured by others under license.”

The article also noted the voracity with which Lauren launched new lines, started new partnerships, and continually built upon his vision. “The sheer number of new ideas coming out of Mr. Lauren’s head at a time when the fashion industry seems to be satisfied with endlessly regurgitating old looks gives him an edge,” Strom wrote. “In the last year alone, he has started RRL, Polo Sport, a line of Polo Sport skin treatments, and the Ralph label.”

As Lauren’s empire grew, the accolades kept coming. According to the CFDA, Lauren is the first and only designer to win four of the CFDA’s top honors: the CFDA Lifetime Achievement Award (1991), the CFDA Womenswear Designer of the Year Award (1995), the Menswear Designer of the Year Award (1996), and the CFDA Award for Humanitarian Leadership (1998).


“Insecurity can sometimes make a man do bold things,” Cathy Horyn wrote in a profile of Ralph Lauren for the Washington Post. “It can make him create not one world but many worlds. And it can make him think that what he has done is not only good but better. The upshot has been rather intriguing: a quarter-century of glorious ephemera from a designer who can’t draw so much as a sleeve. Never could.”

In that profile, Lauren couldn’t help but describe his legacy in broad, sweeping strokes. “Did I lift America up a little bit? Did I give it a little bit of quality? Because we were known for polyester. People don’t remember that. You couldn’t buy good things here. America is mass,” he told Horyn.

“And so, as I traveled around and got more sophisticated, I started to see what wasn’t there, and I became more nationalistic. Every year of my life. And I’d think, ‘Why is this country so insecure about what it is?’ So, my thing became more than clothes. It became bigger. It became — America.”

Lauren’s vision of America drew heavily from the world of Ivy League preps, but the brand appealed far beyond the country club crowd.

The Lo Lifes, a Brooklyn gang officially founded in 1988, used to make a show out of shoplifting Ralph Lauren from department stores around New York City back when they first formed; now, it’s more about appreciating the Lifshitz-to-Lauren, self-made-billionaire element of the designer’s story, as well as showing off vast collections of archival pieces. (Vice interviewed a Lo Life member who at one point had over 1,000 items.) However, the Lo Lifes’ influence on Lauren’s brand, specifically its place in hip-hop, isn’t officially recognized by the company.

“All together, it makes for a potent folk history of capitalist sedition,” Jon Caramanica wrote of the group. “In a time when Polo was being made for and marketed to the aspirational white middle class, some of the most rigorously sourced collections were sitting in closets in the Brooklyn housing projects.”

That’s not to say the company totally eschewed diversity. Ralph Lauren is credited with catapulting Tyson Beckford to supermodel status, making him the first black male model to hold that title. Beckford’s Polo ads were lauded when they first appeared, and the Times ran a story on his breakout success. “I believe I’m setting a good example,” Beckford told the paper. “The Polo ad says that I’m not a basketball star or a rap star, but an all-American type. It separates me from those stereotypes, which is good.”

“Lauren built a career by brazenly positioning himself as the quintessential interpreter of the American zeitgeist,” Robin Givhan later wrote in The Washington Post. “More than any designer, he has used America’s mythology — our secular religion — for profit. In doing so, he has displayed a keen understanding of our cultural symbols. He can parse the difference between a pair of blue jeans worn with cowboy boots and those worn with a black leather jacket. He sees the romance in a prairie skirt or a well-worn Native American blanket. He knows what it means in our racially conflicted society to photograph a dark-skinned, athletic black man in his preppiest, old-money brand. And he knows how a bright-eyed blonde feeds our vision of Mayflower blue bloods. And as consumers, we have bought into those symbols and made Lauren an extremely wealthy man.”

Ralph Lauren went public in 1997 and continued to thrive throughout the early 2000s, opening new lines seemingly on a whim. “At Ralph Lauren, there wasn’t that outside perspective,” says a former designer who requested anonymity since he still works in the industry. “We all, including myself, had our heads up our own asses. It was just so great to be there that even if we were doing something that we couldn’t validate based off of the competitive landscape it was like, ‘Well, this is Ralph Lauren. We can do what we want.’ We set the tone.”

By 2012, Ralph Lauren stock was trading at more than $170 per share, having shot up by $100 in five years. There was so much faith in the success of the company. “Everybody was just feeling the effects of the money that was rolling in, and that it was on a steady incline,” says the former designer. The company employed approximately 25,000 people in 2012, and was reporting $6.8 billion in sales and net profits of $681 million.

Then came the slide, and Ralph Lauren’s literal and metaphorical stock began to tumble. Shares fell nearly 50 percent, from a high point of $192 in May 2013 to $82 in February 2016. Sales were still holding steady, but profits slid drastically.

“I used to feel really good about working for that company, but there was so much uncertainty for so long and the lack of communication from the top down was almost absurd.”

Underlying problems with the company’s organizational structure became more pronounced as the good times gave way to struggling years. “People were just so unhappy,” says the former designer. “I used to feel really good about working for that company, but there was so much uncertainty for so long and the lack of communication from the top down was almost absurd. You didn’t even know what your job was, you didn’t know what your role was. You didn’t know if you were going to have a brand the next day.”

Several former employees pointed to that lack of communication as a real point of frustration within their departments. “It was like nowhere I had ever worked before,” says an employee who worked in materials sourcing for the company’s volume brands. “Everyone worked in silos. Manufacturers had one job that they were specific to and the designers only had to report to other designers and we really were kind of bumping into each other trying to do our own jobs. It was really inefficient.”

Compared to other retail companies where she had worked, the former employee was surprised by how many managers were assigned to each department. “Ralph is a very, very top-heavy company,” she explains. “It was a lot of management and not a lot of doers, which is a huge problem.”

The organizational problems had long bled into the company’s dealings with its wholesale accounts. Michael Schumann, the owner of furniture retailer Traditions, eventually cut ties with Ralph Lauren after years of headaches associated with selling Ralph Lauren Home products in his stores.

“It was no longer worth it to put up with the bullshit in order to have the name, which was too bad,” says Schumann. He recalled how Ralph Lauren Home would issue beautiful, hardbound catalogs to stores and then not refresh them for two years, since it was too costly to produce the books every six months when new collections came out.

The rules around where and how to advertise the product were especially stringent: Ralph Lauren’s logo had to be twice the size of the retailer’s logo, and ads could only be placed in premium locations. Schumann found success selling Lauren Home, a less expensive line, but then Ralph Lauren implemented a rule that Lauren Home and Ralph Lauren Home couldn’t be sold in the same store. “It was just impossible to work with these people,” Schumann says.

Ralph Lauren’s managerial structure was broken, relationships were being severed, the quarterly financial reports got more and more alarming, and Ralph Lauren himself wasn’t the same radical young guy wooing customers to buy into his dream lifestyle. Change was needed.

For years, David Lauren, Ralph’s only child who works at the company, was assumed to be the heir apparent. In 2006, The New York Observer wrote that it was “clear” Lauren would run the company at some point. Fast Company mentioned “industry-wide speculation” that he would take the throne in a 2011 profile. In 2014, Business of Fashion noted that many in the industry pegged the son as the father’s successor.

But when the time came for Ralph Lauren to relinquish his CEO title, David Lauren’s name wasn’t called. Instead, it was Stefan Larsson, a young retail industry darling who built his career at H&M and wowed the industry with a successful three-year stint as the brand president of Old Navy, who would inherit the crown.

When Lauren and Larsson tell the story of how they met, it often includes the tale of a magical first dinner together. Both walked in wondering what the hell they were doing there, both came out knowing that this partnership needed to happen. Larsson is a young star just as Lauren was back in the day, and Larsson has entrepreneurial roots as well — he started his own company to put himself through business school, according to the Financial Times.

Larsson also passed the most crucial test, in Lauren’s eyes. “He understands what dreams are,” Lauren told the Associated Press when Larsson’s new role was announced. (Ralph Lauren declined to make Lauren, Larsson, or any other executives available for comment for this story.)

“In terms of where Stefan is, I saw that he had the background and the excitement and the energy and the knowledge that I don’t have.”

David Lauren still retains his position as a company executive and a member of the board of directors, and if the new dynamic is awkward, it only comes through a little bit. At the company’s inaugural Investor Day presentation in early June, where Larsson laid out his plan for the future of the company, Lauren took the stage for about 20 minutes to talk about the brand history and endorse Larsson.

“I’ve had great people in my company over the years, wonderful people,” Lauren told analysts in the meeting. “But whether someone’s going to carry the CEO flag was a different thing because I’m entrusting my baby to him. And that baby has to grow up. And that baby is in the front row, David on the one hand and uh, Stefan on the other. But in terms of where Stefan is, I saw that he had the background and the excitement and the energy and the knowledge that I don’t have.”

Larsson spent the nine months between the initial CEO announcement last September and the Investor Day this summer taking stock of the business and figuring out what needed to change.

For those watching the turnaround, there’s a lot of optimism about the possibilities under Larsson’s leadership. “When you look at Stefan and some of his core competencies and what he brings to the table, it’s his ability to truly understand and diagnose a weakness within a company and go forth and make the necessary changes,” says Jerry Sheldon, an analyst for IHL Consulting Group.

“He really seems to have an understanding of consumers and is able to articulate that understanding, turn it into a business strategy, and execute on that strategy in a very effective way,” notes Sheldon.

First up, Larsson is assembling a new executive team filled with people from companies like H&M and Amazon. New blood will likely be just what Ralph Lauren needs. In recent years, employees watched as the old guard, entrenched in the same roles for years and years, stopped cultivating an innovative environment. There was also a sense that Lauren could not be questioned.

“When Ralph has an idea and starts something, nobody ever stands up and says, ‘Hey, this is not right. This is not the way to go,’” notes the former designer. “Everybody just kind of kneels to every word that comes out of his mouth. And when he personally would ask for opinions and direction, people had it and they didn’t voice it until he was out of the room, and that was just the way that it went for years and years.”

“If anything, I see the old management team as being beholden to Ralph and that was probably part of the problem,” says Paul Swinand, a retail analyst for Morningstar. “It wasn’t that he had lost his touch or that he was too old — you might have thought that — but it also might have been that the old management team was not trying to go out and create anything new, they were just trying to get along and finish out their last few years.”

Larsson’s public diagnosis of the company’s problems was unveiled via the aptly named Way Forward plan. The main points include a new, flatter employee structure (eliminating three levels of management), cutting the time from initial development of a product to the sales floor from 15 months to nine, improving communication between departments, and focusing on three core brands while maintaining a smaller stable of secondary ones. The Way Forward also detailed 1,000 job cuts and 50 retail store closures.

“From a nostalgic, brand-loving perspective, I feel sad about the layoffs, and I’m very fearful that this will be like the JCPenney situation from a few years back,” says a former employee in Ralph Lauren’s digital operations, who requested anonymity. “But from the business side, it makes a lot of sense to me. Our department did not need three managers.”

Larsson is also pulling back from outlet stores, a market where Ralph Lauren had previously been expanding, and cutting down on promotional activity to try and retrain customers not to associate discounts with the brand.

“If anything, I see the old management team as being beholden to Ralph and that was probably part of the problem.”

In addition, Ralph Lauren has a huge wholesale business, which accounts for nearly half of the company’s overall revenue. Macy’s in particular is a significant Ralph Lauren buyer; that account alone represents about 25 percent of the company’s wholesale revenue. But Macy’s reported a terrible financial quarter in May, and it doesn’t look like it will be making a comeback anytime soon.

“The department store channel is losing market share in general,” says John Kernan, an analyst with Cowen & Company, “and Ralph Lauren, the brand, needs to find new channels of distribution like Amazon and other areas where they can grow.”

Ralph Lauren is going through operational struggles during not only a tumultuous period in the retail industry, but also a time that’s seeing a cultural shift away from what the brand stands for. The prep aesthetic has always smacked of privilege, something accessible primarily to white people with trust funds and monogrammed shirtsleeves. Now, the WASP lifestyle that completely captivated Lauren as a young entrepreneur is considered out of touch at best, offensive and oppressive at worst.

“The uniforms couldn’t play more into the world’s most unflattering stereotypes of Americans unless they added cigars dangling out of the athletes’ mouths, Bibles tucked under their arms, and $100 bills falling out of their pockets,” Christina Cauterucci wrote for Slate of the Ralph Lauren-designed Team USA Olympic uniforms.

Christian Chensvold, founder of the website Ivy Style and a regular contributor to Ralph Lauren’s RL Magazine, broached the subject in a series of posts last fall that questioned whether the Ivy League look was still politically correct. This included a satirical post that imagined a social justice warrior responding to different aspects of Ivy style (example: “Dinner jacket: Offensive to the underfed”); some readers were not amused.

“I would imagine that some of your readers would certainly find ‘club ties’ exclusive and elitist,” one commenter wrote, referring to a line joking that club ties should be banned for their exclusionary symbolism. Club ties, identified by their repeating motifs, actually did historically denote membership to elite clubs. “I know clothing itself is not elitist; it is the choice behind what we wear that speaks volumes about who were [sic] are.”

Later on, when Chensvold published an April Fools’ post detailing how preppy style had been banned from college campuses due to the classism and racism that it signified, plenty of readers thought it was real news.

Today’s shoppers are interested in more democratic clothing options — options that are casual, practical, and mass. Athleisure is a $97 billion business in the US, accounting for nearly one-third of the entire apparel, footwear, and accessories market. Vetements, the French design collective led by Demna Gvasalia that no one can stop talking about, is making a killing off of what can best be described as incredibly ordinary clothing. Its spring 2017 show, held during haute couture week in Paris, featured collaborations with 18 different brands including Juicy Couture and Carhartt.

“Sometimes, I hear designers from older generations saying, ‘Oh, fashion needs to make women dream,’” Gvasalia told W in an interview earlier this year. “I feel that this is really difficult today. I think it’s dated. Fashion shouldn’t make you dream in 2016. It should just be there, for us to wear.” It’s not hard to imagine Lauren burying his head in his hands over that one.

“It could become a social liability to look really old money and traditional, to wear this kind of stuff.”

“Ten years from now, when fashion is coming back around in its cycle and these young people are now well into their careers — assuming they have careers with the economy and their crippling student loan debt — when they become 35 years old, are they going to be wearing navy blazers and Alden tasseled loafers and striped ties because that epitomizes success and so forth? I don’t know,” says Chensvold.

“Theoretically, it could be a version of what we had in the late 1960s with the counterculture revolution,” he continues. “This is an election year; the country is more polarized than ever. It could become a social liability to look really old money and traditional, to wear this kind of stuff.”

Rebecca Tuite, the author of Seven Sisters Style, a book chronicling the history of the women’s equivalent to Ivy League style before many of the actual Ivies were co-ed, sees what’s happening now as a less vitriolic version of the backlash to ‘80s prep.

The counterculture revolution of the late ‘60s and ‘70s ushered in an era of long hair and bell bottoms as a response to the conservative style of the ‘50s. Then, in the ‘80s, Lauren led a massive preppy revival whose effects other traditional menswear retailers like Brooks Brothers and J.Press also felt. This aligned with the Reagan era, a time when conservative politics replaced the freewheeling ideals of the previous two decades. When Lisa Birnbach published The Official Preppy Handbook in 1980, it was meant to satirize the prep scene that was reemerging, but it ended up being regarded as a literal handbook. The Financial Times described Ralph Lauren as the greatest fashion beneficiary of the book, saying he “cashed in as the preppy wannabe’s clothier.”

Then the pendulum swung back away from prepsters in the ‘90s, when grunge became the go-to cool-kid look. But in the early aughts, prep was popular yet again. Birnbach published a sequel to the Handbook called True Prep. Lauren’s business was on an upswing. Abercrombie & Fitch had infiltrated every high school in America.

“For some, the Lauren prep has become cliché, but actually I think that there is so much genius involved in his reinvention of preppy traditions and that is why whenever the preppy trend circles back to the top, it’s Ralph Lauren who is right there, front and center, leading the pack,” Tuite explains in an email. “He offers a closet full of preppy staples that perennially sell well, but can still bring a fresh take on a well-trod fashion path.”

And now, here we are again, back at a place where anti-establishment sentiment runs deep. How does a company like Ralph Lauren react to these cultural ebbs and flows? By giving its take on whatever the look of the moment is. In a roundup of old Ralph Lauren advertisements, Vanity Fair captioned a ‘90s ad featuring a cropped long-sleeve top and a denim maxi skirt: “Ralph Lauren did grunge?!”

Patricia Mears, the deputy director of the museum at the Fashion Institute of Technology, remembers observing how Lauren’s merchandise morphed to speak to different generations when she was conducting research for a book and exhibition on Ivy style at FIT in 2012.

“When we were looking at images for the book, one of the things that we saw was a more recent photo shoot with young men, handsome, Ralph Lauren-esque. They were wearing certain things like beautiful crested navy blue blazers, but then they also had knitted caps like what you’d see on surfers or skaters,” says Mears. “Ralph was very smart about incorporating things like skate culture into a look that is still going to include the cornerstones of the Ralph Lauren vocabulary. It will still have chino pants or a navy blazer, but the T-shirt and the hat and some of the other accessories are going to be much more cutting-edge and something that a twentysomething today can relate to.”

Recently, some of Ralph Lauren’s lines have taken on a boho feel in keeping with current trends. Carly Heitlinger, the blogger behind The College Prepster, says she doesn’t consider Ralph Lauren a traditional prep brand based on the current women’s merchandise, because it is so fashion-forward.

“A lot of their designs are a little bit trendier, a lot of crochet and knit,” says Heitlinger. “I’m sure you could find a piece or two within each collection that fit into more classic, traditional outfits like the button-downs, but there’s a lot of trendier stuff in there too. I think they really embraced this bohemian look.” She isn’t buying much from the brand these days, but says she would shop it more if it moved back towards its traditional prep roots.

No matter how the brand may change under its new CEO, Lauren’s own effect on fashion will always be far-reaching. So many designers have come up under his tutelage, from Vera Wang to Thom Browne to Tory Burch. His reputation in the industry precedes him.

“I asked Marc Jacobs one day, ‘Who’s your favorite designer?’” says Mears. “At first when he said Ralph Lauren, I thought that was an interesting choice, but then he elaborated that there’s no person in the world who has done a better job of galvanizing that classical American look and turning it into an empire. When you see a Ralph Lauren piece you really know you’re looking at Ralph Lauren. He said that he’s probably the best designer in the world at that.”

And as the company looks forward, Lauren is adamant that Ralph Lauren will continue to be “a part of life,” as he told analysts at that Investor Day meeting. “This is about creativity, about life,” he said. “It’s not did we make a new shirt, look at us, we made a shirt with three buttons. It’s about living. It’s about dreams. And everyone has a dream.”

“What can you say about a 25-year-old girl who died?” reads the opening line of Erich Segal’s 1970 best-seller Love Story. Well, for starters, Jenny—or the real-life model for Segal’s fictional tragic heroine—didn’t die. Her name is Janet, she’s Jewish, and she’s alive and well and living in New York City.

In 1998, a series of misreported conversations made it sound as if Al Gore had claimed that he and then-wife Tipper had inspired the young couple at the center of Love Story: the preppy Oliver Barrett IV, and working-class ingénue, Jennifer Cavilleri. A woman named Janet Sussman stepped forward as the “real” Jenny, which was a revelation of such proportions that Maureen Dowd wrote about it in her column for the New York Times, People Magazine ran a feature story, Inside Edition interviewed her, and Oprah later followed up with a taped special segment. But these quick takes only scratched the surface of what turns out to be a more revealing—and very Jewish—story, involving a youthful love triangle in Midwood and an author who would transform unrequited love into a book that made him rich and famous.

Janet Sussman grew up in Flatbush, the younger daughter of intellectual Russian-Polish immigrants who came to the United States with the help of Zionist organizations. The family was part of what her older sister Deborah called “the Tribe,” a close-knit social circle dedicated to raising money to help establish a Jewish homeland. That circle included the Gartners, the parents of a boy named Gideon, with whom Janet shared the same piano teacher, Roberta Berlin. From the time they were “eight, nine, ten years old, we were performing together in recitals playing four-hand piano duets,” Janet recalled, when I spoke with her recently.

Around the same time, she started attending Camp Kinderwelt (Yiddish for “children’s world”), a sleep-away summer camp for boys and girls aged six to fifteen located in Highland Mills, New York. It operated in tandem with Unser Camp (“our camp”), a resort that attracted Yiddish intellectuals and artists—theater actors, directors, poets, and teachers. Established in the late 1920s, Kinderwelt accommodated about 500 campers and counselors at its height. All that remains of Kinderwelt today is a website run by Suzanne Pulier, who was a camper in the late 1940s and ’50s at the same time as Janet. Suzanne recalls that Janet’s nickname was “Machine-Gun”: “Apparently she had a laugh that sounded like that and was very contagious. I know many a boy had a big crush on her!”

Suzanne’s memories concur with those of Marty “Smitty” Smith, who remembers Janet as being “very pretty, part of the ‘with-it’ group,” who sang for Saturday services and Friday night Shabbat. Suzanne clarifies: “We sometimes performed in Yiddish for the Unser Camp adults, we walked through their camp on Shabbat showing off our white clothes for the evening’s religious services and we sang for the adults when they had events in the Literashe Vinkel (the little amphitheater).” Janet also played the piano to accompany the singing.

In 1952, while at camp, 15-year-old Janet received an unexpected love letter from a 17-year-old schoolmate in Brooklyn. It was a seven-page confession that he loved her with all the force a love-struck teenager could muster. He felt compelled to write because he was about to leave for college, and feared he’d lost his chance to go on a date with her. He dreamed they would get married in ten years. He dreamed that she would just write back. He worried that he was foolish to confess, because so many other guys were also in love with her.

***

In the early 1950s, Janet Sussman attended Midwood High School along with Gideon Gartner and Erich Segal. (Allan Konigsberg, class of ’53, was at Midwood with them too. He would later become known as Woody Allen.) Erich was the same age as Janet but a grade ahead of her, and Gideon, who wrote her the seven-page love letter, was two years older but three grades ahead. Inside the social ecology of Midwood High, there was very little overlap between Erich and Gideon, and no points of intersection between Janet and Erich.

Now an undergraduate at M.I.T., Gideon was always calling her, trying to get her to go out with him. When that failed, he began writing her letters. But Janet was not interested in him. “I had work to do!” Janet exclaims in tones of mild indignation. She was too busy with friends, singing, and her studies to bother with a boyfriend. With lifelong best friend Helen Mones, Janet would play guitars once a week; they also sang together in the All-City Chorus, which brought together students from all five boroughs.

So, when Janet began receiving more anonymous valentines, she didn’t think anything of it. “In those days,” she explains, “life was what happened. You didn’t question things. I got letters. I figured that everybody got letters.” The letters started in 1954, when she was 17 years old and a junior in high school. “And then he started to sign his letters to me,” Janet says. The new letter-writer, Erich Segal, would keep writing her letters for the next 16 years.

Janet knew who Erich was, of course. Everybody knew him as the Mayor of Midwood, the equivalent of the student body president. But she had no idea how she came to his attention. He didn’t hang out by her locker or participate in chorus. She didn’t see him at all during her normal daily routines.

Unbeknownst to Janet at the time, Erich had asked Dick Kolbert to find out about her. Dick was class of ’55, a junior, and into basketball. Erich was class of ’54, a senior, a star of track and field. Dick sat next to Janet in English class and had played Algernon to her Cecily in the student production of Oscar Wilde’s play The Importance of Being Earnest. Dick’s secret assignment was to “ask her questions, find out if she had a boyfriend, what she said about him, that sort of thing,” and to report back to Erich what he’d learned. “It became clear that Erich was stuck on Janet,” Dick says crisply, “and Janet did not reciprocate.”

The following year, “Old Kolbert” succeeded Erich as Mayor of Midwood. A dignified man who speaks carefully, Dick doesn’t doubt that Erich based “Jenny” on Janet. There were too many similarities between them. And he believes that Erich’s feelings were sincere. “Teenagers really do experience genuine love,” he muses. “Even if it’s love at a distance. He may have been a little self-dramatizing, but he was genuinely smitten.” Though he wouldn’t reveal any details, Dick confirmed that Erich subsequently wrote him a long letter, “pouring out his heart” about the depths of his feelings for Janet, and asked him to burn it after reading. And so he did.

***

Dick Kolbert remembers that Janet was very pretty, but that it was Marge Cama who was voted “Prettiest Girl” in their graduating class at Midwood. The difference was that Janet had, “well … something,” he offered vaguely. “If there was a Geiger counter, she would be towards the top.”

Teenage boys may have been drawn to Janet’s artless good looks, but it was that ineffable quality of being your own person that made them fall in love with her. When Janet’s sister Deborah was 14, she wrote a strangely revealing essay about her then 8-year-old sibling. “Her tiny pug nose and tinkling laugh are two of the factors which gave her the titles, ‘Mis-Chief’ and ‘Machine-Gun,’ ” Deborah wrote of her sister in third grade. “I often wonder how she acts with male friends of her own age, and why they are so attached to her.”

Deborah titled the English-class essay “A Female Casanova” because Janet had a “boyfriend”: another third-grader named Ronald, who briefly became sick and stayed at home, leaving room for “Richard” to move in. Yet Deborah was perplexed. What qualities did Janet possess that made boys become infatuated with her? Nobody I spoke to about Janet’s years at Camp Kinderwelt or Midwood High School described her as a flirt, “boy crazy,” or any variation thereof. Rather, her response to male attention seems consistently to have been amused bafflement, followed by indifference.

In high school, Janet shared Erich’s letters with her other life-long best friend, Diana Stone. Diana lived across the street from the Sussmans’ home in Brooklyn, and she and Janet were as close as sisters. With a note of incredulity in her voice, Janet exclaims: “I think Diana even tried to persuade me to relent, just a little!” To no avail. She wasn’t rude to Erich, Janet stresses. But she never went out with him. Yet Erich’s letters kept coming.

At Barnard, Janet double-majored in music and French. An accomplished pianist, she composed the entrance music to Barnard’s “Greek Games” as well as a second piece for the dance portion, choreographed by Tobi Bernstein (today the distinguished dance critic Tobi Tobias). “This piece became the soundtrack to our childhood,” Janet’s youngest child, Aleba, states, “as it was the one piece she would turn to again and again when playing the piano for pleasure and not for work. We loved it so much—it totally captures her passion. It has very intricate syncopation amid gorgeous melodies.” A serious student, Janet took a composition class with Otto Luening across the street at Columbia University. This very small class included two future winners of the Pulitzer Prize for music composition: John Corigliano, who was also in Janet’s class at Midwood High School; and Charles Wuorinen, who “sat in the back row alone all the time,” Janet elaborates. “We all knew he was the genius of geniuses.”

By fall of 1959, she had graduated from college and finished up a summer at the Cummington School of the Arts in Massachusetts. She then found a job working as an administrative assistant for Columbia Records in New York City. Meanwhile, her older sister Deborah, a protégée of Ray and Charles Eames, had been on a Fulbright year in Ulm, Germany, and had moved on to Paris, where she settled into her new job as a graphic designer for chic department store Galeries Lafayette. Deborah wanted her baby sister to come visit.

It was time for Janet to make a choice. She had a good entry-level job at a business where she might have a future if she stayed—but Paris beckoned. After hemming and hawing, she finally gave notice to her bosses at Columbia Records and joined her sibling for a stint of seven months in Europe. On June 8, 1960, Deborah wrote to their parents:

after five weeks, i cannot imagine being without her. i was amazed and impressed, and so are others, at her insights and awarenesses of people. she has developed very highly intellectually, equaled (need one say?) by her beauty. yet i think that being far from home and her usual routines is the best thing she could do right now, in order to learn a certain independence.
Deborah’s letter made no mention of the man from Midwood High who was planning on delivering himself to their doorstep in the 18th arrondissement the very next day, flowers in hand, with plans to take her baby sister out for her birthday, which happened to be close enough to his birthday, too. Erich Segal and Janet were Geminis, the sign of the Twins, born one week apart: June 9 for her, and June 16 for him. Erich had written Janet to let her know of his plans, but he’d not heard from her in two weeks since his last letter, he complained, and had no idea if she would be there when he arrived.

He had reason to fret. If Janet wasn’t exactly avoiding him, she wasn’t waiting breathlessly either. The sisters had driven to the South of France, to the Riviera, taking the routes favored by truck drivers because the food along the way was cheap and good. As for Erich, she says, he was “on his own.” It didn’t help his case that her sister “didn’t think he was important,” Janet relates with a rueful laugh. Ever decisive, Deborah summed Erich up and cut him down in three words: “He’s so short!” Janet concurs that he was short of stature, but with a “magnificent face.” When the three of them eventually ended up having dinner in Paris at the home of Erich’s aunt and uncle, “I was on good behavior,” Janet says lightly. “Deborah—ahem—wasn’t.” The formal meal turned into a scene from Hannah and Her Sisters à la française, with Janet being her charming and effervescent self, and Deborah blurting out caustic remarks in impeccable French.

Deborah, Erich, and Janet on the French Riviera, 1960.
That Janet agreed to dine with Erich’s relatives did not turn her into his girlfriend. “I wasn’t rude,” she exclaimed indignantly. “But he made every effort to see me, and I made none to see him.” Erich flew back to the United States, his love unrequited. And he kept sending her letters. From Cambridge, Massachusetts: a postcard mailed to her in Paris. From New York City: a telegram to her in Paris. From Barrow, Alaska: a letter addressed to her in Brooklyn. From somewhere over the friendly skies: a letter on American Airlines’ stationery, scrawled on a return flight from a Passover visit with his mother and family. From his office at Harvard University, where he’d obtained his undergraduate, Master’s, and doctoral degrees and then joined the faculty: a letter to her in Brooklyn, written in a ruled blue book used for exams. And so on.

Janet still has dozens of letters in her possession. They are full of chatty details of the non-events of the quotidian, with charming terms of endearment, little doodles, and clever puns in Latin, Greek, French, and Hebrew. During Janet’s sojourn in Paris, he visited her parents in Brooklyn. “Erich spoke Russian with my father, Polish to my mother, and Yiddish to my grandmother,” Janet recalls about that visit. Her parents approved of him, and they were thrilled when he told them he would be visiting her. From Paris, Erich sent a letter to Irving and Ruth Sussman to update them on the status of his visit with their daughters, including a description of that interesting supper with his relatives.

Following Erich’s departure from Paris, Janet had remained in France. She was nearing the end of her funds, so she began to look for a job. She found one close by in the 19th arrondissement, working with a record company called Etablissements Atlas. Her job was to translate the French information on record labels into English. After two months, her bosses offered her a permanent position in London, where the company was opening a new office. But Janet didn’t take the London job, and she didn’t stay in Paris. Instead, both she and her sister decided to return permanently to the United States, sailing home on a ship called La Liberté. The voyage lasted a week. Upon their arrival, both sisters came back to the family home in Brooklyn, but Deborah immediately began readying to depart for California, where she would resume working for Ray and Charles Eames.

As it turned out, Erich Segal wasn’t the only suitor writing Janet earnest letters. Gideon was sending them too, and he had the home field advantage. “He was one of our own, part of the Rambam club,” Janet explained: “This was one of the arms of the Labor Zionist Federation, with the goal of establishing a nascent Israel. I inherited that fervor from my father.” And if, in high school, Gideon was a typical skinny teenager, Janet recalled, “he came back from college to Brooklyn, transformed.”

***

By October 1960, Janet had started a new job working as the assistant to Geraldine Souvaine, producer of the Metropolitan Opera’s Saturday Afternoon radio broadcasts. When Gideon called and asked her out again, her heart didn’t leap with giddy anticipation. Still, she accepted his invitation—in her mind, a kind of pro forma social ritual, the sort of meetup owed to someone whose parents knew your parents and participated in the same groups. “We’d never before been on a date. When he showed up, he was like a stranger,” Janet says in lingering tones of wonderment. “He was my piano partner but he’d been a boy, then. I came back from Europe and he was a man. I was suddenly romantically attracted to him. I admired his broad shoulders,” Janet laughed, a 78-year-old grandmother suddenly girlish again, “and that was it!” A few months after their first date, he and Janet married.

They’d planned a traditional big wedding in a synagogue, but Janet’s father Irving was hospitalized at Maimonides Hospital with a hip injury and suffering from what would later be diagnosed as the early stages of Parkinson’s. Janet related: “In the Jewish tradition, you don’t postpone the wedding, so we brought the wedding to him. The rabbi came, and the ceremony was held in the hospital. It was very small, just family and close friends. We left the next day for our honeymoon in Puerto Rico.”

It was a marriage between families, joined together inside a community that had supported them since they were small. They’d known each other nearly their entire lives yet had fallen in love as adults. By the time they wed, Janet’s childhood piano partner had pursued her for eight years, and it was the third time she’d resigned her job and walked away from a potential career. She was almost 24 years old.

Erich saw the wedding announcement in the paper and wrote Janet a letter to congratulate her. Still, his letters kept coming.

***

The newlyweds moved to Tel Aviv and lived there for four years. Janet remembered this as a wonderful, emotionally fulfilling period in their lives. Gideon’s extended family in Tel Aviv had embraced her. She learned to speak Hebrew fluently and was thrilled to be having healthy babies—first Perry, then Sabrina—born 16 months apart. For a period of eight months, Gideon was transferred to Paris; then the family returned to Tel Aviv. Perry was too young then to remember specifics, but he recalled that his father was “incredibly ambitious,” displaying the drive that would lead him to found the Gartner Group (now Gartner, Inc.), the first of several successful companies that would establish him as one of the pioneers in the new industry of information technology. Returning to the United States, they eventually settled in Mamaroneck, New York, where their third child, Aleba, was born.

And Janet and Gideon wrote each other love letters. Throughout their marriage, they communicated through music, and regularly exchanged letters—sometimes mailed, often left on each other’s pillows—that paint a complex, profound, private portrait of a marriage. But if one wonders what set Gideon apart from Janet’s other suitors, this simple line may explain it: “To you,” she wrote to Gideon in a letter of 1961, “I’m real.”

Now a married woman with three children, Janet started receiving notes and letters again. She knew who was sending them. Erich had become a professor of Greek and Latin literature at Yale and had also met with success as a Hollywood screenwriter. She figured Erich for a prolific writer who’d simply gotten in the habit of writing down and mailing his thoughts to her. Every once in a while, she responded with a brief note and didn’t ponder the matter any further. Meanwhile, in 1968, her sister Deborah co-founded the pioneering environmental design/graphic arts firm Sussman/Prejza, which became one of the most influential design companies in the country.

In 1969, Janet’s entire young family was sound asleep when the phone rang at 3 a.m. It was Erich Segal. “He was soused,” Janet recalled. “He told me that he’d just written his final love letter to me and that it was over a hundred pages long.” That last, very long letter was Love Story. A shortish novel, it became the best-selling book of 1970 and made Erich an instant millionaire. When the film exploded the following year, Erich invited Janet to accompany him to the Plaza Hotel in New York City, where she dined with him and the film’s stars, Ryan O’Neal and Ali MacGraw, as well as the producer, Robert Evans. Janet recalled: “Gideon said I could go—however, he stipulated that I couldn’t be identified to the press as ‘Jenny’.” She attended the fête as the “mystery woman.”

In the silence, the mystery surrounding Janet’s identity merely thickened, until that fateful conversation in 1998 that transformed Tipper from Tennessee into Jenny from Rhode Island. As a result of that story, Erich stated publicly that Oliver Barrett IV had been partly inspired by fellow Harvard student Al Gore, the son of a senator and a WASP product of prep schooling, but he’d given Oliver the personality of Gore’s roommate, Tommy Lee Jones, the poet-cowboy actor from Texas. But what about Jenny?

Though Erich never came out and declared in so many words, “Janet Sussman is Jenny Cavilleri,” he admitted as much in an extensive interview for the May 1971 edition of the Italian magazine Oggi. In addition to describing his changed romantic prospects in the wake of sudden wealth and fame, he confessed his feelings for “Jenny” at length. “When you lose the woman you love,” he said, “it is over for you, whether she leaves you for another man, or she dies. You are still alone. It was at this point that I started thinking about Love Story. That’s why in the book, Jennifer dies, because for me she had died.”

The interview included a photograph of Erich and Janet on the Promenade des Anglais on that trip to France in 1960. The caption, in Italian, identifies Janet as “Jennifer, the muse,” and states plainly that the woman in the photo, Janet, is the inspiration for Love Story. Like “Jenny” as described in Love Story, Janet was: “beautiful. And brilliant … loved Mozart, and Bach. And the Beatles.” Was affectionately called “Four-Eyes,” hated her glasses and removed them at every opportunity. Was a music major, though Janet went to Barnard and also majored in French, whereas “Jenny” attended Radcliffe. Jenny/Janet was an accomplished pianist, an intellectual in the habit of biting her nails, quick with sarcastic retorts, especially with men, and she often used the word “stupid” to express her impatience.

It’s fun to re-read Love Story in light of the history shared by Janet and Erich, as it turns his prose into something far more confessional and intimate. “Either way I don’t come first,” Oliver complains on the very first page, “which for some stupid reason bothers the hell out of me.” “I would like to say a word about our physical relationship,” Oliver begins chapter 5. “For a strangely long while there wasn’t any.” But Jenny is working-class Italian American, and Oliver is a wealthy WASP. Or, as Jenny says, “Ollie, you’re a preppie millionaire, and I’m a social zero.” There is nothing of Jewishness in the novel, which instead celebrates Anglo-American culture and values to the point of pandering. Jewish identity comes up only twice in the book, first when Oliver remarks that the editor of the Harvard Law Review, Joel Fleishman, wasn’t very articulate; and second, when Oliver graduates from law school and is job hunting in New York City. He declares:

I enjoyed one inestimable advantage in competing for the best legal spots. I was the only guy in the top ten who wasn’t Jewish. (And anyone who says it doesn’t matter is full of it). Christ, there are dozens of firms who will kiss the ass of a WASP who can merely pass the bar.

Given that Erich’s father and grandfather were rabbis, this is a curious statement that suggests he envied structural whiteness even as he prided himself on succeeding on his own intellectual merits. In his mind, nonetheless, WASP status seems to have been tied up with better success with women. In the Oggi interview, Erich described one of his final in-person conversations with the “real” Jenny, i.e., Janet, which took place at a restaurant following the book’s release. In it, he not only confirmed to her that she’d inspired the character, but believed that if he’d only been an Oliver in real life, “we would have gotten married”—an assumption that has no basis in Janet Sussman’s subsequent life.

Janet still seems amazed that she had such an impact on Segal and even more confounded that this story would have resonated so strongly with the public. “It was only many years later,” Aleba remarked, “that I realized how difficult it must have been for my mom to be the invisible muse, the real-life healthy living inspiration for this dead heroine in this impossibly huge best-seller and film. She never ever expressed this, but how could she not feel the frustration? They must have had a silent agreement not to make a big deal out of it, in part out of respect to my dad and us three kids.” (After 17 years of marriage, Janet and Gideon divorced, the way couples do once the children are out of the house. Though there have been suitors over the decades since that split, she has not gotten remarried.) Her middle child, Sabrina, clarified: “She had—has, really—no idea how surprising it was that she resisted Erich without thinking much of it. She has a way of doing that, living her life on her own terms, even as others seek to be part of it.”

Maybe she got divorced and moved to Greenwich, where she is alive and well and playing the piano in a chamber music trio.

Janet Sussman Gartner loves Beethoven, Bach and the Beatles. She bites her nails. And, at 60, this mother of three still enjoys putting Harvard preppies in their place.

In this case, the preppie is the Vice President. Mrs. Gartner has come forward to say that she, not Tipper Gore, is the model for the saucy but doomed heroine in “Love Story.”

“They used to call me the girl with sparkle,” said Mrs. Gartner, who offered as proof pictures of herself with Erich Segal, a copy of the Italian magazine Oggi, in which Mr. Segal is quoted saying Janet is Jenny, and a bunch of old “Sweet Suss” love letters signed “Erich,” “Segal” and “the Kosher Liberace.”

It may seem odd that people are so eager to associate themselves with the most treacly book and movie in modern times. It may seem odd that I keep writing about the most treacly book and movie in modern times. But then, I live in a city that has gone gaga over a puppy.

First the Vice President, to warm up his image, planted the notion that he and Tipper were the models for Oliver Barrett IV and Jenny Cavilleri. But Mr. Segal reined him in, making it clear that Tommy Lee Jones was the model for the sensitive, studly part of the character, while Al got the neurotic father-fixated part, and Tipper got zip.

That raised the question: Why not Tommy Lee Jones for President? Doesn’t America deserve the cool roommate? (And Will Smith for Vice President!)

Now Mrs. Gartner is laying claim to Jenny. What’s next? Are we getting into Anastasia territory? Will a line of Italian men step forward to say they inspired Phil Cavilleri, Jenny’s saintly dad?

In a 1970 interview with The Times, Mr. Segal said he had used the story of a Yale student whose wife had died, but had based Jenny’s personality on a flame from his Harvard days who did not go to Radcliffe.

Mrs. Gartner says she was good friends with Mr. Segal when they were both at Midwood High School in Brooklyn (along with Woody Allen) and later, when she was at Barnard and he was at Harvard. She said they once traveled through the south of France with her sister, but she did not reciprocate the infatuation of her “brilliant suitor.” “I was an idiotic young woman, a pretty girl, I had a million boyfriends and I threw them out like the garbage at the end of every day.”

One night, several years into her marriage to a computer executive, she said, an excited Erich called her at 3 A.M. “My husband was disturbed. Erich said, ‘I just wrote you a 250-page love letter.’ When the movie opened, it was a heady time. He re-entered my life, and invited me to go out to dinner at Trader Vic’s in New York. We were at a large circular table with Ali MacGraw and Ryan O’Neal sipping out of a common huge drink with long straws. I thought, ‘Life can’t get any better than this. Janet Gartner from Mamaroneck is Jenny.’ Now everyone has forgotten.”

Mr. Segal would not confirm or deny Mrs. Gartner’s claim. He sent a fax from London saying: “I would be very happy if you did not write this piece. And extremely grateful.” When my assistant reached him by phone and asked if Janet was Jenny, he again avoided a yes or no: “Can’t you just say that I’d already left the office?”

Mrs. Gartner said Mr. Segal had made her a lower-class Italian to spice up the story with class and religious conflict. “We both had Jewish immigrant families. He was brought up as an Orthodox Jew, I a Conservative Jew.”

She still cherishes the “dazzling” love letters with the English-French puns, and a poem about a Passover and a satyr.

She says Mr. Segal never told her who inspired Oliver. But she’s ready to get on my “Tommy Lee Jones for President” bandwagon. “He’s the most irresistible man on the earth,” she says.

See also:

Her Story

Sophfronia Scott Gregory

People

January 19, 1998

IT WAS A DARK AND STORMY NIGHT—well, it was dark anyway—when Janet Sussman Gartner learned she was the inspiration for high school classmate Erich Segal’s first novel. Gartner, her husband and three children were sound asleep in their Mamaroneck, N.Y., home in 1969, when the telephone practically exploded at 3 a.m. “Hey, Sussy,” crooned a seductive voice. “I just wrote you a 258-page love letter.” It was Segal, then a Yale professor, freshly stoked on completing the manuscript the world would soon know as Love Story. He had taken the essence of Gartner, her razor-sharp wit, playful sexiness and love of music, and created Radcliffe student Jenny Cavilleri, the leukemia-stricken heroine of his bittersweet weeper about an Ivy League romance that ends in Jenny’s death. The novel became the best-selling book of 1970 and a 1970 box office smash starring Ali MacGraw and Ryan O’Neal. “I remember feeling like a jolt of electricity went through me,” recalls Gartner. “I didn’t sleep the rest of the night. I was too overcome with what I realized I meant to him.”

Gartner, 60, kept mum about being Segal’s inspiration, but, she says, Al Gore’s clumsy assertion last month that he and wife Tipper were the models for the book’s couple proved too much to bear in silence. (Gore has since declared it all a misunderstanding.) The idea that Tipper Gore was Jenny, Gartner says, “is the most preposterous thing I ever heard.”

Segal, 60, now married and living in England, isn’t talking. But in a 1971 article on the author, the Italian magazine Oggi identified Gartner in a photo (with Segal in France in 1960) as the woman who had inspired Jenny—without naming her. And, Gartner says, Segal took her to dinner with actress MacGraw in New York City just after the film’s release. Segal recently said that he never met Tipper but that the character of Oliver Barrett was based both on Gore and actor Tommy Lee Jones, Gore’s Harvard roommate.

All these years, Gartner shared her secret only with family and friends. “I grew up knowing that my mother was Jenny,” says daughter Aleba, 31, a publicist. “It was kind of legend in our house.” But Gartner says that she and her husband, Gideon, 62 and CEO of Giga Information Group in Cambridge, Mass., kept quiet because “it seemed a little bit awkward to have my identity known” while she was married and Segal was single.

Though they never dated, Gartner and Segal were close friends from their days at Brooklyn’s Midwood High School in the 1950s. Raised by intellectual immigrants from Poland and Russia, Janet Sussman was quick-witted and outspoken. Like the fictional Jenny, she played piano and, like her own mother, spoke three languages. “I’m not a shrinking violet,” she says. “Someone once coined me ‘the girl with the sparkle’.”

In 1954 the smitten Segal enrolled at Harvard, from whence he filled Gartner’s Brooklyn mailbox with dozens of letters, which she still treasures. “Darling Suss, sweeter than halvah,” he wrote. But Gartner did not return his affection. “He was a very great friend, and my admiration for him was boundless, but I did not share his emotions,” Gartner recalls.

She entered Barnard College in 1955, and in 1960, a year after graduating, she joined her older sister Deborah in Paris for a seven-month sojourn. Segal followed, Gartner says, with intentions of winning her over. The romance never blossomed, and in 1961 she married Gideon Gartner, another high school pal. Though Segal continued corresponding—and he never had to say he was sorry—his tone changed from lovelorn to friendly. Eventually the two fell out of touch. End of story?

Not quite. Gartner, now divorced and living in a Greenwich, Conn., apartment, supports herself as a pianist and music coach. She believes Jenny’s death was a metaphor for Segal’s failing to win her in real life. Indeed, Segal told Oggi that losing the woman you love is the same “whether she’s left you for another or whether she’s died.” Gartner admits she sometimes wonders what might have been had she married Segal. For one thing, he might not have created the greatest hankie wringer in modern literature. “If I had dated him and everything had been fine, I’m not going to say he never would have written Love Story,” muses the muse. “But the need to write it may have been less.”

On October 1st something began bubbling in my subconscious. Ivy Style had reached its four-year anniversary, the MFIT exhibit had recently opened, and the accompanying book had been published.

I found that after four years of trying to look at this topic as objectively as possible, and talking to the men who were actually there during the heyday — Richard Press, Bruce Boyer, Charlie Davidson and Paul Winston — something unanswered remained.

I started thinking about Brooks Brothers and the college campus, which was chosen as the focal point of the MFIT exhibit, wondering about the connection between these two things. I soon found myself asking the most fundamental question: How do we explain how the Ivy League Look came about?

It’s easy to make generalizations, but hard to articulate the answer precisely.

I next began thinking about the interplay between clothiers and their customers, focusing on the why as much as the what. Buttondown oxfords, plain-front trousers with cuffs, rep and knit ties — these are the whats, but what are the whys behind them? The answer couldn’t be simply “because that’s what Brooks Brothers sold,” when Brooks Brothers sold so much more that never became part of the Ivy League Look.

I telephoned Charlie Davidson and told him I was working on a piece though wasn’t sure where it was going. I started by asking him, “What portion of the Ivy League Look comes from Brooks Brothers, and what comes from the culture of young men on campus?” When Charlie, who’s been selling these clothes since 1948, responded, “That’s a good question,” I knew I was on to something.

The following essay is the result of my investigation. What began as an attempt to articulate Ivy’s origins grew into an overview of the broad arc of Ivy: how it codified, and how it shattered into the complex “post-Ivy” era we’re in today.

In it I will argue:

• The Ivy League Look was as much about styling as about its ingredients. And while the ingredients were relatively fixed and admitted new items slowly, the styling came from the campus and was always in a state of flux.

• It was the casual nature of the college environment and the importance of dressing down that led men in the 1930s to prefer rougher, casual fabrics — oxford cloth shirts, brushed Shetland sweaters, Harris Tweed jackets, flannel trousers — a preference that has been the standard of good, understated taste for men on the East Coast ever since.

• The Ivy League Look included clothes for every occasion, from resort to formalwear, from city to country. However, the country element influenced the city far more than the other way around, and remains the most lasting influence of the genre.

• The Ivy League Look can be said to go through the stages of birth, maturity and decline, corresponding to specific points on a timeline.

• Once the look in its original, purist form ceased to be fashionable on campus, it ceased to be fashionable in society as a whole.

This lengthy piece will be presented throughout the week in five parts. New installments will be added at the bottom to preserve one cohesive post and comment thread. — CC

• • •

The Rise And Fall Of The Ivy League Look
Christian Chensvold

Part One: The Rise

In the late 1930s a new shoe became an instant hit on the Yale campus. First seen in Palm Beach in 1936, the “Weejun” penny loafer by G.H. Bass & Co. was immediately embraced by the students of New Haven. By 1940, the shoe store Barrie Limited was advertising its Horween penny loafers in the Yale Daily News, saying the shoe had “taken the university by storm.”

From the moment it appeared the penny loafer was an “instant classic” for wearers of the Ivy League Look, according to Charlie Davidson, 86-year-old proprietor of The Andover Shop in Harvard Square. Yet how do we explain the shoe’s overnight success, when so many shoes had come before and so many more would come later? For a genre of clothing that was slow to develop, that is characterized by its conservatism and supposed resistance to fads, this love-at-first-sight seems odd. Stranger still, the penny loafer was no temporary trend like the raccoon coat of the ’20s or the buckle-back chino of the mid-‘50s. Its place in the genre of clothing called the Ivy League Look remains to this day. It literally was an instant classic, embraced wholeheartedly and never relinquished.

Those Yalies who first donned the penny loafer in the late ’30s must have seen something special in the shoe, an inherent attractiveness and a harmony with the clothes they got next door at J. Press. “Casual slip-on shoes of the moccasin type are by far the most popular with students,” syndicated fashion columnist Bert Bacharach would later write in his 1955 book “Right Dress,” suggesting it was the penny loafer’s casualness of design — moccasin-style with no brogueing, laces, tassels, wings, or anything else associated with a business shoe — that accounted for its instant appeal.

One thing’s for certain, however: No manufacturer could have anticipated or dictated the Weejun’s instant success. Something more mysterious and elusive was at work, the process of taste-driven natural selection by the closed culture of Eastern Establishment students of the 1930s. Young men and their peers, not clothing brands or magazine editors, decided what was fashionable.

Though it later achieved and lost mainstream popularity, the penny loafer remains available today at a wide range of prices, supported by both lifelong wearers and a steady supply of new converts. Typically paired with argyle socks in the 1930s, penny loafers were worn with white athletic socks in the ’50s and then sockless in the ’60s, the same item worn differently with each new decade.

The Ivy League Look is not simply a tailoring style accompanied by a specific group of furnishings and accessories. It consists of much more than just sack jackets, buttondown oxfords and penny loafers. It also consists of the taste-driven ethos that led some items to be accepted into the genre while others were rejected, and of a certain way of wearing the items that developed in the various upper-middle-class communities of the East Coast in the first half of the 20th century, chief among them the college campus.

“People made things a classic, not manufacturers,” says Davidson. “It’s people who made some things accepted and not others, otherwise how do we account for all the things that failed?”

Brooks Brothers And Ivy’s Big Bang

The Ivy League Look did not appear suddenly, but developed over time. “It was 30 or 40 years in the making without anyone knowing it would one day be called the Ivy League Look,” says Davidson. Although the genre codified gradually, and although the lines that form its perimeters are debatable, there was something akin to an Ivy Big Bang, an instigating act that gave birth to this style of dress. And that is the introduction in 1895 of Brooks Brothers’ No. 1 Sack Suit.

Just as the jacket is the foundation of tailored clothing, this single item — natural shoulders, three button (after 1918, according to the Brooks Brothers book “Generations of Style,” by John William Cooke), dartless, with no waist suppression and paired with straight unpleated trousers — formed the blueprint for what would eventually become the Ivy League Look. And throughout the first half of the 20th century Brooks Brothers would continue to introduce a host of English items — the buttondown oxford, Shetland sweater, polo coat, rep ties, argyle socks — that became staples of the Ivy genre.

But Brooks Brothers also offered countless other items — yachting and hunting regalia, double-breasted tapered suits, and other overtly English items less easily Americanized — that were never embraced into the Ivy League Look. Why? For the simple reason that they would have been out of place in a campus environment, the fertile ground where the style would codify and flourish, and where, as we’ll see, an air of casualness and nonchalance was paramount.

So while Brooks Brothers offered everything within the genre, it also offered much more. The Ivy League Look is narrower than the Brooks Brothers catalog (catalog here referring to what the company offered from roughly 1920-1970), and for this reason one could argue that Brooks Brothers’ smaller rival J. Press was a purer Ivy retailer, in that it offered a broader selection (such as in campus-oriented tweeds) within narrower perimeters. Brooks Brothers was Ivy and much more; J. Press was strictly Ivy.

England provided Brooks Brothers with many overcoats to sell to the gentlemen of America. But starting around 1910, one came to dominate the Ivy League Look above all others: the polo coat, another example of taste-driven natural selection at work.

According to Esquire’s Encyclopedia of Men’s Fashion, which draws heavily on historic articles from Apparel Arts and Men’s Wear, camel hair coats were noted for their dominance at the Yale-Princeton football game of 1929, having usurped the powerful but short-lived raccoon coat trend. Cooke writes, “This sporty camel hair garment… becomes the rage on college campuses during the Roaring Twenties.” Decades later, Bacharach would note, “Camel’s-hair polo coats are still the favorite type of outer wear among college men.”

The collegiate popularity of the raccoon coat in the 1920s, which fashion historian Deirdre Clemente has traced to Princeton, is a perfect example of a huge trend that was nevertheless selected for extinction, while the polo coat survived — indeed, it is still available from retailers such as Brooks Brothers, J. Press, Ralph Lauren and O’Connell’s. The coat’s longevity is surely due to its sporting associations and the ease with which it can be styled informally — both qualities that would resonate with young men. It certainly looks more at home on the sidelines of a football field, as coach Vince Lombardi demonstrated throughout his career, and as dramatized in the movie “School Ties,” where polo coats are worn at a tailgate party for a prep school football game. Somehow a Chesterfield just wouldn’t look the same.

With the pink oxford, which rose to prominence in 1955 (the “year for pink” according to LIFE Magazine), Brooks Brothers once again introduced a new item into the Ivy genre. But it could never have anticipated the pairing of a pink oxford with evening dress, which Chipp’s Paul Winston has recounted wearing, and which is, for lack of a better expression, a very Ivy thing to do (Charlie Davidson also recalls wearing a buttondown oxford with black tie, albeit a white one; Winston’s pink version illustrates Chipp’s penchant for the “go-to-hell” look). Winston’s gesture serves as a perfect example of the styling side of things: Brooks provided the item, and the people found innovative ways of wearing it.

In summary, we can say that Brooks Brothers was the primary provider of the Ivy League Look’s raw ingredients, while the culture — meaning the world of young men competing and conforming sartorially in their WASPy East Coast environment — provided the styling. With each new decade Brooks Brothers showed what to wear, while young men, who drive fashion, showed how the items could be worn. As a wholly arbitrary fractional breakdown, we could say that 2/3 of the Ivy League Look was raw materials, which were relatively fixed and admitted new items slowly, while 1/3 was styling, which was in a constant state of flux.

Town And Country, Or Wall Street And Campus

As the Ivy League Look developed, references to Brooks Brothers increasingly focused on two specific realms: the college campus and the world of finance. In his essay on Brooks Brothers collected in the book “Elegance,” G. Bruce Boyer succinctly notes, “The Brooks Brothers suit seemed to peg a man somewhere between Wall Street and his country house, by way of the Ivy League.”

In a 1932 article, the New Yorker mentions the same two worlds: “Of course, Brooks still have their tables piled with the good old soft-roll, high-lapel sack coats that have been the accepted college and bond-salesman uniform for so long.” Presumably those bond salesmen, like Yalie Nick Carraway in “The Great Gatsby,” picked up the taste for Ivy while at school. “The novels of F. Scott Fitzgerald, for example,” writes Cooke, “are peopled with earnest heroes who hailed from the Midwest but who came to play in the racy world of New York via Princeton or Yale.”

This 1929 ad for Wallach Brothers also mentions the connection between the world of finance and the style-setting universities of Princeton and Yale:

As young men graduated from school to take their place in the world, including the financial industry, their clothing would change from country to town. Writing on Ivy League students in her 1939 book “Men Can Take It,” Elizabeth Hawes notes:

The conventional costume for all the right people is a pair of flannel or tweed trousers and a coat that does not match. When I asked them whether they were going to dress in their quite comfortable tweed for work when they left college, they responded firmly “no.” They were absolutely clear on that issue. They said they were training themselves — or being trained — to take their places in the world, and the required costume would be a neat business suit.

Although it was based in New York, Brooks Brothers specifically merchandised for the college man and sold to him via an army of traveling salesmen who frequented the prep schools and colleges of the Northeast. An 1898 Princeton football program includes an advertisement from Brooks Brothers, with copy reading, “Our stock for the present season continues, we believe, to show improvement, and will be found complete in all the little particulars that go to make the well-dressed man.”

This Brooks Brothers ad appeared in the University of Pennsylvania’s 1926 yearbook:

Brooks Brothers continuously revamped its youth-targeted line throughout the 20th century, adding its University Shop in 1957 and replacing that with Brooksgate in 1974. Its current Flatiron shop is merely the latest incarnation of a century of catering to young men as well as their fathers.

The Ivy League Look was for both town and country, Wall Street and campus, but, as we’ll learn, the campus element proved to be the more lasting influence of the two.

The New Guard

Although Charlie Davidson is the oldest-living, still-working purveyor of the genre, he doesn’t consider himself old guard. The Ivy League Look was in full bloom in the 1930s, he notes, well before his founding of The Andover Shop in 1948. At the time Davidson considered himself to be offering clothing within an already established genre, tailored to his local clientele. This sentiment is echoed by Richard Press, who says that J. Press’ locations outside of New York were meant to provide Brooks Brothers-style items in areas with an Ivy League campus (Cambridge, Princeton) but no Brooks Brothers store; among the Ivies, only Columbia had one in town.

As George Frazier put it in 1960, “Around the turn of the century, Arthur Rosenberg, then the foremost tailor in New Haven, began to exploit this [Brooks Brothers] style among Yale undergraduates, and, not long afterwards, J. Press, also of New Haven, fell into line.”

These smaller retailers outside New York took the Brooks Brothers template and focused more on the country side of the genre rather than town. And yet all these other players who used the ingredients that Brooks Brothers had provided felt that taste and small differences distinguished them. “We all thought our taste was better than our competitors’,” says Davidson. “Norman Hilton, for example, had exquisite taste, and when you get to the commercialization of the Ivy League Look, he’s at the forefront.”

The most important and lasting clothier providing Brooks-based style for college towns was J. Press. Press’ difference from Brooks is summed up by Episcopal Archbishop of New York Paul Moore, Jr., who writes that Jacobi Press’ “tweeds were a little softer and flashier than Brooks Brothers tweed, his ties a little brighter.”

Richard Press, former J. Press president and grandson of the founder, has also stressed Press’ emphasis on country rather than town. “I think that one of the major differences between Brooks Brothers and J. Press,” he states in his 2011 Q&A with Ivy-Style.com, “beyond the obvious size, was that we were known as a campus store, whereas Brooks Brothers was much more urban.” Indeed, the merchandise for J. Press’ New York store was less purist than its campus shops. “If you look at our brochures,” says Press, “you’ll see that the two-button darted suit was sold only in the New York store, and it probably represented 40 percent of our suit sales there.”

While Brooks Brothers, originator of the Ivy League Look’s ingredients, was based in New York, New Haven is the top candidate for Ivy’s spiritual home. In a 2004 article entitled “The Yale Man,” the New York Times writes, “‘Natural shoulder’ was what men’s magazines called the Yale look, and for decades the clothing stores near campus at Elm and York Streets in New Haven were the natural-shoulder capital of the universe.”

Style setting also thrived in New Haven. “Students and their professors enunciated a new style,” says Press, “with their dirty white bucks, horn rimmed glasses, Owl Shop pipes, raccoon coats, J. Press snap brim hats, stuff that was too informal and sporty for Brooks. Big difference between city and campus wear and Brooks pushed the former, the rest the latter.”

Finally there was the issue of price: “Perhaps most important issue for the proliferation of Ivy,” says Press, “Brooks was too expensive. J. Press and competitors adapted to the more restricted allowances of the campus population and worked below Brooks price points.”

Although these new-guard clothiers used the template created by Brooks Brothers, they did so in the cultural environment where the Ivy League Look’s styling was at its most fertile: the campus. And because these clothiers and the student body were part of the same community, they had a close, symbiotic relationship. Students needed the clothiers to get what they wanted (and to want things they’d never seen before), and clothiers needed to find out what was popular. As a result, Ivy clothiers never took their eyes off college men. In 1962, Sports Illustrated notes, “Representatives of the New Haven tailoring establishments—J. Press, Fenn-Feinstein, Chipp, Arthur Rosenberg, et al.—entrain for Cambridge to render biennial obeisance and to see what the young gentlemen are wearing.”

Earlier, in a 1938 article entitled “Princeton Boys Dress In Uniform,” LIFE Magazine writes, “The fact of the matter is that tailors and haberdashers watch Princeton students closely [and] admit they are style leaders.”

Clothiers also made sure college men knew they cared deeply about student tastes. This ad by Irv Lewis, a clothier serving Cornell, explicitly elucidates the relationship:

“The key element of successful campus shops,” Richard Press summarizes, “was their ability to establish personal relationships with students, faculty, coaches and administration. Brooks Brothers in New York and Boston was too diffused, and while each top customer had his clothing man, it changed from floor to floor, from furnishings to shoe department.”

College Students, “The Best-Dressed Men To Be Found Anywhere”

Bert Bacharach states that before World War II many clothing experts considered college students “the best-dressed men to be found anywhere.” The following passage, from a 1933 Apparel Arts article entitled “Clothes For College,” is a prewar reference to this very thing:

Today the college man is looked upon as a leader of fashion, a man who dresses inconspicuously and correctly for all occasions, thanks to the leadership of smart Eastern Universities, which have a metropolitan feeling, or at least are near enough to metropolitan areas for the students to feel all the influences of sophisticated living. We can thank the present-day “collegiate” element for the return to popularity of the tail coat, for the white buckskin shoes, for the gray flannel slacks with odd jackets, and for various other smart fashions which are typical of university men today.

For on-campus wear there is a general acceptance of country clothes in the typical British manner, such as odd slacks and tweed jackets, country brogues and felt hats. This is the way the undergraduates at smart Universities and prep schools dress today during classes.

Another Apparel Arts article from the same year shows that the Eastern Establishment virtues of being dressed down from a formal perspective and dressed up from a casual one most likely have their origin in the collegiate approach to dress that reached fruition in the ’30s. The article includes the quote “a perfect example of the studied negligence that is taken as the standard of good taste among college men,” and goes on to say:

The American University man is justly famed for representing, as a class, a high standard of excellence in personal appearance. Much of the secret of this distinction lies in the fact that the first thing the freshman learns is the importance of never looking “dressed up,” while always looking well dressed. Recently the tendency toward an effect of “careful carelessness” has been emphasized through the trend toward rough, almost shaggy, fabrics for town and campus wear.

The Ivy League Look’s emphasis on rough, hearty fabrics comes from students’ penchant for rustic, country clothes over more starched and pressed town clothes:

There’s a trend toward rougher suitings on all the eastern campuses. Early last fall fashion observers reported the growing popularity, particularly at Princeton and Yale, of rough tweedy type fabrics for all general knock-about campus wear — in fact for all except strictly town purposes. Worn smartly with either flannel, gabardine or other type of slacks, these rough fabrics of the Shetland or Harris variety showed a considerably increased acceptance on the part of the fashion leaders during the Palm Beach season.

Writing in the Saturday Evening Post in the 1930s, Arthur van Vlissingen states that trends aren’t dictated by manufacturers, who couldn’t afford to gamble on a fad that may fail, and that men only embraced a new item once they saw other men wearing it. These style setters were often found “at the places where the country’s leisured and socially prominent loaf, such places as Palm Beach and Newport” (coincidentally Brooks Brothers’ first two locations outside New York), and the college campus. “The fashions in clothing worn by our male population, between the ages of 14 and perhaps 25,” he writes, “usually get their start at Princeton.”

Vlissingen proceeds with the following sartorial breakdown of the Ivy League’s Big Three:

Harvard is a very large university, in a great city which influences the students’ styles heavily. [But] it holds to a tradition of careless dress—well-made clothes seldom dry-cleaned and never pressed. Yale is more compact and more finicky, but New Haven is also a large city. Princeton is in a smaller town, off by itself where it can incubate a style effectively. Practically every Princeton student is well dressed, whereas only one-third or so of the Yale men can qualify by our standards.

As these passages illustrate, if college men of the 1930s — the fortunate few able to afford school in the midst of the Great Depression — were among the nation’s best dressed, they achieved this status despite an insistence on never looking too dressed up by the standards of their time. Elements of the Ivy League Look, such as the penny loafer and polo coat, were embraced into the genre because, compared to other footwear and outerwear options, they were relatively casual. This certainly holds true for the buttondown shirt, which Bacharach calls the shirt of choice for college men because “the construction of the shirt, which allows the collar to roll rather than lie flat, provides the casual touch which young men like.”

In regard to tailored clothing, Bacharach suggests that the prized Ivy color of charcoal was embraced for its ability to take a beating without looking dirty:

The most important style set by the colleges in recent years has been suits and slacks in charcoal, a gray so dark in tone that it approaches black. This color has become almost a uniform at Harvard, Yale and Princeton. It is practical for a suit since it rarely shows dirt or signs of wear.

If men at Ivy’s Big Three were style setters for the whole nation, that can hardly be said of Columbia, the most interesting sartorial case among the Ancient Eight. For despite its location in the city of Brooks Brothers, Columbia is seldom if ever mentioned for style reasons. As a commuter school, Columbia drew a student body unlike those of the other schools, but one can also conclude that a certain amount of distance from the metropolis was necessary for the styling side of the Ivy League Look to flourish.

This passage from Tobias Wolff’s novel “Old School,” set at a prep school in 1960, serves as a dramatization of how Columbia was viewed compared to the other Ivies:

I wanted out. That was partly why I’d chosen Columbia. I liked how the city seethed up against the school, mocking its theoretical seclusion with hustle and noise, the din of people going and getting and making. Things that mattered at Princeton or Yale couldn’t possibly withstand this battering of raw, unironic life. You didn’t go to eating clubs at Columbia, you went to jazz clubs. You had a girlfriend — no, a lover — with psychiatric problems, and friends with foreign accents. You read newspapers on the subway and looked at tourists with a cool, anthropological gaze. You said cross town express. You said the Village. You ate weird food. No other boy in my class would be going there.

In contrast, “Princeton was especially isolated and characterized by a particularly fervent and insular culture,” writes Patricia Mears in “Ivy Style: Radical Conformists.” Princeton also had the most affluent student body, with 80 percent coming from private schools during the inter-war years. “Although it lay part way between New York City and Philadelphia, Princeton was more geographically isolated than its rivals Harvard and Yale. Its campus was situated in a rural environment, surrounded by acres of bucolic farmland. As such, Princeton relied more intensely on its internally crafted society. The blend of wealth, manners, and aristocratic social construct proved to be the breeding ground for the creation of the elegant Ivy style.”

The Way You Wear Your Hat

The popular term employed during its heyday, the Ivy League Look, is interesting for its inclusion of the word “look.” While there are references to “an Ivy League suit” from the period, the popular term was “look,” not “tailoring” or “clothes.” This broader term suggests that there is more than just clothing involved, but also a proper haircut, and if not a particular social context, then at least all-American good looks. In the 1964 film “Ride The Wild Surf,” Barbara Eden’s character refers to her love interest as “Mr. Ivy League” for his handsomeness, poise and “scrubbed” appearance as much as his conservative clothing.

“Look” is also broad enough to encapsulate how the items are worn, since that is as much a part of dressing in a certain style as the components themselves. This illustration from a 1926 Vanity Fair article on collegiate dress includes a caption stating that Harvard men had their own way of pushing their hats “into a shape never conceived by hat manufacturers”:

Hawes includes several passages attesting to Harvard men’s predilection for an affected Old Money look:

At Harvard they have something called “white-shoe boys.” I gather it is okay to be one if you feel that way. It appears to be the Harvard idea carried to its furthest extreme. These are the sloppiest and worst-dressed of all the Harvard men, I was told. They wear dirty black and white shoes which turn up at the toes, black or white socks and gray flannels, very unpressed, tweed coats — and collars and ties, of course… The thing that distinguishes a “white-shoe boy” is his shoes — and the fact he has the guts to wear them and still feel okay socially.

In 1869 Harvard challenged Oxford to the first of its boat races, and it’s possible that the English influence on Harvard goes back to these sporting competitions. Hawes continues:

The coat should have leather pads on the elbows. These are often put right onto new coats. This is because the country gentlemen of old England have a habit of preserving their tweed coats for generations, mending them from time to time with leather pads and what not. The Harvard boys, not to be outdone by old English exponents of the finer things in life, are going them one better.

After noting that Yale students are much better dressed, Hawes adds, “I think the superiority complex of Harvard probably led them originally into the oldest clothes as a form of snobbishness.” Nevertheless, “I might add that the [men’s wear] trade does not consider Harvard as any source of style ideas at all.”

Russell Lynes’ 1953 Esquire article on the “shoe hierarchy” at Yale further emphasizes how much of the Ivy League Look came down to the elusive qualities of attitude:

… the social smoothies — butterflies in button-down collars — short haired, unbespectacled and with unextinguishable but slightly bored smiles. They wear the current college uniform, Ivy League version, but they wear it with an air of studied casualness, as though they would be at home and socially acceptable anywhere in whatever they had on. The uniform, of course, is the familiar khaki pants, white bucks, or possibly dirty white sneakers, a slightly frayed blue or white button-down Oxford shirt, no necktie, and a grey sweater which the wearer expects you to assume was knitted for him by a girl. On occasions that demand a gesture of formality, dark grey flannels without pleats supplant the khaki pants, a necktie (either regimental stripes or club tie) is worn, and so is a tweed jacket with vent, pocket flaps, ticket pocket, and three buttons. For bucks substitute well-shined cordovan in season. For city wear the uniform is a dark grey flannel suit; the haberdashery stays much the same.

Charlie Davidson also stresses what he calls the “attitude” long associated with wearers of the Ivy League Look, which he describes as a nonchalant approach to dress combined with poise and an air of self-assurance. Whether this poise is real or feigned is up for debate. “The Ivy League Look was a way of life more than anyone has been able to put a finger on,” he says. “In the beginning it was a very closed kind of thing, and so much of it was the attitude of not caring too much and being very assured of their station — and of having the right clothes.”

From the codifying period of the ’30s to the heyday of the ’50s and ’60s, the styling component of the Ivy League Look was constantly changing with each new group of classmen. For a young man to be considered well dressed by his peers in the ’30s or cool in the ’50s, it wasn’t enough just to choose the right items. They also had to be worn in the way that was then fashionable. And what was fashionable was always shifting, and emanated from campus culture.

For example, on page 59 of the 1965 book “Take Ivy,” a student strolls the Princeton campus wearing olive-colored shorts, penny loafers with no socks, and a buttondown oxford with the sleeves down, all topped by the neat haircut that epitomizes the era. He has used the ingredients of the genre but put them together in a way that expresses both his personal whims and the style of his era, and nothing in the image suggests that a retailer, manufacturer or fashion editor told him to put together his outfit this way.

For a cinematic dramatization, the 1956 film “Tea And Sympathy” shows students styled uniformly in a combination of buckle-back khakis, white canvas sneakers, blue oxford shirts and gray crewneck sweatshirts. For that group of students in that particular location at that particular time, the juxtaposition of a dress shirt with a piece of athletic wear was evidently a style imperative.

This leads us to yet one more inexplicable preference in the Ivy clothing genre worth mentioning: The crewneck sweater. While V-necks and cardigans were always offered by Ivy clothiers, somehow the crewneck became the standard cut, even when worn with a necktie, as the Yale student below demonstrates:

It was something the youngsters picked up early; this outfit is also notable for how the components are put together as much as the items themselves:

It should come as no surprise that the preference for the crewneck can also be traced to style-setting at Princeton, where a freshman orientation guide, for reasons unexplained, admonished the younglings not to wear V-neck sweaters. Much later, in his 1983 book “Class,” Paul Fussell would wryly explain why the crewneck is upper middle and the V-neck merely middle.

The Ivy League Look should not be thought of as merely a collection of ingredients. Equally important are the cultural forces that led certain ingredients to be embraced into the genre over others, even though this importance is difficult to trace, clouded as it is in the mists of fashion. Then there’s the element of how the items were worn, an equally vital element of the Ivy League Look. All the elements are a reflection of the tastes and cultural values of the Eastern Establishment, and the tastes and values, specifically, of college men during the interwar years.

The Legacy Of The Heyday

The 1959 movie “The Young Philadelphians” provides a helpful dramatic illustration of one character’s transition from country to town, or from campus to law firm, while still dressing within the confines of the Ivy League Look.

In campus scenes the protagonist, played by Paul Newman, wears a boxy corduroy sack jacket, slim flood-length khakis, white socks and penny loafers. Once he becomes a practicing lawyer, he dons a conservative gray suit, rep tie, pinned-collar shirt and lace-up shoes. While both jackets are undarted and natural-shouldered, and all his clothes could have come from the same place, stylistically — in the simplest terms — he’s gone from the campus side of the genre to the Brooks Brothers side, or from the styling-driven side to the product-driven side, or from an emphasis on how to wear the items correctly to how to select them correctly.

The book “Generations Of Style” includes a Brooks Brothers timeline, and while the listing for 1961 is oversimplified, it nevertheless makes the point that the campus-oriented side of the genre is the more lasting and influential: “A new style of casual, conservative dress defines the country: khakis, Shetland crewnecks, and button-down shirts set the tone… Campus style predominates, with the corporate ‘Man in the Gray Flannel Suit’ now being replaced by the more casual dress: penny loafers, Argyle socks, and tartan plaid sportcoats and shirts.”

Today, when a man passes you on Madison Avenue and you notice how “Ivy/preppy/trad/whatever” he looks, he’s probably wearing loafers, flannels, a three-button sportcoat, buttondown oxford, and conservative necktie. You’re far more likely to see a man dressed this way than in the anachronistic business ensemble of worsted gray sack suit, white pinned club collar and longwings, and if you did, you’d be more likely to say “how IBM” or “how ‘Mad Men’” than “how Ivy League.”

The association of the Ivy League Look with the campus is so strong that even in the downfall year of 1967 an arch-sybarite like Hugh Hefner would remind his biographer of a dapper undergrad:

Black-haired, intense, slightly under six feet, he looks, in his often-photographed costume of white button-down shirt, orange cardigan sweater, slacks, loafers and pipe, like a college senior on his way to class.

Men who wear this genre of clothing today — by whatever name they call it — owe an equal debt to the illustrious firm of Brooks Brothers for introducing so many of the raw elements, and to the countless anonymous college men from the first half of the 20th century who codified the components of the Ivy League Look for future generations.

Part Two: The Fall

From Young Men’s Clothes To Old Men’s

In “The Decline of the West,” Oswald Spengler argues that all cultural expressions go through the organic stages of birth, maturity and decadence. The Ivy League Look is certainly an expression of culture, and for it I’d suggest a birth of 1895, a golden age in the 1930s when the style was limited and aristocratic, a democratic silver age during the ‘50s and ‘60s when it was popular, and an end to the silver age in 1967, followed by a gradual decline into our present postmodern era.

This decline was expressed in a variety of ways, and the legacy of the genre is characterized by a range of conflicting manifestations, from the irrelevance of contemporary J. Press and the sack suit, to the generic “timelessness” of blazers, khakis, buttondowns and striped ties available from retailers as mundane as Lands’ End, and to fashion industry pastiche exemplified by some of the more outré items by Thom Browne, Ralph Lauren Rugby, and various neo-prep brands.

If the Ivy League Look didn’t die, then certainly a kind of descent into decadence occurred, attested by the mere fact that Brooks Brothers, instigator of Ivy’s big bang with the No. 1 Sack Suit, no longer offers the very item that gave birth to the entire genre, but instead sells a fashion-novelty version called the Cambridge.

Furthermore, Brooks Brothers and J. Press long ago changed owners and merchandising strategies and can no longer be counted on to reliably provide what were once genre-distinguishing traits such as natural shoulder and collar roll.

But the death of Ivy can’t be blamed entirely on manufacturers, who simply cater to the needs of the culture as expressed in the marketplace. The Ivy League Look was once a vibrant, dynamic style that was an expression of the values of the Eastern Establishment. Later it was good, smart, current taste for a larger portion of the population. If Ivy is no longer available today in its original form, it is because fashion, which reflects society, has changed. The inversion of values that took place during the cultural revolution of the late ’60s, a topic that has been explored exhaustively by cultural historians and which is too big to discuss here, created a new cultural engine that drove fashion from the bottom up rather than top down.

While in the ’50s and early ’60s many actors and pop singers wore the Ivy League Look as a smart and current style, this was no longer the case after the upheaval of the late ’60s. When pop singers did take up a version of the look, as Dexys Midnight Runners did in 1985, it was the preppier version of the look then current, and it was merely the temporary costume of entertainers who had radically different looks before and after. In the 1950s and ’60s, pop icons could wear white bucks, buttondowns, neckties and soft-shouldered jackets and come across as sharp and with it. But with contemporary music groups such as Vampire Weekend, or in the films of Wes Anderson, Ivy staples come across as ironic.

A glance through “Take 8 Ivy,” the sequel to “Take Ivy,” shows Ivy League students of the 1970s wearing the same plebeian sneakers, jeans and t-shirts worn by every other young person in America.

In assigning an arbitrary date for the end of Ivy, I suggest the year 1967. The change that occurred that year — the year of the infamous “Summer of Love” — is summed up tersely and dramatically in the following passage from “The Final Club” by Geoffrey Wolff (Princeton, ’59). The year 1967 witnessed a sartorial dismantling that was complete by 1968, when a new era was in full flower-child bloom:

Lining the second-floor hall were group portraits of Ivy members, and Nathaniel paused to examine them. Till 1967 the club sections were photographed indoors, in the billiard room; dress was uniform — dark suits, white shirts, Ivy ties. In 1967 a white suit was added here, an open collar there. In 1968 the insolent, smirking group moved outside, and was tricked out in zippered paramilitary kit, paratroop boots, tie-dye shirts, shoulder-length locks, and not a necktie in view.

The broader culture was changing rapidly and the hippie movement was spreading, but new open admissions standards at elite universities were also changing the student body. Style-setting schools such as Princeton and Yale were no longer populated predominantly by kids who had gone to prep school, where they were forced to wear a jacket and tie every day and maintain a neat haircut. Schools were also dropping their jacket-and-tie dining hall dress codes. It’s impossible to overstate the pace of social change in the late ’60s; the Ivy League Look, in its original guise, was slated for extinction, and the name attached to it during its popular silver age would fall into almost immediate archaism.

But what’s most important here is that once the Ivy League Look ceased to be fashionable on campus, it ceased to be fashionable period. More specifically, one could argue that once guys at Princeton stopped wearing it, it was over. The campus had always been the stronghold of the look, the place where it flourished for six decades, and was necessary for the look’s broader cultural relevance. Smart young men from the middle class and above had wanted to dress this way for 50 years. Originally it was a small number; later it was larger. Now suddenly no young people wanted to dress this way.

Other symbolically interesting things also occurred in 1967. Brooks Brothers’ president left the company after serving 21 years, spanning the entire Ivy heyday, and Ralph Lauren went into business. These two events are like two sides of the same coin: the man who helmed Brooks Brothers throughout its glorious postwar heyday retired just as Ralph Lauren launched his career. It’s an eerie foreshadowing of the role reversal that would happen over the ensuing decades, during which so much of Lauren’s merchandise would be closer in spirit, style and quality to classic Brooks Brothers than Brooks Brothers’ contemporary merchandise.

Within a few years of 1967 the UPI was calling the look dead, as in this story from 1971:

The Ivy League look as it used to be called died in the recent fashion revolution and the slope-shouldered, three-button jacket is almost a thing of the past. The suits and sports jackets being worn are strictly for special occasions.

Once it was no longer fashionable, the Ivy League Look, to return to the big bang metaphor, experienced a kind of supernova that shattered it into parts, which varied depending on wearer and context.

J. Press and Brooks Brothers continued, yet their clientele would gradually grow older as the look ossified from being young and current to being old and stodgy. J. Press stayed truer to the look, but as society changed rapidly around it, the firm experienced a complete inversion in its relation to the broader culture, becoming what most would consider a provider of old men’s clothes, when from its founding in 1902 until 1967 it had catered largely to young men.

The Twilight Of Ivy And Dawn Of Preppy

Some young people did continue to shop at the same clothiers and wear much of the genre’s items, but fashion was changing rapidly and the new version of youthful, Eastern Establishment style came to be known as preppy. The new generation had a much more casual approach to dress, reflecting changes in society as a whole. This passage from Alison Lurie’s “The Language Of Clothes” from 1979 shows how many of the Ivy League Look’s sportier items were being worn with a new attitude:

What distinguished the Preppie Look from the country-club styles of the 1950s was the range of its wearers. These casual garments were now being worn not only by adolescents in boarding schools and Ivy League colleges, but by people in their thirties and forties, many of whom would have considered such styles dreary rather than chic a few years earlier. Moreover, the Preppie Look was now visible in places and on occasions that in the 1950s would have demanded more formal clothing. Preppies of both sexes in madras check shirts and chino pants and Shetland sweaters could be seen eating lunch in elegant restaurants, in the offices of large corporations and at evening parties-as well as in class and on the tennis courts.

During the preppy ’70s, just as it had been previously, styling and the items themselves were equally important. Lurie notes that the preppy look was distinguished as much by its items as by their combinations, which included novel layering tricks such as jersey turtlenecks or polo shirts worn under oxford buttondowns, accented by a sweater draped around the neck.

As WASPs were gradually losing their stranglehold on power and influence, becoming shameful reminders of old boys’ club elitism, their taste and lifestyle were beginning to be fetishized and marketed. In 1980 Lisa Birnbach released her detailed look into the culture of the preppy Northeastern upper-middle class, “The Official Preppy Handbook,” and the book so fascinated the nation it became a best-seller. At the same time the rise continued for Ralph Lauren, the doppelganger figure who can be seen both as saving the Ivy League Look from extinction by keeping alive the taste for it, albeit repackaged as fashion, and as commodifying totems of what were once expressions of culture. In “Taste: The Secret Meaning Of Things,” Stephen Bayley suggests that some kind of cultural line had been crossed following the fall of the Ivy League Look and the advent of postmodern, post-Ivy consumerism:

Ralph Lauren was after what Brooks Brothers once had, but packaged it more effectively so as to anticipate, appeal to and satisfy hitherto unrecognized longings among consumers. Interestingly, his critics (easily outnumbered by his happy customers) invoke arguments against him which echo the sumptuary laws of Renaissance Florence and England: “How does a working-class Jew from Mosholu Parkway dare pass off the tribal costumes of the Ivy League as if he owned them?”

Each fall season Ralph Lauren continues to pay tribute to the Ivy heyday with a few retro replicas. These typically tweed sportcoats come with such distinguishing Ivy details as natural shoulders, 3/2 rolls, patch pockets, swelled edges and lapped seams. However, they differ considerably from the kind of quotidian mufti once available at the Yale Co-op in that they have darted chests and carry a $1,300 price tag.

The other fragments that resulted from Ivy’s supernova are the category of vintage clothing anachronism, in which guys with hip sensibilities seek out heyday specimens prized for their authenticity, and the postmodern parody category, in which fashion designers (not haberdashers or merchandisers, the previous creators of the products) take the classic grey sack suit and turn it into a cartoonish gimmick, as in the case of Thom Browne.

Ivy-Style.com’s readership reflects this broad range of motivations for wearing the style, from the J. Press-clad fuddy duddy to the updated traditionalist in Ralph Lauren tweeds and flannels, and from the prep-with-a-twist fashion guy in Gant to the midcentury retro-eccentric dressed head to toe in vintage. It’s a perfectly postmodern incohesive mishmash of taste, temperament and social background all able to find in this genre of clothing something that resonates.

A Rose By Any Other Name

As the Ivy League Look fell into its death throes of cultural relevance, its name became immediately old fashioned. Originally the look doesn’t seem to have had a name at all; “natural shoulder” seems to have been the closest thing to one actually used by clothiers and their customers. The assiduous reporting by the media in the 1930s on what guys at Princeton were wearing is noteworthy for its detailed descriptions of the clothing combined with the complete lack of any attempt to give the style a name. A typical Apparel Arts headline would read “university fashions” or “campus wear.”

The term “Ivy League Look” came into popularity in the ’50s, perhaps entering the popular lexicon as the result of LIFE Magazine’s 1954 story “The Ivy Look Heads Across US.” After 1967, once the clothes ceased to be fashionable, the term certainly became archaic. Fortunately a new word — for the broader culture — arrived at just the right time to describe the latest version of the youthful Northeastern upper-middle-class look. “Preppy,” which entered the popular vocabulary in 1970 via the hit film “Love Story,” had a fresh ring to it.

Since its fashion moment in the ’80s, the term “preppy” has become gradually watered down to the point of meaninglessness, with almost no connection to the style and values of the people it described in 1970. Yet despite the efforts of the MFIT’s “Ivy Style” book and exhibit, not to mention Ivy-Style.com, preppy remains closer to the tongue, however bitter it tastes, than “Ivy League” when describing this genre of clothing. If you see someone walking down the street dressed head to toe in J. Press, says Charlie Davidson, “you wouldn’t even say he looks very Ivy, you’d say he looks very preppy, or something like that.”

The struggle over what to call the post-Ivy remnants of the genre in a way that doesn’t sound girly, as preppy does today, or archaic and elitist, as does the Ivy League Look, accounts for the adoption in certain quarters of the term “trad.” On the surface trad sounds like a snappy and contemporary replacement, but with no historical tradition behind it, the term quickly devolved into a futile exercise on Internet message boards, with endless debates over what qualified as trad and what didn’t, each opinion more subjective than the last.

It’s worth noting that in Japan and England, where the clothes were not an expression of their own dynamic and changing cultures, the clothes continued to be called “Ivy,” and much of the styling remained frozen in its heyday form.

With the Ivy League Look reaching full fruition in the 1930s and ending as a current and relevant fashion in 1967, its full flowering spans just three decades. Indeed, more years have passed since the end of the heyday than elapsed between the look’s codification and the heyday’s end.

The golden age was the 1930s, when the look was only available from a small number of clothiers and worn by a relatively small number of men. By 1957, in the middle of the silver age of widespread popularity, the look was already considered to be in decline by the old guard. In the April 7, 1957 edition of Town Topics, Princeton’s community newspaper, Princeton-based clothiers lamented a slide in formality among the student body. “You’ve got more of a cross section now,” concluded Joseph Cox of Douglas MacDaid, “not so many rich kids.”

The mass popularity of Ivy during the heyday, with all of the department store knock-offs that Richard Press likes to dismiss as “Main Street Ivy,” actually held within it the seeds of the look’s demise. For fashion is fickle, and Ivy fell from mainstream popularity into irrelevance practically overnight. While it’s true that the establishment was abandoning the look, at least among its younger members, it’s also the case that the middle class no longer had the desire to ape the establishment, at least not overtly. Brooks Brothers and J. Press stuck to their guns as much and as long as possible, watching their clientele slowly ossify, while Main Street clothiers quickly changed with the winds of fashion.

However, the silver age also cemented Ivy’s legacy in the “classic” and “timeless” sense. It continues — by whatever name and in iterations that conform with contemporary style — to be worn by anyone with the taste for it. And good taste should be available to anyone with the sensibility to appreciate it. Natural-shouldered tweed jackets, grey flannel trousers, oxford-cloth buttondowns, rep and knit ties, argyle socks, tassel and penny loafers, polo coats, Shetland sweaters, side-parted haircuts and horn-rimmed glasses still carry all the baggage, good and bad, that this Northeastern, upper-middle-class, “Ivy/preppy/trad/whatever” look will always have.

The farther you go into postmodern parody, of course, the less baggage the look carries, because in this case it’s just fashion, which is another way of saying it doesn’t mean much. But the straight-up wearer of the Ivy League Look, who projects his natural shoulder and rolled collar with utmost

Thirty years ago, The Official Preppy Handbook cracked the Wasp code-and went on to become a huge best-seller. In an excerpt from the update, True Prep, the author, along with designer Chip Kidd, covers the inevitable changes that are piercing blissful bubbles from Deer Isle to Jackson Hole.

Lisa Birnbach

Illustration by Jean-Philippe DelHomme

The Atlantic

August 11, 2010

PREPPY FASHION RULE NO. 1 We wear sportswear. This makes it easier to go from sporting events to social events.

Wake up, Muffy, we’re back.

O.K., now where were we?

Oh yes. It was 1980, and Ronald Reagan was heading to his improbable victory over Jimmy Carter. We wondered whether joining a club before your 30th birthday made you into a young fuddy-duddy, we considered the importance of owning a dress watch—one thing led to another, and before the year was over, our project became … The Official Preppy Handbook. Yes. That was us. We enjoyed every minute that we still remember, but we seemed to have misplaced a number of brain cells in the process.

Though we maintained that this world has changed little since 1635, when the Boston Latin School was founded, you knew we were exaggerating slightly. And as our world spins faster and faster and we use up more natural resources, and scientists keep finding more sugar substitutes, we have to think about how life in the 21st century affects our safe and lovely bubble.

Muffy van Winkle, you’ve napped long enough. It’s been 30 years! It doesn’t seem possible, does it? Despite changes and crises, the maid quitting, running out of vodka, your NetJets account being yanked, and the Internet, it’s still nice to be prep.

And as we have gotten a bit older and a teensy bit wiser, the world has become much smaller. We are all interconnected, intermarried, inter-everything’d. The great-looking couple in the matching tweed blazers and wide-wale orange corduroy trousers are speaking … Italian. On Melrose Avenue! Whereas once upon a time it was unlikely Europeans would be attracted to our aesthetic, now they’ve adapted it and made it their own. (They’re the women with no hips, in case you were wondering.)

Let’s begin at the beginning of the year. Here are our resolutions. You’ll catch on.

No drinking at lunch.

Call Grandmother once a week.

Get Belgian shoes re-soled (thinnest Cat’s Paw rubber).

Sign up for tennis team at the club.

Actually go to team practices.

Have gravy boat re-engraved.

Find Animal House and return to Netflix.

Send in donation for class gift this year.

And send in write-up for class notes.

Finally use Scully & Scully credit—maybe Pierpont’s next wedding?

Drive mother to cemetery at least once this year.

Order new stationery before supply runs out. (Find die!)

Luggage tags!

Download phone numbers into the thingy.

New Facebook picture?

Work on goals.

Work on topspin.

Get Katharine to do community service somehow.

Clean gutters or get someone to do them.

Repair hinge on broken shutter. Or else!

Finally hire portrait artist for Whimsy. (She’s 84 in dog years; not much time left.)

Who We Are Now

Formerly Wasp. Failing that, white and heterosexual. One day we became curious or bored and wanted to branch out, and before you knew it, we were all mixed up.

Well, that’s the way we like it, even if Grandmother did disapprove and didn’t go to the wedding ceremony. (Did she ever stop talking about the “barefoot and pregnant bride”? Ever?) And now one of our nieces, MacKenzie, is a researcher at the C.D.C. in Atlanta and is engaged to marry the loveliest man … Rajeem, a pediatrician who went to Duke. And Kelly is at Smith, and you know what that means. And our son Cal is married to Rachel, and her father the cantor married them in a lovely ceremony. Katie, our daughter, is a decorative artist living in Philadelphia with Otis, who is a professor of African-American studies at Swarthmore. And then there’s Bailey, our handsome little nephew. Somehow, all he wants to do is ski, meet girls, and drink beer.

Well, there’s one out of five.

Fashion Rules

We know that many of you understand the principles of preppy style. But just to be sure, let’s review them again.

We wear sportswear. This makes it easier to go from sporting events to social events (not that there is much difference) without changing.

We generally underdress. We prefer it to overdressing.

Your underwear must not show. Wear a nude-colored strapless bra. Pull up your pants. Wear a belt. Do something. Use a tie!

We do not display our wit through T-shirt slogans.

Every single one of us—no matter the age or gender or sexual preference—owns a blue blazer.

We take care of our clothes, but we’re not obsessive. A tiny hole in a sweater, a teensy stain on the knee of our trousers, doesn’t throw us. (We are the people who brought you duct-taped Blucher moccasins.)

We do, however, wear a lot of white in the summer, and it must be spotless.

On the other hand, your watch doesn’t have to be the same metal as your jewelry.

And you can wear gold with a platinum wedding band and/or engagement ring.

Men’s jewelry should be restricted to a handsome watch, a wedding band if he is American and married, and nothing else. If he has a family-crest ring, it may be worn as well. For black-tie, of course, shirt studs and matching cuff links are de rigueur.

Nose rings are never preppy.

Neither (shudder) are belly-button piercings.

Nor are (two shudders) tongue studs.

And that goes for ankle bracelets.

Tattoos: Men who have been in a war have them, and that’s one thing. (Gang wars don’t count.) Anyone else looks like she is trying hard to be cool. Since the body ages, if you must tattoo, find a spot that won’t stretch too much. One day you will want to wear a halter-necked backless gown. Will you want everyone at the party to know you once loved John Krasinski?

You may wear a Harvard sweatshirt if: you attended Harvard, your spouse attended Harvard, or your children attend Harvard. Otherwise, you are inviting an uncomfortable question.

If your best friend is a designer (clothes, accessories, jewelry), you should wear a piece from his or her collection. If his or her taste and yours don’t coincide, buy a piece or two to show your loyal support—but don’t wear them.

Every preppy woman has a friend who is a jewelry designer.

No man bags.

Preppies don’t perm their hair.

Preppy men do not believe that comb-overs disguise anything.

You can never go wrong with a trench coat.

Sweat suits are for sweating. You can try to get away with wearing sweats to carpool, to pick up the newspaper, or to drive to the dump, but last time you were at the dump, the drop-dead-attractive widower from Maple Lane was there, too.

And finally:

The best fashion statement is no fashion statement.

Logology

Sometime in the 1980s the cart began leading the horse. Don’t look at us; preppies were certainly not to blame. Fashion followers mistakenly thought the logo was the point. (This is the place at which we would write “LOL,” except we loathe “LOL.”)

But wearing a logo-laden outfit or accessory points to the wearer’s painful insecurity. If you think you are being ironic, think again.

Here’s the rule of thumb: The first logo that preppies loved was the Lacoste crocodile. It belonged to the French tennis star René Lacoste, whose nickname was Le Crocodile. It was authentic, since he himself wore la chemise in 1927, after having been the top tennis player in the world in 1926 and 1927. (He won seven grand-slam singles titles in France, Britain, and the U.S. In 1961 he also invented the first metal tennis racket, which was sold in this country as the Wilson T2000.)

The shirts, made by La Société Chemise Lacoste, became an international sensation in 1933. Initially they had long tails, crocodiles of 2.8 centimeters in width, and embroidered labels with the size in French: Patron, Grand Patron, etc. There was no need (not then nor now) to change the size of the beast.

Fred Perry, the British tennis champion of the 1930s, put his laurel-wreath logo in blue on white polo shirts in 1952 (a few years after inventing the sweatband). Fred Perry shirts were successful immediately.

Brooks Brothers introduced its golden-fleece logo as the company symbol in 1850, but, for casual sport shirts, they sold the Chemise Lacoste until the 1960s. Then they stopped selling Lacostes and segued into men’s polo-style shirts with the golden fleece embroidered. Until 1969, the sheep suspended by golden ribbons was made only in men’s sizes.

Ralph Lauren was already making men’s wear when, in 1971, he embroidered a little man astride a polo pony on the cuff of some women’s shirts. The ponies, 1 1/4 inches high, moved onto his many-colored cotton polo shirts in 1972. The logo, now one of the world’s best known, somehow grows to as much as five inches high (“BIG PONY”), though it sometimes stays small.

Vineyard Vines’ little pink whale appeared in 1998, and so far the whale has shown admirable restraint in staying 1.05 inches wide by 0.43 inches high (as per the universal style guide).

When labels began to understand the strong appeal their logos offered, they went wild. Gone were the subtle stripes, woven ribbons, tiny metal trademarks, and interior decoration that had been prized. Now the logos took growth hormones, and there seemed nothing too big or too crass to sell. Today’s customer is more discerning and somewhat disgusted. Removing logos has become something of a hobby for purists.

When Juicy Couture arrived, emblazoning bottoms with the word “juicy” on its pricey sweatpants, we were dismayed when our daughters thought they wanted them. We steered them back to sanity. We believe that the Juicy Couture tracksuit phenomenon signals the end of civilization as we know it. Nothing less.

The Biggest Change in 30 Years

If, in 1980, you had whispered to friends that within the next few decades America would elect a thin, black, preppy, basketball-playing lawyer to be president, they would have laughed at you and exhaled in your face, inside the restaurant or club where you were sitting. And, if you predicted that one day all our children would have little portable phones stuck in their pockets so that they could not answer us when we called them from our little phones, we would have again exhaled in your face—indoors—and said you were talking science fiction.

Still, to our minds nothing is more sci-fi than the fact that preppies in the 21st century all wear the unnatural fibers we collectively refer to as “fleece.” We always thought our reliance on natural “guaranteed to wrinkle” fibers was our right and our trademark. If it’s hot or humid, we’d just roll up our all-cotton long-sleeved shirts. But now we wear polyester fleece, and its offspring, fleece spun from recycled water bottles.

The revolution began in 1981, at a company then called Malden Mills in Lawrence, Massachusetts, manufacturers of textiles including the wool for uniforms in World War II. A place like Malden Mills is populated by textile engineers who spitball, “mess around with fabrics,” and then refine, according to spokesman Nate Simmons. They work collaboratively with clothing manufacturers, as they did in this case with Patagonia. What came off the looms in the early 80s was pure synthetic, soft, quick-wicking, quick-drying, and machine-washable. It did not fade, and changed the wardrobes of athletes forever. Its Malden name was Polarfleece; its Patagonia name was Synchilla.

Frugal Dos and Frugal Don’ts

Do keep repairing old appliances to try to extend their lives. Don’t store them on your front porch or driveway. Invest in great-fitting, well-made shoes. (Italian-made shoes are nice.) Your feet will thank you. Keep re-soling them. Subscribe to a concert, opera, or ballet series. Buy season tickets to basketball. Pairs of tickets you can’t use make great no-occasion gifts. Some nonprofit institutions accept them as tax-deductible donations. Buy very cheap plane tickets to Europe on discount Web sites. Stay at your friend’s grand villa for three weeks. Oh, make it four. Buy him a house gift and pay for dinner a couple of times. Let him win one tennis match every now and then. Complain about the heat.

Have your trustee dump an allowance in your checking account every month. Walk seven blocks out of your way (or drive, if necessary) to the A.T.M. of your bank, so you are not charged that extra $1.95–$3.00 withdrawal fee. Leave the office a little early to take the off-peak commuter train. (Even though you live in one of the 10 most affluent Zip Codes in the United States.)

Travel

We travel, and we’re rather good at it. Some of us have traveled from a very early age, even if it’s been just back and forth between Princeton and Newport. We may travel to see relatives, to take a semester away, or to go to rehab. We go to Europe because it’s there, and there is so very much to learn from Europeans.

In Europe, we learn how to kiss people on both cheeks, how to do math when we convert the dollar into the euro, and how to make ourselves understood in adverse conditions. We get to practice the little bits of foreign languages we’ve retained from school, and to see that Italian men can carry off the sweater-around-their-shoulders look easily.

Thou mayest fly business class if thy destination is more than five hours away.

On board, the wine will not be fine; therefore drinkest beer or spirits.

Naturellement, thou never wearest shorts, sweatpants, or flip-flops on an airplane, and thou shalt attempt not to sit next to a miscreant in such garments.

If thou takest a sleeping pill, thou must try not to snore, Pookie.

Thou must not complain about jet lag.

Thou must take loads of photographs.

Thou art encouraged to rent cars in strange places and get into colorful misunderstandings with local drivers.

If there is a Harry’s Bar at thy destination, thou shalt eat there. (Try the carpaccio and the cannelloni.)

Exotic locations are to be encouraged.

Thou must not try to lose thy passport, but, indeed, it could happen, and will provide dinner-table fodder for many happy years to come.

Although thou art traveling in order to “broaden thy horizons” and meet different kinds of people, thou will prefer looking up friends of friends who are also traveling.

Thou shalt try the tonic water in other lands, as it tastes different from thy domestic tonic water.

Thou will always have (had) a wonderful time.

Our private economic code is useful when on the road. As stated before, we do not waste money on first-class travel. Unless McKinsey or Aunt Toot is footing the bill, we fly coach. (It would, of course, be rude to turn down a free upgrade.) This is consistent with everything we’ve been talking about: first class lasts several hours but costs a fortune. We have, on the other hand, been known to splurge on luxury hotels. Wouldn’t it be better to apply those savings to a wonderful room in a wonderful hotel? (Or, at the very least, a small room facing a wall in a wonderful hotel?)

If you cannot stay at the wonderful hotel with the famous bar, you must at least drink at the famous bar. Lunch is also lovely there. On holiday, we always drink at lunch, and, of course, we “walk it off.” Lunchtime drinking is not an obligation, but, well, yes it is. You’re on vacation, the ultimate in prep experiences!

Prep Careers for the New Millennium

Preppies realize society’s need for enterprise. They go to college with the idea of a career—or, should we say, their parents’ idea of a career—planted firmly in their minds. This is why so many of them attend law school. They also understand their need for income. One gets a bad reputation if one is derelict with one’s club dues. As the 21st century unfurls, herewith a vital list of jobs that help preppies maintain their rightful positions in their world:

On April 4, 1967, exactly one year before his assassination, the Rev. Dr. Martin Luther King Jr. stepped up to the lectern at the Riverside Church in Manhattan. The United States had been in active combat in Vietnam for two years and tens of thousands of people had been killed, including some 10,000 American troops. The political establishment — from left to right — backed the war, and more than 400,000 American service members were in Vietnam, their lives on the line. Many of King’s strongest allies urged him to remain silent about the war or at least to soft-pedal any criticism. They knew that if he told the whole truth about the unjust and disastrous war he would be falsely labeled a Communist, suffer retaliation and severe backlash, alienate supporters and threaten the fragile progress of the civil rights movement. King rejected all the well-meaning advice and said, “I come to this magnificent house of worship tonight because my conscience leaves me no other choice.” Quoting a statement by the Clergy and Laymen Concerned About Vietnam, he said, “A time comes when silence is betrayal” and added, “that time has come for us in relation to Vietnam.” It was a lonely, moral stance. And it cost him. But it set an example of what is required of us if we are to honor our deepest values in times of crisis, even when silence would better serve our personal interests or the communities and causes we hold most dear. It’s what I think about when I go over the excuses and rationalizations that have kept me largely silent on one of the great moral challenges of our time: the crisis in Israel-Palestine. I have not been alone. Until very recently, the entire Congress has remained mostly silent on the human rights nightmare that has unfolded in the occupied territories. 
Our elected representatives, who operate in a political environment where Israel’s political lobby holds well-documented power, have consistently minimized and deflected criticism of the State of Israel, even as it has grown more emboldened in its occupation of Palestinian territory and adopted some practices reminiscent of apartheid in South Africa and Jim Crow segregation in the United States. Many civil rights activists and organizations have remained silent as well, not because they lack concern or sympathy for the Palestinian people, but because they fear loss of funding from foundations, and false charges of anti-Semitism. They worry, as I once did, that their important social justice work will be compromised or discredited by smear campaigns. Similarly, many students are fearful of expressing support for Palestinian rights because of the McCarthyite tactics of secret organizations like Canary Mission, which blacklists those who publicly dare to support boycotts against Israel, jeopardizing their employment prospects and future careers. Reading King’s speech at Riverside more than 50 years later, I am left with little doubt that his teachings and message require us to speak out passionately against the human rights crisis in Israel-Palestine, despite the risks and despite the complexity of the issues. King argued, when speaking of Vietnam, that even “when the issues at hand seem as perplexing as they often do in the case of this dreadful conflict,” we must not be mesmerized by uncertainty. “We must speak with all the humility that is appropriate to our limited vision, but we must speak.” And so, if we are to honor King’s message and not merely the man, we must condemn Israel’s actions: unrelenting violations of international law, continued occupation of the West Bank, East Jerusalem, and Gaza, home demolitions and land confiscations. 
We must cry out at the treatment of Palestinians at checkpoints, the routine searches of their homes and restrictions on their movements, and the severely limited access to decent housing, schools, food, hospitals and water that many of them face. We must not tolerate Israel’s refusal even to discuss the right of Palestinian refugees to return to their homes, as prescribed by United Nations resolutions, and we ought to question the U.S. government funds that have supported multiple hostilities and thousands of civilian casualties in Gaza, as well as the $38 billion the U.S. government has pledged in military support to Israel. And finally, we must, with as much courage and conviction as we can muster, speak out against the system of legal discrimination that exists inside Israel, a system complete with, according to Adalah, the Legal Center for Arab Minority Rights in Israel, more than 50 laws that discriminate against Palestinians — such as the new nation-state law that says explicitly that only Jewish Israelis have the right of self-determination in Israel, ignoring the rights of the Arab minority that makes up 21 percent of the population. Of course, there will be those who say that we can’t know for sure what King would do or think regarding Israel-Palestine today. That is true. The evidence regarding King’s views on Israel is complicated and contradictory. Although the Student Nonviolent Coordinating Committee denounced Israel’s actions against Palestinians, King found himself conflicted. Like many black leaders of the time, he recognized European Jewry as a persecuted, oppressed and homeless people striving to build a nation of their own, and he wanted to show solidarity with the Jewish community, which had been a critically important ally in the civil rights movement. Ultimately, King canceled a pilgrimage to Israel in 1967 after Israel captured the West Bank. 
During a phone call about the visit with his advisers, he said, “I just think that if I go, the Arab world, and of course Africa and Asia for that matter, would interpret this as endorsing everything that Israel has done, and I do have questions of doubt.” He continued to support Israel’s right to exist but also said on national television that it would be necessary for Israel to return parts of its conquered territory to achieve true peace and security and to avoid exacerbating the conflict. There was no way King could publicly reconcile his commitment to nonviolence and justice for all people, everywhere, with what had transpired after the 1967 war. Today, we can only speculate about where King would stand. Yet I find myself in agreement with the historian Robin D.G. Kelley, who concluded that, if King had the opportunity to study the current situation in the same way he had studied Vietnam, “his unequivocal opposition to violence, colonialism, racism and militarism would have made him an incisive critic of Israel’s current policies.” Indeed, King’s views may have evolved alongside many other spiritually grounded thinkers, like Rabbi Brian Walt, who has spoken publicly about the reasons that he abandoned his faith in what he viewed as political Zionism. (…) During more than 20 visits to the West Bank and Gaza, he saw horrific human rights abuses, including Palestinian homes being bulldozed while people cried — children’s toys strewn over one demolished site — and saw Palestinian lands being confiscated to make way for new illegal settlements subsidized by the Israeli government. He was forced to reckon with the reality that these demolitions, settlements and acts of violent dispossession were not rogue moves, but fully supported and enabled by the Israeli military. 
For him, the turning point was witnessing legalized discrimination against Palestinians — including streets for Jews only — which, he said, was worse in some ways than what he had witnessed as a boy in South Africa. (…) Jewish Voice for Peace, for example, aims to educate the American public about “the forced displacement of approximately 750,000 Palestinians that began with Israel’s establishment and that continues to this day.” (…) In view of these developments, it seems the days when critiques of Zionism and the actions of the State of Israel can be written off as anti-Semitism are coming to an end. There seems to be increased understanding that criticism of the policies and practices of the Israeli government is not, in itself, anti-Semitic. (…) the Rev. Dr. William J. Barber II (…) declared in a riveting speech last year that we cannot talk about justice without addressing the displacement of native peoples, the systemic racism of colonialism and the injustice of government repression. In the same breath he said: “I want to say, as clearly as I know how, that the humanity and the dignity of any person or people cannot in any way diminish the humanity and dignity of another person or another people. To hold fast to the image of God in every person is to insist that the Palestinian child is as precious as the Jewish child.” Guided by this kind of moral clarity, faith groups are taking action. In 2016, the pension board of the United Methodist Church excluded from its multibillion-dollar pension fund Israeli banks whose loans for settlement construction violate international law. Similarly, the United Church of Christ the year before passed a resolution calling for divestments and boycotts of companies that profit from Israel’s occupation of Palestinian territories. Even in Congress, change is on the horizon. 
For the first time, two sitting members, Representatives Ilhan Omar, Democrat of Minnesota, and Rashida Tlaib, Democrat of Michigan, publicly support the Boycott, Divestment and Sanctions movement. In 2017, Representative Betty McCollum, Democrat of Minnesota, introduced a resolution to ensure that no U.S. military aid went to support Israel’s juvenile military detention system. Israel regularly prosecutes Palestinian child detainees in the occupied territories in military court. None of this is to say that the tide has turned entirely or that retaliation has ceased against those who express strong support for Palestinian rights. To the contrary, just as King received fierce, overwhelming criticism for his speech condemning the Vietnam War — 168 major newspapers, including The Times, denounced the address the following day — those who speak publicly in support of the liberation of the Palestinian people still risk condemnation and backlash. Bahia Amawi, an American speech pathologist of Palestinian descent, was recently terminated for refusing to sign a contract that contains an anti-boycott pledge stating that she does not, and will not, participate in boycotting the State of Israel. In November, Marc Lamont Hill was fired from CNN for giving a speech in support of Palestinian rights that was grossly misinterpreted as expressing support for violence. Canary Mission continues to pose a serious threat to student activists. And just over a week ago, the Birmingham Civil Rights Institute in Alabama, apparently under pressure mainly from segments of the Jewish community and others, rescinded an honor it bestowed upon the civil rights icon Angela Davis, who has been a vocal critic of Israel’s treatment of Palestinians and supports B.D.S. But that attack backfired. Within 48 hours, academics and activists had mobilized in response.
The mayor of Birmingham, Randall Woodfin, as well as the Birmingham School Board and the City Council, expressed outrage at the institute’s decision. The council unanimously passed a resolution in Davis’s honor, and an alternative event is being organized to celebrate her decades-long commitment to liberation for all. I cannot say for certain that King would applaud Birmingham for its zealous defense of Angela Davis’s solidarity with Palestinian people. But I do. In this new year, I aim to speak with greater courage and conviction about injustices beyond our borders, particularly those that are funded by our government, and stand in solidarity with struggles for democracy and freedom. My conscience leaves me no other choice. Michelle Alexander

“I think it’s a trope that has certainly been seen in Hollywood films for decades. Think about the white teacher in the inner city school. The Michelle Pfeiffer one [in Dangerous Minds]. The Principal. Music of the Heart, where Meryl Streep was a music teacher. Wildcats. I think these stories probably read well in a pitch meeting: ‘Goldie Hawn coaching an inner city football team.’” “They make it look like Japan would not have made it out of the feudal period without Tom Cruise. And the West wouldn’t have been tamed and we’d have no civilization if Kevin Costner didn’t ride into town.” Laurence Lerman

Belle becomes empowered to challenge, on their veiled racism, the white characters who view themselves as her saviors, which marks a welcome departure from one of Hollywood’s most enduring cinematic tropes: the white savior. When it comes to race-relations dramas—and slavery narratives, in particular—the white savior has become one of Hollywood’s most reliably offensive clichés. The black servants of The Help needed a perky, progressive Emma Stone to shed light on their plight; the football bruiser in The Blind Side couldn’t have done it without fiery Sandra Bullock; the black athletes in Cool Runnings and The Air Up There needed the guidance of their white coach; and in 12 Years A Slave, Solomon Northup, played by Chiwetel Ejiofor, is liberated at the eleventh hour by a Jesus-looking Brad Pitt (in a classic deus ex machina). The issue, according to Lerman, is more complex given the nature of Hollywood and the various power structures at play. While there are plenty of important stories to tell featuring people of color, there are only a small number of people of color in Hollywood with the clout to get a film green-lit—especially since we’re living in an age where international box office trumps domestic. This troubling disparity often results in a white star needing to be featured in a film with a predominantly minority cast to secure the necessary financing—as was the case with Pitt’s appearance in 12 Years A Slave, a film produced by his company, Plan B. And who can forget the controversy over the outrageous Italian movie posters for 12 Years A Slave, which prominently featured the film’s white movie stars—Pitt and Michael Fassbender—instead of the movie’s real star, Chiwetel Ejiofor. Without ruining the film for you, part of what makes Belle so refreshing is that its portrayal of black characters, namely Belle, is one of dignity.
They aren’t the typical uneducated blacks you see in films who need to be shown the light by a white knight, for they’re blessed with more intellect and class than many of their white subjugators, who soon come to realize that Belle, through her grace and wisdom, is their savior. “Her family thought they were giving her great love, but until she’s able to take that freedom for herself and find self-love and feel comfortable in her own skin, that’s when she’s ready to challenge them,” says Mbatha-Raw. “It just felt like a story that needed to be told.” The Daily Beast

“Driving Miss Daisy” (…) “The Upside” (…) “Green Book” (…) symbolize a style of American storytelling in which the wheels of interracial friendship are greased by employment, in which prolonged exposure to the black half of the duo enhances the humanity of his white, frequently racist counterpart. All the optimism of racial progress — from desegregation to integration to equality to something like true companionship — is stipulated by terms of service. Thirty years separate “Driving Miss Daisy” from these two new films, but how much time has passed, really? The bond in all three is conditionally transactional, possible only if it’s mediated by money. “The Upside” has the rich, quadriplegic author Phillip Lacasse (Cranston) hire an ex-con named Dell Scott (Hart) to be his “life auxiliary.” “Green Book” reverses the races so that some white muscle (Mortensen) drives the black pianist Don Shirley (Ali) to gigs throughout the Deep South in the 1960s. It’s “The Upside Down.” These pay-for-playmate transactions are a modern pastime, different from an entire history of popular culture that simply required black actors to serve white stars without even the illusion of friendship. It was really only possible in a post-integration America, possible after Sidney Poitier made black stardom loosely feasible for the white studios, possible after the moral and legal adjustments won during the civil rights movements, possible after the political recriminations of the black power and blaxploitation eras let black people regularly frolic among themselves for the first time since the invention of the Hollywood movie. Possible, basically, only in the 1980s, after the movements had more or less subsided and capitalism and jokey white paternalism ran wild. On television in this era, rich white sitcom families vacuumed up little black boys, on “Diff’rent Strokes,” on “Webster.” On “Diff’rent Strokes,” the adopted boys are the orphaned Harlem sons of Phillip Drummond’s maid. 
Not only was money supposed to lubricate racial integration; it was perhaps supposed to mitigate a history of keeping black people apart and oppressed. (…) The sitcoms weren’t officially social experiments, but they were light advertisements for the civilizing (and alienating) benefits of white wealth on black life. (…) Any time a white person comes anywhere close to the rescue of a black person the academy is primed to say, “Good for you!,” whether it’s “To Kill a Mockingbird,” “Mississippi Burning,” “The Blind Side,” or “The Help.” The year “Driving Miss Daisy” won those Oscars, Morgan Freeman also had a supporting role in a drama (“Glory”) that placed a white Union colonel at its center and was very much in the mix that night. (…) And Spike Lee lost the original screenplay award for “Do the Right Thing,” his masterpiece about a boiled-over pot of racial animus in Brooklyn. (…) Lee’s movie dramatized a starker truth — we couldn’t all just get along. For what it’s worth, Lee is now up for more Oscars. His film “BlacKkKlansman” has six nominations. Given the five for “Green Book,” basically so is “Driving Miss Daisy.” Which is to say that 2019 might just be 1990 all over again. (…) One headache with these movies, even one as well done as “Driving Miss Daisy,” is that they romanticize their workplaces and treat their black characters as the ideal crowbar for closed white minds and insulated lives. Who knows why, in “The Upside,” Phillip picks the uncouth, underqualified Dell to drive him around, change his catheter and share his palatial apartment. But by the time the movie’s over, they’re paragliding together to Aretha Franklin. We’re told that this is based on a true story. It’s not. It’s a remake of a far more nauseating French megahit — “Les Intouchables” — and that claimed to be based on a true story. “The Upside” seems based on one of those paternalistic ’80s movies, “Disorderlies,” the one where the Fat Boys wheel an ailing Ralph Bellamy around his mansion. 
(…) Most of these black-white-friendship adventures were foretold by Mark Twain. Somebody is white Huck and somebody else is his amusingly dim black sidekick, Jim. This movie is just a little more flagrant about it. There’s a way of looking at the role reversal in “Green Book” as an upgrade. Through his record company, Don hires a white nightclub bouncer named Tony Vallelonga. (Most people call him Tony Lip.) We don’t meet Don for about 15 minutes, because the movie needs us to know that Tony is a sweet, Eye-talian tough guy who also throws out perfectly good glassware because his wife let black repairmen drink from it. By this point, you might have heard about the fried chicken scene in “Green Book.” It comes early in their road trip. Tony is shocked to discover that Don has never had fried chicken. He also appears never to have seen anybody eat fried chicken, either. (“What do we do about the bones?”) So, with all the greasy alacrity and exuberant crassness that Mortensen can conjure, Tony demonstrates how to eat it while driving. As comedy, it’s masterful — there’s tension, irony and, when the car stops and reverses to retrieve some litter, a punch line that brings down the house. But the comedy works only if the black, classical-pop fusion pianist is from outer space (and not in a Sun Ra sort of way). You’re meant to laugh because how could this racist be better at being black than this black man who’s supposed to be better than him? (…) The movie’s tagline is “based on a true friendship.” But the transactional nature of it makes the friendship seem less true than sponsored. So what does the money do, exactly? The white characters — the biological ones and somebody supposedly not black enough, like fictional Don — are lonely people in these pay-a-pal movies. The money is ostensibly for legitimate assistance, but it also seems to paper over all that’s potentially fraught about race. 
The relationship is entirely conscripted as service and bound by capitalism and the fantastically presumptive leap is, The money doesn’t matter because I like working for you. And if you’re the racist in the relationship: I can’t be horrible because we’re friends now. That’s why the hug Sandra Bullock gives Yomi Perry, the actor playing her maid, Maria, at the end of “Crash,” remains the single most disturbing gesture of its kind. It’s not friendship. Friendship is mutual. That hug is cannibalism. Money buys Don a chauffeur and, apparently, an education in black folkways and culture. (Little Richard? He’s never heard him play.) Shirley’s real-life family has objected to the portrait. Their complaints include that he was estranged from neither black people nor blackness. Even without that thumbs-down, you can sense what a particularly perverse fantasy this is: that absolution resides in a neutered black man needing a white guy not only to protect and serve him, but to love him, too. Even if that guy and his Italian-American family and mob associates refer to Don and other black people as eggplant and coal. In the movie’s estimation, their racism is preferable to its nasty, blunter southern cousin because their racism is often spoken in Italian. And, hey, at least Tony never asks Don to eat his fancy dinner in a supply closet. Mahershala Ali is acting Shirley’s isolation and glumness, but the movie determines that dining with racists is better than dining alone. The money buys Don relative safety, friendship, transportation and a walking-talking black college. What the money can’t buy him is more of the plot in his own movie. It can’t allow him to bask in his own unique, uniquely dreamy artistry. It can’t free him from a movie that sits him where Miss Daisy sat, yet treats him worse than Hoke. He’s a literal passenger on this white man’s trip. Tony learns he really likes black people. And thanks to Tony, now so does Don. Wesley Morris (NYT)

Today, our thousands of travelers, if they be thoughtful enough to arm themselves with a Green Book, may free themselves of a lot of worry and inconvenience as they plan a trip. Victor Hugo Green

Victor Hugo Green remains a mysterious figure about whom we know very little. He rarely spoke directly to Green Book readers, instead publishing testimonial letters in what the historian Cotten Seiler describes as an act of promotional “ventriloquism.” The debut edition did not exhort black travelers to boycotts or include demands for equal rights. Instead, Green represented the guide as a benign compilation of “facts and information connected with motoring, which the Negro Motorist can use and depend upon.” The coolly reasoned language put white readers at ease and allowed the Green Book to attract generous corporate and government sponsorship. Green nevertheless practiced the African-American art of coded communication, addressing black readers in messages that went over white people’s heads. Consider the passage: “Today, our thousands of travelers, if they be thoughtful enough to arm themselves with a Green Book, may free themselves of a lot of worry and inconvenience as they plan a trip.” White readers viewed this as a common-sense statement about vacation planning. For African-Americans who read in black newspapers about the fates that befell people like Ms. Derricotte, the notion of “arming” oneself with the guide referred to taking precautions against racism on the road. The Green Book was subversive in another way as well. It promoted an image of African-Americans that white Americans rarely saw — and that Hollywood deliberately avoided in films for fear of offending racist Southerners. The guide’s signature image, shown on the cover of the 1948 edition — and used as the stationery logo for Victor Green, Inc. — consisted of a smiling, well-dressed couple striding toward their car carrying expensive suitcases. Green believed exposing white Americans to the black elite might persuade white business owners that black consumer spending was significant enough to make racial discrimination imprudent.
Like the black elite itself, he subscribed to the view that affluent travelers of color could change white minds about racism simply by venturing to places where black people had been unseen. As it turned out, black travelers had a democratizing effect on the country. Like many African-American institutions that thrived during the age of extreme segregation, the Green Book faded in influence as racial barriers began to fall. It ceased publication not long after the Supreme Court ruled that the Civil Rights Act of 1964 outlawed racial discrimination in public accommodations. Nevertheless, the guide’s three decades of listings offer an important vantage point on black business ownership and travel mobility in the age of Jim Crow. In other words, the Green Book has a lot more to say about the time when it was the Negro traveler’s bible. Brent Staples

[The New York Times and Oculus are presenting a virtual-reality film, “Traveling While Black,” related to this Opinion essay. To view it, you can watch on the Oculus platform or download the NYT VR app on your mobile device.]

Imagine trudging into a hotel with your family at midnight — after a long, grueling drive — and being turned away by a clerk who “loses” your reservation when he sees your black face.

This was a common hazard for members of the African-American elite in 1932, the year Dr. B. Price Hurst of Washington, D.C., was shut out of New York City’s Prince George Hotel despite having confirmed his reservation by telegraph.

Hurst would have planned his trip differently had he been headed to the South, where “whites only” signs were ubiquitous and well-to-do black travelers lodged in homes owned by others in the black elite. Hurst was a member of Washington’s “Colored Four Hundred” — as the capital’s black upper crust once was known — and was familiar with having to plan his life around hotels, restaurants and theaters in the city, and throughout the Jim Crow South, that screened out people of color.

Hurst expected better of New York City. He did not let the matter rest after the Prince George turned his travel-weary family into the streets. He wrote an anguished letter to Walter White, then executive secretary of the N.A.A.C.P., explaining how he had been rejected by four hotels before shifting his search to the black district of Harlem. He then sued the Prince George for violating New York State’s civil rights laws, winning a settlement that put the city’s hotels on notice that discrimination could carry a financial cost.

African-Americans who embraced automobile travel to escape filthy, “colored-only” train cars learned quickly that the geography of Jim Crow was far more extensive than they had imagined. The motels and rest stops that deprived them of places to sleep were just the beginning.

While driving, these families were often forced to relieve themselves in roadside ditches because the filling stations that sold them gas barred them from using “whites only” bathrooms.

“Sundown Towns” across the country banned African-Americans from the streets after dark, a constant reminder that the reach of white supremacy was vast indeed.

As still happens today, police officers who pulled over motorists of color for “driving while black” raised the threat that black passengers would be arrested, battered or even killed during the encounter.

The Negro Traveler’s Bible

The Hurst case was a cause célèbre in 1936 when a Harlem resident and postal worker named Victor Hugo Green began soliciting material for a national travel guide that would steer black motorists around the humiliations of the not-so-open road and point them to businesses that were more than happy to accept colored dollars. As the historian Gretchen Sullivan Sorin writes in her revelatory study of “The Negro Motorist Green Book,” the guide became “the bible of every Negro highway traveler in the 1950s and early 1960s.”

Green, who died in 1960, is experiencing a renaissance thanks to heightened interest from filmmakers: The 2018 feature film “Green Book” won three Golden Globes earlier this month, and the documentary “Driving While Black” is scheduled for broadcast by PBS next year.

Then there is The New York Times opinion section’s Op-Doc film “Traveling While Black,” which debuts this Friday at the Sundance Film Festival. The brief film offers a revealing view of the Green Book era as told through Ben’s Chili Bowl, a black-owned restaurant in Washington, and reminds us that the humiliations heaped upon African-Americans during that time period extended well beyond the one Hurst suffered in New York City.

Sandra Butler-Truesdale, born in the capital in the 1930s, references an often-forgotten trauma — and one of the conceptual underpinnings of the Jim Crow era — when she recalls that Negroes who shopped in major stores were not allowed to try on clothing before they bought it. Store owners at the time offered a variety of racist rationales, including that Negroes were insufficiently clean. At bottom, the practice reflected the irrational belief that anything coming in contact with African-American skin — including clothing, silverware or bed linens — was contaminated by blackness, rendering it unfit for use by whites.

This had deadly implications in places where emergency medical services were assigned on the basis of race. Of all the afflictions devised in the Jim Crow era, medical racism was the most lethal. African-American accident victims could easily be left to die because no “black” ambulance was available. Black patients taken to segregated hospitals, where they sometimes languished in basements or even boiler rooms, suffered inferior treatment.

In a particularly telling case in 1931, the light-skinned father of Mr. White, the N.A.A.C.P. leader, was struck by a car and mistakenly admitted to the beautifully equipped “white” wing of Grady Memorial Hospital in Atlanta. When relatives who were recognizably black came looking for him, hospital employees dragged the victim from the examination table to the decrepit Negro ward across the street, where he later died.

That same year, Juliette Derricotte, the celebrated African-American educator and dean of women at Fisk University, succumbed to injuries suffered in a car accident near Dalton, Ga., after a white hospital refused her treatment.

Advertising to the Black Elite

Victor Hugo Green remains a mysterious figure about whom we know very little. He rarely spoke directly to Green Book readers, instead publishing testimonial letters in what the historian Cotten Seiler describes as an act of promotional “ventriloquism.” The debut edition did not exhort black travelers to boycotts or include demands for equal rights. Instead, Green represented the guide as a benign compilation of “facts and information connected with motoring, which the Negro Motorist can use and depend upon.”

The coolly reasoned language put white readers at ease and allowed the Green Book to attract generous corporate and government sponsorship. Green nevertheless practiced the African-American art of coded communication, addressing black readers in messages that went over white people’s heads. Consider the passage: “Today, our thousands of travelers, if they be thoughtful enough to arm themselves with a Green Book, may free themselves of a lot of worry and inconvenience as they plan a trip.”

White readers viewed this as a common-sense statement about vacation planning. For African-Americans who read in black newspapers about the fates that befell people like Ms. Derricotte, the notion of “arming” oneself with the guide referred to taking precautions against racism on the road.

The Green Book was subversive in another way as well. It promoted an image of African-Americans that white Americans rarely saw — and that Hollywood deliberately avoided in films for fear of offending racist Southerners. The guide’s signature image, shown on the cover of the 1948 edition — and used as the stationery logo of Victor Green, Inc. — consisted of a smiling, well-dressed couple striding toward their car carrying expensive suitcases.

Green believed exposing white Americans to the black elite might persuade white business owners that black consumer spending was significant enough to make racial discrimination imprudent. Like the black elite itself, he subscribed to the view that affluent travelers of color could change white minds about racism simply by venturing to places where black people had been unseen. As it turned out, black travelers had a democratizing effect on the country.

Like many African-American institutions that thrived during the age of extreme segregation, the Green Book faded in influence as racial barriers began to fall. It ceased publication not long after the Supreme Court ruled that the Civil Rights Act of 1964 outlawed racial discrimination in public accommodations. Nevertheless, the guide’s three decades of listings offer an important vantage point on black business ownership and travel mobility in the age of Jim Crow.

In other words, the Green Book has a lot more to say about the time when it was the Negro traveler’s bible.

“Driving Miss Daisy” is the sort of movie you know before you see it. The whole thing is right there in the poster. White Jessica Tandy is giving black Morgan Freeman a stern look, and he looks amused by her sternness. They’re framed in a rearview mirror, which occupies only about 20 percent of the space. You can make out his chauffeur’s cap and that she’s in the back seat. The rest is three actors’ names, a tag line, a title, tiny credits, and white space.

That rearview-mirror image isn’t a still from the movie but a warmly painted rendering of one, this vague nuzzling of Norman Rockwell Americana. And its warmth evokes a very particular past. If you’ve ever seen the packaging for Cream of Wheat or a certain brand of rice, if you’ve even seen some Shirley Temple movies, you knew how Miss Daisy would be driven: gladly.

As movie posters go, it’s ingeniously concise. But whoever designed it knew the concision was possible because we’d know the shorthand of an eternal racial dynamic. I got off the subway last month and saw a billboard of black Kevin Hart riding on the back of white Bryan Cranston’s motorized wheelchair. They’re both ecstatic. And maybe they’re obligated to be. Their movie is called “The Upside.” A few months before that, I was out getting a coffee when I saw a long, sexy billboard of white Viggo Mortensen driving black Mahershala Ali in a minty blue car for a movie called “Green Book.”

Not knowing what these movies were “about” didn’t mean it wasn’t clear what they were about. They symbolize a style of American storytelling in which the wheels of interracial friendship are greased by employment, in which prolonged exposure to the black half of the duo enhances the humanity of his white, frequently racist counterpart. All the optimism of racial progress — from desegregation to integration to equality to something like true companionship — is stipulated by terms of service. Thirty years separate “Driving Miss Daisy” from these two new films, but how much time has passed, really? The bond in all three is conditionally transactional, possible only if it’s mediated by money. “The Upside” has the rich, quadriplegic author Phillip Lacasse (Cranston) hire an ex-con named Dell Scott (Hart) to be his “life auxiliary.” “Green Book” reverses the races so that some white muscle (Mortensen) drives the black pianist Don Shirley (Ali) to gigs throughout the Deep South in the 1960s. It’s “The Upside Down.”

These pay-for-playmate transactions are a modern pastime, different from an entire history of popular culture that simply required black actors to serve white stars without even the illusion of friendship. It was really only possible in a post-integration America, possible after Sidney Poitier made black stardom loosely feasible for the white studios, possible after the moral and legal adjustments won during the civil rights movements, possible after the political recriminations of the black power and blaxploitation eras let black people regularly frolic among themselves for the first time since the invention of the Hollywood movie. Possible, basically, only in the 1980s, after the movements had more or less subsided and capitalism and jokey white paternalism ran wild.

On television in this era, rich white sitcom families vacuumed up little black boys, on “Diff’rent Strokes,” on “Webster.” On “Diff’rent Strokes,” the adopted boys are the orphaned Harlem sons of Phillip Drummond’s maid. Not only was money supposed to lubricate racial integration; it was perhaps supposed to mitigate a history of keeping black people apart and oppressed.

The sitcoms weren’t officially social experiments, but they were light advertisements for the civilizing (and alienating) benefits of white wealth on black life. The plot of “Trading Places,” from 1983, actually was an experiment, a pungent, complicated one, in which conniving white moneybags install a broke and hustling Eddie Murphy in disgraced Dan Aykroyd’s banking job. The scheme creates an accidental friendship between the duped pair and they both wind up rich.

But that Daddy Warbucks paternalism was how, in 1982, the owner of the country’s most ferocious comedic imagination — Richard Pryor — went from desperate janitor to live-in amusement for the bratty son of a rotten businessman (Jackie Gleason). You have to respect the bluntness of that one. The movie was called “The Toy,” and it’s simultaneously dumb, wild and appalling. I was younger than its little white protagonist (he’s “Master” Eric Bates) when I saw it, but I can still remember the look of embarrassed panic on Pryor’s face while he’s trapped in something called the Wonder Wheel. It’s a look that never quite goes away as he’s made to dress in drag, navigate the Ku Klux Klan and make Gleason feel good about his racism and terrible parenting.

These were relationships that continued the rules of the past, one in which Poitier was frequently hired to turn bigots into buddies. The rules didn’t need to be disguised by yesterday. These arrangements could flourish in the present. So maybe that was the alarming appeal of “Driving Miss Daisy.” It went there. It went back there. And people went for it. The movie came out at the end of 1989, won four Oscars (best picture, actress, adapted screenplay, makeup), got besotted reviews and made a pile of money. Why wasn’t a mystery.

Any time a white person comes anywhere close to the rescue of a black person, the academy is primed to say, “Good for you!,” whether it’s “To Kill a Mockingbird,” “Mississippi Burning,” “The Blind Side,” or “The Help.” The year “Driving Miss Daisy” won those Oscars, Morgan Freeman also had a supporting role in a drama (“Glory”) that placed a white Union colonel at its center and was very much in the mix that night. (Denzel Washington won his first Oscar for playing a slave-turned-Union soldier in that movie.) And Spike Lee lost the original screenplay award for “Do the Right Thing,” his masterpiece about a boiled-over pot of racial animus in Brooklyn. I was 14 then, and the political incongruity that night was impossible not to feel. “Driving Miss Daisy” and “Glory” were set in the past and the people who loved them seemed stuck there. The giddy reception for “Miss Daisy” seemed earnest. But Lee’s movie dramatized a starker truth — we couldn’t all just get along.

For what it’s worth, Lee is now up for more Oscars. His film “BlacKkKlansman” has six nominations. Given the five for “Green Book,” basically so is “Driving Miss Daisy.” Which is to say that 2019 might just be 1990 all over again. And yet viewed separately from the cold shower of “Do the Right Thing,” “Driving Miss Daisy” does operate with more finesse, elegance and awareness than my teenage self wanted to see. It’s still not the best movie of 1989. But it does know the southern caste system and the premium that system placed on propriety.

The movie turns the 25-year relationship between Daisy, an elderly Jewish white widow from Atlanta, and Hoke, her elderly, widowed black driver, into both this delicate, modest, tasteful thing — a love letter, a corsage — and something amusingly perverse. Proud old prejudiced Daisy says she doesn’t want to be driven anywhere. But doesn’t she? Hoke treats her pride like a costume. He stalks her with her own new car until she succumbs and lets him drive her to the market. What passes between them feels weirdly kinky: southern-etiquette S&M.

Bruce Beresford directed the movie and Alfred Uhry based it on his Pulitzer Prize-winning play, which he said was inspired by his grandmother and her chauffeur, and it does powder over the era’s upheavals, uprisings and blowups. But it doesn’t sugarcoat the history fueling the regional and national climes, either. Daisy’s fortune comes from cotton, and Hoke, with ruthless affability, keeps reminding her that she’s rich. When she says things are a-changing, he tells her not that much.

Platonic love blossoms, obviously. But the movie’s one emotional gaffe would seem to come near the end when Daisy grabs Hoke’s hand and tells him so. “You’re my best friend,” she creaks. But her admission arises not from one of their little S&M drives but after a bout of dementia. And in a wide shot, he stands above her, a little stooped, halfway in, halfway out, moved yet confused. And in his posture resides an entire history of national racial awkwardness: He has to mind his composure even as she’s losing her mind.

One headache with these movies, even one as well done as “Driving Miss Daisy,” is that they romanticize their workplaces and treat their black characters as the ideal crowbar for closed white minds and insulated lives.

Who knows why, in “The Upside,” Phillip picks the uncouth, underqualified Dell to drive him around, change his catheter and share his palatial apartment. But by the time the movie’s over, they’re paragliding together to Aretha Franklin. We’re told that this is based on a true story. It’s not. It’s a remake of a far more nauseating French megahit — “Les Intouchables” — and that claimed to be based on a true story. “The Upside” seems based on one of those paternalistic ’80s movies, “Disorderlies,” the one where the Fat Boys wheel an ailing Ralph Bellamy around his mansion.

Phillip’s largess and tolerance take Dell from opera-phobic to opera-curious to opera queen, leading to Dell’s being able to afford to transport his ex and their son out of the projects, and permitting Dell to take his boss’s luxury cars for a spin whether or not he’s riding shotgun. And Dell provides entertainment (and drugs) that ease Phillip’s sense of isolation and self-consciousness. But this is also a movie that needs Dell to steal one of Phillip’s antique first-editions as a surprise gift to his estranged son, and not a copy of some Judith Krantz or Sidney Sheldon novel, either. He swipes “Adventures of Huckleberry Finn” (and to reach it, his hand has to skip past a few Horatio Alger books, too). Most of these black-white-friendship adventures were foretold by Mark Twain. Somebody is white Huck and somebody else is his amusingly dim black sidekick, Jim. This movie is just a little more flagrant about it.

There’s a way of looking at the role reversal in “Green Book” as an upgrade. Through his record company, Don hires a white nightclub bouncer named Tony Vallelonga. (Most people call him Tony Lip.) We don’t meet Don for about 15 minutes, because the movie needs us to know that Tony is a sweet, Eye-talian tough guy who also throws out perfectly good glassware because his wife let black repairmen drink from it.

By this point, you might have heard about the fried chicken scene in “Green Book.” It comes early in their road trip. Tony is shocked to discover that Don has never had fried chicken. He also appears never to have seen anybody eat fried chicken, either. (“What do we do about the bones?”) So, with all the greasy alacrity and exuberant crassness that Mortensen can conjure, Tony demonstrates how to eat it while driving. As comedy, it’s masterful — there’s tension, irony and, when the car stops and reverses to retrieve some litter, a punch line that brings down the house. But the comedy works only if the black, classical-pop fusion pianist is from outer space (and not in a Sun Ra sort of way). You’re meant to laugh because how could this racist be better at being black than this black man who’s supposed to be better than him?

The movie Peter Farrelly directed and wrote, with Brian Currie and Tony’s son Nick, is suspiciously like “Driving Miss Daisy,” but same-sex, with Don as Daisy and Tony as Hoke. Indeed, “Miss Daisy” features a fried chicken scene, too, a delicate one, in which Hoke tells her the flame is too high on the skillet and she waves him off. Once he’s left the kitchen, she furtively, begrudgingly adjusts the burner. It’s like Farrelly watched that scene and thought it needed a stick of cartoon dynamite.

Before they head out, a white character from Don’s record company gives Tony a listing of black-friendly places to house Don: The Green Book. The idea for “The Negro Motorist Green Book” belongs to Victor Hugo Green, a postal worker, who introduced it in 1936. It guided black road trippers to stress-free gas, food and lodging in the segregated South. The story of its invention, distribution and updating is an amusing, invigorating, poignant and suspenseful story of an astonishing social network, and warrants a movie in itself. In the meantime, what does Tony need a Green Book for? He is the Green Book.

The movie’s tagline is “based on a true friendship.” But the transactional nature of it makes the friendship seem less true than sponsored. So what does the money do, exactly? The white characters — the biological ones and somebody supposedly not black enough, like fictional Don — are lonely people in these pay-a-pal movies. The money is ostensibly for legitimate assistance, but it also seems to paper over all that’s potentially fraught about race. The relationship is entirely conscripted as service and bound by capitalism and the fantastically presumptive leap is, The money doesn’t matter because I like working for you. And if you’re the racist in the relationship: I can’t be horrible because we’re friends now. That’s why the hug Sandra Bullock gives Yomi Perry, the actor playing her maid, Maria, at the end of “Crash,” remains the single most disturbing gesture of its kind. It’s not friendship. Friendship is mutual. That hug is cannibalism.

Money buys Don a chauffeur and, apparently, an education in black folkways and culture. (Little Richard? He’s never heard him play.) Shirley’s real-life family has objected to the portrait. Their complaints include that he was estranged from neither black people nor blackness. Even without that thumbs-down, you can sense what a particularly perverse fantasy this is: that absolution resides in a neutered black man needing a white guy not only to protect and serve him, but to love him, too. Even if that guy and his Italian-American family and mob associates refer to Don and other black people as eggplant and coal. In the movie’s estimation, their racism is preferable to its nasty, blunter southern cousin because their racism is often spoken in Italian. And, hey, at least Tony never asks Don to eat his fancy dinner in a supply closet.

Mahershala Ali is acting Shirley’s isolation and glumness, but the movie determines that dining with racists is better than dining alone. The money buys Don relative safety, friendship, transportation and a walking-talking black college. What the money can’t buy him is more of the plot in his own movie. It can’t allow him to bask in his own unique, uniquely dreamy artistry. It can’t free him from a movie that sits him where Miss Daisy sat, yet treats him worse than Hoke. He’s a literal passenger on this white man’s trip. Tony learns he really likes black people. And thanks to Tony, now so does Don.

Lately, the black version of these interracial relationships tends to head in the opposite direction. In the black version, for one thing, they’re not about money or a job but about the actual emotional, psychological work of being black among white people. Here, the proximity to whiteness is toxic, a danger, a threat. That’s the thrust of Jeremy O. Harris’s stage drama “Slave Play,” in which the traumatic legacy of plantation life pollutes the black half of the show’s interracial relationships. That’s a particularly explicit, ingenious example. But scarcely any of the work I’ve seen in the last year by black artists — not Jackie Sibblies Drury’s equally audacious play “Fairview,” not Boots Riley’s “Sorry to Bother You,” not “Blindspotting,” which Daveed Diggs co-wrote and stars in, not Barry Jenkins’s “If Beale Street Could Talk” or Ryan Coogler’s “Black Panther” — emphasizes the smoothness and joys of interracial friendship and certainly not through employment. The health of these connections is iffy, at best.

In 1989, Lee was pretty much on his own as a voice of black racial reality. His rankled pragmatism now has company and, at the Academy Awards, it’s also got stiff competition. He helped plant the seeds for an environment in which black artists can look askance at race. But a lot of us still need the sense of fantastical racial contentment that movies like “The Upside” and “Green Book” are slinging. I’ve seen “Green Book” with paying audiences, and it cracks people up the way any of Farrelly’s comedies do. The kind of closure it offers is like a drug that Lee’s never dealt. The Charlottesville-riot footage that he includes as an epilogue in “BlacKkKlansman” might bury the loose, essentially comedic movie it’s attached to in furious lava. Lee knows the past too well to ever let the present off the hook. The volcanoes in this country have never been dormant.

The academy’s embrace of Lee at this stage of his career (this is his first best director nomination) suggests that it’s come around to what rankles him. Of course, “BlacKkKlansman” is taking on the unmistakable villainy of the KKK in the 1970s. But what put Lee on the map 30 years ago was his fearlessness about calling out the universal casual bigotry of the moment, like Daisy’s and Tony’s. It’s hot as hell in “Do the Right Thing,” and in the heat, almost everybody has a problem with who somebody is. The pizzeria owned by Sal (Danny Aiello) comes to resemble a house of hate. Eventually Sal’s delivery guy, Mookie (played by Lee), incites a melee by hurling a trash can through the store window. He’d already endured a conversation with Pino (John Turturro), Sal’s racist son, in which Pino tells him that famous black people are “more than black.”

Closure is impossible because the blood is too bad, too historically American. Lee had conjured a social environment that’s the opposite of what “The Upside,” “Green Book,” and “Driving Miss Daisy” believe. In one of the very last scenes, after Sal’s place is destroyed, Mookie still demands to be paid. To this day, Sal’s tossing balled-up bills at Mookie, one by one, shocks me. He’s mortally offended. Mookie’s unmoved. They’re at a harsh, anti-romantic impasse. We’d all been reared on racial-reconciliation fantasies. Why can’t Mookie and Sal be friends? The answer’s too long and too raw. Sal can pay Mookie to deliver pizzas ‘til kingdom come. But he could never pay him enough to be his friend.

A version of this article appears in print on Jan. 27, 2019, on Page AR1 of the New York edition with the headline: Friendship or Fantasy?

The black characters in films like ‘The Help’ and ’12 Years A Slave’ always seem to need a white knight. But the black protagonist in ‘Belle,’ a new film about racism and slavery in England, takes matters into her own hands.

Keli Goff

The Daily Beast

05.04.14

The film Belle, which opens this weekend in limited release stateside, is inspired by a true story, deals with the horrors of the African slave trade, and its director is black and British. For these reasons, comparisons to the recent recipient of the Best Picture Oscar, 12 Years a Slave, are inevitable.

But there are some notable differences.

Among them, Belle is set in England, while 12 Years a Slave is set in America. 12 Years a Slave depicts—in unflinching detail—the brutalities of slavery, while Belle merely hints at its physical and psychological toll. But the most significant deviation is this: whereas 12 Years a Slave faced criticism for being yet another film to perpetuate the “white savior” cliché in cinema, in Belle, the beleaguered black protagonist does something novel: she saves herself.

“Belle marks the first film I’ve seen in which a black woman with agency stands at the center of the plot as a full, eloquent human being who is neither adoring foil nor moral touchstone for her better spoken white counterparts,” the novelist and TV producer Susan Fales-Hill told The Daily Beast.

Directed by Amma Asante, the film is inspired by the 1779 painting of Dido Elizabeth Belle, a mixed-race woman in a turban hauling fruit, and her white cousin, Lady Elizabeth Murray. The artwork was commissioned by William Murray, then Lord Chief Justice of England, and depicts the two nieces smiling with Murray’s hand resting on Belle’s waist — a gesture suggesting equality, not subservience. While its artist is unknown, the portrait hung in England’s Kenwood House, alongside works by Vermeer and Rembrandt, until 1922.

The painting’s mysterious subject, Belle, was the daughter of an African slave known as Maria Belle and Admiral Sir John Lindsay, an English aristocrat. She was ultimately raised by Lindsay’s uncle, William Murray, the aforementioned Lord Chief Justice and 1st Earl of Mansfield, with many of the privileges befitting a woman of her family’s high standing. Since not much is known of Belle’s life inside the Mansfield estate, Asante and screenwriter Misan Sagay took some artistic license in dramatizing the dehumanizing racial prejudice their protagonist endured that even her social standing and wealth could not erase.

For instance, while not permitted to dine with the servants of her home since they were considered beneath her, she was also not permitted to dine with her family when guests were present since she was considered beneath them. This racial balancing act makes Belle one of the most genteel yet uncomfortable depictions of racism ever to grace the screen. Here, the racism isn’t as black-and-white—those providing Belle with her luxury attire, emotional affection, and protection from the racial brutality of the outside world also see her as a lesser being.

“For me, this point of view is so refreshing,” Gugu Mbatha-Raw, who plays Belle, told The Daily Beast. “I’d never seen a period drama like this with a woman of color as the lead who wasn’t being brutalized, wasn’t being raped, was going through this personal evolution but was also in a privileged world and articulate and educated. I just hadn’t seen that on film before.”

Indeed, Belle becomes empowered to confront the white characters who view themselves as her saviors about their veiled racism, which marks a welcome departure from one of Hollywood’s most enduring cinematic tropes: the white savior.

When it comes to race-relations dramas — and slavery narratives, in particular — the white savior has become one of Hollywood’s most reliably offensive clichés. The black servants of The Help needed a perky, progressive Emma Stone to shed light on their plight; the football bruiser in The Blind Side couldn’t have done it without fiery Sandra Bullock; the black athletes in Cool Runnings and The Air Up There needed the guidance of their white coach; and in 12 Years A Slave, Solomon Northup, played by Chiwetel Ejiofor, is liberated at the eleventh hour by a Jesus-looking Brad Pitt (in a classic deus ex machina).

“I think it’s a trope that has certainly been seen in Hollywood films for decades,” longtime film critic Laurence Lerman, formerly of Variety, says. “Think about the white teacher in the inner city school. The Michelle Pfeiffer one [in Dangerous Minds]. The Principal. Music of the Heart, where Meryl Streep was a music teacher. Wildcats. I think these stories probably read well in a pitch meeting: ‘Goldie Hawn coaching an inner city football team.’”

But, as he went on to explain, the execution often leaves something to be desired and doesn’t always reflect well on the communities it depicts—ones rooted in chaos that need a white savior to restore order. Lerman further noted that this cinematic trope is not limited to the depiction of inner cities or black people. Of the Last Samurai he said, “They make it look like Japan would not have made it out of the feudal period without Tom Cruise.” And the worst offender, in his opinion, is Dances with Wolves. “The west wouldn’t have been tamed and we’d have no civilization if Kevin Costner didn’t ride into town,” he says sarcastically.

The issue, according to Lerman, is more complex given the nature of Hollywood and the various power structures at play. While there are plenty of important stories to tell featuring people of color, there are only a small number of people of color in Hollywood with the clout to get a film green-lit — especially since we’re living in an age where international box office trumps domestic. This troubling disparity often results in a white star needing to be featured in a film with a predominantly minority cast to secure the necessary financing — as was the case with Pitt’s appearance in 12 Years A Slave, a film produced by his company, Plan B. And who can forget the controversy over the outrageous Italian movie posters for 12 Years A Slave, which prominently featured the film’s white movie stars — Pitt and Michael Fassbender — over the movie’s real star, Chiwetel Ejiofor.

Without ruining the film for you, part of what makes Belle so refreshing is that its portrayal of black characters, namely Belle, is one of dignity. They aren’t the typical uneducated blacks you see in films that need to be shown the light by a white knight, for they’re blessed with more intellect and class than many of their white subjugators, who soon come to realize that Belle, through her grace and wisdom, is their savior.

“Her family thought they were giving her great love, but until she’s able to take that freedom for herself and find self-love and feel comfortable in her own skin, that’s when she’s ready to challenge them,” says Mbatha-Raw. “It just felt like a story that needed to be told.”

As part of our Historian at the Movies series, James Walvin OBE, professor emeritus of the University of York, reviews Belle, a true story film about Dido Elizabeth Belle, the illegitimate mixed-race daughter of Admiral Sir John Lindsay (Matthew Goode) and an African slave woman.

**Please be aware that this review contains spoilers**

History Extra

July 2, 2014

Q: Did you enjoy the film?

A: I ought to have enjoyed this film, but watched it, twice, with mounting dissatisfaction.

Belle hit the screens in the UK on 13 June amid a massive publicity campaign. The main star’s face (Gugu Mbatha-Raw) adorned the London underground, ads festooned the newspapers, and the media in general fell over themselves to provide free, and largely adulatory publicity.

Here, it seemed, is a film for our times. It is the story of slavery and the law, of beauty and the beast, and of Britain at a major late 18th-century turning point. It also speaks to one of my special interests: the history of black people in Britain, and slavery.

It tells the dramatic true story of the daughter of an African slave woman and an English sailor, raised in the company of the Lord Chief Justice, Lord Mansfield (at the time when he was adjudicating major slave cases – Somerset and the Zong). [In the 1783 Zong case, the owners of the Zong slave ship made a claim to their insurers for the loss of the hundreds of slaves thrown overboard by the crew as disease and malnutrition ravaged the ship. The insurers refused to pay, but the case was taken to court and they lost. Lord Mansfield, the Lord Chief Justice for the case, compared the loss of the ‘slave cargo’ to the loss of horses, viewing the enslaved as property.]

The film is also the story of a beautiful woman celebrated in a major portrait. It is sumptuous, eye-watering and glossy: think Downton Abbey meets the slave trade. Yet for all the hype, for all the overblown praise and self-promotion of those involved, I disliked it.

There are some fine performances by a number of prominent actors, but even their skills and efforts can’t deflect the film’s basic flaws.

Q: Is the film historically accurate?

A: It is always hard for an historian to assess a film that is based on real events. After all, the makers need to weave a compelling story and a visual treat from evidence that is often sparse and unyielding.

In this case, much of the historical evidence is there – though festooned in the film with imaginary relishes and fictional tricks. Partly accurate, the whole thing reminded me of the classic Morecambe and Wise sketch with André Previn (Eric bashing away on the piano): all the right notes – but not necessarily in the right order.

Q: What did the film get right?

A: The film was a bold statement about the black presence in British history, and was good at revealing the social and racial tensions of Belle’s presence in the wider world of Mansfield’s Kenwood House. Here was a world, thousands of miles away from slavery, but enmeshed in its consequences.

The message, however, was delivered with thunderous and didactic simplicity: Belle is often given lines that sound as if they’ve been nicked from an abolitionist’s sermon. Her suitor (later her husband), Mr Davinier, offers a wincing portrayal of outraged humanity.

Q: What did it miss?

A: The real difficulty is that we know very little about Belle. To overcome that problem, the filmmakers had available a major event to bulk out a fading story: they hitch the fragments known about Belle onto the story of the massacre on the Zong slave ship.

The second half of the film is the story of Belle’s fictional involvement in that case. It portrays her growing outrage (following the simpering lead of her would-be suitor), and her activity as abolitionist mole in the Mansfield house. The aim is to illustrate Belle wooing Mansfield over to the abolitionist cause. To do this, the filmmakers make free with recently published material on the Zong. In truth, Belle is nowhere to be found in the Zong affair – except that is, in the film.

Tom Wilkinson’s Mansfield finds his cold legal commercial heart softened, and edged towards abolition by the eyelash-fluttering efforts of his stunning great niece. And lo! It works! In an expectant crowded courtroom scene (which could have been called 112 Angry Men), Mansfield’s adjudication becomes, not a point of law, but the first bold assertion towards the end of slavery. In reality, he merely stated that there should be another hearing of the Zong case – this time with evidence not known at the earlier hearing.

With freedom (for three quarters of a million slaves) beckoning over the horizon, Belle and her suitor step outside, find love, and Mansfield’s blessing – in the form of a knowing smile from Tom Wilkinson.

The film has all the ingredients for success. Lachrymose sentimentality, delivered to the screen by bucket-loads of opulent abundance. It has beauty at every turn (the brute ugliness of slavery remains a mere noise off-stage). Humanity and justice finally win out – all aided and propelled forward by female beauty.

I left the cinema asking myself: who would be spinning faster in their respective graves: Lord Mansfield or Dido Elizabeth Belle?

The gospel revelation gradually destroys the ability to sacralize and valorize violence of any kind, even for Americans in pursuit of the good. (…) At the heart of the cultural world in which we live, and into whose orbit the whole world is being gradually drawn, is a surreal confusion. The impossible Mother Teresa-John Wayne antinomy Time correspondent (Lance) Morrow discerned in America’s humanitarian 1992 Somali operation is simply a contemporary manifestation of the tension that for centuries has hounded those cultures under biblical influence.Gil Bailie

Just over 50 years ago, the poet W.H. Auden achieved what all writers envy: making a prophecy that would come true. It is embedded in a long work called For the Time Being, where Herod muses about the distasteful task of massacring the Innocents. He doesn’t want to, because he is at heart a liberal. But still, he predicts, if that Child is allowed to get away, « Reason will be replaced by Revelation. Instead of Rational Law, objective truths perceptible to any who will undergo the necessary intellectual discipline, Knowledge will degenerate into a riot of subjective visions . . . Whole cosmogonies will be created out of some forgotten personal resentment, complete epics written in private languages, the daubs of schoolchildren ranked above the greatest masterpieces. Idealism will be replaced by Materialism. Life after death will be an eternal dinner party where all the guests are 20 years old . . . Justice will be replaced by Pity as the cardinal human virtue, and all fear of retribution will vanish . . . The New Aristocracy will consist exclusively of hermits, bums and permanent invalids. The Rough Diamond, the Consumptive Whore, the bandit who is good to his mother, the epileptic girl who has a way with animals will be the heroes and heroines of the New Age, when the general, the statesman, and the philosopher have become the butt of every farce and satire. » What Herod saw was America in the late 1980s and early ’90s, right down to that dire phrase « New Age. » (…) Americans are obsessed with the recognition, praise and, when necessary, the manufacture of victims, whose one common feature is that they have been denied parity with that Blond Beast of the sentimental imagination, the heterosexual, middle-class white male. The range of victims available 10 years ago — blacks, Chicanos, Indians, women, homosexuals — has now expanded to include every permutation of the halt, the blind and the short, or, to put it correctly, the vertically challenged.
(…) Since our newfound sensitivity decrees that only the victim shall be the hero, the white American male starts bawling for victim status too. (…) European man, once the hero of the conquest of the Americas, now becomes its demon; and the victims, who cannot be brought back to life, are sanctified. On either side of the divide between Euro and native, historians stand ready with tarbrush and gold leaf, and instead of the wicked old stereotypes, we have a whole outfit of equally misleading new ones. Our predecessors made a hero of Christopher Columbus. To Europeans and white Americans in 1892, he was Manifest Destiny in tights, whereas a current PC book like Kirkpatrick Sale’s The Conquest of Paradise makes him more like Hitler in a caravel, landing like a virus among the innocent people of the New World.Robert Hughes (24.06.2001)

The Secure Fence Act of 2006, which was passed by a Republican Congress and signed by President George W. Bush, authorized about 700 miles of fencing along certain stretches of land between the border of the United States and Mexico. (…) At the time the act was being considered, Barack Obama, Hillary Clinton and Chuck Schumer were all members of the Senate. (…) Obama, Clinton, Schumer and 23 other Democratic senators voted in favor of the act when it passed in the Senate by a vote of 80 to 19. (…) Currently, 702 miles of fencing separates the United States from Mexico, according to U.S. Customs and Border Protection. Trump’s plans for the wall are vague, but here’s what we know. He said the wall doesn’t need to run the nearly 2,000 miles of the border, but about 1,000 miles because of natural barriers. He said it could cost between $8 billion and $12 billion, be made of precast concrete, and rise 35 to 40 feet, or 50 feet, or higher. Experts have repeatedly told PolitiFact that the differences in semantics between a wall and a fence are not too significant because both block people. (…) A 2016 Associated Press report from the border described « rust-colored thick bars » that form « teeth-like slats » 18 feet high. « There are miles of gaps between segments and openings in the fence itself, » the report said. Trump criticized the 2006 fence as too modest during the 2016 campaign. (…) It’s also worth noting that the political context surrounding the 2006 vote was different, too. Democrats normally in favor of looser immigration laws saw the Secure Fence Act of 2006 as the lesser of two evils, according to a Boston Globe report that detailed the legislative process. Around that same time, the House passed legislation that would make any undocumented immigrant a felon. « It didn’t have anywhere near the gravity of harm, » Angela Kelley, who in 2006 was the legislative director for the National Immigration Forum, told the Boston Globe.
« It was hard to vote against it because who is going to vote against a secure fence? And it was benign compared with what was out there. » Politifact

No country can exist without borders. Hillary and Obama have all but destroyed them; Trump must remind us how he will restore them. Walls throughout history have been part of the solution, from Hadrian’s Wall to Israel’s fence with the Palestinians. “Making Mexico pay for the wall” is not empty rhetoric, when $26 billion in remittances go back to Mexico without taxes or fees, largely sent from those here illegally, and it could serve as a source of funding. Trump can supersede “comprehensive immigration” with a simple program: Secure and fortify the borders first; begin deporting those with a criminal record and without a work history. Fine employers who hire illegal aliens. Any illegal aliens who choose to stay must be working, crime-free, and have two years of residence. They can pay a fine for having entered the U.S. illegally, learn English, and stay while applying for a green card — that effort, like all individual applications, may or may not be approved. He should point out that illegal immigrants have cut in line in front of legal applicants, delaying for years any consideration of entry. That is not an act of love. Sanctuary cities are a neo-Confederate idea, and should have their federal funds cut off for undermining U.S. law. The time-tried melting pot of assimilation and integration, not the bankrupt salad bowl of identity politics, hyphenated nomenclature, and newly accented names, should be our model for teaching new legal immigrants how to become citizens. Victor Davis Hanson

Securing national borders seems pretty orthodox. In an age of anti-Western terrorism, placing temporary holds on would-be immigrants from war-torn zones until they can be vetted is hardly radical. Expecting “sanctuary cities” to follow federal laws rather than embrace the nullification strategies of the secessionist Old Confederacy is a return to the laws of the Constitution. Using the term “radical Islamic terror” in place of “workplace violence” or “man-caused disasters” is sensible, not subversive. Insisting that NATO members meet their long-ignored defense-spending obligations is not provocative but overdue. Assuming that both the European Union and the United Nations are imploding is empirical, not unhinged. Questioning the secret side agreements of the Iran deal or failed Russian reset is facing reality. Making the Environmental Protection Agency follow laws rather than make laws is the way it always was supposed to be. Unapologetically siding with Israel, the only free and democratic country in the Middle East, used to be standard U.S. policy until Obama was elected. (…) Expecting the media to report the news rather than massage it to fit progressive agendas makes sense. In the past, proclaiming Obama a “sort of god” or the smartest man ever to enter the presidency was not normal journalistic practice. (…) Half the country is having a hard time adjusting to Trumpism, confusing Trump’s often unorthodox and grating style with his otherwise practical and mostly centrist agenda. In sum, Trump seems a revolutionary, but that is only because he is loudly undoing a revolution.Victor Davis Hanson

There was likely never going to be “comprehensive immigration reform” or any deal amnestying the DACA recipients in exchange for building the wall. Democrats in the present political landscape will not consent to a wall. For them, a successful border wall is now considered bad politics in almost every manner imaginable. Yet 12 years ago, Congress, with broad bipartisan support, passed the Secure Fence Act of 2006. The bill was signed into law by then-President George W. Bush to overwhelming public applause. The stopgap legislation led to some 650 miles of a mostly inexpensive steel fence while still leaving about two-thirds of the 1,950-mile border unfenced. In those days there were not, as now, nearly 50 million foreign-born immigrants living in the United States, perhaps nearly 15 million of them illegally. Sheer numbers have radically changed electoral politics. Take California. One out of every four residents in California is foreign-born. Not since 2006 has any California Republican been elected to statewide office. The solidly blue states of the American Southwest, including Colorado, Nevada and New Mexico, voted red as recently as 2004 for George W. Bush. Progressives understandably conclude that de facto open borders are good long-term politics. Once upon a time, Democrats such as Hillary and Bill Clinton and Barack Obama talked tough about illegal immigration. They even ruled out amnesty while talking up a new border wall. In those days, progressives saw illegal immigration as illiberal — or at least not as a winning proposition among union households and the working poor. Democratic constituencies opposed importing inexpensive foreign labor for corporate bosses. Welfare rights groups believed that massive illegal immigration would swamp social services and curtail government help to the American poor of the barrios and the inner city. So, what happened? Again, numbers.
Hundreds of thousands of undocumented immigrants have flocked into the United States over the last decade. In addition, the Obama administration discouraged the melting-pot assimilationist model of integrating only legal immigrants. Salad-bowl multiculturalism, growing tribalism and large numbers of unassimilated immigrants added up to politically advantageous demography for Democrats in the long run. In contrast, a wall would likely reduce illegal immigration dramatically and with it future Democratic constituents. Legal, meritocratic, measured and diverse immigration in its place would likely end up being politically neutral. And without fresh waves of undocumented immigrants from south of the border, identity politics would wane. A wall also would radically change the optics of illegal immigration. Currently, in unsecured border areas, armed border patrol guards sometimes stand behind barbed wire. Without a wall, they are forced to rely on dogs and tear gas when rushed by would-be border crossers. They are easy targets for stone-throwers on the Mexican side of the border. A high wall would end that. Border guards would be mostly invisible from the Mexican side of the wall. Barbed wire, dogs and tear gas astride the border — the ingredients for media sensationalism — would be unnecessary. Instead, footage of would-be border crossers trying to climb 30-foot walls would emphasize the degree to which some are callously breaking the law. Such imagery would remind the world that undocumented immigrants are not always noble victims but often selfish young adult males who have little regard for the millions of aspiring immigrants who wait patiently in line and follow the rules to enter the United States lawfully. More importantly, thousands of undocumented immigrants cross miles of dangerous, unguarded borderlands each year to walk for days in the desert. Often, they fall prey to dangers ranging from cartel gangs to dehydration.
Usually, the United States is somehow blamed for their plight, even though a few years ago the Mexican government issued a comic book with instructions on how citizens could most effectively break U.S. law and cross the border. The wall would make illegal crossings almost impossible, saving lives. Latin American governments and Democratic operatives assume that lax border enforcement facilitates the outflow of billions of dollars in remittances sent south of the border and helps flip red states blue. All prior efforts to ensure border security — sanctions against employers, threats to cut off foreign aid to Mexico and Central America, and talk of tamper-proof identity cards — have failed. Instead, amnesties, expanded entitlements and hundreds of sanctuary jurisdictions offer incentives for waves of undocumented immigrants. The reason a secure border wall has not been — and may not be — built is not apprehension that it would not work, but rather real fear that it would work only too well.Victor Davis Hanson

New House speaker Nancy Pelosi reportedly spent the holidays at the Fairmont Orchid on Kona, contemplating future climate-change legislation and still adamant in opposing the supposed vanity border wall. But in a very different real world from the Fairmont Orchid or Pacific Heights, other people each day deal with the results of open borders and sanctuary jurisdictions. The results are often nihilistic and horrific. (…) These incidents, and less violent ones like them, are not all that rare in rural California. The narratives are tragically similar and hinge on our society’s assumptions of tolerance and its belief that entering and residing illegally in the United States are not really crimes. Fraudulent identification and fake names are not really felonious behaviors. Driving under the influence is no reason for deportation — all crimes that can ruin careers and have expensive consequences for citizens. Statisticians argue that immigrants commit fewer crimes than the native born, but never quite calibrate illegal immigrants into the equation (in part because no one has any idea who, where, or how many they are, as estimates range from 11 to 20 million) or note that second-generation native-born children of immigrants have much higher violent-crime rates than do their immigrant parents, and in circular fashion add to the general pool of violent Americans who then are used to contrast immigrants as less violent. It is immoral to undermine, in Confederate fashion, federal law, and to normalize exemptions that allow felons such as Garcia and Arriaga to wreak havoc on the innocent and defenseless. Too often the architects of open borders and sanctuary jurisdictions are not on the front lines where the vulnerable suffer the all-too-real consequences of distant others, who can rely on their own far greater safety nets when their grand abstractions become all too concrete.
And, finally, we forget that so often the victims of illegal aliens are (in California where one in four residents was not born in the U.S.) legal immigrants like officer Singh, and members of the Hispanic community like the late Mr. Soto. Polls show that support for open borders is not popular and most Americans want an end to illegal immigration and catch and release, as well as stricter enforcement of current federal immigration laws. Victor Davis Hanson

Why is our age of walls also the most open age in humanity’s history? Why is the march of globalisation now being kept company by re-activated nationalisms? Samanth Subramanian

I learnt early on, in Bosnia, to understand the terrain in order to understand the story. There are two things, even in conflict zones, that some journalists often don’t do. One is understanding religion, I mean really understanding it. When all this started [the Arab uprisings] there was a whole generation of journalists who, because they come from a secular society, thought religion was not a major factor. I think they found it hard to believe that these people actually do believe what they say, whereas I always knew to take them at their word. They believe this stuff, which is their right. I think some people just couldn’t bring themselves to believe people believe this in the 21st century. The other one is terrain. I was also influenced (and I acknowledge it) by Robert Kaplan’s Revenge of Geography. So I took all these ideas that have been swirling around for so long and packed in work to write. Then we start talking about identity, about national symbols and the emotional buttons they press [see Worth Dying For: The Power and Politics of Flags, 2016]. In all my travels, I would always ask, “Who is that statue of? Why is your flag the colour it is?”. You would learn the emotional buttons that are pushed in populations. I do see my latest three books as a trilogy because it all comes together. This last one I wanted to call Us and Them, but that’s been done, so Divided is the title. It’s realistic but depressing stuff, but I do think it’s a fair reflection of where we are, and I think slowly dawning on the Western peoples is the realisation that advancement is not a given. Progression is not a given. (…) It is somewhat deterministic in that yes, these things do partially determine what happens, but that’s the key word, partially. I’ve had a great response to it, half a million sales, and some very nice reviews. Where it has been criticised is that it is “too deterministic”.
I think that ignores the six or seven times I say in the book, ‘this is a determining factor, not the determining factor’. There are obviously ideas, technology, politics, great leaders. All this stuff goes into making up [international politics], but the one that is overlooked is [physical] geography. That is precisely because intellectuals have a problem with anything deterministic, because it is something beyond their control. The new book features a lot on borders. The ‘Open Borders’ theory is right in its idea of oneness, which I happen to agree with, we are one. However, for a whole bunch of reasons, including geography, we are divided from each other. That includes rivers, oceans and mountains, which have divided us from each other and made us different from each other, to the extent I would argue that I cannot see, in the foreseeable future, us actually being one. Nor do I think dropping borders would make us one people; I think it would make us kill even more of each other than we already do. I’m reasonably utilitarian on this – the fewest people get killed, that’s good with me. I think their way [‘open border’ scholars] would get a lot more people killed than there already are, and there’s a lot. It’s a utopian idea that I like the idea of, but I’m not convinced it works. These divisions appear to be endemic. This might be a bit trite (and an academic would find it trite) but go up to someone you know and like, and who knows and likes you, and put your nose closer and closer [to their face]. At a certain point, that person is uncomfortable with it, with you in their space. That to me is a starting point, extrapolate from that. We need space, and self-identifying groups require space. Religions have tried to make us one, but it hasn’t quite worked; maybe it’s impossible precisely because we’re human. I suppose I am [pro-borders]. I dislike borders, however I think the way humanity is, and always has been, structured, they are inevitable.
If you try to get rid of that you’re going to open up a horrible can of worms. This is very unfashionable: I think the nation-state is probably the best unit for organising peoples. Without nation-states, of course there wouldn’t be interstate wars, but we’d be back to fiefdoms before you know it. (…) Divided is (…) about walls and divisions and fences going up all over the place. There’s a chapter on the Indian subcontinent, the walls, barriers and internal divisions in Pakistan, Bangladesh, Myanmar and India. Then a chapter on the USA, starting on walls and moving to racial divisions. Chapters on Europe, Israel, the Middle East, the UK – Brexit is part of it. That little strip of water called the Channel I think has a huge physical and psychological effect on the British. Without it, we wouldn’t have voted for Brexit, for two reasons. One, psychologically, we would feel less distinct, and secondly because of that our history would be very different: we might well have suffered the shock and trauma of the Second World War to the extent that continental Europe did. I read something just yesterday which struck a chord; the British experience of Hitler was such that we could make him a figure of fun, but the Russian experience was such that they don’t do that, it’s too traumatic. I’m interested in something that I completely disagree with: the open borders movement, which in academia is a ‘thing’. I’ve got a problem with ‘no borders’. There’s a very nice guy who helped me on the book called Professor Reece Jones from the University of Hawaii (author of Violent Borders, 2016). He gave me a few quotes for the book and I really like him, but some of his colleagues in this spectrum argue completely to bring borders down, almost overnight. They don’t factor in what will happen to the politics of the countries. We’ve seen with the movement we’ve had already, what’s happening to the politics of Europe, Austria as an example, Germany, France, Sweden, the Netherlands. 
Magnify that several times if you have no borders – it’s a utopian view.Tim Marshall

This is a mammoth subject and not just because Donald Trump built much of his success in the US electoral college (if not the US popular vote) on claiming at every opportunity that he would “build that wall”. So Marshall explores how different societies have responded to the changes wrought by our globalised world and how they rise to the challenge of maintaining national identity. Trump’s America, he argues, is “the only major power that can absorb the potential losses of withdrawing from globalisation without seriously endangering itself in the short term”. But Trump’s border wall is a rhetorical device that plays on a fear of other peoples. It is unlikely ever to be built, not least because about two-thirds of southern borderland property and land is in private ownership, but it reassures his core voters. Next Marshall turns his attentions to China, home of the Great Wall, where the state has responded to global upheaval by restricting its citizens’ access to the internet. This is his cue to explore cyber security and “the Great Firewall of China”. As Marshall argues, “internet censorship does restrict China’s economic potential” but that is a price that the Chinese Communist Party is willing to pay to maintain both its power and national unity. Subsequent chapters examine Israel and Palestine, where walls are a necessity but they are “containing the violence – for now”. In the wider Middle East, Marshall argues that “ironically, another wall is needed… between religion and politics” if the region is to escape its troubled past. The Indian subcontinent contains the longest border fence in the world, which runs for 2,500 miles between India and Bangladesh. But the area is still struggling to cope with mass migration as well as climate change. Seven out of 10 of the world’s most unequal countries are to be found in Africa.
Marshall focuses on the legacy of colonialism and influences of globalisation which, he argues, “has lifted hundreds of millions of people out of poverty” while widening the gap “between the rich and not rich”. The final two chapters focus on Europe and the UK with Marshall exploring “the new realities of mass immigration and the moral necessities to take in refugees”. He shows how population pressures have led to the rise of nationalism and the Far-Right. Nonetheless he argues that we still need our nation states because “communities need to be bound together in shared experience”. Walls, Marshall concedes, have their place and we need not necessarily “decry the trend of wall-building… they can also provide temporary and partial alleviation of problems, even as countries work towards more lasting solutions, especially in areas of conflict”.Huston Gilmore

According to Tim Marshall, the fall of the Berlin Wall was the exception rather than the rule. ‘We are seeing walls being built along borders everywhere,’ he writes. The numbers support his argument. Fortified borders have increased from almost zero at the end of WWII to around 70 today, with the vast majority having been built since 2000. The divides continue to steer geopolitics and national identities, and countries appear to be goading each other into more wall building. ‘These are the fault lines that will shape our world for years to come,’ says Marshall. In that sense, President Trump’s campaign border wall seems less a shocking new policy than a repeating pattern. As one of the most high-profile border issues, Marshall devotes an early chapter to the Mexico/US divide and uses it to lay the foundations for what makes hard borders persuasive in popular politics – even if they are ineffective at preventing illegal immigration. Marshall puts it bluntly: ‘they make people who want something to be done feel that something is being done… Ultimately, very few barriers are impenetrable. People are resourceful, and those desperate enough will find a way around.’ Marshall takes us on a tour of some of the most relevant border divides in the world: India’s borders with Pakistan and Bangladesh, the Israel and Palestine border in the West Bank, the new borders across the Middle East and those running across Europe. (…) Where Divided is in its most revelatory, however, is where it looks at borders on an internal level, such as gated communities in South Africa and the US. Here Marshall shows how levels of exclusivity can spiral inward from the international to the regional to the local. ‘The new model of urban and suburban living is designed to be exclusionary: you can only get to the town square if you can get through the security surrounding the town. 
This lack of interaction may shrink the sense of civic engagement, encourage group-think among those on the inside and lead to a psychological division, with poorer people left feeling like “outsiders”, as though they have been walled off.’ In China, he argues, it is the entire population who are excluded. The ‘Great Firewall’ of China keeps the country’s 700 million users (roughly one-quarter of the world’s online population) excluded from the foreign media; meanwhile, internal firewalls and censorship keep the users from connecting too much with each other. ‘The party particularly fears social media being used to organise like-minded groups who might then gather in public places to demonstrate, which in turn could lead to rioting,’ he writes. Laura Cole

Building a wall makes Donald Trump the rule, not the exception, among world leaders

Tim Marshall

The Spectator

30 June 2018

What kind of a president would build a wall to keep out families dreaming of a better life? It’s a question that has been asked the world over, especially after the outrage last week over migrant children at the American border. Donald Trump’s argument, one which his supporters agree with, is that the need to split parents from children at the border strengthens his case for a hardline immigration policy. Failure to patrol the border, he says, encourages tens of thousands to cross it illegally — with heartbreaking results. His opponents think he is guilty, and that his wall is a symbol of America closing in on itself…

In fact, building a wall would make Trump the norm, not the exception. Those who denounced as crazy Trump’s campaign promise to build a wall did not appreciate how popular such a policy would be, nor how common. Nation states have started to matter again, and people care about borders — not just on the Texan side of the Rio Grande. Today more than 65 countries now wall or fence themselves off from their neighbours — a third of all nation states. And this is no historical legacy. Of all the border walls and fences constructed since the second world war, more than half have been built this century.

It wasn’t supposed to be this way. Thirty years ago a wall came down, ushering in what looked like a new era of openness. In 1987 Ronald Reagan went to Berlin and called out to his opposite number in the Soviet Union, ‘Mr. Gorbachev — tear down this wall!’ Two years later it fell. In those heady times some intellectuals predicted an end of history. History had other ideas.

This does not mean Hillary Clinton was wrong when in 2012 she predicted that in the 21st century ‘nations will be divided not between east and west, or along religious lines, but between open and closed societies’. Still, so far she is not right either.

At the turn of the century migration sped up and that began to tear down hopes of a borderless world. We’ve grown used to the new barriers that European nations have erected — between Greece and Turkey, for instance, or Serbia and Hungary, or Slovenia and Croatia — but many more are being built. To the east, Estonia, Latvia and Lithuania are working on defensive fortifications on their borders with Russia. These measures are more to do with a perceived Russian military threat than with mass migration, but they are part of the overall trend — reinforcing the physical boundaries of the nation state — and contribute to the hard border which runs from the Baltic to the Black Sea.

Saudi Arabia has fenced off its border with Iraq. Turkey has constructed a 700-mile concrete wall to separate it from Syria. The Iranian/Pakistan border, all 435 miles of it, is now fenced. In Central Asia, Uzbekistan, despite being landlocked, has closed itself off from its five neighbours.

On the story goes, through the barriers separating Brunei and Malaysia, Pakistan and India, India and Bangladesh and so on around the world. The India/Bangladesh fence is instructive in showing us how the era of wall-building is not just about people in the developing world moving to the industrialised nations. The barrier runs the entire length of the 2,500-mile frontier and is New Delhi’s response to 15 million Bangladeshis moving into the Indian border states this century. This has led to ethnic clashes and many deaths.

Wherever this mass movement of peoples happens at pace it seems to assist a retreat into identity. Almost all recent election results in Europe bear this out. Concurrent is the rise of extremes.

Following the Dutch and French elections in 2017, there was an assumption in the media that Europe had halted the rise of the right. This was a complacent attitude at odds with the evidence. In the Netherlands, Geert Wilders increased both vote share and parliamentary seats. The French election in particular was used to show that President Emmanuel Macron’s ‘open society’ model was triumphing against the ‘closed society’ model of his opponent Marine Le Pen. However, what Le Pen achieved was to almost double the far-right vote to 34 per cent, compared with when her father (Jean-Marie) stood against President Jacques Chirac in 2002. He won 5.25 million votes; last year 10.6 million voters supported the Front National. Austria’s choice of president, the entry of the AfD into the Bundestag, Hungary’s right-wing landslide and Italy’s new government all point to a rightward direction of travel in European politics. In all cases, concern about mass migration is among the driving forces. Voters are worried and tend to support parties which voice their concerns.

This is true of Trump’s presidential victory and public support for his wall. To an extent we are dealing with psychology here. It is not true to say that ‘walls don’t work’ — some do, some don’t — but they do give the psychological impression, via their physicality, that ‘something is being done’. They address concerns about migrant invasions in a way that rhetoric about ‘getting tough’ on immigration does not. Hence, despite the evidence, many Americans appear to believe still that the wall with Mexico will be built and that it will work. This belief ignores the fact that there is a treaty between the two countries in which both agree they will not build on the Rio Grande flood plain, and that despite (somewhat half-hearted) efforts by the President, Congress has not agreed to fund his plan.

The headlines afforded Trump’s ‘anti-immigrant’ stance detract from the bigger picture. It is easier to have the big bad wolf to huff and puff against than it is to see him as part of a global phenomenon. Concentrating on the Donald’s evils allows the Mexican government to quietly get on with deporting far more Central Americans from its country each year than does the United States. Granted, the US assists Mexico in this, but last year Mexico deported 165,000 Central Americans, while the US expelled 75,000. The tales of hardship crossings, exploitation and human rights violations on the almost ignored Mexican/Guatemala border are, if anything, more harrowing than those on the border 900 miles to the north.

The walls and fences built this century mirror the divides which have also grown in political discourse and especially on social media. A decade ago Mark Zuckerberg believed social media would unite us all. He now says ‘the world is today more divided than I would have expected for the level of openness and connection that we have’. In some ways he was right — we are more connected and there are many positive aspects to this, but what surprised him is how many of us use that connectedness to abuse the ‘other’. The internet has allowed us to divide into social media tribes howling into a void, an echo chamber or across the divides at each other. This level of abuse has crawled out of the worldwide web and into worldwide politics — Mr Trump being the best-known beneficiary.

The Chinese led the way in great wall-building and are becoming world leaders in using the internet as a wall. We all know of the ‘great firewall of China’, which they call the ‘golden shield’. This is intended to block the outside world from infecting the Middle Kingdom with harmful ideas such as democracy. Less well known are the internal firewalls within China.

Beijing likes to ensure that people in the restless province of Xinjiang, a Turkic-speaking Muslim state, cannot easily converse with those in Tibet. Both have independence movements, and allowing them to form cybernetworks might be detrimental to the unity of the People’s Republic, so they have extra firewalls around them. China is probably the world’s leader in using new technology to build virtual walls. The Russians are the leaders in working inside other countries’ social media to sow division and use disinformation to muddy debate. It used to be argued that the internet would undermine the nation state as citizens of the world simply bypassed governments in a free-flow exchange of ideas and information. Again, this may come true, but it might also be that as the years pass more legislation will be enacted allowing the state to control the net.

We seem to have always divided ourselves one way or another. From the moment we stopped being hunter-gatherers about 12,000 years ago, we began to build walls. We ploughed the fields and didn’t scatter. Instead we waited around for the results. More and more of us needed to build barriers: walls and roofs to house ourselves and our livestock, fences to mark our territory, fortresses to retreat to if the territory was overrun. The age of walls was upon us and has gripped our imagination ever since. We still tell stories of the walls of Troy, Constantinople, the Inca in Peru and many others.

The new wall-building is driven by recent events. The cry ‘tear down this wall’ is losing the argument against ‘fortress mentality’. It is struggling to be heard, unable to compete with the frightening heights of mass migration, the backlash against globalisation, the resurgence of nationalism, the collapse of communism and the 2008 financial crash.

On the other hand, our ability to cooperate, to think, and to build, also gives us the capacity to fill the spaces between the walls with hope and to build bridges.

However, first must come an acceptance of the situation, and a very open and honest discussion of how we got here. Key to that is the debate on migration and identity and that requires a reaching out across the divides on all sides.

Tim Marshall is the author of Divided: Why We’re Living In An Age Of Walls, Elliott and Thompson £16.99.

While Barack Obama once claimed that we are living in ‘the best of times’, many across the world would beg to differ. A perceptive new book unravels the consequences of this pessimistic mood

Kapil Komireddi

The National

March 25, 2018

“If you had to choose a moment in history to be born,” Barack Obama told an audience in Athens during one of his final overseas visits as president of the United States in November 2016, “you’d choose now”. Obama’s optimism was out of step with his surroundings. Riot police were busy restraining thousands of Greek protesters as Obama proclaimed confidently that the world had never “been wealthier, better educated, healthier, less violent than it is today”.

It is a message amplified by the Harvard psychologist Steven Pinker in his books The Better Angels of Our Nature (2015) and Enlightenment Now (2018), and there is now a loose consortium of influential academics, pundits and businesspeople known as “New Optimists” dedicated to promoting the proposition that we are living in the best of times. If they are all correct, how do we explain what looks and feels like the world’s collective descent into chaos over the past decade-and-a-half?

The optimists overlook the experience of a substantial mass of humanity for whom the world – even after being purged of the ills of the past centuries and endowed with modern technology – remains a forbidding place. The optimists’ exaltation of modernity is accompanied by the myth that modernity has created benefits for all. Consider, for instance, the frequently repeated claim by the optimists that we live in the most open age in human history: it presupposes that all humans have access to this open world, when only a relatively small portion do.

The majority are “more divided than ever”, as Tim Marshall, who is a contributor to The National, notes in his new book. The pessimism that leaps from the pages of Divided shouldn’t be mistaken for the author’s attitude. It is, rather, the mood of the world as it stands. In eight chapters on China, the United States, Israel and Palestine, West Asia, India, Africa, Europe and the United Kingdom, Marshall examines the walls – physical, religious, ethnic, psychological – that fence people off or, at times, pen them in.

Everywhere there is evidence of people retreating into narrow identities. Marshall, unlike the western commentators who rushed to pronounce this the Chinese century, is not seduced by the glitz of Shanghai’s skyscrapers. His eye is trained on the human cost of China’s progress: the disparities generated by it, the exodus from village to city, the loss of individual dignity. Beijing is altering the demographics of Buddhist Tibet, which it violently subsumed in the 1950s, and Muslim Xinjiang by flooding them with Han Chinese. It is in Beijing’s ethnic engineering that Marshall espies “the greatest threat to the prospects of long-term prosperity and unity in China”.

Looking at India, Marshall contends that the subcontinent has not fully recovered from the invasions of the past millennium. The people on the peripheries continue to be haunted by the division of India to create Pakistan and the subsequent partition of Pakistan to birth Bangladesh. Bengalis in India resent the influx of migrants from Bangladesh because they are mostly Muslim. India has erected state-of-the-art fences on its eastern border. But as vast swathes of Bangladesh are poised to sink into the waters as sea levels rise, where will the climate refugees of the future go?

Marshall’s chapter on the European Union is the most powerful. Ever since Britain voted to leave Europe, extraordinary claims have been made for the EU. But if the EU is the ne plus ultra of political co-operation, why did so many people choose to turn away from it? “The EU,” Marshall writes, “has never really succeeded in replacing the nation state in the hearts of most Europeans.”

The EU hierarchs’ revulsion for nationalism doesn’t negate the importance many attach to national identity. As Marshall warns in his chapter on Britain, to “dismiss people who enjoyed their relatively homogeneous cultures and who are now unsure of their place in the world merely drives them into the arms of those who would exploit their anxieties – the real bigots”.

By magnifying religion and culture as the causes of division, Marshall exposes himself to the charge of advancing a deterministic view of the world. Yet this is where Divided draws its strength from. As Raymond Aron said in response to French intellectuals who sought to blunt Algerian demands for independence with talk of progress under French rule, “it is a denial of the experience of our century to suppose that men will sacrifice their passions to their interests”. Marshall can’t be faulted for identifying the sources of those passions. He has written frankly about the world. We deny this at our own peril.

According to Tim Marshall, the fall of the Berlin Wall was the exception rather than the rule. ‘We are seeing walls being built along borders everywhere,’ he writes.

The numbers support his argument. Fortified borders have increased from almost zero at the end of WWII to around 70 today, with the vast majority having been built since 2000. The divides continue to steer geopolitics and national identities, and countries appear to be goading each other into more wall building. ‘These are the fault lines that will shape our world for years to come,’ says Marshall.

In that sense, President Trump’s campaign border wall seems less a shocking new policy than a repeating pattern. The Mexico/US divide is one of the most high-profile border issues, and Marshall devotes an early chapter to it, using it to lay the foundations for what makes hard borders persuasive in popular politics – even if they are ineffective at preventing illegal immigration. Marshall puts it bluntly: ‘they make people who want something to be done feel that something is being done… Ultimately, very few barriers are impenetrable. People are resourceful, and those desperate enough will find a way around.’

Marshall takes us on a tour of some of the most relevant border divides in the world: India’s borders with Pakistan and Bangladesh, the Israel and Palestine border in the West Bank, the new borders across the Middle East and those running across Europe. The effectiveness of these barriers is explored, but more important to the author is the desire to divide – ‘us and them’ thinking – and where it gets us in the 21st century.

Readers of Prisoners of Geography, Marshall’s previous work, will be familiar with his global sweep explained through history and geography. Occasionally, his strokes are too broad. For example, only a single chapter is given to the whole continent of Africa, which suffers for it.

Where Divided is at its most revelatory, however, is where it looks at borders on an internal level, such as gated communities in South Africa and the US. Here Marshall shows how levels of exclusivity can spiral inward from the international to the regional to the local. ‘The new model of urban and suburban living is designed to be exclusionary: you can only get to the town square if you can get through the security surrounding the town. This lack of interaction may shrink the sense of civic engagement, encourage group-think among those on the inside and lead to a psychological division, with poorer people left feeling like “outsiders”, as though they have been walled off.’

In China, he argues, it is the entire population who are excluded. The ‘Great Firewall’ of China keeps the country’s 700 million users (roughly one-quarter of the world’s online population) excluded from the foreign media; meanwhile, internal firewalls and censorship keep the users from connecting too much with each other. ‘The party particularly fears social media being used to organise like-minded groups who might then gather in public places to demonstrate, which in turn could lead to rioting,’ he writes.

Divided also shines a light on the future of borders. ‘The technology becomes more sophisticated each year,’ Marshall warns. ‘The barriers along the majority of the thousands of miles of frontiers are now being built higher and wider and are becoming more technologically sophisticated… such barriers don’t stop people from attempting to cross anyway – many don’t have any other choice but to try – and increasingly violent policing of borders can lead to terrible human consequences.’ With border deaths at the highest numbers in history, this raises the question: what will more efficient borders – utilising drones, motion sensors and higher walls – mean to the people near them?

Answers are where Divided leaves us hanging. Perhaps this is because of the global scope of the book – there is probably no one-size-fits-all solution to the wall-building spree – but also because of the tricky nature of barriers themselves. Walls can prevent violence, but they can cause it too. Having heard, however, about some of the most entrenched borders in the world, the reader has a natural appetite for solutions to remove them, or at least to stem the rate of barriers rising elsewhere – something Marshall is, surprisingly, on the fence about.

We live in a time of openness, globalisation — and walls. A study of the world’s fraught borderlands seeks to explain why

Of all the walls ever raised, my favourite remains the Indian Salt Hedge, built not of stone — or indeed of salt — but of the thorniest vegetation India could provide. The British, always avid about their gardening, tended to the hedge from the 1840s to 1879, using it to cramp the smuggling of untaxed salt. At its most prosperous, the hedge was 12 feet high and 14 feet thick, jagging for 2,500 miles from India’s left hipbone to right shoulder. It was, like all such barriers, a geopolitical form of Freudian repression. The salt tax was both unfair and unwise, and the British had little moral right to impose it, but they ignored these troublesome truths by walling them away.

Time sheared down the Indian Salt Hedge. Most of the walls we’ve built have crumbled, yet we keep putting up new ones, as if panicked that the planet will run out. By early February, the Berlin Wall had been down longer than it was up, and Europe might have commended itself if so many of its countries hadn’t been busily fencing each other off. We inhabit an age of walls, the journalist Tim Marshall observes in Divided. Half of all border barriers erected around the world since 1945 have appeared in this century. “Within a few years, the European nations could have more miles of walls, fences and barriers on their borders than there were at the height of the Cold War.” We seem to loathe each other more than at any point in living memory — a rebuke both to the evangelists for unfettered globalisation and to the techno-optimists who find so much to cheer in our time.

Any reader of Prisoners of Geography, Marshall’s 2015 bestseller, will recognise his approach here. He first lights upon an indisputable thesis: that the destiny of nations is hewn by their geography, or that humans are dividing themselves from each other. Then he tours the map with that thesis, describing how it applies in as many countries as possible. The tour in Divided is, unfortunately, figurative. Marshall has reported from dozens of countries, often when they were passing through moments of howling drama, but few of those tales filter in. Instead, the case studies seem to draw more on dry policy journals and faraway newspapers than his own first-hand observation.

Marshall opens each of his eight geographically demarcated chapters by discussing a barrier: the Great Wall of China; the Moroccan Wall, a berm of sand slaloming through Western Sahara; the double-layered fence separating India from Bangladesh; the slices of concrete between Israel and the West Bank. These barriers are only physical manifestations of deeper disunities, though, and our world is rife with these.

In China, invisible fissures set apart rural people from urban, the Han from other ethnicities, and older generations from younger. These are new tears in the fabric, wrought by the way China has changed over the past half-century. Elsewhere, Marshall subscribes to the much-derided notion of ancient hatreds, animosities that have boiled forever. The theory suggests that people — usually in the developing world — cleave to a one-dimensional identity, defending it with atavistic violence. Marshall decides that in Africa, the faultlines are tribal, and in the Middle East, they’re religious. He will yield only a minor role for poverty and poor education: “Neither factor can be ignored; however, too much importance is attached to them.” He limits the pernicious effects of colonialism merely to thoughtlessly drawn borders, a final act of haste before the European powers vacated the premises.

The most-deliberated wall over the past year is one that doesn’t yet exist. Donald Trump’s proposed blockade of the US-Mexico border is a ruse, a kneading of white anxieties about the economic and demographic transitions eddying around the country. The older rupture of racism has yet to be sealed. “In this febrile atmosphere Trump’s rhetoric about the wall plays on historical and new divisions within the nation, speaking of a narrow definition of ‘American’,” Marshall writes.

A giant paradox undergirds Marshall’s book, but he never quite looks it in the eye. Why is our age of walls also the most open age in humanity’s history? Why is the march of globalisation now being kept company by re-activated nationalisms? Divided exhibits a deterministic streak that feels wearying and shallow in the face of such questions. The world being what it is, states have no choice but to act in certain ways. To draw borders and defend them is simply “human nature”, he writes in his conclusion. That must mean that every age — and not just this one — is an age of walls. It must also mean, unhappily, that as long as we’re human, this is what we will be: wall-builders, fence-erectors, architects of schisms between ourselves and the rest of our species.

Divided: Why We’re Living in an Age of Walls, by Tim Marshall, Elliott & Thompson, RRP £16.99, 272 pages

Samanth Subramanian is the author of ‘This Divided Island: Stories from the Sri Lankan War’ (Atlantic)

See also:

Divided review: A readable primer on the world’s biggest problems
The world is divided by more physical walls than at any time since the Second World War. And, according to this informative and timely account of division in the 21st century, written by the author of the bestselling Prisoners Of Geography, these “physical divisions are mirrored by those in the mind”.
Huston Gilmore
The Express
Mar 9, 2018

This is a mammoth subject, and not just because Donald Trump built much of his success in the US electoral college (if not the US popular vote) by claiming at every opportunity that he would “build that wall”.

So Marshall explores how different societies have responded to the changes wrought by our globalised world and how they rise to the challenge of maintaining national identity. Trump’s America, he argues, is “the only major power that can absorb the potential losses of withdrawing from globalisation without seriously endangering itself in the short term”. But Trump’s border wall is a rhetorical device that plays on a fear of other peoples. It is unlikely ever to be built, not least because about two-thirds of southern borderland property and land is in private ownership, but it reassures his core voters.

Next Marshall turns his attentions to China, home of the Great Wall, where the state has responded to global upheaval by restricting its citizens’ access to the internet. This is his cue to explore cyber security and “the Great Firewall of China”. As Marshall argues, “internet censorship does restrict China’s economic potential”, but that is a price the Chinese Communist Party is willing to pay to maintain both its power and national unity.

Subsequent chapters examine Israel and Palestine, where walls are a necessity but are only “containing the violence – for now”. In the wider Middle East, Marshall argues that “ironically, another wall is needed… between religion and politics” if the region is to escape its troubled past. The Indian subcontinent contains the longest border fence in the world, which runs for 2,500 miles between India and Bangladesh.

But the area is still struggling to cope with mass migration as well as climate change.

Seven out of 10 of the world’s most unequal countries are to be found in Africa. Marshall focuses on the legacy of colonialism and influences of globalisation which, he argues, “has lifted hundreds of millions of people out of poverty” while widening the gap “between the rich and not rich”.

The final two chapters focus on Europe and the UK with Marshall exploring “the new realities of mass immigration and the moral necessities to take in refugees”.

He shows how population pressures have led to the rise of nationalism and the Far-Right. Nonetheless he argues that we still need our nation states because “communities need to be bound together in shared experience”.

Walls, Marshall concedes, have their place and we need not necessarily “decry the trend of wall-building… they can also provide temporary and partial alleviation of problems, even as countries work towards more lasting solutions, especially in areas of conflict”.

The book closes with suggested solutions to the world’s problems, including “a 21st-century Marshall Plan for the developing world to harness the riches of the G20 group of nations in a global redistribution of wealth”.

Some of these ideas are intriguing but Marshall barely gives them room to breathe and his conclusion feels rushed.

However he has delivered a readable primer to many of the biggest problems facing the world.

Tim Marshall is the author of Prisoners of Geography (2015), a New York Times and Sunday Times bestseller. Originally from Yorkshire, Marshall started his career in journalism in London at LBC and the BBC, and then spent three years as IRN’s correspondent in Paris. Marshall then joined Sky News, as a Middle East correspondent based in Jerusalem and later as Diplomatic Editor, covering twelve wars and three US presidential elections. He has written for several national newspapers, including the Times, the Guardian, the Daily Telegraph and the Sunday Times, and frequently appears as a guest commentator on global events for the BBC and Sky News. In 2016, Marshall published Worth Dying For: The Power and Politics of Flags. The next title in Marshall’s geographical trilogy, Divided: Why We’re Living in an Age of Walls, is due to be released in March 2018.

As you mentioned in a recent talk organised by the Diplomacy Society at King’s College London, you left school at 16 and went straight into the world of work. How did you start your career as a journalist?

Tim Marshall: “I’d wanted to be a journalist since I was about 11, but it just wasn’t on the radar. I left school at 16, I was a painter and decorator. I always read a lot, and had always been interested in history, [but] it just wasn’t on the cards. So I joined up and when I was in the Forces, I went to night school and got myself a couple of O-levels. On the strength of two O-levels, I then got into a college of higher education and did a degree in American Politics and History. And then I was unemployed in London, and took a French conversation course at night school in the Ken Livingstone era (when things were free), and I met the newsdesk assistant at IRN and LBC. I gave her a very badly typed CV, probably full of mistakes, which would have gone in the bin if I’d sent it in the post. But because it was by hand, to the woman in charge of the research department at LBC, she gave me a chance. She got me in for an interview and gave me three days’ work which turned into 30 years.”

Did your military experience affect how you reported from the war zone? What is it like to report on conflicts at the frontline?

TM: “I was a telegraphist in the RAF, a radio operator. I thought, ‘I’ve seen that in the films, I’ll do that’. And I did for four years, at Strike Command and later in what was then West Germany. It got me out of my environment. It definitely gave me a discipline I didn’t have – if you want to get something done, do it. It gave me an understanding of military life, which became very useful down the line when I had to work a lot in military situations in Northern Ireland, Gaza/West Bank, Iraq, Afghanistan, Croatia, Bosnia, Kosovo, Macedonia, Libya, Tunisia, Syria…

I don’t really tell war stories because I would go for a couple of weeks and then go home. But people there would live it. However, there were several extremely close situations, one of which, following the death of a colleague, persuaded me I was going to pack it in and not do it any more. I thought, ‘I’m running out of luck here’. I know about twelve colleagues that have been killed over the years, and the last one was a friend, Micky Deane, and shortly after that (he was killed in Cairo), I had a narrow escape in Syria, and I thought, ‘I’ve had enough of this’.”

What inspired you to write Prisoners of Geography, and to produce a trilogy of ‘popular geography’ books?

TM: “I learnt early on, in Bosnia, to understand the terrain in order to understand the story. There are two things, often, even in conflict zones, that some journalists don’t do. One is understanding religion – I mean really understanding it. When all this started [the Arab uprisings] there was a whole generation of journalists who, because they come from a secular society, thought religion was not a major factor. I think they found it hard to believe that these people actually do believe what they say, whereas I always knew to take them at their word. They believe this stuff, which is their right. I think some people just couldn’t bring themselves to believe people believe this in the 21st century. The other one is terrain. I was also influenced (and I acknowledge it) by Robert Kaplan’s Revenge of Geography. So I took all these ideas that had been swirling around for so long and packed in work to write.

Then we start talking about identity, about national symbols and the emotional buttons they press [see Worth Dying For: The Power and Politics of Flags, 2016]. In all my travels, I would always ask, “Who is that statue of? Why is your flag the colour it is?”. You would learn the emotional buttons that are pushed in populations. I do see my latest three books as a trilogy because it all comes together. This last one I wanted to call Us and Them, but that’s been done, so Divided is the title. It’s realistic but depressing stuff, but I do think it’s a fair reflection of where we are, and I think slowly dawning on the Western peoples is the realisation that advancement is not a given. Progression is not a given.”

The theory behind Prisoners of Geography is deterministic, would you agree?

TM: “It is somewhat deterministic in that yes, these things do partially determine what happens, but that’s the key word: partially. I’ve had a great response to it, half a million sales, and some very nice reviews. Where it has been criticised is that it is “too deterministic”. I think that ignores the six or seven times I say in the book, ‘this is a determining factor, not the determining factor’. There are obviously ideas, technology, politics, great leaders. All this stuff goes into making up [international politics], but the one that is overlooked is [physical] geography. That is precisely because intellectuals have a problem with anything deterministic, because it is something beyond their control.

The new book features a lot on borders. The ‘Open Borders’ theory is right in its idea of oneness, which I happen to agree with, we are one. However, for a whole bunch of reasons, including geography, we are divided from each other. That includes rivers, oceans and mountains, which have divided us from each other and made us different from each other, to the extent I would argue that I cannot see, in the foreseeable future us actually being one. Nor do I think dropping borders would make us one people; I think it would make us kill even more of each other than we already do. I’m reasonably utilitarian on this – the fewest people get killed, that’s good with me. I think their way [‘open border’ scholars] would get a lot more people killed than there already are, and there’s a lot. It’s a utopian idea that I like the idea of, but I’m not convinced it works.

These divisions appear to be endemic. This might be a bit trite (and an academic would find it trite), but go up to someone you know and like, and who knows and likes you, and put your nose closer and closer [to their face]. At a certain point, that person is uncomfortable with it, with you in their space. That to me is a starting point; extrapolate from that. We need space, and self-identifying groups require space. Religions have tried to make us one, but it hasn’t quite worked; maybe it’s impossible precisely because we’re human. I suppose I am [pro-borders]. I dislike borders, however I think the way humanity is, and always has been structured, they are inevitable. If you try to get rid of that you’re going to open up a horrible can of worms. This is very unfashionable: I think the nation-state is probably the best unit for organising peoples. Without nation-states, of course there wouldn’t be interstate wars, but we’d be back to fiefdoms before you know it.”

Have you got any particularly memorable border experiences?

TM: “Going through to Gaza is quite an intense experience. You go past a massive wall, and through two or three checkpoints. You’re all by yourself in this empty echoing steel and concrete corridor, and there are cameras everywhere. Suddenly you hear a click, and the door swings open. It’s like a dystopian sci-fi film. The door swings open and you’re now in Gaza. You walk down another 200 yards of corridor and then out into the open, but it’s scrubland, no-man’s-land. Another 600 yards and then you meet a Hamas checkpoint. It’s just this weird, cold experience.

Crossing from Tajikistan into Afghanistan was pretty interesting. A Russian soldier aimed his rifle through our truck window because we were getting impatient to go through the border fence. Then, when we got across, there was a river. It was pitch-black and we went across on a raft with all our kit, with an exchange of mortars going on around us. Then the Northern Alliance were on the other side of the river to greet us. That was intense. Crossing borders is always fun.

Iraq – when we used to go during the Saddam years, they hit upon this great money-making thing at the border with Jordan. You had a choice. They [Iraqi border patrol] would get out this huge rusty knitting needle with a syringe on the end of it and say, “You can have your AIDS test”, and we would say, “Ah, maybe there’s a facility fee, a special tax we can pay?”. So you’d pay $50 and not get jabbed in the backside with this thing. It’s just a money-making thing. Another one, you can leave a bottle of whiskey on the dashboard while they check the car. Then when you come back it wasn’t there anymore. Ok, it’s corruption, but you weren’t getting into Iraq without it – and that was more important.”

What is your next book about?

TM: “Divided is coming out in March; it’s about walls and divisions and fences going up all over the place. There’s a chapter on the Indian subcontinent, the walls, barriers and internal divisions in Pakistan, Bangladesh, Myanmar and India. Then a chapter on the USA, starting on walls and moving to racial divisions. Chapters on Europe, Israel, the Middle East, the UK – Brexit is part of it. That little strip of water called the Channel I think has a huge physical and psychological effect on the British. Without it, we wouldn’t have voted for Brexit, for two reasons. One, psychologically, we would feel less distinct, and secondly because of that our history would be very different: we might well have suffered the shock and trauma of the Second World War to the extent that continental Europe did. I read something just yesterday which struck a chord; the British experience of Hitler was such that we could make him a figure of fun, but the Russian experience was such that they don’t do that, it’s too traumatic.

I’m interested in something that I completely disagree with: the open borders movement, which in academia is a ‘thing’. I’ve got a problem with ‘no borders’. There’s a very nice guy who helped me on the book called Professor Reece Jones from the University of Hawaii (author of Violent Borders, 2016). He gave me a few quotes for the book and I really like him, but some of his colleagues in this spectrum argue completely to bring borders down, almost overnight. They don’t factor in what will happen to the politics of the countries. We’ve seen with the movement we’ve had already, what’s happening to the politics of Europe, Austria as an example, Germany, France, Sweden, the Netherlands. Magnify that several times if you have no borders – it’s a utopian view.”

A high wall would end the border patrol’s reliance on dogs and tear gas when rushed by would-be border crossers throwing stones. There was likely never going to be “comprehensive immigration reform” or any deal amnestying the DACA recipients in exchange for building the wall. Democrats in the present political landscape will not consent to a wall. For them, a successful border wall is now considered bad politics in almost every manner imaginable.

Yet 12 years ago, Congress, with broad bipartisan support, passed the Secure Fence Act of 2006. The bill was signed into law by then-President George W. Bush to overwhelming public applause. The stopgap legislation led to some 650 miles of a mostly inexpensive steel fence while still leaving about two-thirds of the 1,950-mile border unfenced.

In those days there were not, as now, nearly 50 million foreign-born immigrants living in the United States, perhaps nearly 15 million of them illegally.

Sheer numbers have radically changed electoral politics. Take California. One out of every four residents in California is foreign-born. Not since 2006 has any California Republican been elected to statewide office.

The solidly blue states of the American Southwest, including Colorado, Nevada and New Mexico, voted red as recently as 2004 for George W. Bush. Progressives understandably conclude that de facto open borders are good long-term politics.

Once upon a time, Democrats such as Hillary and Bill Clinton and Barack Obama talked tough about illegal immigration. They even ruled out amnesty while talking up a new border wall.

In those days, progressives saw illegal immigration as illiberal — or at least not as a winning proposition among union households and the working poor.

Democratic constituencies opposed importing inexpensive foreign labor for corporate bosses. Welfare rights groups believed that massive illegal immigration would swamp social services and curtail government help to American poor of the barrios and the inner city.

So, what happened? Again, numbers.

Hundreds of thousands of undocumented immigrants have flocked into the United States over the last decade. In addition, the Obama administration discouraged the melting-pot assimilationist model of integrating only legal immigrants.

Salad-bowl multiculturalism, growing tribalism and large numbers of unassimilated immigrants added up to politically advantageous demography for Democrats in the long run.

In contrast, a wall would likely reduce illegal immigration dramatically and with it future Democratic constituents. Legal, meritocratic, measured and diverse immigration in its place would likely end up being politically neutral. And without fresh waves of undocumented immigrants from south of the border, identity politics would wane.

A wall also would radically change the optics of illegal immigration. Currently, in unsecured border areas, armed border patrol guards sometimes stand behind barbed wire. Without a wall, they are forced to rely on dogs and tear gas when rushed by would-be border crossers. They are easy targets for stone-throwers on the Mexican side of the border.

A high wall would end that. Border guards would be mostly invisible from the Mexican side of the wall. Barbed wire, dogs and tear gas astride the border — the ingredients for media sensationalism — would be unnecessary. Instead, footage of would-be border crossers trying to climb 30-foot walls would emphasize the degree to which some are callously breaking the law.

Such imagery would remind the world that undocumented immigrants are not always noble victims but often selfish young adult males who have little regard for the millions of aspiring immigrants who wait patiently in line and follow the rules to enter the United States lawfully.

More importantly, thousands of undocumented immigrants cross miles of dangerous, unguarded borderlands each year to walk for days in the desert. Often, they fall prey to dangers ranging from cartel gangs to dehydration.

Usually, the United States is somehow blamed for their plight, even though a few years ago the Mexican government issued a comic book with instructions on how citizens could most effectively break U.S. law and cross the border.

The wall would make illegal crossings almost impossible, saving lives.

Latin American governments and Democratic operatives assume that lax border enforcement facilitates the outflow of billions of dollars in remittances sent south of the border and helps flip red states blue.

All prior efforts to ensure border security — sanctions against employers, threats to cut off foreign aid to Mexico and Central America, and talk of tamper-proof identity cards — have failed.

New House speaker Nancy Pelosi reportedly spent the holidays at the Fairmont Orchid on Kona, contemplating future climate-change legislation and still adamant in opposing the supposed vanity border wall.

But in a very different real world from the Fairmont Orchid or Pacific Heights, other people each day deal with the results of open borders and sanctuary jurisdictions. The results are often nihilistic and horrific. Here in California’s Central Valley over the holidays we were reminded of the wages of illegal immigration in general — and of California’s sanctuary-city laws in particular, which restrict formal cooperation between local and state law enforcement with federal immigration authorities in matters of deporting illegal aliens under detention.

In the first case, one Gustavo Garcia, a previously deported 36-year-old illegal alien, murdered a 51-year-old Visalia resident on December 17, gratuitously shooting his random victim, Rocky Jones, at a gas station. He apparently had been arrested two days prior and released.

Garcia entered the U.S. illegally in 1998 and was deported for a second time in 2014. He has been charged with at least three immigration violations since illegally returning to the U.S., and has been a convicted felon since at least 2002 for assaults with a deadly weapon, contributing to the delinquency of a minor, possession of a controlled substance, etc. In addition to the murder of Jones, Garcia shot a farmworker who was on a ladder working, and followed a woman to her car at a Motel 6 and shot her too. At the beginning of his violent spree, he seems also to have murdered Rolando Soto, 38, of nearby Lindsay.

Indeed, Garcia was a suspect in a number of prior shootings and thefts. During his final rampage, inter alia, Garcia tried to shoot his ex-girlfriend, then stole a truck from farmworkers and led police on a chase, deliberately veering into opposing traffic, and by intent injuring four more innocents, one critically. During the chase, he fired on police, who returned fire, before Garcia finally wrecked the stolen vehicle and perished in the crash.

The local sheriff of Tulare County, in understated fashion, labeled Garcia’s violent spasm of shootings and car wrecks a “reign of terror.” Garcia had an accomplice who is still at large.

Local law enforcement blamed state sanctuary restrictions on their inability to notify ICE that the felonious illegal alien Garcia was about to be released among the general public. Or as the sheriff put it, “Gustavo Garcia would have been turned over to ICE officials. That’s how we’ve always done it, day in and day out. But after SB 54, we no longer have the power to do that. Under the new state law, we must have a ‘federally signed warrant’ in order to do that. We didn’t honor the detainer because state law doesn’t allow us to.”

Less than two weeks later, there was yet another example of Central Valley illegal-immigration mayhem. To the north in Newman, another twice-deported illegal alien, Gustavo Perez Arriaga (he apparently had a number of aliases), stands accused of shooting and killing Newman policeman Ronil Singh, who pulled him over on suspicion of drunk driving (Arriaga also had two prior DUIs).

Arriaga fled after murdering Officer Singh and evaded law enforcement for a few days thanks to at least seven enablers (brothers, girlfriend, friends, etc.), some of them confirmed also to be illegal aliens. They either gave police officials false information about Arriaga’s whereabouts or helped him on his planned flight to Mexico, finally aborted 200 miles to the south near Bakersfield.

The suspect’s brother, 25-year-old Adrian Virgen, and a co-worker, 32-year-old Erik Razo Quiroz, were arrested on “accessory after the fact” charges for attempting to protect Arriaga. Authorities report both men are also in the country illegally. Arriaga was at large for five days, in part because he had so many fake identities and aliases that no one really knew who he was.

Stanislaus County sheriff Adam Christianson noted that SB54 prevents departments “from sharing any information with ICE about this criminal gang member.” He added, “this is a criminal illegal alien with prior criminal activity that should have been reported to ICE.” Christianson finished, “Law enforcement was prohibited because of sanctuary laws and that led to the encounter with Officer Singh. I’m suggesting that the outcome could have been different if law enforcement wasn’t restricted, prohibited or had their hands tied because of political interference.”

These incidents, and less violent ones like them, are not all that rare in rural California. The narratives are tragically similar and hinge on our society’s assumptions of tolerance and its belief that entering and residing illegally in the United States are not really crimes, that fraudulent identification and fake names are not really felonious behaviors, and that driving under the influence is no reason for deportation — all crimes that can ruin careers and have expensive consequences for citizens. Statisticians argue that immigrants commit fewer crimes than the native-born, but they never quite calibrate illegal immigrants into the equation (in part because no one has any idea who, where, or how many they are, as estimates range from 11 to 20 million). Nor do they note that second-generation, native-born children of immigrants have much higher violent-crime rates than their immigrant parents, and so, in circular fashion, add to the general pool of violent Americans against whom immigrants are then contrasted as less violent.

We should redefine the entire morality of multifaceted illegal immigration.

What is immoral is undermining federal law, in Confederate fashion, and normalizing exemptions that allow felons such as Garcia and Arriaga to wreak havoc on the innocent and defenseless. Too often the architects of open borders and sanctuary jurisdictions are not on the front lines where the vulnerable suffer the all-too-real consequences of distant others, who can rely on their own far greater safety nets when their grand abstractions become all too concrete.

And, finally, we forget that so often the victims of illegal aliens are (in California, where one in four residents was not born in the U.S.) legal immigrants like Officer Singh, and members of the Hispanic community like the late Mr. Soto. Polls show that open borders are not popular and that most Americans want an end to illegal immigration and catch-and-release, as well as stricter enforcement of current federal immigration laws.

(I took a break from writing this on a Sunday afternoon to talk about the volatile Central Valley landscape with an immigrant from India, whose stolen and stripped spray rig I discovered last night in our orchard.)

White House budget director Mick Mulvaney said he doesn’t understand Democratic opposition to funding the border wall because top Democrats voted for it just over 10 years ago.

During an April 23 segment on Fox News Sunday, Mulvaney talked down concerns about a government shutdown, but scolded Democrats for obstructing action on Trump’s border wall. Mulvaney pointed to the voting record of top Democrats in 2006 to explain his confusion.

« We want our priorities funded and one of the biggest priorities during the campaign was border security, keeping Americans safe, and part of that was a border wall, » he said.

« We still don’t understand why the Democrats are so wholeheartedly against it. They voted for it in 2006. Then-Sen. Obama voted for it. Sen. Schumer voted for it. Sen. Clinton voted for it. So we don’t understand why Democrats are now playing politics just because Donald Trump is in office. »

Mulvaney is referencing their votes on an act that authorized a fence, but as we’ve noted several times in the past, the 2006 fence was less ambitious than the wall Trump is proposing.

The Secure Fence Act of 2006

The Secure Fence Act of 2006, which was passed by a Republican Congress and signed by President George W. Bush, authorized about 700 miles of fencing along certain stretches of land between the border of the United States and Mexico.

The act also authorized the use of more vehicle barriers, checkpoints and lighting to curb illegal immigration, and the use of advanced technology such as satellites and unmanned aerial vehicles.

At the time the act was being considered, Barack Obama, Hillary Clinton and Chuck Schumer were all members of the Senate. (Schumer of New York is now the Senate minority leader.)

Obama, Clinton, Schumer and 23 other Democratic senators voted in favor of the act when it passed in the Senate by a vote of 80 to 19.

Originally, the act called on the Department of Homeland Security to install at least two layers of reinforced fencing along some stretches of the border. That was amended later, however, through the Consolidated Appropriations Act of 2008, which got rid of the double-layer requirement.

Currently, 702 miles of fencing separates the United States from Mexico, according to U.S. Customs and Border Protection.

Trump has said the wall doesn’t need to run the nearly 2,000 miles of the border, but only about 1,000 miles, because of natural barriers. He said it could cost between $8 billion and $12 billion, be made of precast concrete, and rise 35 to 40 feet, or 50 feet, or higher.

Experts have repeatedly told PolitiFact that the differences in semantics between a wall and a fence are not too significant because both block people.

Still, there are obvious differences between the fence and Trump’s wall proposal.

A 2016 Associated Press report from the border described « rust-colored thick bars » that form « teeth-like slats » 18 feet high. « There are miles of gaps between segments and openings in the fence itself, » the report said.

Trump criticized the 2006 fence as too modest during the 2016 campaign.

« Now we got lucky because it was such a little wall, it was such a nothing wall, no, they couldn’t get their environmental — probably a snake was in the way or a toad, » Trump said. (Actually, the project didn’t face environmental hurdles; we rated that part of the claim Mostly False.)

It’s also worth noting that the political context surrounding the 2006 vote was different, too.

Democrats normally in favor of looser immigration laws saw the Secure Fence Act of 2006 as the lesser of two evils, according to a Boston Globe report that detailed the legislative process. Around that same time, the House passed legislation that would make any undocumented immigrant a felon.

« It didn’t have anywhere near the gravity of harm, » Angela Kelley, who in 2006 was the legislative director for the National Immigration Forum, told the Boston Globe. « It was hard to vote against it because who is going to vote against a secure fence? And it was benign compared with what was out there. »

Democrats have described Trump’s wall proposal as overkill and too expensive. Recently, Democrats penned a letter to Senate Republicans saying border funding should not be included in the latest budget agreement to keep the government open.

Our ruling

Mulvaney said that Obama, Schumer and Clinton voted for a border wall in 2006.

They did vote for the Secure Fence Act of 2006, which authorized building a fence along about 700 miles of the border between the United States and Mexico.

Still, the fence they voted for is not as substantial as the wall Trump is proposing. Trump himself called the 2006 fence a « nothing wall. »

Mulvaney’s statement is partially accurate, but ignores important context. We rate it Half True.

Jim Acosta was called on by the president to ask a question. He was called on by Donald Trump to ask whatever question he liked. And when he’d finished asking one, he then asked another – with interruption follow-ups in between. It was only when he attempted his third question – or possibly fourth depending on how you define the follow-ups – that the president got angry and asked him to sit down. There ensued a tussle with the mic. The scene was an incredible bit of theatre. We couldn’t take our eyes off it. It just went on and on. You could argue the president came looking for it – he does well, electorally, when he’s berating the press. But make no mistake. The media also does well when they are baiting the bear. The urge to poke can sometimes seem irresistible. (…) What happened in that room was not the ultimate fight for press freedom. This wasn’t someone risking life and limb against a regime where freedom of speech is forbidden. This was a bloke sitting in a room full of colleagues who were all trying to ask questions too. This was a man who’d had his turn and had been told he couldn’t hog the whole time. (…) The president took CNN’s question and then took more. And when he tried to move on, he couldn’t. Once the Acosta incident was over, he went on to take questions from journalists from all over the world – for a total of 90 minutes. What worries me is the wider question of how Trump and the media interact. When you watch the US morning shows – and evening shows come to that – what you notice is how things have changed. Even those who were not originally taking sides are now nailing their colours to the mast. Fox and MSNBC have always played to their own bases. But now CNN, too, has editorialised its evening slot with Chris Cuomo – who gives us an essay, a comment piece, on whatever is getting him fired up. It’s a good watch actually. And makes you engaged. But make no mistake – it’s the same game that Trump is playing. The one they pretend to despise.
If DJT can rally his base – then – goes the logic – why shouldn’t TV do it too. It works for viewing figures in the same way it works for electoral success. It works, in other words, for those who like their chambers echoed – but it’s an odd place for news to sit. Emily Maitlis

Had Acosta phrased his question in a more neutral tone, he likely would have had more information for his audience to digest. Acosta asked the president if Trump had demonized the caravan of Central Americans trekking toward the United States, ending his exchange by stating, “It is not an invasion.” If Acosta had asked “What about that seems like an invasion?” he could have both sought an answer and avoided becoming bigger than the event he was covering. If you look closely at the video, when Acosta was asking questions, his exchange with the president was on track and normal. Acosta asked, “Do you think that you demonize immigrants?” To which the president answered, “No.” A better question might have been, “How do you respond to the criticism that you are demonizing certain types of immigrants, namely poor immigrants?” But then Acosta’s questions ended and his statements began. “Your campaign had an ad showing migrants climbing over walls,” he said. And then, “They are hundreds of miles away, that’s not an invasion.” The heated exchange grew from there. Things got uncomfortable when Acosta refused to turn over the microphone to an intern who reached out to remove it from him, and then stood up to continue his banter without the microphone. This was a White House event and he was talking to the president of the United States. A briefing is not the same as a cable news wrestling match, where sides shout at each other. Acosta should have handed over the microphone. President Trump deftly used the Acosta incident to play the victim of unfair press treatment. Journalists should not give more fuel to such accusations. Ask tough questions, avoid making statements or arguing during a press event and report the news, don’t become the news. Poynter

Trump (…) is one of the few political leaders in America that recognises the frustration that exists in large parts of Ohio, Pennsylvania, eastern Kentucky and so forth. [But] The part that is forward-looking and answers the question ‘What do we do now?’ — it’s just not there yet. (…) I wasn’t as critical of my party in 2016 as I was of the person. But when I look at tax reform, when I look at healthcare reform, I see Trump as the least worrisome part of the Republican party’s problem, which is that we are basically living in the 1980s. We are constantly trying to resurrect domestic policies from the 1980s. (…) Let’s cut taxes for the wealthy! Let’s cut the social safety net! . . . The fundamental thesis that underlined basic Republican policies in the early 1980s, which is right, is that you had an economy which was simultaneously stagnating and experiencing high inflation. I don’t think the primary problem facing the American economy right now is that. It is that the opportunities that are out there require an adjustment in skills, an adjustment in training. (…) Mamaw would not have voted for Trump, had she been alive, because of his history as a philanderer. Yet the vulgarity that turns a lot of people off, Mamaw would have appreciated and thought was hilarious. (…) I think, like a lot of folks, [my grandfather] would have voted against Hillary Clinton. That sort of condescending elitism that the Clinton campaign came to represent would have turned my grandfather off. (…) The elite Republican view of why people voted for Donald Trump is that Trump voters are stupid. I think the elite Democratic view is that Trump people were bigoted and immoral. And that’s probably still very much reflected in popular culture. (…) There is this level of comfort that, I think, is completely weird. I understood for the first time what the Bible means when it talks about the difficulty of a rich man entering Heaven. 
It’s really tough to be a virtuous person when everyone is constantly taking care of you. (…) There are a lot of entrepreneurs [in Silicon Valley] developing the next app for clothes shopping who say, not ironically, that ‘we are changing the world’. You’re not changing the world. The guy that’s developing a new therapy that’s non-opioid analgesic pain relief? That guy’s changing the world. He’s going to save thousands of lives. (…) I’d say I’m a short-term realist, a long-term optimist. I do really believe in the power of identification and recognition. We’re in this period where everyone is starting to wake up, whether it’s because they know someone who has just had a heroin overdose or whether they are a policy expert and they have read this paper by [Nobel laureate] Angus Deaton [and his wife and fellow economist Anne Case] about dying in poor white America . . . That recognition gives me a lot of optimism. (…) I do think that whatever is happening right now is really transformational and the postwar order is probably going to have to change in some fundamental way. But I am still an optimist on that front. I think that my theory for what is happening is not that classical liberalism has failed. It’s not that western democracy has failed. It’s not that the postwar consensus has failed. It’s that the people who have been calling the shots for 20-30 years really screwed up.J.D. Vance

Our differences — on immigration, race, the role of work, the value of America itself — are intensifying. Slavery was the issue that blew up America in 1861 and led to the Civil War. (…) Something similar to that array of differences is slowly intensifying America’s traditional liberal–conservative and Democratic–Republican divides. (…) Globalization is accentuating two distinct cultures, not just economically but also culturally and geographically. Anywhere industries based on muscular labor could be outsourced, they often were. Anywhere they could not be so easily outsourced — such as Wall Street, Silicon Valley, the entertainment industry, the media, and academia — consumer markets grew from 300 million to 7 billion. The two coasts with cosmopolitan ports on Asia and Europe thrived. (…) Never in the history of civilization had there been such a rapid accumulation of global wealth in private hands as has entered the coffers of Amazon, Apple, Facebook, Google, Microsoft, and hundreds of affiliated tech companies. Never have marquee private research universities had such huge multibillion-dollar endowments. Never have the electronic media and social media had such consumer reach. Never has Wall Street had such capital. The result has been the creation of a new class of millions of coastal hyper-wealthy professionals with salaries five and more times higher than those of affluent counterparts in traditional America. The old working-class Democrat ethos was insidiously superseded by a novel affluent progressivism. Conservationism morphed into radical green activism. Warnings about global warming transmogrified into a fundamentalist religious doctrine. Once contested social issues such as gay marriage, abortion, gun control, and identity politics were now all-or-nothing litmus tests of not just ideological but moral purity.
A strange new progressive profile supplanted the old caricature of a limousine liberal, in that many of the new affluent social-justice warriors rarely seemed to be subject to the ramifications of their own ideological zealotry. New share-the-wealth gentry were as comfortable as right-wing capitalists with private prep schools, expansive and largely apartheid gated neighborhoods, designer cars, apprentices, and vacations. For the other half of America, cause and effect were soon forgotten, and a new gospel about “losers” (deplorables, irredeemables, crazies, clingers, wacko birds) explained why the red-state interior seemed to stagnate both culturally and economically — as if youth first turned to opioids and thereby drove industry away rather than vice versa. Half the country, the self-described beautiful and smart people, imagined a future of high-tech octopuses, financial investments, health-care services, and ever more government employment. The other half still believed that America could make things, farm, mine, produce gas and oil — if international trade was fair and the government was a partner rather than indifferent or hostile. (…) As was true in 1861 or 1965, geography often intensified existing discord. The old consensus about immigration eroded, namely that while European and British commonwealth immigration was largely declining, it mattered little given that immigration from Latin America, Asia, and Africa would be diverse, meritocratic, measured — and legal. 
(…) Indeed, the professed views of Bill and Hillary Clinton, Joe Biden, Barack Obama, and Harry Reid before 2009 about illegal immigration were identical to those of Donald Trump in 2018: Secure the border; ensure that immigration was legal and meritocratic; deport many of those who had arrived illegally; and allow some sort of green-card reprieve for illegal aliens who had resided for years in the U.S., were working, and had no arrest record — all in exchange for paying a small fine, learning English, and applying for legal-resident status. The huge influxes of the 1990s and 21st century — 60 million non-native residents (citizens, illegal aliens, and green-card holders) now reside in the U.S. — destroyed that consensus, once shared across the racial and ideological spectrum, from the late civil-rights leader and Democratic representative Barbara Jordan to labor leader Cesar Chavez. Instead, a new opportunistic and progressive Democratic party assumed that the Latino population now included some 20 million illegal residents, and about that same number of first- and second-generation Hispanics. The 2008 Obama victory raised new possibilities of minority-bloc voting and seemed to offer a winning formula of galvanizing minority voters through salad-bowl identity-politics strategies. Purple states such as California, Colorado, Nevada, and New Mexico gradually turned blue, apparently due to new legions of minority-bloc voters. (…) On entry to the U.S., affluent immigrants from Mumbai, poor arrivals from Oaxaca, Chilean aristocrats, or Taiwanese dentists would all be deemed “minorities” and courted as such by political operatives. Stepping foot on American soil equated with experiencing racism, and racism generated reparational claims of an aggrieved identity. (…) Increasingly, half the country views its history and institutions as inspirational, despite prior flaws and shortcomings, and therefore deserving of reverence and continuance. 
The other half sees American history and tradition as a pathology that requires rejection or radical transformation. The world of post-1945 is coming to a close — after the end of the Cold War, the collapse of the Soviet Union, the unification of Germany, the creation of the European Union, the ascendance of a mercantilist and authoritarian China, and the post-9/11 rise of radical Islamic terrorism. Our closest NATO allies near the barricades of Russian aggression and radical Islam are the least likely of the alliance to prepare militarily. Yet Russia is a joke compared with the challenge of China. The European Union project is trisected by north-south financial feuding, east-west immigration discord, and Brexit — and the increasing realization that pan-European ecumenicalism requires more force and less democracy to survive than did the old caricatured nation-state. The post-war rationales for American global leadership — we would accept huge trade imbalances, unfair trading agreements, often unilateral and costly interventions given our inordinate wealth and power and fears of another 1939 — no longer persuade half the nation. (…) America is not isolationist, but an increasing number of its citizens sees overseas interventions as an artifact of globalization. Rightly or wrongly, they do not believe that the resulting rewards and costs are evenly distributed, much less in the interest of America as a whole. It is now old-hat to say that the Detroit of 1945, at the time perhaps the world’s most innovative and ascendant city, now looks like Hiroshima or Hamburg of 1945, while Hiroshima and Hamburg of 2018 resemble the equivalent of 1945 Detroit. The point is not that the post-war order itself destroyed Detroit, but that Americans see something somewhere wrong when we helped rebuild the industrial cities of the world and crafted an order under which they thrived but in the process ignored many of our own. 
The various ties that bind us — a collective educational experience, adherence to the verdict of elections, integration and assimilation, sovereignty between delineated borders, a vibrant popular and shared culture, and an expansive economy that makes our innate desire to become well-off far more important than vestigial tribalism — all waned. Entering a campus, watching cable news, switching on the NFL, listening to popular music, or watching a new movie is not salve but salt for our wounds.Victor Davis Hanson

Many conservatives did not see that Trump had framed the 2016 election as a choice between two mutually exclusive regimes: multiculturalism and America. What I call “multiculturalism” includes “identity politics” and “political correctness.” If multiculturalism continues to worm its way into the public mind, it will ultimately destroy America. Consequently, the election should have been seen as a contest between a woman who, perhaps without quite intending it, was leading a movement to destroy America and a man who wanted to save America. The same contest is being played out in the midterm elections. (…) Multiculturalism conceives of society as a collection of cultural identity groups, each with its own worldview, all oppressed by white males, collectively existing within permeable national boundaries. Multiculturalism replaces American citizens with so-called “global citizens.” It carves “tribes” out of a society whose most extraordinary success has been their assimilation into one people. It makes education a political exercise in the liberation of an increasing number of “others,” and makes American history a collection of stories of white oppression, thereby dismantling our unifying, self-affirming narrative—without which no nation can long survive. During the 2016 campaign, Trump exposed multiculturalism as the revolutionary movement it is. He showed us that multiculturalism, like slavery in the 1850’s, is an existential threat. Trump exposed this threat by standing up to it and its enforcement arm, political correctness. Indeed, he made it his business to kick political correctness in the groin on a regular basis. 
In countless variations of crassness, he said over and over exactly what political correctness prohibits one from saying: “America does not want cultural diversity; we have our culture, it’s exceptional, and we want to keep it that way.” He also said, implicitly but distinctly: the plight of various “oppressed groups” is not the fault of white males. This too violates a sacred tenet of multiculturalism. Trump said these things at a time when they were the most needful things to say, and he said them as only he could, with enough New York “attitude” to jolt the entire country. Then, to add spicy mustard to the pretzel, he identified the media as not just anti-truth, but anti-American. Trump is a walking, talking rejection of multiculturalism and the post-modern ideas that support it. Trump believes there are such things as truth and history and his belief in these things is much more important than whether he always tells the truth himself or knows his history—which admittedly is sometimes doubtful. His pungent assertion that there are “shithole” countries was an example of Trump asserting that there is truth. He was saying that some countries are better than others and America is one of the better ones, perhaps even the best. Multiculturalism says it is wrong to say this (as it was “wrong” for Reagan to call the Soviet Union “evil”). Trump is the only national political figure who does not care what multiculturalism thinks is wrong. He, and he alone, categorically and brazenly rejects the morality of multiculturalism. He is virtually the only one on our national political stage defending America’s understanding of right and wrong, and thus nearly alone in truly defending America. This is why he is so valuable—so much depends on him. His shortcomings are many and some matter, but under present circumstances what matters more is that Trump understands we are at war and he is willing to fight.
In conventional times, Trump might have been one of the worst presidents we ever had; but in these most unconventional times, he may be the best president we could have had. (…) Multiculturalism, not Trumpism, is the revolution. Trump’s campaign, and its defense by his intellectual supporters, was not a call for a revolution but a call to stop a revolution. Trump’s intellectual supporters did not say things could not get worse; they said without a sharp change in course there was a good chance we shall never get back home again. (…) Perhaps Trump’s most effective answer to Clinton’s and the Democrats’ multiculturalism was his attacks on political correctness, both before and after the election. Trump scolded Jeb Bush for speaking Spanish on the campaign trail. He pointed out that on 9/11 some Muslims cheered the collapse of the twin towers. He said Mexico was sending us its dregs, suggested a boycott of Starbucks after employees were told to stop saying “Merry Xmas,” told NFL owners they should fire players who did not respect the flag, expressed the view that people from what he called “shitholes” (Haiti and African countries being his examples) should not be allowed to immigrate, exposed the danger of selecting judges based on ethnicity, and said Black Lives Matter should stop blaming others. The core ideas of these anti-P.C. blasts, taken in aggregate, represent a commitment to America’s bourgeois culture, which is culturally “Judeo-Christian,” insists on having but one language and one set of laws, and values, among other things, loyalty, practical experience, self-reliance, and hard work. (…) Trump is hardly the ideal preacher, but in a society where people are thirsting for public confirmation of the values they hold dear, they do not require pure spring water.
(…) Another [example] occurred in 2015 when Trump, after a terrorist attack, proposed a ban on all Muslims until “we figure out what the hell is going on.” Virtually everyone, the Right included, screamed “racism” and “Islamophobia.” Of course, to have defended Trump would have violated the multicultural diktat that Islam be spoken of as a religion of peace. But like Trump, the average American does not care whether Islam is or is not a religion of peace; he can see with his own eyes that it is being used as an instrument of war. When Muslim terrorists say they are doing the will of Allah, Americans take them at their word. This is nothing but common sense. (…) In exposing the dangers of multiculturalism, Trump exposed its source: radical liberal intellectuals, most of whom hang about the humanities departments (and their modern day equivalents) at our best colleges and universities, where they teach the multicultural arts and set multicultural rules. And from the academy these ideas and rules are drained into the mostly liberal, mostly unthinking opinion-forming elite who then push for open borders, diversity requirements, racism (which somehow they get us to call its opposite), and other aspects of multiculturalism.Thomas D. Klingenstein

Our differences — on immigration, race, the role of work, the value of America itself — are intensifying. Slavery was the issue that blew up America in 1861 and led to the Civil War.

But for the 85 years between the nation’s founding and that war, it had seemed that somehow America could eventually phase out the horrific institution and do so largely peacefully.

But by 1861, an array of other differences had magnified the great divide over slavery. The plantation class of the South had grown fabulously rich — and solely dependent — on King Cotton and by extension slave labor. It bragged that it was supplying the new mills of the industrial revolution in Europe and had wrongly convinced itself that not just the U.S. but also Britain could not live without Southern plantations.

Federal tariffs hurt the exporting South far more than the North. Immigration and industrialization focused on the North, often bypassing the rural, largely Scotch-Irish South, which grew increasingly disconnected culturally from the North.

By 1861, millions of Southerners saw themselves as different from their Northern counterparts, even in how they sounded and acted. And they had convinced themselves that their supposedly superior culture of spirit, chivalry, and bellicosity, without much manufacturing or a middle class, could defeat the juggernaut of Northern industrialism and the mettle of Midwestern yeomanry.

Something similar to that array of differences is slowly intensifying America’s traditional liberal–conservative and Democratic–Republican divides.

I. Globalization

Globalization is accentuating two distinct cultures, not just economically but also culturally and geographically.

Anywhere industries based on muscular labor could be outsourced, they often were. Anywhere they could not be so easily outsourced — such as Wall Street, Silicon Valley, the entertainment industry, the media, and academia — consumer markets grew from 300 million to 7 billion. The two coasts with cosmopolitan ports on Asia and Europe thrived.

Perhaps “thrived” is an understatement. Never in the history of civilization had there been such a rapid accumulation of global wealth in private hands as has entered the coffers of Amazon, Apple, Facebook, Google, Microsoft, and hundreds of affiliated tech companies. Never have marquee private research universities had such huge multibillion-dollar endowments. Never have the electronic media and social media had such consumer reach. Never has Wall Street had such capital.

The result has been the creation of a new class of millions of coastal hyper-wealthy professionals with salaries five and more times higher than those of affluent counterparts in traditional America. The old working-class Democrat ethos was insidiously superseded by a novel affluent progressivism.

Conservationism morphed into radical green activism. Warnings about global warming transmogrified into a fundamentalist religious doctrine. Once contested social issues such as gay marriage, abortion, gun control, and identity politics were now all-or-nothing litmus tests of not just ideological but moral purity.

A strange new progressive profile supplanted the old caricature of a limousine liberal, in that many of the new affluent social-justice warriors rarely seemed to be subject to the ramifications of their own ideological zealotry. New share-the-wealth gentry were as comfortable as right-wing capitalists with private prep schools, expansive and largely apartheid gated neighborhoods, designer cars, apprentices, and vacations.

For the other half of America, cause and effect were soon forgotten, and a new gospel about “losers” (deplorables, irredeemables, crazies, clingers, wacko birds) explained why the red-state interior seemed to stagnate both culturally and economically — as if youth first turned to opioids and thereby drove industry away rather than vice versa.

Half the country, the self-described beautiful and smart people, imagined a future of high-tech octopuses, financial investments, health-care services, and ever more government employment. The other half still believed that America could make things, farm, mine, produce gas and oil — if international trade was fair and the government was a partner rather than indifferent or hostile.

II. Clustering

Cheap transportation and instant communications paradoxically made the country far more familiar and fluid, even as local and distinct state cultures made Americans far more estranged from one another. The ironic result was that Americans got to know far more about states other than their own, and they now had the ability to move easily to places more compatible with their own politics. Self-selection increased, especially among retirees.

Small-government, low-tax, pro-business states grew more attractive for the middle classes. Big-government, generous-welfare, and high-tax blue states mostly drew in the poor and the wealthy. Gradually, in the last 20 years, our old differences began to be defined by geography as well.

In the old days, the legacy of frontier life had made Idaho somewhat similar to Colorado. But now immigration and migration made them quite different. East versus West, or North versus South, no longer meant much. Instead, what united a Massachusetts with a California, or an Idaho with an Alabama, were their shared views of government, politics, and culture, and whether they shared (or did not share) bicoastal status. The Atlantic and Pacific coasts were set off against the noncoastal states; Portland was similar to Cambridge in the fashion that Nashville and Bozeman voted alike. As was true in 1861 or 1965, geography often intensified existing discord.

III. Open Borders

The old consensus about immigration eroded, namely that while European and British commonwealth immigration was largely declining, it mattered little given that immigration from Latin America, Asia, and Africa would be diverse, meritocratic, measured — and legal.

The old melting pot would always turn foreigners into Americans. No one seemed to care whether new arrivals increasingly did not superficially look like most Americans of European descent. After all, soon no one would be able to predict whether a Lopez or a Gonzalez was a conservative or liberal, any more than he had been able to distinguish the politics of a Cuomo from a Giuliani on the basis of shared Italian ancestry.

Indeed, the professed views of Bill and Hillary Clinton, Joe Biden, Barack Obama, and Harry Reid before 2009 about illegal immigration were identical to those of Donald Trump in 2018: Secure the border; ensure that immigration was legal and meritocratic; deport many of those who had arrived illegally; and allow some sort of green-card reprieve for illegal aliens who had resided for years in the U.S., were working, and had no arrest record — all in exchange for paying a small fine, learning English, and applying for legal-resident status.

The huge influxes of the 1990s and 21st century — 60 million non-native residents (citizens, illegal aliens, and green-card holders) now reside in the U.S. — destroyed that consensus, once shared across the racial and ideological spectrum, from the late civil-rights leader and Democratic representative Barbara Jordan to labor leader Cesar Chavez.

Instead, a new opportunistic and progressive Democratic party assumed that the Latino population now included some 20 million illegal residents, and about that same number of first- and second-generation Hispanics. The 2008 Obama victory raised new possibilities of minority-bloc voting and seemed to offer a winning formula of galvanizing minority voters through salad-bowl identity-politics strategies. Purple states such as California, Colorado, Nevada, and New Mexico gradually turned blue, apparently due to new legions of minority-bloc voters.

One way of making America progressive was not just winning the war of ideas with voters, but changing the nature and number of voters, namely by welcoming in large numbers of mostly impoverished immigrants, assuring them generous state help, appealing to their old rather than new identities, and thereby creating a new coalition of progressives committed to de facto and perpetually open borders.

IV. The Salad Bowl

Racial relations deteriorated. Affirmative action was no longer predicated on the sins of slavery and Jim Crow and aimed at reparations in hiring and admissions for African Americans, often on the implicit rationale of helping the poorer to enter the middle class.

Instead, “diversity” superseded affirmative action and eventually constituted an incoherent binary of white–non-white. Yet that divide could not be logically defined either by race (hence the anomalies of everything from Elizabeth Warren’s constructed minority identity to the nomenclature gymnastics of Kevin de León), or by economic or historical oppression, or by present income and wealth.

On entry to the U.S., affluent immigrants from Mumbai, poor arrivals from Oaxaca, Chilean aristocrats, or Taiwanese dentists would all be deemed “minorities” and courted as such by political operatives. Stepping foot on American soil equated with experiencing racism, and racism generated reparational claims of an aggrieved identity.

Of course, when a third of the country was now asked to self-identify in existential fashion and for self-interested purposes as non-white rather than incidentally as Americans of Punjabi, Arab, Mexican, African, or Chinese heritage, then it was natural that those who did not fit the racial arc that supposedly always bent to predetermined justice would begin to shed their own once proud ethnic heritages as Americans of Irish, Armenian, Greek, or Eastern European descent. They’d likewise start to reactively see themselves as “white” — in a way that overshadowed their prior particular ethnic fides. We were well on our way to embracing an old but also quite new force multiplier of existing difference.

Increasingly, half the country views its history and institutions as inspirational, despite prior flaws and shortcomings, and therefore deserving of reverence and continuance. The other half sees American history and tradition as a pathology that requires rejection or radical transformation.

V. The Post-War Order

The world of post-1945 is coming to a close — after the end of the Cold War, the collapse of the Soviet Union, the unification of Germany, the creation of the European Union, the ascendance of a mercantilist and authoritarian China, and the post-9/11 rise of radical Islamic terrorism. Our closest NATO allies near the barricades of Russian aggression and radical Islam are the least likely of the alliance to prepare militarily. Yet Russia is a joke compared with the challenge of China. The European Union project is trisected by north-south financial feuding, east-west immigration discord, and Brexit — and the increasing realization that pan-European ecumenicalism requires more force and less democracy to survive than did the old caricatured nation-state.

The post-war rationales for American global leadership — we would accept huge trade imbalances, unfair trading agreements, often unilateral and costly interventions given our inordinate wealth and power and fears of another 1939 — no longer persuade half the nation.

The descendants of the architects of the old order were no longer able to make the argument that warplanes over Afghanistan, Iraq, or Libya were central to U.S. security, or at least, in cost-to-benefit terms, aided the United States. And it did not help that the classes who made the argument for American preemptive international interventions had few answers on how to deter Iran, challenge an aggressive China, or denuclearize North Korea; further, they appeared to have a weird contempt for those Americans who were asked to pay the taxes and send their daughters and sons abroad to fight and sometimes die for what seemed an increasingly ungrateful “other.”

The lesson of Iraq was about more than the wisdom or folly of that intervention. It was a warning that those who advocated optional wars might not always continue to support the war when it turned ugly and unpopular — and was deemed injurious to their own careers. That fact also turned half the country off on its leadership.

America is not isolationist, but an increasing number of its citizens see overseas interventions as an artifact of globalization. Rightly or wrongly, they do not believe that the resulting rewards and costs are evenly distributed, much less in the interest of America as a whole.

It is now old-hat to say that the Detroit of 1945, at the time perhaps the world’s most innovative and ascendant city, now looks like Hiroshima or Hamburg of 1945, while Hiroshima and Hamburg of 2018 resemble the equivalent of 1945 Detroit. The point is not that the post-war order itself destroyed Detroit, but that Americans see something somewhere wrong when we helped rebuild the industrial cities of the world and crafted an order under which they thrived but in the process ignored many of our own.

Advice from Hippocrates

The various ties that bind us — a collective educational experience, adherence to the verdict of elections, integration and assimilation, sovereignty between delineated borders, a vibrant popular and shared culture, and an expansive economy that makes our innate desire to become well-off far more important than vestigial tribalism — have all waned. Entering a campus, watching cable news, switching on the NFL, listening to popular music, or watching a new movie is not salve but salt for our wounds.

In the absence of political, cultural, or social ecumenicalism, perhaps we can at least for now privately retreat to the old Hippocratic adage of “first, do no harm” to one another.

Many conservatives did not see that Trump had framed the 2016 election as a choice between two mutually exclusive regimes: multiculturalism and America. What I call “multiculturalism” includes “identity politics” and “political correctness.” If multiculturalism continues to worm its way into the public mind, it will ultimately destroy America. Consequently, the election should have been seen as a contest between a woman who, perhaps without quite intending it, was leading a movement to destroy America and a man who wanted to save America. The same contest is being played out in the midterm elections.

I realize the term “multiculturalism” is somewhat dated, but I mean to freshen it up by using it in its most comprehensive sense—as a political philosophy. Multiculturalism conceives of society as a collection of cultural identity groups, each with its own worldview, all oppressed by white males, collectively existing within permeable national boundaries. Multiculturalism replaces American citizens with so-called “global citizens.” It carves “tribes” out of a society whose most extraordinary success has been their assimilation into one people. It makes education a political exercise in the liberation of an increasing number of “others,” and makes American history a collection of stories of white oppression, thereby dismantling our unifying, self-affirming narrative—without which no nation can long survive.

During the 2016 campaign, Trump exposed multiculturalism as the revolutionary movement it is. He showed us that multiculturalism, like slavery in the 1850s, is an existential threat. Trump exposed this threat by standing up to it and its enforcement arm, political correctness. Indeed, he made it his business to kick political correctness in the groin on a regular basis. In countless variations of crassness, he said over and over exactly what political correctness prohibits one from saying: “America does not want cultural diversity; we have our culture, it’s exceptional, and we want to keep it that way.” He also said, implicitly but distinctly: the plight of various “oppressed groups” is not the fault of white males. This too violates a sacred tenet of multiculturalism. Trump said these things at a time when they were the most needful things to say, and he said them as only he could, with enough New York “attitude” to jolt the entire country. Then, to add spicy mustard to the pretzel, he identified the media as not just anti-truth, but anti-American.

Trump is a walking, talking rejection of multiculturalism and the post-modern ideas that support it. Trump believes there are such things as truth and history and his belief in these things is much more important than whether he always tells the truth himself or knows his history—which admittedly is sometimes doubtful.

His pungent assertion that there are “shithole” countries was an example of Trump asserting that there is truth. He was saying that some countries are better than others and America is one of the better ones, perhaps even the best. Multiculturalism says it is wrong to say this (as it was “wrong” for Reagan to call the Soviet Union “evil”). Trump is the only national political figure who does not care what multiculturalism thinks is wrong. He, and he alone, categorically and brazenly rejects the morality of multiculturalism. He is virtually the only one on our national political stage defending America’s understanding of right and wrong, and thus nearly alone in truly defending America. This is why he is so valuable—so much depends on him.

His shortcomings are many and some matter, but under present circumstances what matters more is that Trump understands we are at war and he is willing to fight. In conventional times, Trump might have been one of the worst presidents we ever had; but in these most unconventional times, he may be the best president we could have had.

2016 and the Meaning of America

“If we could first know where we are, and whither we are tending, we could then better judge what to do, and how to do it.”

Most conservatives did not see Trump in 2016 as a man defending America. This was in large part because they did not see that America was in need of defending. What conservatives did see was Trump’s policies (which didn’t line up with conservative ones) and his character (which didn’t line up, period), and they concluded the country was nowhere near in bad enough shape, and Hillary Clinton not enough of a danger, to justify enthusiasm for a man so manifestly unfit for the role.

In what might be a case of everybody’s-out-of-step-but-me, many conservatives have concluded that if the electorate voted into office a man so obviously unfit to be president, there must be something wrong with the electorate.

I think the explanation for Trump’s victory is actually quite straightforward and literal: Americans, plenty of whom still have common sense and are patriotic, voted for Trump for the very reason he said they should vote for him, to put America first or, as his campaign slogan had it, “to make America great again”—where “America” was not, as many conservatives imagine, code for “white people.” In other words, the impulse for electing Trump was patriotic, the defense of one’s own culture, rather than racist.

In a thoughtful essay in the Spring of 2017 on the future of the conservative movement, Yuval Levin expressed the view, common among conservatives, that the country was in decent shape. He was puzzled therefore why a number of thinkers associated with the Claremont school held “that things almost could not be worse” and that it was therefore necessary “to mount a total revolution.”

Levin and like-minded conservatives have matters backwards. Multiculturalism, not Trumpism, is the revolution. Trump’s campaign, and its defense by his intellectual supporters, was not a call for a revolution but a call to stop a revolution. Trump’s intellectual supporters did not say things could not get worse; they said that without a sharp change in course there was a good chance we would never get back home again.

Trump’s entire campaign was a defense of America. The election was fought not so much over policies, character, email servers, or James Comey, as it was over the meaning of America. Trump’s wall was not so much about keeping foreigners out as it was a commitment to a distinctive country; immigration, free trade, and foreign policy were about protecting our own. In all these policies, Trump was raising the question, “Who are we as a nation?” He answered by being Trump, a man made in America, unmistakably and unapologetically American, and like most of his fellow citizens, one who does not give a hoot what Europeans or intellectuals think.

Clinton, in the other corner, was the great disdainer, a citizen not of America but of the world: a postmodern, entitled elitist who was just more of Obama, the man who contemptuously dismissed America’s claim to being exceptional. What she called the “deplorables” were the “anti-multiculturalists.” She was saying, in effect, that she did not recognize the “deplorables” as fellow citizens, and they were, as far as she was concerned, not part of the regime she proposed to lead.

Perhaps Trump’s most effective answer to Clinton’s and the Democrats’ multiculturalism was his attacks on political correctness, both before and after the election. Trump scolded Jeb Bush for speaking Spanish on the campaign trail. He pointed out that on 9/11 some Muslims cheered the collapse of the twin towers. He said Mexico was sending us its dregs, suggested a boycott of Starbucks after employees were told to stop saying “Merry Xmas,” told NFL owners they should fire players who did not respect the flag, expressed the view that people from what he called “shitholes” (Haiti and African countries being his examples) should not be allowed to immigrate, exposed the danger of selecting judges based on ethnicity, and said Black Lives Matter should stop blaming others.

The core ideas of these anti-P.C. blasts, taken in aggregate, represent a commitment to America’s bourgeois culture, which is culturally “Judeo-Christian,” insists on having but one language and one set of laws, and values, among other things, loyalty, practical experience, self-reliance, and hard work. Trump was affirming the goodness of our culture. Odd as it may sound, he was telling us how to live a worthy life. Trump is hardly the ideal preacher, but in a society where people are thirsting for public confirmation of the values they hold dear, they do not require pure spring water. Even Trump’s crass statements objectifying women did not seem to rattle his women voters, perhaps because it did not come as news to them that men objectify women. In other words, Trump was being a man, albeit not the model man, but what mattered was that he was not the multicultural sexless man. A similar rejection of androgyny may have been at work in the Kavanaugh hearings.

It was only a generation or so ago that our elite, liberals as well as conservatives, were willing to defend America’s bourgeois culture, American exceptionalism, and full assimilation for immigrants. Arthur Schlesinger expressed his view of assimilation this way: the “American Anglo-Saxon Protestant tradition … provides the standard to which other immigrant nationalities are expected to conform, the matrix into which they are to be assimilated.” That meant giving up one’s home culture, not necessarily every feature and not right away, but ultimately giving up its essential features in favor of American culture. In other words, there are no hyphenated Americans.

Trump understands that “diversity is our greatest strength,” which is multiculturalism boiled down to an aphorism, is exactly backwards. America’s greatest strength is having transcended race, and the one major exception was very nearly our undoing. In light of this history, the history of the world (one “tribal” war after another), and the multicultural car wreck that is Europe today, to manufacture cultural diversity is nothing less than self-immolating idiocy. Trump might not put it in these words, but he gets it. The average American gets it too, because it is not very difficult to get: it is common sense.

Conservatives and Republicans are Complicit

Trump’s strengths are his courage, his common sense, and his rhetoric. He gets to the essential thing, the thing that no one else will say for fear of being called a “racist” or “fascist” or one of the other slurs that incite the virtue-signaling lynch mob.

His “shithole” remark was one example. Another occurred in 2015 when Trump, after a terrorist attack, proposed a ban on all Muslims until “we figure out what the hell is going on.” Virtually everyone, the Right included, screamed “racism” and “Islamophobia.” Of course, to have defended Trump would have violated the multicultural diktat that Islam be spoken of as a religion of peace. But like Trump, the average American does not care whether Islam is or is not a religion of peace; he can see with his own eyes that it is being used as an instrument of war. When Muslim terrorists say they are doing the will of Allah, Americans take them at their word. This is nothing but common sense.

Trump’s attempt to remove District Judge Gonzalo Curiel from a lawsuit in which Trump University was the defendant, in part because of the judge’s Mexican ancestry, was another instance where cries of “racism,” from the Right every bit as loud as from the Left, substituted for common sense. It was thought absurd for Trump to claim the judge was biased because of his ethnicity, yet it was the elite’s very insistence on making ethnicity a factor in the appointment of judges that invited Trump to respond in kind. We make ethnicity an essential consideration and then claim ethnicity should not matter. That is not common sense.

Getting to the essential, commonsensical heart of the matter is the most important element of Trump’s rhetoric, but even his often cringeworthy choice of words sometimes advances the conservative cause. This is a sad reflection of the times, but these are the times we live in, and we must judge political things accordingly. When, for example, Trump mocked Judge Kavanaugh’s accuser, he was doing something else that only he can: taking multiculturalism, and its “believe all women” narrative, head on. We should continue to cringe at Trump’s puerility, but we should appreciate when it has value.

In each of these instances, when conservatives joined liberals in excoriating Trump, conservatives were beating up our most important truth teller. Conservatives and Republicans should be using these instances to explain America and what is required for its perpetuation. In the examples listed above, they should have explained the importance of having one set of laws, full assimilation, and color blindness; the incompatibility of theocracy with the American way of life; and that under certain circumstances we might rightly exclude some foreign immigrants, not because of their skin color but because they come from countries unfamiliar with republican government. Instead, conservatives are doing the work of the multiculturalists for them: insinuating multiculturalism further into the public mind. Conservatives have, without quite realizing it, agreed to play by the multiculturalists’ rules, and in so doing they have disarmed themselves; they have laid down their most powerful weapon: arguments that defend America.

The Kavanaugh Hearings: Multiculturalism at Work

In exposing the dangers of multiculturalism, Trump exposed its source: radical liberal intellectuals, most of whom hang about the humanities departments (and their modern-day equivalents) at our best colleges and universities, where they teach the multicultural arts and set multicultural rules. And from the academy these ideas and rules drain into the mostly liberal, mostly unthinking opinion-forming elite, who then push for open borders, diversity requirements, racism (which somehow they get us to call its opposite), and other aspects of multiculturalism.

Multicultural rules were in full force in the Kavanaugh hearings. Armed with the chapter of the multicultural creed that covers “male oppression of women,” Democrats could attack Kavanaugh with accusations conjured out of nothing. At the same time, multicultural rules required Republicans to fight with one hand behind their backs: they were forced to allow a case with no basis to go forward, could not attack the accuser, and had to use a woman to question her. Republicans reflexively accepted their assigned role as misogynists (and would have accepted the role of racists had the accuser been black). True, Republicans had no choice; still, when one is being played, one needs to notice.

Had Trump tweeted, “I don’t give a rat’s ass about the sex or color of the questioner,” I suspect the majority of Americans would have applauded. After all, that is the American view of the matter. It’s not the average American who requires a woman questioner or a black one. We know that because Trumpsters have told us. It’s not typically the parents in our inner-city schools who demand teachers and administrators with skin color that matches that of their children. It’s not ordinary Mexican immigrants who are agitating to preserve their native culture. It’s the multiculturalists.

Multicultural rules flow from multiculturalism’s understanding of justice, which is based not on the equality of individuals (the American understanding) but on the equality of identity groups oppressed by white males. In the Kavanaugh hearings, the multiculturalists did not see a contest between two individuals but rather between all women who are all oppressed and all white men who are all oppressors. Americans claimed the multiculturalists violated due process and conventional rules of evidence, but from the multiculturalists’ perspective what Americans saw as violations were actually multiculturalism’s understanding of due process and rules of evidence. Americans were seeing a revolution in action.

We now find ourselves in a situation not unlike that which existed before the Civil War, where one side had an understanding of justice that rested on the principle of human equality, while the other side’s rested on the principle that all men are equal except black men. One side implied a contraction and ultimate extinction of slavery; the other, its expansion. It was a case of a ship being asked to go in two directions at once. Or to use Lincoln’s Biblical metaphor, “a house divided against itself cannot stand.” Lincoln did not mean that the country could not stand part free and part slave. It could, as long as there was agreement that slavery was bad and on the road to extinction. But once half the country thought slavery a good thing and the other thought it a bad thing, the country could no longer stand. It was the different understandings of justice that were decisive, because when there are two understandings of justice, as in the Civil War and now, law-abidingness breaks down. In the Civil War, this resulted in secession. Today, this results in sanctuary cities and the “resistance.” To get a sense of how close we are to a complete breakdown, imagine that the 2016 election, like the Bush-Gore election, had been decided by the Supreme Court. One shudders to think.

“What to do, and How to do it.”

Conservatives have been dazed by Trumpism. Even those conservatives who now acknowledge that Trump has accomplished some good things are not certain what is to be learned from Trumpism that might inform the future of the conservative movement.

The lesson is this: get right with Lincoln. He made opposition to slavery the non-negotiable center of the Republican party, and he was prepared to compromise on all else. Conservatives should do likewise with multiculturalism. We should make our opposition to it the center of our movement. Multiculturalism should guide our rhetorical strategy, provide a conceptual frame for interpreting events, and tie together the domestic dangers we face. We must understand all these dangers as part of one overarching thing.

This approach, however, will not work unless conservatives begin to think about politics like Lincoln did. That they do not may explain why so many of them missed the meaning of the 2016 election. This topic is complex but I think it comes down to this: As compared to Lincoln’s thinking about politics, conservative thinking tends to be too narrow (i.e., excludes too much) and too rigid.

What for Lincoln was the single most important political thing—the public’s understanding of justice—many of today’s conservatives think not important at all. It should not be surprising, then, that they missed, or underappreciated, the political dangers of multiculturalism with its assault on the American understanding of justice. Having missed or underappreciated multiculturalism, conservatives could not see that those attributes of Trump that in conventional times would have been disqualifying were in these times just the ones needed to take on multiculturalism. Trump was not a conventional conservative, yet his entire campaign was about saving America. This is where conservatism begins.

Education is another area that conservatives believe is less politically important than Lincoln did. Conservatives must relearn what Lincoln knew, and what, until the mid-twentieth century, our universities and colleges also knew: the purpose of higher education, in particular elite higher education, is to train future citizens on behalf of the common good. If the elite universities are promoting multiculturalism, and if multiculturalism is undermining America, then the universities are violating their obligation to the common good no less than if they were giving comfort to the enemy in time of war. In such a case, the government, the federal government if need be, can rightfully impose any remedy as long as it is commensurate with the risk posed to the country and is the least intrusive option available.

Reorienting the conservative movement is a formidable undertaking, but we have a few big things in our favor: for starters, most of the country, including many who are not Trumpsters, appear to object to multiculturalism and its accompanying speech codes. In addition, multiculturalism, as with abolition, has the potential to energize the conservative movement. Conservatives, who are in the business of conserving things, come to life when there is something important to conserve because this allows them to stake out a very distinctive and morally powerful position with enough room to accommodate a broad coalition. In this case, that really important “something” is our country.

Thomas D. Klingenstein is a principal in the investment firm of Cohen, Klingenstein, LLC and the chairman of the Board of Directors of the Claremont Institute.

The writer talks about elitism, the American underclass — and why his ‘Mamaw’ would have sympathised with Trump

Shawn Donnan

The Financial Times

February 2, 2018

The first rule of Lunch with the FT is that there must be lunch. So I am somewhat unnerved when I walk into Hadley’s Bar + Kitchen, which JD Vance, hillbilly bard and venture capitalist, has nominated as the venue for a late lunch. Vance is already sitting in a booth with a colleague and his son. More importantly, he is diving enthusiastically into a mountain of french fries and chicken wings.

“Are you the guy from the FT?” he asks when he spots me, wiping his chin and standing up to shake hands. We exchange pleasantries, and I venture that we’re supposed to be having lunch. He smiles, pleads hunger, and asks for a few more minutes to wrap things up. So I repair to the empty bar, where I study the menu and chat to a friendly bartender who is effusive about the restaurant’s beer-battered avocado tacos.

Hadley’s is emblematic of the sort of refashioned, Americana-laden eateries that you find increasingly in heartland cities such as Columbus, Ohio, where Vance is in the process of moving with his family. Think of it as a millennial-supervised remake of the American diner. The menu is heavy on reasonably priced burgers but loaded with urbanite flourishes (“Vegan Club Sandwich with heirloom tomatoes”) and improbably named craft beers (“Barley’s Blood Thirst”). Monday was “Vegan Monday”. Today is “Taco Tuesday”, and with $3 tacos on offer, the bill looks like a bargain.

Before there was Fire and Fury, Michael Wolff’s gossipy insider account of Donald Trump’s first year in office, arguably the most talked-about work of the Trumpian era was Hillbilly Elegy, Vance’s bestselling memoir of growing up in the heart of deindustrialised America. The book turned the 33-year-old into a national figure in the US, as the spokesman cum anthropological explainer for the downtrodden people of rural Appalachia — his family’s ancestral home — and small-town, rust belt Ohio, where he was raised. In the countdown to Trump’s election in November 2016, it was hailed as a handbook to the frustrations of the millions of voters in the white American underclass.

Vance depicts his family with deep affection, but Hillbilly Elegy also offers a clear-eyed critique of his clan’s violent and dysfunctional ways. Vance gives an unforgiving portrait of his mother, who struggled with heroin addiction; in one terrifying incident when he was 12, she got behind the wheel in a drug-induced haze, and threatened to drive the car off the road and kill them both. And all morning I’ve been thinking of “Uncle Pet”, the relative Vance introduces early in his book by recounting the time he almost skinned a man with an electric saw for insulting his mother. I’ve also been mulling Vance’s own frank confessions of his inherited Scots-Irish temper and his pugilistic social media persona and wondering what to brace for from this supposed whisperer of Trumpians.

The man who eventually sits down across from me has the conservative appearance of a politician from middle America — and behaves like one. He is wearing a dark grey suit and pressed blue shirt and tie. He is also studiously polite. If there is an inverse of the hillbilly stereotype, this is it.

The sober mien isn’t accidental. Vance features heavily in Republican discussions in Ohio. He has twice explored and decided against a run for the US Senate in the past year, although senior Republicans pressed him to jump in. When he was first considering his options last summer, Vance tells me, he and his wife decided they “would be miserable” if he pursued it. “The more I thought about it, the more I thought, if I go ahead and do this, given where my family is right now, I’m actually a bad person.”

Like so many Americans before him, Vance found his escape route from poverty in the military. He served in Iraq before going to Yale Law School and then completed a meteoric rise with a stint in Silicon Valley. As a rising star in venture capital circles, he worked for the entrepreneur and one-time outspoken Trump supporter Peter Thiel. There is a whiff of the metropolitan elite about his life today: he now travels the country giving speeches and working with investor Steve Case, the AOL co-founder.

Yet we are meeting in Columbus, the capital of Ohio, because Vance has made a deliberate choice to move with his family to low-key middle America, far from the coastal metropolises most people with his qualifications choose to inhabit. Middletown, the former factory town where he was raised largely by his maternal grandmother (“Mamaw” to his readers), is less than two hours away by car. Jackson, Kentucky, spiritual mountain home to the Vances, is four hours away. “I’ve basically been homesick ever since I left for [Marines] boot camp,” he says. “It’s always been this process of ‘When do I go back?’ ”

Vance has bigger expectations for his return, too. If Hillbilly Elegy was about documenting the broken parts of America, his goal now is to find fixes. He has already established a non-profit to address issues raised by the opioid crisis ravaging many communities. But the move to Columbus also looks like a bit of pragmatic branding. Much of his appeal, beyond his storytelling talents, lies in the intersection of several attributes: his sense of place; his loyalty to what before 2016 was often a forgotten demographic; and his status as a dedicated conservative who is critical of both Trump and his Republican party as well as opposition Democrats. Without a move home, it’s not hard to imagine Vance losing relevance.

We order: fish tacos for him and one of the same for me, paired with a deep-fried avocado taco. We pass on beer and both order Diet Cokes. Vance’s criticism-for-all ethos has already prompted a backlash from the left and the right. The left-leaning New Republic has dubbed him a “false prophet” for the white working-class. Some Republicans in Ohio mock him in private for what they see as his naked political ambition.

The truth is that his politics are complicated. A year into Trump’s presidency, Vance still has an ambivalent view of the man, melding awe and discomfort. “He is one of the few political leaders in America that recognises the frustration that exists in large parts of Ohio, Pennsylvania, eastern Kentucky and so forth,” Vance says. He has been and remains critical of Trump’s dog-whistle politics related to race and immigration. And he is sceptical about the president’s long-term strategy. “The part that is forward-looking and answers the question ‘What do we do now?’ — it’s just not there yet.”

Vance is more scathing still when he discusses a broader Republican party that he sees as intellectually ossified. It cleared the way for Trump, he argues, by blindly pushing an agenda of Reaganesque trickle-down economics and engaging in misplaced military adventures in the years before the real estate developer’s brash arrival.

“I wasn’t as critical of my party in 2016 as I was the person,” he says. “But when I look at tax reform, when I look at healthcare reform, I see Trump as the least worrisome part of the Republican party’s problem, which is that we are basically living in the 1980s. We are constantly trying to resurrect domestic policies from the 1980s.”

Such as? “Let’s cut taxes for the wealthy! Let’s cut the social safety net! . . . The fundamental thesis that underlined basic Republican policies in the early 1980s, which is right, is that you had an economy which was simultaneously stagnating and experiencing high inflation. I don’t think the primary problem facing the American economy right now is that. It is that the opportunities that are out there require an adjustment in skills, an adjustment in training . . . And if that’s the problem, I don’t necessarily see how unleashing tax cuts for the wealthy . . . ” Vance trails off as our food arrives. The tacos are small enough that I immediately order another.

In the end Vance did not vote for Trump. He voted for Evan McMullin, the conservative independent, instead. But he still has a charitable view of the man who has blown up the norms of American political discourse. That is partly because Vance believes that Trump’s crudeness — and what he sees as the prudish response it elicits from city elites — was vital to the president’s appeal in places such as Appalachia.

Mamaw would not have voted for Trump, had she been alive, because of his history as a philanderer. Yet “the vulgarity that turns a lot of people off, Mamaw would have appreciated and thought was hilarious”. His grandfather was a life-long Democrat, although he voted for Ronald Reagan in 1984. “I think, like a lot of folks, he would have voted against Hillary Clinton,” says Vance. “That sort of condescending elitism that the Clinton campaign came to represent would have turned my grandfather off.”

By this point I have bitten into my deep-fried avocado taco. While everything around it — the cabbage slaw and black bean and corn salsa — is delicious, the avocado itself is a flavourless mush.

The top-down condescension that he found so aggravating in 2016 remains alive and well in American politics, Vance argues. “The elite Republican view of why people voted for Donald Trump is that Trump voters are stupid. I think the elite Democratic view is that Trump people were bigoted and immoral. And that’s probably still very much reflected in popular culture,” he says, picking at his fish tacos.

I point out that based on his Ivy League résumé, profession and accomplished spouse — he met his wife Usha at Yale and she is currently clerking for the chief justice of the US Supreme Court — he has become a card-carrying member of the very elite he scorns. Vance laughs. “I react viscerally to this idea that I am a member of the elite, even though it’s objectively true.”

Becoming a father has made him consider this question more seriously. The arrival of his son helped him to reconcile with his now-clean mother, and Vance says he feels an urgent need to make sure his child understands his own impoverished roots.

Hadley’s Bar and Kitchen

260 S 4th St, Columbus, Ohio

Fish taco x 4 $12

Avocado taco x 1 $3

Diet Coke x 2 Free

Oreo bonanza milkshake $6

S’mores shake $6

Total (inc tax and tip) $37.11

“My greatest fear, within that context, is that, in 18 years, will [my son] feel more comfortable around our law school classmates — or will he feel more comfortable around people like my grandma? I want him to feel more comfortable around people like my grandma. But my intuition is that is going to take a lot of work,” Vance says.

In Sun Valley, Idaho, last summer with his wife and then four-week-old son for the annual Allen & Co media conference, he found himself perplexed by his luxurious surroundings. “There is this level of comfort that, I think, is completely weird,” he says. “I understood for the first time what the Bible means when it talks about the difficulty of a rich man entering Heaven. It’s really tough to be a virtuous person when everyone is constantly taking care of you.”

Discomfort is also a theme when Vance talks about Silicon Valley. In the years that he lived there, he says, he found the relentless optimism jarring. “There are a lot of entrepreneurs [in Silicon Valley] developing the next app for clothes shopping who say, not ironically, that ‘we are changing the world’. You’re not changing the world. The guy that’s developing a new therapy that’s non-opioid analgesic pain relief? That guy’s changing the world. He’s going to save thousands of lives.”

His new life in Columbus is built around a belief that many of the entrepreneurs in cities in the American heartland don’t have the access to risk capital that they deserve. He works for Revolution, Steve Case’s venture capital firm, on a campaign called “Rise of the Rest” that is intended to fill the gap. Already he has found companies to invest in, like one in Indianapolis that makes cheap home tests to allow people to check for lead in their water. Vance also wholeheartedly embraces Case’s vision of a looming “third wave” of technological change that is more industrial and about “hard tech”. Much of that wave of innovation, he believes, will come out of America’s traditional industrial centres and universities rather than places such as Silicon Valley.

His faith goes beyond the future of American industry. God is a growing theme in his personal life, too. He is in the process of converting to Catholicism and mulling a book on “Christianity and social capital”, which he describes as an exploration of the role of religious institutions in society viewed through his own personal story.

Just as he is explaining that, we are interrupted by our desserts. We have each ordered what turn out to be towering milkshakes, and we share a moment of awe when they arrive.

I’m curious about the future he sees for poor white Americans. Amid all the studies of rising mortality rates and a growing drug crisis fuelled by prescription opioids, isn’t he pessimistic?

“No,” he answers. “I’d say I’m a short-term realist, a long-term optimist. I do really believe in the power of identification and recognition. We’re in this period where everyone is starting to wake up, whether it’s because they know someone who has just had a heroin overdose or whether they are a policy expert and they have read this paper by [Nobel laureate] Angus Deaton [and his wife and fellow economist Anne Case] about dying in poor white America . . . That recognition gives me a lot of optimism.”

And what of America? His book is often grouped into a genre you might call “American decline” that covers predictions of everything from the end of the American dream to the crumbling of the postwar international order. “I do think that whatever is happening right now is really transformational and the postwar order is probably going to have to change in some fundamental way,” he says. “But I am still an optimist on that front. I think that my theory for what is happening is not that classical liberalism has failed. It’s not that western democracy has failed. It’s not that the postwar consensus has failed. It’s that the people who have been calling the shots for 20-30 years really screwed up.”

“So you’re still optimistic about America?” I ask.

“I am. I am. You just have to be. I don’t want to be one of those people who thinks the next 50 years are going to be a story of decline.”

That feels like a rare bit of optimism in a polarised America, I think as I pay the bill. In a country where vitriol rules, Vance is remarkably sanguine. America, he seems to be saying, will comfortably survive Donald Trump.

The enigmatic showman Martin Couney showcased premature babies in incubators to early-20th-century crowds on the Coney Island and Atlantic City boardwalks, and at expositions across the United States. A Prussian-born immigrant based on the East Coast, Couney had no medical degree but called himself a physician, and his self-promoting, carnival-barker incubator exhibits actually ended up saving the lives of about 7,000 premature babies. These tiny infants would have died without Couney’s theatrics, but instead they grew into adulthood, had children, grandchildren, great-grandchildren and lived into their 70s, 80s, and 90s. This extraordinary story reveals a great deal about neonatology, and about life. (…) Drawing on extraordinary archival research as well as interviews, [Raffel’s] narrative is enhanced by her own reflections as she balanced her shock at how Couney saved these premature infants while also making a living by displaying them like little freaks to the vast crowds who came to see them. Couney’s work with premature infants began in Europe as a carnival barker at an incubator exposition. It was there he fell in love with preemies and met his head nurse Louise Recht. Still, even allowing for his evident affection, making the preemies’ incubation a public show seems exploitative. But was it? In the 21st century, hospital incubators and NICUs are taken for granted, but over a hundred years ago, incubators were rarely used in hospitals, and sometimes they did far more harm than good. Premature infants often went blind because of too much oxygen pumped into the incubators (Raffel notes that Stevie Wonder, himself a preemie, lost his sight this way). Yet the preemies Couney and his nurses — his wife Maye, his daughter Hildegard, and lead nurse Louise, known in the show as “Madame Recht” — cared for retained their vision. The reason? Couney was worried enough about this problem to use incubators developed by M. Alexandre Lion in France, which regulated oxygen flow. Today it is widely accepted that every baby – premature or born to term – should be saved. Not so in Couney’s time. Preemies were referred to as “weaklings,” and even some doctors believed their lives were not worth saving. While Raffel’s tale is inspiring, it is also horrific. She does not shy away from people like Dr. Harry Haiselden who, unlike Couney, was an actual M.D., but “denied lifesaving treatment to infants he deemed ‘defective,’ deliberately watching them die even when they could have lived.” (…) True, he was a showman, and during most of his career, he earned a good living from his incubator babies show, but Couney, an elegant man who spoke German, French and English fluently, didn’t exploit his preemies (Hildegard was a preemie too). He gave them a chance at the lives they might not have been allowed to live. Couney used his showmanship to support all of this life-saving. He put on shows for boardwalk crowds, but he also, despite not having a medical degree, maintained his incubators according to high medical standards. In many ways, Couney’s practices were incredibly advanced. Babies were fed with breast milk exclusively, nurses provided loving touches frequently, and the babies were held, changed and bathed. (…) Yet the efforts of Couney and his nurses went largely ignored by the medical profession and were only mentioned once in a medical journal. As Raffel writes in her book’s final page, “There is nothing at his grave to indicate that [Martin Couney] did anything of note.” The same goes for Maye, Louise and Hildegard. Louise’s name was misspelled on her shared tombstone (Louise’s remains are interred in another family’s crypt), and Hildegard, whose remains are interred with Louise’s, did not even have her own name engraved on the shared tombstone. With the exception of Chicago’s Dr. Julius Hess, who is considered the father of neonatology, the majority of the medical establishment patronized and excluded Couney. Hess, though, respected Couney’s work and built on it with his own scientific approach and research; in the preface to his book Premature and Congenitally Diseased Infants, Hess acknowledges Couney “for his many helpful suggestions in the preparation of the material for this book.” But Couney cared more about the babies than professional respect. His was a single-minded focus: even when it financially devastated him to do so, he persisted, so his preemies could live. National Book Review

Carl Hagenbeck had the idea to open zoos that were filled not only with animals, but also with people. People were excited to discover humans from abroad: before television and color photography were available, it was their only way to see them. Anne Dreesbach

The main feature of these multiform varieties of public show, which became widespread in late-nineteenth and early-twentieth century Europe and the United States, was the live presence of individuals who were considered “primitive”. Whilst these native peoples sometimes gave demonstrations of their skills or produced manufactures for the audience, more often their role was simply as exhibits, to display their bodies and gestures, their different and singular condition. In this article, the three main forms of modern ethnic show (commercial, colonial and missionary) will be presented, together with a warning about the inadequacy of categorising all such spectacles under the label of “human zoos”, a term which has become common in both academic and media circles in recent years. Luis A. Sánchez-Gómez

Between the 29th of November 2011 and the 3rd of June 2012, the Quai de Branly Museum in Paris displayed an extraordinary exhibition, with the eye-catching title Exhibitions. L’invention du sauvage, which had a considerable social and media impact. Its “scientific curators” were the historian Pascal Blanchard and the museum’s curator Nanette Jacomijn Snoep, with Guadeloupe-born former footballer Lilian Thuram acting as “commissioner general”. A popular sportsman, Thuram is also known in France for his staunch social and political commitment. The exhibition was the culmination (although probably not the end point) of a successful project which had started in Marseille in 2001 with the conference entitled Mémoire colonial: zoos humains? Corps Exotiques, corps enfermés, corps mesurés. Over time, successive publications of the papers presented at that first meeting have given rise to a genuine publishing saga, thus far including three French editions, one in Italian, one in English and another in German. This remarkable repertoire is completed by the impressive catalogue of the exhibition. All of the book titles (with the exception of the catalogue) make reference to “human zoos” as their object of study, although in none of them are the words followed by a question mark, as was the case at the Marseille conference. This would seem to define “human zoos” as a well-documented phenomenon, the essence of which has been well-established. Most significantly, despite reiterating the concept, neither the catalogue of the exhibition, nor the texts drawn up by the exhibit’s editorial authorities, provide a precise definition of what a human zoo is understood to be. Nevertheless, the editors seem to accept the concept as being applicable to all of the various forms of public show featured in the exhibition, all of which seem to have been designed with a shared contempt for and exclusion of the “other”. 
Therefore, the label “human zoo” implicitly applies to a variety of shows whose common aim was the public display of human beings, with the sole purpose of showing their peculiar morphological or ethnic condition. Both the typology of the events and the condition of the individuals shown vary widely: ranging from the (generally individual) presentation of persons with crippling pathologies (exotic or more often domestic freaks or “human monsters”) to singular physical conditions (giants, dwarves or extremely obese individuals) or the display of individuals, families or groups of exotic peoples or savages, arrived or more usually brought, from distant colonies. The purpose of the 2001 conference had been to present the available information about such shows, to encourage their study from an academic perspective and, most importantly, to publicly denounce these material and symbolic contexts of domination and stigmatisation, which would have had a prominent role in the complex and dense animalisation mechanisms of the colonised peoples by the “civilized West”. A scientific and editorial project guided by such intentions could not fail to draw widespread support from academic, social and journalistic quarters. Reviews of the original 2002 text and successive editions have, for the most part, been very positive, and praise for what was certainly an extraordinary exhibition (the one of 2012) has been even more unanimous. However, most commentators have limited their remarks to praising the important anti-racist content and criticisms of the colonial legacy, which are common to both undertakings. Only a few authors have drawn attention to certain conceptual and interpretative problems with the presumed object of study, the “human zoos”, problems which would undermine the project’s solidity. 
(…) Although the public display of human beings can be traced far back in history in many different contexts (war, funerals and sacred contexts, prisons, fairs, etc.), the configuration and expansion of different varieties of ethnic shows are closely and directly linked to two historical phenomena which lie at the very basis of modernity: exhibitions and colonialism. The former began to appear at national contests and competitions (both industrial and agricultural). These were organised in some European countries in the second half of the eighteenth century, but it was only in the century that followed that they acquired new and shocking material and symbolic dimensions, in the shape of the international or universal exhibition. The key date was 1851, when the Great Exhibition of the Works of Industry of All Nations was held in London. The triumph of the London event, its rapid and continuing success in France and the increasing participation (which will be outlined) of indigenous peoples from the colonies, paved the way from the 1880s for a new exhibition model: the colonial exhibition (whether official or private, national or international), which almost always featured the presence of indigenous human beings. However, less spectacular exhibitions had already been organised on a smaller scale for many years, since about the mid-nineteenth century. Some of these were truly impressive events, which in some cases also featured native peoples. These were the early missionary (or ethnological-missionary) exhibitions, which initially were mainly British and Protestant, but later also Catholic. Finally, the unsophisticated ethnological exhibitions which had been typical in England (particularly in London) in the early-nineteenth century underwent a gradual transformation from the middle of the century, which saw them develop into the most popular form of commercial ethnological exhibition. These changes were initially influenced by the famous US circus impresario P.T. Barnum’s human exhibitions. Later on, from 1874, Barnum’s displays were successfully reinterpreted (through the incorporation of wild animals and groups of exotic individuals) by Carl Hagenbeck. The second factor which was decisive in shaping the modern ethnic show was imperial colonialism, which gathered momentum from the 1870s. The propagandising effect of imperialism was facilitated by two emerging scientific disciplines, physical anthropology and ethnology, which propagated colonial images and mystifications among the metropolitan population. This, coupled with robust new levels of consumerism amongst the bourgeoisie and the upper strata of the working classes, had a greater impact upon our subject than the economic and geostrategic consequences of imperialism overseas. In fact, the new context of geopolitical, scientific and economic expansion turned the formerly “mysterious savages” into a relatively accessible object of study for certain sections of society. Regardless of how much was written about their exotic ways of life, or strange religious beliefs, the public always wanted more: seeking to participate in more “intense” and “true” encounters and to feel part of that network of forces (political, economic, military, academic and religious) that ruled even the farthest corners of the world and its most primitive inhabitants. It was precisely the convergence of this web of interests and opportunities within the new exhibition universe, already consolidated by the end of the 1870s, that became the defining factor in the transition. From the older, popular model of human exhibitions which had dominated so far, we see a reduction in the numbers of exhibitions of isolated individuals classified as strange, monstrous or simply exotic, in favour of adequately-staged displays of families and groups of peoples considered savage or primitive, authentic living examples of humanity from a bygone age. 
Of course, this new interest, this new desire to see and feel the “other”, was fostered not only by exhibition impresarios, but by industrialists and merchants who traded in the colonies, by colonial administrators and missionary societies. In turn, the process was driven forward by the strongly positive reaction of the public, who asked for more: more exoticism, more colonial products, more civilising missions, more conversions, more native populations submitted to the white man’s power; ultimately, more spectacle. Despite the differences that can be observed within the catalogue of exhibitions, their success hinged to a great extent upon a single factor: the representation or display of human beings labelled as exotic or savage, which today strikes us as unsettling and distasteful. It can therefore come as little surprise that most, if not all, of the visitors to the Quai de Branly Museum exhibition of 2012 reacted to the ethnic shows with a fundamental question: how was it possible that such repulsive shows had been organised? Although many would simply respond with two words, domination and racism, the question is certainly more complex. In order to provide an answer, the content and meanings of the three main models or varieties of the modern ethnic show – commercial ethnological exhibitions, colonial exhibitions and missionary exhibitions – will be studied. (…) The opposition that missionary societies encountered at nineteenth-century international exhibitions encouraged them to organise events of their own. The first autonomous missionary events were Protestant and possibly took place prior to 1851. In any case, this has been confirmed as the year that the Methodist Wesleyan Missionary Society organised a missionary exhibition (which took place at the same time as the International Exhibition). 
Small in size and very simple in structure, it was held for only two days during the month of June, although it provided the extraordinary opportunity to see and acquire shells, corals and varied ethnographic materials (including idols) from Tonga and Fiji. The exhibition’s aim was very specific: to make a profit from ticket sales and the materials exhibited and to seek general support for the missionary enterprise. Whether or not they were directly influenced by the international event of 1851, the modest British missionary exhibitions of the mid-nineteenth century began to evolve rapidly from the 1870s, reaching truly spectacular proportions in the first third of the twentieth century. This enormous success was due to a particular set of circumstances which did not apply in the Catholic sphere. Firstly, the exhibits were a fantastic source of propaganda, and furthermore, they generated a direct and immediate cash income. This is significant considering that Protestant church societies and committees neither depended upon, nor were linked to (at least not directly or officially) civil administration and almost all revenue came from the personal contributions of the faithful. Secondly, because Protestants organised their own events, there was no reason for them to participate in the official colonial exhibitions, with which the Catholic missions became repeatedly involved once the old prejudices of government had fallen away by the later years of the nineteenth century. In this way, evangelical communities were able to maintain their independence from the imperial enterprise, yet in a manner that did not preclude them from collaborating with it whenever it was in their interests to do so. However, whether Catholic or Protestant, the main characteristic of the missionary exhibitions of the late-nineteenth and early-twentieth century was their ethnological intent. 
The ethnographic objects of converted peoples (and of those who had yet to be converted) were noteworthy for their exoticism and rarity, and became a true magnet for audiences. They were also supposedly irrefutable proof of the “backward” and even “depraved” nature of such peoples, who had to be liberated by the redemptive missions which all Christians were expected to support spiritually and financially. But as tastes changed and the public began to lose interest, the exhibitions started to grow in size and complexity, and increasingly began to feature new attractions, such as dioramas and sculptures of native groups. Finally, the most sophisticated of them began to include the natives themselves as part of the show. It must be said that, but for rare exceptions, these were not exhibitions in the style of the famous German Völkerschauen or British ethnological exhibitions, but mere performances; in fact, the “guests” had already been baptised, were Christians, and allegedly willing to collaborate with their benefactors. Whilst the Protestant churches (British and North American alike) produced representations of indigenous peoples with the greatest frequency and intensity, it was (as far as we know) the (Italian) Catholic Church that had the dubious honour of being the first to display natives at a missionary exhibition, and did so in a clearly savagist and rudimentary fashion, which could even be described as brutal. This occurred in the religious section of the Italian-American Exhibition of Genoa in 1892. As a shocking addition to the usual ethnographic and missionary collections, seven natives were exhibited in front of the audience: four Fuegians and three Mapuches of both sexes (children, young and fully-grown adults) brought from America by missionaries. 
The Fuegians, who were dressed only in skins and armed with bows and arrows, spent their time inside a hut made from branches which had been built in the garden of the pavilion housing the missionary exhibition. The Mapuches were two young girls and a man; the three of them lived inside another hut, where they made handicrafts under the watchful eye of their keepers. The exhibition appears to have been a great success, but it must have been evident that the model was too simple in concept, and inhumane in its approach to the indigenous people present. In fact, whilst subsequent exhibitions also featured a native presence (always Christianised) at the invitation of the clergy, the Catholic Church never again fell into such a rough presentation and representation of the obsolete and savage way of life of its converts. To provide an illustration of those times, now happily overcome by the missionary enterprise, Catholic congregations resorted to dioramas and sculptures, some of which were of superb technical and artistic quality. Although the Catholic Church may have organised the first live missionary exhibition, it should not be forgotten that it joined the exhibition sphere much later than the evangelical churches. Also, a considerable number of its displays were associated with colonial events, something that the Protestant churches avoided. (…) Whilst it was the reformed churches that most readily incorporated native participation, they seemed to do so in a more sensitive and less brutalised manner than the Genoese Catholic Exhibition of 1892. (…) The exhibition model at these early-twentieth century Protestant events was very similar to the colonial model. Native villages were reconstructed and ethnographic collections were presented, alongside examples of local flora and fauna, and of course, an abundance of information about missionary work, in which its evangelising, educational, medical and welfare aspects were presented. 
Some of these were just as attractive to the audience (irrespective of their religious beliefs) as contemporary colonial or commercial exhibitions. However, it may be noted that the participation of Christianised natives took a radically different form from that of the colonial and commercial world. Those who were most capable and had a good command of English served as guides in the sections corresponding to their places of origin, a task that they tended to carry out in traditional clothing. More frequently these new Christians assumed roles with less responsibility, such as the manufacture of handicrafts, the sale of exotic objects or the recreation of certain aspects of their previous way of life. The organisers justified their presence by claiming that they were merely actors, representing their now-forgotten savage way of life. This may very well have been the case. At the Protestant exhibitions of the 1920s and 1930s, the presence of indigenous participants became progressively less common until it eventually disappeared. This notwithstanding, the organisers came to benefit from a living resource which complemented displays of ethnographic materials whilst being more attractive to the audience than the usual dioramas. This was a theatrical representation of the native way of life (combined with scenes of missionary interaction) by white volunteers (both men and women) who were duly made up and in some cases appeared alongside real natives. Some of these performances were short, but others consisted of several acts and featured dozens of characters on stage. Regardless of their form, these spectacles were inherent to almost any British and North American exhibition, although much less frequent in continental Europe. Since the 1960s, the Christian missionary exhibition (both Protestant and Catholic) has been conducted along very different lines from those which have been discussed here. 
All direct or indirect associations with colonialism have been definitively given up; the missionary exhibition has broken with racial or ethnological interpretations of converted peoples, and strongly defends its reputed autonomy from any political groups or interests, without forgetting that the essence of evangelisation is to maximise the visibility of its educational and charitable work among the most disadvantaged. (…) The three most important categories of modern ethnic show – commercial ethnological exhibitions, colonial exhibitions and missionary exhibitions – have been examined. All three resorted, to varying degrees, to the exhibition of exotic human beings in order to capture the attention of their audience, and, ultimately, to achieve certain goals: be they success in business and personal enrichment, social, political or financial backing for the colonial enterprise, or support for missionary work. Whilst on occasion they coincided at the same point in time and within the same context of representation, the uniqueness of each form of exhibition has been emphasised. However, this does not mean that they are completely separate phenomena, or that their representation of exotic “otherness” is homogeneous. Missionary exhibitions displayed perhaps the most singular traits due to their spiritual vision. However, it is clear that many made a determined effort to produce direct, visual and emotional spectacles and some, in so doing, resorted to representations of natives which were very similar to those of colonial exhibitions. Can we speak, then, of a convergence of designs and interests? I honestly do not think so. At many colonial exhibitions, organisers showed a clear intention to portray natives as fearsome, savage individuals (sometimes even describing them as cannibals) who somehow needed to be subjugated. Peoples who were considered, to a lesser or greater extent, to be civilised were also displayed (as at the interwar exhibitions). 
However, the purpose of this was often to publicise the success of the colonial enterprise in its campaign for “the domestication of the savage”, rather than to present a message of humanitarianism or universal fraternity. Missionary exhibitions provided information and material examples of the former way of life of the converted, in which natives demonstrated that they had abandoned their savage condition and participated in the exhibition for the greater glory of the evangelising mission. Moreover, they also became living evidence that something much more transcendent than any civilising process was taking place: that once they had been baptised, anyone, no matter how wild they had once been, could become part of the same universal Christian family.

It is certainly true that the shows that the audiences enjoyed at all of these exhibitions (whether missionary, colonial or even commercial) were very similar. Yet in the case of the former, the act of exhibition took place in a significantly more humanitarian context than in the others. And while it is evident that indigenous cultures and peoples were clearly manipulated in their representation at missionary exhibitions, this did not mean that the exhibited native was merely a passive element in the game. And there is something more. The dominating and spectacular qualities present in almost all missionary exhibitions should not let us forget one last factor which was essential to their conception, their development and even their longevity: Christian faith. Without Christian faith there would have been no missionary exhibitions, and had anything similar been organised, it would not have had the same meaning. It was essential that authentic Christian faith existed within the ecclesiastical hierarchy and within those responsible for congregations, missionary societies and committees.
But the faith that really made the exhibitions possible was the faith of the missionaries, of others who were involved in their implementation and, of course, of those who visited. Although it was never recognised as such, this was perhaps an uncritical faith, complacent in its acceptance of the ways in which human diversity was represented and with ethical values that occasionally came close to the limits of Christian morality. But it was a faith nonetheless, a faith which intensified and grew with each exhibition, which surely fuelled both Christian religiosity (Catholic and Protestant alike) and at least several years of missionary enterprise, years crucial for the imperialist expansionism of the West. It is an objective fact that the display of human beings at commercial and colonial shows was always much more explicit and degrading than at any missionary exhibition. To state what has just been proposed more bluntly: missionary exhibitions were not “human zoos”. However, it is less clear whether the same can be said of the remaining categories: do commercial and colonial exhibitions deserve the label “human zoo”, or were they polymorphic ethnic shows of much greater complexity?

The principal analytical obstacle to the use of the term “human zoo” is that it makes an immediate and direct association between all of these acts and contexts and the idea of a nineteenth-century zoo. The images of caged animals, growling and howling, may cause admiration, but also disgust; they may sometimes inspire tenderness, but are mainly something to be avoided and feared due to their savage and bestial condition. This was definitely the case for the organisers of the scientific and editorial project cited at the beginning of this article, so it can be no surprise that Carl Hagenbeck’s joint exhibitions of exotic animals and peoples were chosen as the frame of reference for human zoos.
Although the authors state in the first edition that “the human zoo is not the exhibition of savagery but its construction” [“le zoo humain n’est pas l’exhibition de la sauvagerie, mais la construction de celle-ci”], the problem, as Blanckaert (2002) points out, is that this alleged construction or exhibitional structure was not present at most of the exhibitions under scrutiny, nor (and this is my own addition) at those shown at the Exhibitions. Indeed, the expression “human zoo” establishes a model which does not fit the meagre number of exhibitions of exotic individuals from the sixteenth, seventeenth or eighteenth centuries, nor that of Saartjie Baartman (the Hottentot Venus) of the early nineteenth century, much less the freak shows of the twentieth century. Furthermore, this model can neither be compared to most of the nineteenth-century British human ethnological exhibitions, nor to most of the native villages of the colonial exhibitions, nor to the Wild West show of Buffalo Bill, let alone to the ruralist-traditionalist villages which were set up at many national and international exhibitions until the interwar period. Ultimately, their connection with many wandering “black villages” or “native villages” exhibited by impresarios at the end of the nineteenth century could also be disputed. Moreover, many of the shows organised by Hagenbeck number amongst the most professional in the exhibitional universe. The fact that they were held in zoos should not automatically imply that the circumstances in which they took place were more brutal or exploitative than those of any of the other ethnic shows.

It is evident from all the shows which have been discussed that the differential racial condition of the persons exhibited not only formed the basis of their exhibition, but may also have fostered and even founded racist reactions and attitudes held by the public.
However, there are many other factors (political, economic and even aesthetic) which come into play and have barely been considered, and which could be seen as encouraging admiration of the displays of bodies, gestures, skills, creations and knowledge which were seen as both exotic and seductive.

In fact, the indiscriminate use of the very successful concept of the “human zoo” generates two fundamental problems. Firstly, it impedes our “true” knowledge of the object of study itself, that is, of the very varied ethnic shows which it intends to catalogue, given the great diversity of contexts, formats, persons in charge, objectives and materialisations that such enterprises have to offer. Secondly, the image of the zoo inevitably recreates the idea of an exhibition which is purely animalistic, where the only relationship is that which exists between exhibitor and exhibited: the complete domination of the latter (irrational beasts) by the former (rational beings). If we accept that the exhibited are treated merely as more-or-less worthy animals, the consequences are twofold: a logical rejection of such shows past, present and future, and the visualisation of the exhibited as passive victims of racism and capitalism in the West. It is therefore of no surprise that the research barely considers the role that these individuals may have played, the extent to which their participation in the show was voluntary and the interests which may have moved some of them to take part in these shows. Ultimately, no evaluation has been made of how these shows may have provided “opportunity contexts” for the exhibited, whether as commercial, colonial or missionary exhibits. Whilst it is true that the exhibited peoples’ own voice is the hardest to record in any of these shows, greater effort could have been made in identifying and mapping them, as, when this happens, the results obtained are truly interesting.
Before we conclude, it must be said that the proposed analysis does not intend to soften or justify the phenomenon of the ethnic show. Even in the least dramatic and exploitative cases it is evident that the essence of these shows was a marked inequality, in which every supposed “context of interaction” established a dichotomous relationship between black and white, North and South, colonisers and colonised, and ultimately, between dominators and dominated. My intention has been to propose a more-or-less classifying and clarifying approach to this varied world of human exhibitions, to make a basic inventory of their forms of representation and to determine which are the essential traits that define them, without losing sight of the contingent factors which they rely upon. Luis A. Sánchez-Gómez

Between the 29th of November 2011 and the 3rd of June 2012, the Quai Branly Museum in Paris displayed an extraordinary exhibition, with the eye-catching title Exhibitions. L’invention du sauvage, which had a considerable social and media impact. Its “scientific curators” were the historian Pascal Blanchard and the museum’s curator Nanette Jacomijn Snoep, with Guadeloupe-born former footballer Lilian Thuram acting as “commissioner general”. A popular sportsman, Thuram is also known in France for his staunch social and political commitment. The exhibition was the culmination (although probably not the end point) of a successful project which had started in Marseille in 2001 with the conference entitled Mémoire colonial: zoos humains? Corps exotiques, corps enfermés, corps mesurés. Over time, successive publications of the papers presented at that first meeting have given rise to a genuine publishing saga, thus far including three French editions (Bancel et al., 2002, 2004; Blanchard et al., 2011), one in Italian (Lemaire et al., 2003), one in English (Blanchard et al., 2008) and another in German (Blanchard et al., 2012). This remarkable repertoire is completed by the impressive catalogue of the exhibition (Blanchard, Boëtsch and Snoep, 2011). All of the book titles (with the exception of the catalogue) refer to “human zoos” as their object of study, although in none of them are the words followed by a question mark, as was the case at the Marseille conference. This would seem to define “human zoos” as a well-documented phenomenon, the essence of which has been well established. Most significantly, despite reiterating the concept, neither the catalogue of the exhibition nor the texts drawn up by the exhibit’s editorial authorities provide a precise definition of what a human zoo is understood to be.
Nevertheless, the editors seem to accept the concept as being applicable to all of the various forms of public show featured in the exhibition, all of which seem to have been designed with a shared contempt for and exclusion of the “other”. Therefore, the label “human zoo” implicitly applies to a variety of shows whose common aim was the public display of human beings, with the sole purpose of showing their peculiar morphological or ethnic condition. Both the typology of the events and the condition of the individuals shown vary widely: ranging from the (generally individual) presentation of persons with crippling pathologies (exotic, or more often domestic, freaks or “human monsters”) or singular physical conditions (giants, dwarves or extremely obese individuals) to the display of individuals, families or groups of exotic or savage peoples, arrived, or more usually brought, from distant colonies.[1]

The purpose of the 2001 conference had been to present the available information about such shows, to encourage their study from an academic perspective and, most importantly, to publicly denounce these material and symbolic contexts of domination and stigmatisation, which are held to have played a prominent role in the complex and dense mechanisms by which colonised peoples were animalised by the “civilised West”. A scientific and editorial project guided by such intentions could not fail to draw widespread support from academic, social and journalistic quarters. Reviews of the original 2002 text and successive editions have, for the most part, been very positive, and praise for what was certainly an extraordinary exhibition (that of 2012) has been even more unanimous.[2] However, most commentators have limited their remarks to praising the important anti-racist content and criticisms of the colonial legacy, which are common to both undertakings.
Only a few authors have drawn attention to certain conceptual and interpretative problems with the presumed object of study, the “human zoos”, problems which would undermine the project’s solidity (Blanckaert, 2002; Jennings, 2005; Liauzu, 2005: 10; Parsons, 2010; McLean, 2012). The problems which may arise from the indiscriminate use of the concept of the “human zoo” will be discussed in detail at the end of this article.

Firstly, however, a revision of the complex historical process underlying the polymorphic phenomenon of the living exhibition and its configurations will provide the background for more detailed study. This will consist of an outline of three groups which, in my view, are the most relevant exhibition categories. Although the public display of human beings can be traced far back in history in many different contexts (war, funerals and sacred contexts, prisons, fairs, etc.), the configuration and expansion of different varieties of ethnic shows are closely and directly linked to two historical phenomena which lie at the very basis of modernity: exhibitions and colonialism. The former began to appear at national contests and competitions (both industrial and agricultural). These were organised in some European countries in the second half of the eighteenth century, but it was only in the century that followed that they acquired new and shocking material and symbolic dimensions, in the shape of the international or universal exhibition.

The key date was 1851, when the Great Exhibition of the Works of Industry of All Nations was held in London. The triumph of the London event, its rapid and continuing success in France and the increasing participation (which will be outlined) of indigenous peoples from the colonies paved the way, from the 1880s, for a new exhibition model: the colonial exhibition (whether official or private, national or international), which almost always featured the presence of indigenous human beings.
However, less spectacular exhibitions had already been organised on a smaller scale for many years, since about the mid-nineteenth century. Some of these were truly impressive events, which in some cases also featured native peoples. These were the early missionary (or ethnological-missionary) exhibitions, which initially were mainly British and Protestant, but later also Catholic.[3] Finally, the unsophisticated ethnological exhibitions which had been typical in England (particularly in London) in the early nineteenth century underwent a gradual transformation from the middle of the century, which saw them develop into the most popular form of commercial ethnological exhibition. These changes were initially influenced by the human exhibitions of the famous US circus impresario P. T. Barnum. Later on, from 1874, Barnum’s displays were successfully reinterpreted (through the incorporation of wild animals and groups of exotic individuals) by Carl Hagenbeck.

The second factor which was decisive in shaping the modern ethnic show was imperial colonialism, which gathered momentum from the 1870s. The propagandising effect of imperialism was facilitated by two emerging scientific disciplines, physical anthropology and ethnology, which propagated colonial images and mystifications among the metropolitan population. This, coupled with robust new levels of consumerism amongst the bourgeoisie and the upper strata of the working classes, had a greater impact upon our subject than the economic and geostrategic consequences of imperialism overseas. In fact, the new context of geopolitical, scientific and economic expansion turned the formerly “mysterious savages” into a relatively accessible object of study for certain sections of society.
Regardless of how much was written about their exotic ways of life or strange religious beliefs, the public always wanted more: seeking participation in more “intense” and “true” encounters, and to feel part of that network of forces (political, economic, military, academic and religious) that ruled even the farthest corners of the world and its most primitive inhabitants.

It was precisely the convergence of this web of interests and opportunities within the new exhibition universe, already consolidated by the end of the 1870s, that became the defining factor in the transition from the older, popular model of human exhibitions which had dominated so far: a reduction in the number of exhibitions of isolated individuals classified as strange, monstrous or simply exotic, in favour of adequately-staged displays of families and groups of peoples considered savage or primitive, authentic living examples of humanity from a bygone age. Of course, this new interest, this new desire to see and feel the “other”, was fostered not only by exhibition impresarios, but by industrialists and merchants who traded in the colonies, by colonial administrators and by missionary societies. In turn, the process was driven forward by the strongly positive reaction of the public, who asked for more: more exoticism, more colonial products, more civilising missions, more conversions, more native populations submitted to the white man’s power; ultimately, more spectacle.

Despite the differences that can be observed within the catalogue of exhibitions, their success hinged to a great extent upon a single factor: the representation or display of human beings labelled as exotic or savage, which today strikes us as unsettling and distasteful.
It can therefore be of little surprise that most, if not all, of the visitors to the Quai Branly Museum exhibition of 2012 reacted to the ethnic shows with a fundamental question: how was it possible that such repulsive shows had been organised? Although many would simply respond with two words, domination and racism, the question is certainly more complex. In order to provide an answer, the content and meanings of the three main models or varieties of the modern ethnic show – commercial ethnological exhibitions, colonial exhibitions and missionary exhibitions – will be studied.

Commercial ethnological exhibitions were managed by private entrepreneurs, who very often acted as de facto owners of the individuals they exhibited. With the seemingly noble purpose of bringing the inhabitants of exotic and faraway lands closer to the public and placing them under the scrutiny of anthropologists and scholarly minds, these individuals organised events with a rather carnival-like air, whose sole purpose was very simple: to make money. Such exhibitions were held more frequently than their colonial equivalents, which they predated and for which they served as an inspiration. In fact, in some countries where (overseas) colonial expansion was delayed or minimal – such as Germany (Thode-Arora, 1989; Kosok and Jamin, 1992; Klös, 2000; Dreesbach, 2005; Nagel, 2010), Austria (Schwarz, 2001) or Switzerland (Staehelin, 1993; Minder, 2008) – and even in some former colonies – such as Brazil (Sánchez-Arteaga and El-Hani, 2010) – they were regular and popular events and could still be seen in some places as late as the 1950s. Even in the case of overseas superpowers, commercial exhibitions were held more regularly than the strictly colonial variety, although it is true that the two sometimes overlapped and can be difficult to distinguish from one another. This was the case in France (Bergougniou, Clignet and David, 2001; David, n.d.) and to an even greater extent in Great Britain, with London becoming a privileged place to experience them throughout the nineteenth century (Qureshi, 2011).

Almost all of these exhibitions attracted their audiences with a clever combination of racial spectacle, eroticism and a few drops of anthropological science, although there was no single recipe for a successful show. Dances, leaps, chants, shouts, and the blood of sacrificed animals were the fundamental components of these events, although they were also part of colonial exhibitions.
All of these acts, these strange and unusual rituals, were as incomprehensible as they were exciting; as shocking as they were repulsive to the civilised citizens of “advanced” Europe. It is unsurprising that spectators were prepared to pay the price of admission, which was not cheap, in order to gain access to such extraordinary sights as these “authentic savages”. Over time, the need to attract increasingly demanding audiences, who quickly became used to seeing “blacks and savages” of all kinds in a variety of settings, challenged the entrepreneurs to provide ever more compelling spectacles.

For decades the most admired shows on European soil were organised by Carl Hagenbeck (1844–1913), a businessman from Hamburg who was a seasoned wild animal showman (Ames, 2008). His greatest success was founded on a truly spectacular innovation: the simultaneous exhibition in one space (a zoo or other outdoor enclosure) of wild animals and a group of natives, both supposedly from the same territory, in a setting that recreated the environment of their place of origin. The first exhibition of this type, organised in 1874, was a great success, despite the relatively low level of exoticism of the individuals displayed: a group of Sami (Lapp) men and women accompanied by some reindeer. Whilst not all of Hagenbeck’s highly successful shows (of which there were over 50 in total) relied upon the juxtaposition of humans and animals, all presented a racial spectacle of exotic peoples, typically displayed against a backdrop of huts, plants and domestic ware, and included indigenous groups from the distant territories of Africa, the Arctic, India, Ceylon and Southeast Asia. For many scholars, Hagenbeck’s Völkerschauen or Völkerausstellungen constituted the paradigmatic example of the human zoo, a view also accepted by the French historians who organised the project of the same name.
They tended to combine displays of people and animals and took place in zoos, so the analogy could not be clearer. Furthermore, the performances of the exhibited peoples were limited to songs, dances and rituals, and for the most part their activities consisted of little more than day-to-day tasks. Therefore, little importance was attached to their knowledge or skills, but rather to the scrutiny of their gestures, their distinctive bodies and behaviours, which were invariably exotic but not always wild.

However, despite their obvious racial and largely racist components, Hagenbeck’s shows cannot be simply dismissed as human zoos. As an entrepreneur, the German’s objective was obviously to profit from the display of animals and people alike, and yet we cannot conclude that the humans were reduced to the status of animals. In fact, the natives were always employed and seem to have received fair treatment. Likewise, their display was based upon a premise of exoticism rather than savagery, in which key ideas of difference, faraway lands and adventure were ultimately exalted. Hagenbeck’s employees were apparently healthy; sometimes slender, as were the Ethiopians, or even athletic, like the Sudanese. In some instances (for example, with people from India and Ceylon) their greatest appeal was their almost-fantastic exoticism, with their rich costumes and ritual gestures being regarded as remarkable and sophisticated.

Nevertheless, on many other occasions people were displayed for their distinctiveness and supposed primitivism, as was the case on the dramatic tour of the Inuit Abraham Ulrikab and his family, from the Labrador Peninsula, all of whom fell ill and died on their journey due to a lack of appropriate vaccination.
This is undoubtedly one of the best-documented commercial exhibitions, not because of an abundance of details concerning its organisation, but owing to the existence of several letters and a brief diary written by Ulrikab himself (Lutz, 2005). As can easily be imagined, it is absolutely exceptional to find information originating from one of the very individuals who featured in an ethnic show; not an alleged oral testimony collected by a third party, but their own actual voice. The vast majority of such people did not know the language of their exhibitors and, even if they knew enough to communicate, it is highly unlikely that they would have been able to write in it. All of this, coupled with the fact that the documents have been preserved and remain accessible, is almost a miracle.

However, in spite of the tragic fate of Ulrikab and his family, other contemporary ethnic shows were far more exploitative and brutal. This was the case with several exhibitions that toured Europe towards the end of the 1870s, whose victims included Fuegians, Inuit, “primitive” Africans (especially Bushmen and Pygmies) and Australian aboriginal peoples. Some were complex and relatively sophisticated and included the recreation of native villages; in others, the entrepreneur simply portrayed his workers with their traditional clothes and weapons, emphasising their supposedly primitive condition. Slightly less dramatic than these, but more racially stigmatising than Hagenbeck’s shows, were the exhibitions held at the Jardin d’Acclimatation in Paris between 1877 and the First World War. A highly lucrative business camouflaged beneath a halo of anthropological scientism, the exhibitions were organised by the director of the Jardin himself, the naturalist Albert Geoffroy Saint-Hilaire (Coutancier and Barthe, 1995; Mason, 2001: 19–54; David, n.d.; Schneider, 2002; Báez and Mason, 2006).
This purported scientific and educational institution enjoyed the attention of French anthropologists for a time; however, after 1886, the Anthropological Society in Paris distanced itself from what was in reality little more than a spectacle for popular recreation, one which was hard to justify from an ethical point of view. In the case of many private enterprises from the 1870s and 1880s in particular, shows can be described as moving away from notions of fantasy, adventure and exoticism and towards the most brutal forms of exploitation. However, despite what has been said about France, Qureshi (2011: 278–279) highlights the role that ethnologists and anthropologists (and their study societies) played in Great Britain in approving commercial exhibitions of this sort. This enabled exhibitions to claim legitimacy as spaces for scientific research, visitor education and, of course, the advancement of the colonial enterprise.

Leaving aside the displays of isolated individuals in theatres, exhibition halls or fairgrounds (where the alleged “savage” sometimes proved to be a fraud), photographs and surviving information about the aforementioned commercial ethnological shows speak volumes about the relations which existed between the exhibitors and the exhibited. In nearly all cases the impresario was a European or North American, who wielded almost absolute control over the lives of his “workers”. Formal contracts did exist, and legal control became increasingly widespread, especially in Great Britain (Qureshi, 2011: 273), as the nineteenth century progressed. It is also evident, nevertheless, that this contractual relationship could not mask the dominating, exploitative and almost penitentiary conditions of the bonds created. Whether Inuit, Bushmen, Australians, Pygmies, Samoans or Fuegians, it is hard to accept that all contracted peoples were aware of the implications of this legal binding with their employer.
Whilst most were not captured or kidnapped (although this was documented on more than one occasion), it is reasonable to be sceptical about the voluntary nature of the commercial relationship. Moreover, those very same contracts (which they were probably unable to understand in the first place) committed the natives to conditions of travel, work and accommodation which were not always satisfactory. Very often their lives could be described as confined, not only when performances were taking place, but also when they were over. Exhibited individuals were very rarely given leave to move freely around the towns that the exhibitions visited.

The exploitative and inhuman aspects of some of these spectacles were particularly flagrant when they included children, who either formed part of the initial contingent of people or swelled the ranks of the group when they were born on tour. On the one hand, the more primitive the peoples exhibited were considered to be, the more brutal their exhibition became and the more painful the circumstances in which it took place. Conversely, conditions seemed to improve, albeit only to a limited extent, when individuals belonged to an ethnic group which was seen as more “evolved” or “prouder”, held warrior status, or belonged to a local elite. This was true of certain African groups who were particularly resistant to colonial domination, with the Ashanti being a case in point. In spite of this, their subordinate position did not change.

There was, however, a certain type of commercial show in which the relations between the employer and the employees went beyond the merely commercial. More professionalised shows often required natives to demonstrate skills and give performances that would appeal to the audience. This was the case in some (of the more serious and elaborate) circus contexts and dramatised spectacles, the most notable of which was the acclaimed Wild West show.
Directed by William Frederick Cody (1846–1917), the famous Buffalo Bill, the show featured cowboys, Mexicans, and members of various Native American ethnic groups (Kasson, 2000). This attraction, and many others that followed in the wake of its success, could be considered the predecessors of present-day theme park shows.

Many of the shows which continued to endure during the interwar period were in some measure similar to those of the nineteenth century, although they were unable to match the popularity of yesteryear. Whilst the stages were still set with reproduction native villages, as had been the case in the late nineteenth and early twentieth century, the exhibition and presentation of natives acquired a more fair-like and circus-like character, which harked back to the spectacles of the early nineteenth century. Although it seems contradictory, colonial exhibitions at this time were in fact much larger and more numerous, as we shall see in the following section. It was precisely then, in the mid-1930s, that Nazi Germany, a very modern country with an intensely racist government, produced an ethnic show which illustrates the complexity of the human zoo phenomenon. The Deutsche Afrika-Schau (German African Show) provides an excellent example of the peculiar game which was played between owners, employees and public administrators concerning the display of exotic human beings. The show, a striking and incongruous fusion of variety spectacle and Völkerschau, toured several German towns between 1935 and 1940 (Lewerenz, 2006). Originally a private and strictly commercial business, it soon became a peculiar semi-official event in which African and Samoan men and women resident in Germany were legally employed to take part. Complicated and unstable after its Nazification, the show aimed to facilitate the racial control of its participants while serving as a mechanism of ideological indoctrination and colonial propaganda.
Incapable of profiting from the show, the Nazi regime would eventually abolish it.

After the Second World War, ethnic shows entered a phase of obvious decline. They were no longer of interest as a platform for the wild and exotic, mainly due to increasing competition from new and more accessible channels of entertainment, ranging from cinema to the beginnings of overseas tourism within Europe and beyond. While occasional spectacles tried to profit from the ancient curiosity about the morbid and the unusual as late as the 1950s and even the 1960s, they were little more than crude and clumsy representations, which generated little interest among the public. Nowadays, as before, there are still contexts and spaces in which unique persons are portrayed, whether this is related to ethnicity or any other factor. These spectacles often fall into the category of artistic performances or take the banal form of reality TV.

COLONIAL EXHIBITIONS: LEISURE, BUSINESS AND INDOCTRINATION

This category of exhibition was organised by either public administrations or private institutions linked to colonial enterprise, and very often featured some degree of collaboration between the two. The main aim of these events was to exhibit official colonial projects and private initiatives managed by entrepreneurs and colonial settlers, which were supposedly intended to bring the wealth and well-being of the metropolis to the colonies. The presentation also carried an educational message, intended not only to reinforce the “national-colonial conscience” among its citizens, but also to project a powerful image of the metropolis to competing powers abroad. Faced with the likelihood that such content would prove rather unexciting and potentially boring for visitors, the organisers resorted to various additions which were considered more attractive and engaging. Firstly, they devised a museum of sorts, in which ethnographic materials of the colonised peoples (traditional dress, day-to-day objects, idols and weapons) were exhibited. These exotic and unusual pieces did draw the interest of the public, but, fearing that this would not be sufficient, the organisers knew that they could potentially sell thousands of tickets by offering the live display of indigenous peoples. If the exhibition was official, the natives constituted the ideal means by which to deliver the colonial message to the masses. In the case of private exhibitions, they were seen as the fastest and safest way to guarantee a show’s financial success.
Raw materials and a variety of other objects (including ethnographic exhibitions) from the colonies were already placed on show at the Great Exhibition of 1851 in London. These items were accompanied by a number of individuals originating from the same territories, either as visitors or as participants in the relevant section of the exhibition.
However, such people cannot be considered as exhibits themselves; neither can similar colonial visitors at the Paris (1855) or London (1862) exhibitions, nor those at the Paris exhibitions of 1867 and 1878, which featured important colonial sections. It was only at the start of the 1880s that Europeans were able to enjoy the first colonial exhibitions proper, whether autonomous or connected (albeit with an identity and an entity of their own) to a universal or international exhibition. It could be argued that the Amsterdam International Colonial and Export Exhibition of 1883 acted as a letter of introduction for this model of event (Bloembergen, 2006), and it was quickly followed by the London Colonial and Indian Exhibition of 1886 (Mathur, 2000) and, to a lesser though important extent, by the Madrid Philippines Exhibition of 1887 (Sánchez-Gómez, 2003). All three housed reproductions of native villages and exhibited dozens of individuals brought from the colonies. This was precisely what attracted the thousands of people who packed the venues. Such success would not have been possible by simply assembling a display of historical documents, photographs or ethnographic materials, no matter how exotic.
Thereafter, colonial exhibitions (almost all of which featured the live presence of native peoples) multiplied, whether they were autonomous or connected with national or international exhibitions. In France many municipalities and chambers of commerce began to organise their own exhibits, some of which (such as the Lyon Exhibition of 1894) were theoretically international in scope, although some of the most impressive exhibits held in the country were the colonial sections of the Paris Universal Exhibitions of 1889 (Palermo, 2003; Tran, 2007; Wyss, 2010) and 1900 (Wilson, 1991; Mabire, 2000; Geppert, 2010: 62–100).
Equally successful were the colonial sections of the Belgian exhibitions of the last quarter of the nineteenth century, which displayed the products and peoples of what was called the Congo Independent State (later the Belgian Congo), which until 1908 was a personal possession of King Leopold II. The most remarkable was probably the 1897 Tervuren Exhibition, an annex of the Brussels International Exposition of the same year (Wynants, 1997; Küster, 2006). In Germany, one of the European capitals of commercial ethnological shows, several colonial exhibitions were orchestrated as the overseas empire was being built between 1884 and 1918. Among them, the Erste Deutsche Kolonialausstellung or First German Colonial Exhibition, which was organised as a complement to the great Berlin Gewerbeausstellung (Industrial Exhibition) of 1896, was particularly successful (Arnold, 1995; Richter, 1995; Heyden, 2002).
As far as the United States was concerned, the country’s late but impetuous arrival as a world power was almost immediately heralded by the phenomenon of the World’s Fair and its respective colonial sections (Rydell, 1984 and 1993; Rydell, Findling and Pelle, 2000). Whilst a stunning variety of ethnic performances were already on show at the 1893 Chicago World’s Fair, it was at Omaha (1898), Buffalo (1901) and, above all, at the 1904 Saint Louis Exhibition that hundreds of natives were enthusiastically displayed with the purpose of publicising and gathering support for the complex and “heavy” civilising task (“The White Man’s Burden”) that the North American nation had to undertake in its new overseas possessions (Kramer, 1999; Parezo and Fowler, 2007).
In principle, those natives who took part in the live section of a colonial exhibition did so of their own accord, whether they were allegedly savage or civilised individuals, and regardless of whether the show had been organised through concessions to private company owners or those who indirectly depended on public agencies.
Although neither violence nor kidnapping has been recorded, it is highly unlikely that most of the natives who took up the invitation were fully aware of its implications: the great distances they had to travel, the discomforts they would endure and the situations in which they would find themselves upon arrival in the metropolis.
Until the early twentieth century, the sole purpose of native exhibitions was to attract an audience and to show, by means of a supposedly “real” image, the inferior condition of the colonised peoples and the need to continue the civilising mission in the faraway lands from which they came. In all cases their living conditions in the metropolis were unlikely to differ greatly from those of the participants in purely commercial shows: usually residing inside the exhibition venue, they were rarely free to leave without the express permission of their supervisors. However, it must be said that conditions were considerably better for the individuals exhibited when the shows were organised by government agencies, which always ensured that formal contracts were signed, and were probably unlikely to house people in the truly gruesome conditions present in some domains of the private sector. In some cases, added circumstances can be inferred which reveal a clear interest in “doing things properly”, by developing an ethical and responsible show, no matter how impossible this was in practice. Perhaps the clearest example of this kind of event is the Philippines Exhibition organised in Madrid in 1887.
The most striking feature of this exhibition was its stated educational purpose: to present a sample of the ethnic and social diversity of the archipelago. Other colonial exhibitions attempted to do the same, but in this case the intentions of the Spanish appeared to be more authentic and credible.
Of course, the aim was not to provide a lesson in island ethnography, but to prove the extent to which the Catholic Church had managed to convert the native population, and to show where savage tribes still existed. Representing the latter were, among others, several Tinguian and Bontoc persons (generically known as Igorots by the Spanish) and an Aeta person, referred to as a Negrito. Several Muslim men and women from Mindanao and the Joló (Sulu) archipelago (known to the Spanish as Moros or “Moors”) also took part in the exhibition, not because they were considered savages but on account of their pagan and unredeemed condition. Finally, as an example of the benefits of the colonial enterprise, Christian Filipinos (both men and women) were invited to demonstrate their artistic skill and craftsmanship and to sell their artisan products from various structures within the venue. All were legally employed and received regular payment until their return to the Philippines, which was very unusual for an exhibition at that time.
However, despite the “good intentions” of the administration, an obvious hierarchy can be inferred from the spatial pattern through which the Filipino presence in Madrid was organised. Individuals considered savage lived inside the exhibit enclosure and were under permanent control; they could visit the city, but always in a scheduled and closely directed way. Muslims, however, did not live inside the park, but in boarding houses and inns. Their movements were also restricted, but this was justified on the basis of their limited knowledge of their surroundings. Christians also lodged at inns, and although they did enjoy a certain autonomy, their status as “special guests” imposed a number of official commitments and compulsory attendance at events.
Such differences became even more obvious, especially for the audience, not just because the savages lived inside the ranchería or native village, where they were exhibited, but also because their only purpose was to dance, gesture, eat and display their half-naked bodies. Muslims were not exhibited, nor did they have a clear or specific task to perform beyond merely “representing”. Christian men and women (cigar makers and artisans) simply performed their professional tasks in front of the audience, and were expected to complete a given timetable and workload as would any other worker.
In the light of the above, it may be concluded that the Philippines Exhibition of 1887 (specifically the live exhibition section) was conducted in a manner which questions the simplistic concept of a human zoo that many historians apply to these spectacles. Although there were certain similarities with commercial shows, we must admit that the Spanish government made considerable efforts to ensure that the exhibition, and above all the participation of the Filipinos, was carried out in a relatively dignified fashion. It must be reiterated that this is not intended to project a benevolent image of nineteenth-century Spanish colonialism. The position of some of the exhibited, especially those considered savages, was not only subordinate but almost subhuman (almost being the key word), in spite of the fact that they received due payment and were relatively well fed. Moreover, we cannot forget that three of the participants (a Carolino man and woman, and a Muslim woman) died from diseases which were directly related to the conditions of their stay on the exhibition premises.
As the twentieth century advanced, colonial shows changed their direction and content, although it was some time before these changes took effect.
The years prior to the First World War saw several national colonial exhibitions (Marseille and Paris in 1906; London in 1911),[4] two binational exhibitions (London, 1908 and 1910)[5] and a trinational one (London, 1909),[6] which became benchmarks for exhibition organisers during the interwar years. The early twentieth century also saw several national colonial sections, which had varying degrees of impact, at three universal exhibitions organised in Belgium: Liège (1905), Brussels (1910) and Ghent (1913), and at several exhibitions organised in three different Italian cities, although none of these included a native section.[7] However, it was during the 1920s and 1930s that a true flowering of national and international exhibitions occurred, whose main focus was colonial or which included important colonial elements.[8] The time was ripe not only for ostentation, but also because the tensions generated by certain European powers, especially Italy, encouraged a vindication of overseas colonies through the propaganda deployed at these events.
For all these reasons, and in addition to many other minor events, national colonial exhibitions were staged in Marseille (1922), Wembley (1924–25),[9] Stuttgart (1928),[10] Cologne (1934), Oporto (1934), Freiburg im Breisgau (1935), Como (1937),[11] Glasgow (1938),[12] Dresden (1939), Vienna (1940) and Naples (1940).[13] At an international level, the most important was the 1931 Parisian Exposition Coloniale Internationale et des Pays d’Outre Mer. In addition, although they were not specialised international colonial exhibitions, outstanding and relevant colonial sections could be found at the Turin National Exhibition of 1928, the Ibero-American Exhibition of 1929, the Brussels Universal Exhibition of 1935, the Paris International Exhibition of 1937 and the Lisbon National Exhibition of 1940.
At most of these events, a revised perspective of the overseas territories was projected.
Although, with some exceptions, the metropolises continued to import indigenous peoples and persisted in presenting them as exotic, the focus now shifted to the results of the civilising process, as opposed to strident representations of savagery. This meant that it was no longer necessary for exhibited peoples to live at the exhibition venue. The aim was now to show the most attractive side of empire, and displays of the skills of its inhabitants, such as singing or dancing, continued, albeit in a more serious, professional fashion. In principle, natives taking part in these exhibitions could move around more freely; in addition, they were all employed as any other professional or worker would be. However, once again the ethnic factor came into play, materialising under many different guises. For example, at the Paris Exhibition of 1931, people who belonged to “oriental civilisations” appeared at liberty to move around the venue; they were not put on display, and devoted their time to the activities for which they had been contracted (such as traditional songs and dances, handicrafts or the sale of products). Once their working day was completed, they were free to visit the exhibition or travel around Paris. However, the same could not be said for the Guineans arriving at the Seville Ibero-American Exhibition of 1929, where they were clearly depicted in a savagist context, similar to the way in which Africans had been displayed at colonial and even commercial exhibitions in the nineteenth century (Sánchez-Gómez, 2006).
Another interwar colonial exhibition which was unable to free itself from nineteenth-century stereotypes was the one held in Oporto in 1934, which included several living villages inhabited by natives, children included (Serén, 2001). Their presence in the city and the fact that they were displayed and lived within the same exhibition space was something that neither the press nor contemporary politicians saw fit to criticise.
In fact it was the pretos (black African men) and especially the pretas (black African women) who were the main attraction for the thousands of visitors who thronged to the event, which was probably related to the fact that all the natives were bare-chested. Interestingly, the Catholic Church did not take offence, perhaps interpreting the women shown as being merely “black savages” who had little to do with chaste Portuguese women. Of course, it had no objections to the exhibition of human beings either.
Two interwar exhibitions (Seville and Oporto) have been cited as examples where the management of indigenous participants markedly resembled the practices of the nineteenth century. However, this should not imply that other events refrained from the (more or less) sophisticated manipulation of the native presence. The most significant example was the Parisian International Colonial Exhibition of 1931.[14] Some historians highlight the fact that the general organiser, Marshal Lyautey, managed to impose his criterion that the exhibition should not include displays of the traditional “black villages” or “indigenous villages” inhabited by natives. Although it is true that the official (French and international) sections did not include this feature,[15] there can be little doubt that this was a gigantic ethnic spectacle, in which hundreds of native people (present in the city as artists, artisans or simply as guests) were exhibited and manipulated as a source of propaganda of the highest order for the colonial enterprise. This is just one more example, although a particularly significant one, of the multi-faceted character that ethnic shows acquired.
It is difficult to define these simply on the basis of their brutality or “animal” characteristics, their closeness to Hagenbeck’s Völkerschauen or the anthropological exhibitions that were organised at the Jardin d’Acclimatation in late-nineteenth-century Paris.
The last major European colonial exhibition took place in the anachronistic Belgian Congo section of the Brussels Universal Exhibition of 1958, the first to be held after the Second World War.[16] In principle, its contents were organised around a discourse which defended the moral values of interracial fraternity and which set out to convince both Belgian society and the Congolese that the Belgians were in Congo only to civilise, not to exploit. In order to prove the authenticity of this discourse, the organisers went to great pains to avoid the jingoistic exoticism which had characterised most colonial exhibits thus far. Accordingly, the event did not include the traditional, demeaning spectacle of natives living within the exhibition space. However, it did include an exotic section, where several dozen Congolese artisans demonstrated their skills to the audience and sold the products manufactured there, in a context which was intended to be purely commercial. Unfortunately, the good will of the organisers was betrayed by an element of the public, who could not help confronting the Africans in a manner reminiscent of their grandparents back in 1897. As a result, the artisans, shocked by the insolence and bad manners of some of the visitors, abruptly left the exhibition and returned to Congo.
The Congolese presence in Brussels was not limited to these artisans: almost seven hundred Africans arrived, two hundred of whom were tourists who had been invited with the specific purpose of visiting the exhibition. Most of them were members of the “Association of African Middle Classes”; that is, they were part of the “evolved elite”.
The remainder were people carrying out some sort of task in the colonial section of the exhibition, whether as specialised workers, dancers, guides or assistants in the various sections, perhaps including some members of the Public Force, which was made up of natives. The presence in Brussels of the tourists, in particular, was part of a policy of association which, according to the organisers, was intended to prepare “the Congolese population for the complete realisation of their human destiny.” The Belgian population, in turn, would have the chance to become better acquainted with these people through “direct, personal and free contact with the civilised Congolese” (Delhalle, 1985: 44). Neither this specific measure nor any others taken to bring blacks and whites closer seem to have had any practical effect whatsoever. In fact, although the Congolese visitors were relatively well cared for (though not without differences or setbacks), their movements during their stay in Brussels were under constant scrutiny, to prevent them from being “contaminated” by the “bad habits” of the metropolitan citizens.
Despite everything mentioned thus far, or perhaps even because of it, the 1958 exhibition was an enormous public success, on a par with the colonial events of the past. This time, as before, it was predicated on a largely negative image of the Congolese population. Barely any critical voices were raised against the exhibiting model or the abuses of the colonial system, not even from the political left. Finally, as with earlier colonial exhibitions, it is obvious that what was shown in Brussels had little to do with the reality of life in Congo. In fact, as the exhibition closed down, in October 1958, Patrice Lumumba founded the Congolese National Movement. On the 11th of January 1959, repression of the struggles for independence escalated into the bloody killings of Léopoldville, the colonial capital.
Barely one year later, on the 30th of June 1960, Belgium formally acknowledged the independence of the new Democratic Republic of Congo; two years later Rwanda and Burundi followed.

MISSIONARY EXHIBITIONS: DOMINATION, FAITH AND SPECTACLE

The excitement that exhibitions generated in the second half of the nineteenth century provoked reactions from many quarters, including the Christian churches. Of course, the event which shook Protestant propagandist sensibilities the hardest (as Protestants were the first to take part in the exhibition game) was the 1851 London Exhibition. However, the interest which both the Anglican Church and many evangelical denominations expressed in participating in this great event was initially met with hesitation and even rejection by the organisers (Cantor, 2011). Finally their participation was accepted, but only two missionary societies were authorised to become an official part of the exhibition, and they could do so only as editors of printed religious works.
The problems documented in London in 1851 continued to affect events organised throughout the rest of the century; in fact, the presence of the Christian churches was permitted on only two occasions, both in Paris, at the exhibitions of 1867 and 1900. At the first of these, only Protestant organisations participated, as the Catholic Church did not yet recognise the importance of such an event as an exhibitional showcase. By the time of the second, which was the last great exhibition of the nineteenth century and one of the most grandiose of all time, the situation had changed dramatically; both Protestants and Catholics participated, and the latter (the French Church, to be precise) did so with greater success than its Protestant counterpart.[18]
The opposition that missionary societies encountered at nineteenth-century international exhibitions encouraged them to organise events of their own. The first autonomous missionary events were Protestant and possibly took place prior to 1851. In any case, this has been confirmed as the year in which the Wesleyan Methodist Missionary Society organised a missionary exhibition (which took place at the same time as the International Exhibition).
Small in size and very simple in structure, it was held for only two days during the month of June, although it provided the extraordinary opportunity to see and acquire shells, corals and varied ethnographic materials (including idols) from Tonga and Fiji.[19] The exhibition’s aim was very specific: to make a profit from ticket sales and the materials exhibited, and to seek general support for the missionary enterprise.
Whether or not they were directly influenced by the international event of 1851, the modest British missionary exhibitions of the mid-nineteenth century began to evolve rapidly from the 1870s, reaching truly spectacular proportions in the first third of the twentieth century. This enormous success was due to a particular set of circumstances which did not apply to the Catholic sphere. Firstly, the exhibits were a fantastic source of propaganda and, furthermore, they generated a direct and immediate cash income. This is significant considering that Protestant church societies and committees neither depended upon, nor were linked to (at least not directly or officially), the civil administration, and almost all revenue came from the personal contributions of the faithful. Secondly, because the Protestants organised their own events, there was no reason for them to participate in the official colonial exhibitions, with which the Catholic missions became repeatedly involved once the old prejudices of government had fallen away in the later years of the nineteenth century. In this way, evangelical communities were able to maintain their independence from the imperial enterprise, yet in a manner that did not preclude them from collaborating with it whenever it was in their interests to do so.
However, whether Catholic or Protestant, the main characteristic of missionary exhibitions in the late nineteenth and early twentieth centuries was their ethnological intent (Sánchez-Gómez, 2013).
The ethnographic objects of converted peoples (and of those who had yet to be converted) were noteworthy for their exoticism and rarity, and became a true magnet for audiences. They were also supposedly irrefutable proof of the “backward” and even “depraved” nature of such peoples, who had to be liberated by the redemptive missions which all Christians were expected to support spiritually and financially. But as tastes changed and the public began to lose interest, the exhibitions started to grow in size and complexity, and increasingly began to feature new attractions, such as dioramas and sculptures of native groups. Finally, the most sophisticated of them began to include the natives themselves as part of the show. It must be said that, but for rare exceptions, these were not exhibitions in the style of the famous German Völkerschauen or British ethnological exhibitions, but mere performances; in fact, the “guests” had already been baptised, were Christians, and were allegedly willing to collaborate with their benefactors.
Whilst the Protestant churches (British and North American alike) produced representations of indigenous peoples with the greatest frequency and intensity, it was (as far as we know) the (Italian) Catholic Church that had the dubious honour of being the first to display natives at a missionary exhibition, and it did so in a clearly savagist and rudimentary fashion, which could even be described as brutal. This occurred in the religious section of the Italian-American Exhibition of Genoa in 1892 (Bottaro, 1984; Perrone, n.d.). As a shocking addition to the usual ethnographic and missionary collections, seven natives were exhibited in front of the audience: four Fuegians and three Mapuches of both sexes (children, young people and fully-grown adults) brought from America by missionaries.
The Fuegians, who were dressed only in skins and armed with bows and arrows, spent their time inside a hut made from branches which had been built in the garden of the pavilion housing the missionary exhibition. The Mapuches were two young girls and a man; the three of them lived inside another hut, where they made handicrafts under the watchful eye of their keepers.
The exhibition appears to have been a great success, but it must have been evident that the model was too simple in concept, and inhumane in its approach to the indigenous people present. In fact, whilst subsequent exhibitions also featured a native presence (always Christianised) at the invitation of the clergy, the Catholic Church never again resorted to such a crude presentation and representation of the supposedly obsolete and savage way of life of its converts. To provide an illustration of those times, now happily overcome by the missionary enterprise, Catholic congregations resorted to dioramas and sculptures, some of which were of superb technical and artistic quality.
Although the Catholic Church may have organised the first live missionary exhibition, it should not be forgotten that it joined the exhibitional sphere much later than the evangelical churches. Also, a considerable number of its displays were associated with colonial events, something that the Protestant churches avoided. This happened, for example, at the colonial exhibitions of Lyon (1894), Berlin in 1896 (although this one also involved Protestant churches) and Brussels-Tervuren (1897), as well as at the National Exhibition of 1898 in Turin. Years later, the great colonial (national and international) exhibitions of the interwar period continued to receive the enthusiastic and uncritical participation of the Catholic missions (although some, as in 1931, included Protestant missions too).
The most remarkable examples were the Ibero-American Exhibition of Seville in 1929, the International Exhibitions held at Antwerp (1930) and Paris (1931), and the Oporto (1934) and Lisbon (1937 and 1940) National Exhibitions.[20] This colonial-missionary association did not prevent the Catholic Church from organising its own autonomous exhibitions, through which it tried to emulate and even surpass its more experienced Protestant counterpart. This belated effort culminated in two of the most spectacular Christian missionary exhibitions of all time: the Vatican Missionary Exhibition of 1925 and the Barcelona Missionary Exhibition of 1929, which was associated with the great international show of that year (Sánchez-Gómez, 2007 and 2006). Although both events documented native nuns and priests as visitors, no humans were exhibited. Again, dioramas and groups of sculptures were featured, representing both religious figures and indigenous peoples.
Let us return to the Protestant world. Whilst it was the reformed churches that most readily incorporated native participation, they seemed to do so in a more sensitive and less brutalised manner than the Genoese Catholic exhibition of 1892. We know of a native presence at the first North American exhibitions: one was held at the Ecumenical Conference on Foreign Missions, which met in New York in 1909, and, most significantly, another at the great interdenominational The World in Boston exhibition of 1911 (Hasinoff, 2011). Native participation has also been recorded at the two most important contemporary British exhibitions: The Orient in London (held by the London Missionary Society in 1908) and Africa and the East (organised by the Church Missionary Society in 1909).
Both exhibitions toured a number of British towns until the late 1920s, although for the most part without indigenous participation (Coombes, 1994; Cheang, 2006–2007).[21] However, the most spectacular Protestant exhibition, with hundreds of natives, dozens of stands, countless parades, theatrical performances, the latest thrill rides and exotic animals on display, was the gigantic Centenary Exhibition of American Methodist Missions, held in Columbus in 1919 and popularly known as the Methodist World’s Fair (Anderson, 2006).
The exhibition model at these early-twentieth-century Protestant events was very similar to the colonial model. Native villages were reconstructed and ethnographic collections were presented, alongside examples of local flora and fauna and, of course, an abundance of information about missionary work, presenting its evangelising, educational, medical and welfare aspects. Some of these events were just as attractive to the audience (irrespective of their religious beliefs) as contemporary colonial or commercial exhibitions. However, it may be noted that the participation of Christianised natives took a radically different form from that seen in the colonial and commercial world. Those who were most capable and had a good command of English served as guides in the sections corresponding to their places of origin, a task that they tended to carry out in traditional clothing. More frequently, these new Christians assumed roles with less responsibility, such as the manufacture of handicrafts, the sale of exotic objects or the recreation of certain aspects of their previous way of life. The organisers justified their presence by claiming that they were merely actors, representing their now-forgotten savage way of life. This may very well have been the case.
At the Protestant exhibitions of the 1920s and 1930s, the presence of indigenous people became progressively less common until it eventually disappeared.
This notwithstanding, the organisers came to benefit from a living resource which complemented displays of ethnographic materials whilst being more attractive to the audience than the usual dioramas: the theatrical representation of the native way of life (combined with scenes of missionary interaction) by white volunteers, both men and women, who were duly made up and in some cases appeared alongside real natives. Some of these performances were short, but others consisted of several acts and featured dozens of characters on stage. Regardless of their form, these spectacles were a fixture of almost every British and North American exhibition, although much less frequent in continental Europe.

Since the 1960s, the Christian missionary exhibition (both Protestant and Catholic) has been conducted along very different lines from those discussed here. It has definitively given up all direct or indirect association with colonialism; it has broken with racial or ethnological interpretations of converted peoples; and it strongly defends its reputed autonomy from any political groups or interests, without forgetting that the essence of evangelisation is to maximise the visibility of its educational and charitable work among the most disadvantaged.

FINAL WORD

The three most important categories of modern ethnic show (commercial ethnological exhibitions, colonial exhibitions and missionary exhibitions) have been examined. All three resorted, to varying degrees, to the exhibition of exotic human beings in order to capture the attention of their audience and, ultimately, to achieve certain goals: be they success in business and personal enrichment, social, political or financial backing for the colonial enterprise, or support for missionary work. Whilst on occasion they coincided at the same point in time and within the same context of representation, the uniqueness of each form of exhibition has been emphasised. However, this does not mean that they are completely separate phenomena, or that their representation of exotic “otherness” is homogeneous.

Missionary exhibitions displayed perhaps the most singular traits due to their spiritual vision. However, it is clear that many made a determined effort to produce direct, visual and emotional spectacles, and some, in so doing, resorted to representations of natives which were very similar to those of colonial exhibitions. Can we speak, then, of a convergence of designs and interests? I honestly do not think so. At many colonial exhibitions, organisers showed a clear intention to portray natives as fearsome, savage individuals (sometimes even describing them as cannibals) who somehow needed to be subjugated. Peoples who were considered, to a greater or lesser extent, to be civilised were also displayed (as at the interwar exhibitions). However, the purpose of this was often to publicise the success of the colonial enterprise in its campaign for “the domestication of the savage”, rather than to present a message of humanitarianism or universal fraternity.
Missionary exhibitions provided information and material examples of the former way of life of the converted, in which natives demonstrated that they had abandoned their savage condition and participated in the exhibition for the greater glory of the evangelising mission. Moreover, they also became living evidence that something much more transcendent than any civilising process was taking place: that once they had been baptised, anyone, no matter how wild they had once been, could become part of the same universal Christian family.

It is certainly true that the shows that audiences enjoyed at all of these exhibitions (whether missionary, colonial or even commercial) were very similar. Yet in the case of the missionary exhibitions, the act of exhibition took place in a significantly more humanitarian context than in the others. And while indigenous cultures and peoples were clearly manipulated in their representation at missionary exhibitions, this does not mean that the exhibited native was merely a passive element in the game. And there is something more. The dominating and spectacular qualities present in almost all missionary exhibitions should not let us forget one last factor which was essential to their conception, their development and even their longevity: Christian faith. Without Christian faith there would have been no missionary exhibitions, and had anything similar been organised, it would not have had the same meaning. It was essential that authentic Christian faith existed within the ecclesiastical hierarchy and within those responsible for congregations, missionary societies and committees. But the faith that really made the exhibitions possible was the faith of the missionaries, of the others involved in their implementation and, of course, of those who visited.
Although it was never recognised as such, this was perhaps an uncritical faith, complacent in its acceptance of the ways in which human diversity was represented and with ethical values that occasionally came close to the limits of Christian morality. But it was a faith nonetheless, a faith which intensified and grew with each exhibition, and which surely fuelled both Christian religiosity (Catholic and Protestant alike) and at least several years of missionary enterprise, years crucial for the imperialist expansionism of the West. It is an objective fact that the display of human beings at commercial and colonial shows was always much more explicit and degrading than at any missionary exhibition. To state what has just been proposed more bluntly: missionary exhibitions were not “human zoos”. The question is less clear, however, for the remaining categories: do commercial and colonial exhibitions deserve the label of “human zoos”, or were they polymorphic ethnic shows of a much greater complexity?

The principal analytical obstacle to the use of the term “human zoo” is that it makes an immediate and direct association between all of these acts and contexts and the idea of a nineteenth-century zoo. The images of caged animals, growling and howling, may cause admiration, but also disgust; they may sometimes inspire tenderness, but are mainly something to be avoided and feared due to their savage and bestial condition. This was definitely the case for the organisers of the scientific and editorial project cited at the beginning of this article, so it is no surprise that Carl Hagenbeck’s joint exhibitions of exotic animals and peoples were chosen as the frame of reference for human zoos.
Although the authors state in the first edition that “the human zoo is not the exhibition of savagery but its construction” [“le zoo humain n’est pas l’exhibition de la sauvagerie, mais la construction de celle-ci”] (Bancel et al., 2002: 17), the problem, as Blanckaert (2002) points out, is that this alleged construction or exhibitional structure was not present at most of the exhibitions under scrutiny, nor (and this is my own addition) at those shown at the Exhibitions. L’invention du sauvage exhibit. Indeed, the expression “human zoo” establishes a model which does not fit the meagre number of exhibitions of exotic individuals from the sixteenth, seventeenth or eighteenth centuries, nor that of Saartjie Baartman (the Hottentot Venus) in the early nineteenth century, much less the freak shows of the twentieth century. Furthermore, this model can be compared neither to most of the nineteenth-century British human ethnological exhibitions, nor to most of the native villages of the colonial exhibitions, nor to the Wild West show of Buffalo Bill, let alone to the ruralist-traditionalist villages which were set up at many national and international exhibitions until the interwar period. Ultimately, its connection with many of the wandering “black villages” or “native villages” exhibited by impresarios at the end of the nineteenth century could also be disputed. Moreover, many of the shows organised by Hagenbeck number amongst the most professional in the exhibitional universe. The fact that they were held in zoos should not automatically imply that the circumstances in which they took place were more brutal or exploitative than those of any of the other ethnic shows.

It is evident from all the shows which have been discussed that the differential racial condition of the persons exhibited not only formed the basis of their exhibition, but may also have fostered and even founded racist reactions and attitudes among the public.
However, there are many other factors (political, economic and even aesthetic) which come into play and have barely been considered, and which could be seen as encouraging admiration of the displays of bodies, gestures, skills, creations and knowledge which were seen as both exotic and seductive.

In fact, the indiscriminate use of the very successful concept of the “human zoo” generates two fundamental problems. Firstly, it impedes our “true” knowledge of the object of study itself, that is, of the very varied ethnic shows which it intends to catalogue, given the great diversity of contexts, formats, persons in charge, objectives and materialisations that such enterprises have to offer. Secondly, the image of the zoo inevitably recreates the idea of an exhibition which is purely animalistic, where the only relationship is that which exists between exhibitor and exhibited: the complete domination of the latter (irrational beasts) by the former (rational beings). If we accept that the exhibited were treated merely as more-or-less worthy animals, the consequences are twofold: a logical rejection of such shows past, present and future, and the visualisation of the exhibited as passive victims of racism and capitalism in the West. It is therefore no surprise that the research barely considers the role that these individuals may have played, the extent to which their participation in the show was voluntary, and the interests which may have moved some of them to take part. Ultimately, no evaluation has been made of how these shows may have provided “opportunity contexts” for the exhibited, whether at commercial, colonial or missionary exhibitions.
Whilst it is true that the exhibited peoples’ own voice is the hardest to recover at any of these shows, greater effort could have been made to identify and map it; when this is done, the results are truly interesting (Dreesbach, 2005: 78).

Before we conclude, it must be said that the proposed analysis does not intend to soften or justify the phenomenon of the ethnic show. Even in the least dramatic and exploitative cases it is evident that the essence of these shows was a marked inequality, in which every supposed “context of interaction” established a dichotomous relationship between black and white, North and South, colonisers and colonised, and ultimately, between dominators and dominated. My intention has been to propose a classifying and clarifying approach to this varied world of human exhibitions, to make a basic inventory of its forms of representation and to determine the essential traits that define them, without losing sight of the contingent factors upon which they rely.

NOTES

ABSTRACT
The aim of this article is to study living ethnological exhibitions. The main feature of these multiform varieties of public show, which became widespread in late-nineteenth and early-twentieth-century Europe and the United States, was the live presence of individuals who were considered “primitive”. Whilst these native peoples sometimes gave demonstrations of their skills or produced manufactures for the audience, more often their role was simply to serve as exhibits: to display their bodies and gestures, their different and singular condition. In this article, the three main forms of modern ethnic show (commercial, colonial and missionary) are presented, together with a warning about the inadequacy of categorising all such spectacles under the label of “human zoos”, a term which has become common in both academic and media circles in recent years.

Figure 8. Postcard from the Deutsche Colonial-Ausstellung, Gewerbe Ausstellung (German Colonial Exhibition, Industrial Exhibition, Berlin 1896). Historische Bildpostkarten, Universität Osnabrück, Sammlung Prof. Dr. S. Giesbrecht (http://www.bildpostkarten.uni-osnabrueck.de).

[1]In order to avoid overloading the text with punctuation marks, I have decided not to place words such as blacks, savages or primitives in inverted commas; this by no means implies my acceptance of their contemporary racist connotations.

[3]Missionary exhibitions are not an integral part of the repertoire of exhibitions studied as part of the French project on “Human zoos”, nor do they appear at the great Quai de Branly exhibition of 2012.

[4]The Marseille and Paris exhibitions competed with each other. The Festival of Empire was organised in London to celebrate the coronation of George V, thus also being known as the Coronation Exhibition. For more information about these and other British colonial exhibitions, or exhibitions which had important colonial sections, organised between 1890 and 1914, see Coombes (1994: 85–108) and Mackenzie (2008).

[5]These were the Franco-British Exhibition (1908) and the Japan-British Exhibition (1910); although their contents were not exclusively colonial, colonial sections made up an important part of both. Both were private ventures run by the successful show businessman Imre Kiralfy. For the former, see Coombes (1994: 187–213), Leymarie (2009) and Geppert (2010: 101–133); and for the latter, Mutsu (2001).

[6]This was the International Imperial Exhibition, in which Great Britain, France and Russia took part, although other countries also had a minor presence. It was organised by the businessman Imre Kiralfy.

[7]The exhibition fever of those years even reached Japan, where colonial and anthropological exhibitions were organised in Osaka (1903) and Tokyo (1913). These displayed Ainu people and persons from the newly incorporated territories of the Japanese Empire (Siddle, 1996; Nanta, 2011).

[8]For a good summary of the extensive colonial propaganda movement which spread around Europe during the interwar period (with detailed references to the exhibitions) see Stanard (2009).

[10]After Germany’s defeat in the Great War, Article 119 of the Versailles Treaty specified that it should give up all its overseas territories. Germany therefore lacked any possessions whatsoever when the interwar exhibitions were held. Thus, the German exhibitions mentioned (including Vienna) were nothing but patriotic displays of colonial revisionism, which were held under the Weimar Republic and reached their heyday in the Nazi era.

[17]The territory of Rwanda-Urundi (former German colony of Rwanda and Burundi) was administered as a trusteeship by Belgium from 1924, on accepting a League of Nations mandate which was renewed through the UN after the end of the Second World War.

[18]For the encounters and disagreements between Christian exhibitions and Universal exhibitions during the nineteenth century, see Sánchez-Gómez (2011).

Mackenzie, John M. (2008) “The Imperial Exhibitions of Great Britain”. In Human Zoos. Science and spectacle in the age of colonial empires, edited by Blanchard, P. et al., Liverpool University Press, Liverpool, pp. 259–268.

Palermo, Lynn E. (2003) “Identity under construction: Representing the colonies at the Paris Exposition Universelle of 1889”. In The color of liberty: Histories of race in France, edited by Peabody, Sue and Stovall, Tyler, Duke University Press, Durham, pp. 285–300.

Wilson, Michael (1991) “Consuming History: The Nation, the Past and the Commodity at l’Exposition Universelle de 1900”. American Journal of Semiotics, 8 (4): 131–154. http://dx.doi.org/10.5840/ajs1991848

From the German Empire through the 1930s, humans were locked up and exhibited in zoos. These racist “ethnological expositions” remain a traumatizing experience for Theodor Wonja Michael.

“We went throughout Europe with circuses, and I was always traveling – from Paris to Riga, from Berne to Bucharest via Warsaw,” remembers Theodor Wonja Michael. He is the youngest son of a Cameroonian who left the then German colony at the turn of the century to live in the German Empire.

“We danced and performed along with fire-eaters and fakirs. I began hating taking part in these human zoos very early on,” says the now 92-year-old. For several years, he stopped talking about that period of his life. Then in 2013, Theodor Wonja Michael wrote about his and his family’s story in the book “Deutsch sein und schwarz dazu” (Being German and Also Black).

Traveling with a human zoo

Theodor Wonja Michael’s father moved with his family from Cameroon to Europe at the end of the 19th century. In Berlin, he quickly realized that he wouldn’t be allowed to do normal jobs. The only available way of making a living was through ethnological expositions, also called human zoos.

At the time, performers of a human zoo would tour through Europe just like rock bands today. They were scheduled to do several presentations a day while visitors would gawk at them.

“In some cases, the performers had contracts, but they didn’t know what it meant to be part of Europe’s ethnological expositions,” says historian Anne Dreesbach. Most of them were homesick; some died because they had not been vaccinated. That is how an Inuit family that was part of an exhibition died of smallpox after shows in Hamburg and Berlin in 1880. Another group of Sioux died of consumption, measles and pneumonia.

Carl Hagenbeck’s exposition of ‘exotics’

A 1927 photo of Carl Hagenbeck, surrounded by the Somalians he put in a Hamburg zoo

Up until the 1930s, there were some 400 human zoos in Germany.

The first big ethnological exposition was organized in 1874 by Carl Hagenbeck, a wild animal merchant from Hamburg. “He had the idea to open zoos that weren’t only filled with animals, but also with people. People were excited to discover humans from abroad: before television and color photography were available, it was their only way to see them,” explains Anne Dreesbach, who published a book on the history of human zoos in Germany a few years ago.

An illusion of travel

The concept already existed in the early modern age, when European explorers brought back people from the new areas they had traveled to. Carl Hagenbeck took this one step further, staging the exhibitions to make them more attractive: Laplanders appeared accompanied by reindeer, Egyptians rode camels in front of cardboard pyramids, and Fuegians lived in huts and wore bones as accessories in their hair. “Carl Hagenbeck sold visitors an illusion of world travel with his human zoos,” says historian Hilke Thode-Arora from Munich’s ethnological museum.

How Theodor Wonja Michael experienced racism in Germany

“In these ethnological expositions, we embodied Europeans’ perception of ‘Africans’ in the 1920s and 30s – uneducated savages wearing raffia skirts,” explains Theodor Wonja Michael. He still remembers how strangers would stroke his curly hair: “They would smell me to check if I was real and talked to me in broken German or with signs.”

Hordes of visitors

Theodor Wonja Michael’s family was torn apart after the death of his mother, a German seamstress from East Prussia. A court determined that the father could not properly raise his four children, and operators of a human zoo officially became the young Theodor’s foster parents in the 1920s. “Their only interest in us was our labor,” explains Michael.

All four children were taken on by different operators of ethnological expositions and had to present and sell “a typical African lifestyle” to a curious public, as their father had done before them. For Theodor Wonja Michael, it was torture.

Just as fans want to see stars up close today, visitors at the time wanted to see Fuegians, Eskimos or Samoans. When one group decided to stay hidden in their hut during the last presentation of the day at a Berlin zoo in November 1881, thousands of visitors protested by pushing down fences and walls and destroying benches. “This shows what these expositions subconsciously triggered in people,” says Dreesbach.

Theodor Wonja Michael was nine years old when his father died in 1934, aged 55. He has only a few memories of him. From his siblings’ stories, he knows that his father worked as an extra in silent films at the beginning of the 1920s. The whole family was brought with him to the studio and also hired as extras because they were viewed as “typically African.”

Several human zoos stopped running after the end of World War I. Hagenbeck organized his last show of “exotic people” in 1931 – but that didn’t end discrimination.

Theodor Wonja Michael’s book is available in German under the title “Deutsch sein und Schwarz dazu. Erinnerungen eines Afro-Deutschen” (Being German and Also Black: Memoirs of an Afro-German).

The Strange Case of Dr. Couney: How a Mysterious European Showman Saved Thousands of American Babies by Dawn Raffel (Blue Rider Press, 284 pp.)

By Laura Durnell

The National Book Review

8.15.2018

With a couturier’s skill, Dawn Raffel’s The Strange Case of Dr. Couney: How a Mysterious European Showman Saved Thousands of American Babies threads facts and education into a dramatic and highly unusual narrative. The enigmatic showman Martin Couney showcased premature babies in incubators to early-20th-century crowds on the Coney Island and Atlantic City boardwalks, and at expositions across the United States. A Prussian-born immigrant based on the East Coast, Couney had no medical degree but called himself a physician, and his self-promoting, carnival-barking incubator exhibits ended up saving the lives of about 7,000 premature babies. These tiny infants would have died without Couney’s theatrics; instead they grew into adulthood, had children, grandchildren and great-grandchildren, and lived into their 70s, 80s, and 90s. This extraordinary story reveals a great deal about neonatology, and about life.

Raffel, a journalist, memoirist and short story writer, brings her literary sensibilities and great curiosity to Couney’s fascinating tale. Drawing on extraordinary archival research as well as interviews, she enhances her narrative with her own reflections as she balances her admiration for how Couney saved these premature infants against her shock that he made a living by displaying them like little freaks to the vast crowds who came to see them. Couney’s work with premature infants began in Europe, where he worked as a carnival barker at an incubator exposition. It was there that he fell in love with preemies and met his head nurse, Louise Recht. Still, even allowing for his evident affection, making the preemies’ incubation a public show seems exploitative.

But was it? In the 21st century, hospital incubators and NICUs are taken for granted, but over a hundred years ago, incubators were rarely used in hospitals, and sometimes they did far more harm than good. Premature infants often went blind because of too much oxygen pumped into the incubators (Raffel notes that Stevie Wonder, himself a preemie, lost his sight this way). Yet the preemies Couney and his nurses — his wife Maye, his daughter Hildegard, and lead nurse Louise, known in the show as “Madame Recht” — cared for retained their vision. The reason? Couney was worried enough about this problem to use incubators developed by M. Alexandre Lion in France, which regulated oxygen flow.

Today it is widely accepted that every baby – premature or born at term – should be saved. Not so in Couney’s time. Preemies were referred to as “weaklings,” and even some doctors believed their lives were not worth saving. While Raffel’s tale is inspiring, it is also horrific. She does not shy away from people like Dr. Harry Haiselden who, unlike Couney, was an actual M.D., but “denied lifesaving treatment to infants he deemed ‘defective,’ deliberately watching them die even when they could have lived.”

Haiselden’s behavior and philosophy did not develop in a vacuum. Nazi Germany’s shadow looms large in Raffel’s book. Raffel acknowledges that, just as they did with America’s Jim Crow laws, the Nazis took America’s late-nineteenth and early-twentieth-century fascination with eugenics and applied it to monstrous ends in the T4 euthanasia program and the Holocaust. To better understand Haiselden’s attitude, Raffel explains the role eugenics played throughout Couney’s lifetime. She dispassionately explains the theory of eugenics, how its propaganda worked and how belief in eugenics manifested itself in 20th-century America.

Ultimately, Couney’s compassion, advocacy, resilience, and careful maintenance of his self-created public narrative rose above this ignorant cruelty. True, he was a showman, and during most of his career, he earned a good living from his incubator babies show, but Couney, an elegant man who spoke German, French and English fluently, didn’t exploit his preemies (Hildegard was a preemie too). He gave them a chance at the lives they might not have been allowed to live. Couney used his showmanship to support all of this life-saving. He put on shows for boardwalk crowds, but he also, despite not having a medical degree, maintained his incubators according to high medical standards.

In many ways, Couney’s practices were incredibly advanced. Babies were fed with breast milk exclusively, nurses provided loving touches frequently, and the babies were held, changed and bathed. “Every two hours, those who could suckle were carried upstairs on a tiny elevator and fed by breast by wet nurses who lived in the building,” Raffel writes. “The rest got the funneled spoon.”

Yet the efforts of Dr. Couney and his nurses went largely ignored by the medical profession and were mentioned only once in a medical journal. As Raffel writes on her book’s final page, “There is nothing at his grave to indicate that [Martin Couney] did anything of note.” The same goes for Maye, Louise and Hildegard. Louise’s name was misspelled on her shared tombstone (her remains are interred in another family’s crypt), and Hildegard, whose remains are interred with Louise’s, did not even have her own name engraved on the shared tombstone.

With the exception of Chicago’s Dr. Julius Hess, who is considered the father of neonatology, the majority of the medical establishment patronized and excluded Couney. Hess, though, respected Couney’s work and built on it with his own scientific approach and research; in the preface to his book Premature and Congenitally Diseased Infants, Hess acknowledges Couney “‘for his many helpful suggestions in the preparation of the material for this book.’” But Couney cared more about the babies than professional respect. His was a single-minded focus: even when it financially devastated him to do so, he persisted, so his preemies could live.

A Talmud verse Raffel cites early in her book sums up Martin Couney: “If one saves a single life, it is as if one has saved the world.” The Strange Case of Dr. Couney gives Couney his due as a remarkable human being who used his promotional ability for the betterment of premature infants, and for, 7,000 times over, saving the world.

Laura Durnell’s work has appeared in The Huffington Post, Fifth Wednesday Journal, Room, The Antigonish Review, Women’s Media Center, Garnet News, and others. She currently teaches at DePaul University, tutors at Wilbur Wright College, one of the City Colleges of Chicago, and is working on her first novel. Twitter handle: @lauradurnell