Insurance policies go back to the ancient Babylonians and were crucial in the early development of capitalism

ILLUSTRATION: THOMAS FUCHS

Living in a world without insurance, free from all those claim forms and high deductibles, might sound like a little bit of paradise. But the only thing worse than dealing with the insurance industry is trying to conduct business without it. In fact, the basic principle of insurance—pooling risk in order to minimize liability from unforeseen dangers—is one of the things that made modern capitalism possible.

The first merchants to tackle the problem of risk management in a systematic way were the Babylonians. The 18th-century B.C. Code of Hammurabi shows that they used a primitive form of insurance known as “bottomry.” According to the Code, merchants who took high-interest loans tied to shipments of goods could have the loans forgiven if the ship was lost. The practice benefited both traders and their creditors, who charged a premium of up to 30% on such loans.

The Athenians, realizing that bottomry was a far better hedge against disaster than relying on the Oracle of Delphi, subsequently developed the idea into a maritime insurance system. They had professional loan syndicates, official inspections of ships and cargoes, and legal sanctions against code violators.

With the first insurance schemes, however, came the first insurance fraud. One of the oldest known cases comes from Athens in the 4th century B.C. Two men named Hegestratos and Xenothemis obtained bottomry insurance for a shipment of corn from Syracuse to Athens. Halfway through the journey they attempted to sink the ship, only to have their plan foiled by an alert passenger. Hegestratos jumped (or was thrown) from the ship and drowned. Xenothemis was taken to Athens to meet his punishment.

In Christian Europe, insurance was widely frowned upon as a form of gambling—betting against God. Even after Pope Gregory IX decreed in the 13th century that the premiums charged on bottomry loans were not usury, because of the risk involved, the industry was slow to expand. Innovations came mainly in response to catastrophes: The Great Fire of London in 1666 led to the growth of fire insurance, while the Lisbon earthquake of 1755 did the same for life insurance.

It took the Enlightenment to bring widespread changes in the way Europeans thought about insurance. Probability became subject to numbers and statistics rather than hope and prayer. In addition to his contributions to mathematics, astronomy and physics, Edmond Halley (1656-1742), of Halley’s comet fame, developed the foundations of actuarial science—the mathematical measurement of risk. This helped to create a level playing field for sellers and buyers of insurance. By the end of the 18th century, those who abjured insurance were regarded as stupid rather than pious. Adam Smith declared that to do business without it “was a presumptuous contempt of the risk.”

But insurance only works if it can be trusted in a crisis. For the modern American insurance industry, the deadly San Francisco earthquake of 1906 was a day of reckoning. The devastation resulted in insured losses of $235 million—equivalent to $6.3 billion today. Many American insurers balked, but in Britain, Lloyd’s of London announced that every one of its customers would have their claims paid in full within 30 days. This prompt action saved livelihoods and ensured that business would be able to go on.

And that’s why we pay our premiums: You can’t predict tomorrow, but you can plan for it.


Canada gave us the modern form of a sport that has been played for centuries around the world


Canadians like to say—and print on mugs and T-shirts—that “Canada is Hockey.” No fewer than five Canadian cities and towns claim to be the birthplace of ice hockey, including Windsor, Nova Scotia, which has an entire museum dedicated to the sport. Canada’s annual Hockey Day, which falls on February 9 this year, features a TV marathon of hockey games. Such is the country’s love for the game that last year’s broadcast was watched by more than 1 in 4 Canadians.

But as with many of humanity’s great advances, no single country or person can take the credit for inventing ice hockey. Stick-and-ball games are as old as civilization itself. The ancient Egyptians were playing a form of field hockey as early as the 21st century B.C., if a mural on a tomb at Beni Hasan, a Middle Kingdom burial site about 120 miles south of Cairo, is anything to go by. The ancient Greeks also played a version of the game, as did the early Christian Ethiopians, the Mesoamerican Teotihuacanos in the Valley of Mexico, and the Daur tribes of Inner Mongolia. And the Scottish and Irish versions of field hockey, known as shinty and hurling respectively, have strong similarities with the modern game.

Taking a ball and stick onto the ice was therefore a fairly obvious innovation, at least in places with snowy winters. The figures may be tiny, but three villagers playing an ice hockey-type game can be seen in the background of Pieter Bruegel the Elder’s 1565 painting “Hunters in the Snow.” There is no such pictorial evidence to show when the Mi’kmaq Indians of Nova Scotia first started hitting a ball on ice, but linguistic clues suggest that their hockey tradition existed before the arrival of European traders in the 16th century. The two cultures then proceeded to influence each other, with the Mi’kmaqs becoming the foremost makers of hockey sticks in the 19th century.

The earliest known use of the word “hockey” appears in a book, “Juvenile Sports and Pastimes,” written by Richard Johnson in London in 1776. Recently, Charles Darwin became an unlikely contributor to ice hockey history after researchers found a letter in which he reminisced about playing the game as a boy in the 1820s: “I used to be very fond of playing Hocky [sic] on the ice in skates.” On January 8, 1864, the future King Edward VII played ice hockey at Windsor Castle while awaiting the birth of his first child.

As for Canada, apart from really liking the game, what has been its real contribution to ice hockey? The answer is that it created the game we know today, from the official rulebook to the size and shape of the rink to the establishment of the Stanley Cup championship in 1894. The first indoor ice hockey game was played in Montreal in 1875, thereby solving the perennial problem of pucks getting lost. (The rink was natural ice, with Canada’s cold winter supplying the refrigeration.) The game involved two teams of nine players, each with a set position—three more than teams field today—a wooden puck, and a list of rules for fouls and scoring.

In addition to being the first properly organized game, the Montreal match also initiated ice hockey’s other famous tradition: brawling on the ice. In this case, the fighting erupted between the players, spectators and skaters who wanted the ice rink back for free skating. Go Canada!


The 100th anniversary of Prohibition is a reminder of how hard it is to regulate consumption and display


This month we mark the centennial of the ratification of the Constitution’s 18th Amendment, better known as Prohibition. But the temperance movement was active for over a half-century before winning its great prize. As the novelist Anthony Trollope discovered to his regret while touring North America in 1861-62, Maine had been dry for a decade. The convivial Englishman condemned the ban: “This law, like all sumptuary laws, must fail,” he wrote.

Sumptuary laws had largely fallen into disuse by the 19th century, but they were once a near-universal tool, used in the East and West alike to control economies and preserve social hierarchies. A sumptuary law is a rule that regulates consumption in its broadest sense, from what a person may eat and drink to what they may own, wear or display. The oldest known example, the Locrian Law Code devised by the seventh century B.C. Greek lawgiver Zaleucus, banned all citizens of Locri (except prostitutes) from ostentatious displays of gold jewelry.

Sumptuary laws were often political weapons disguised as moral pieties, aimed at less powerful groups, particularly women. In 215 B.C., at the height of the Second Punic War, the Roman Senate passed the Lex Oppia, which (among other restrictions) banned women from owning more than a half ounce of gold. Ostensibly a wartime austerity measure, the law appeared so ridiculous 20 years later as to be unenforceable. But during debate on its repeal in 195 B.C., Cato the Elder, its strongest defender, inadvertently revealed the Lex Oppia’s true purpose: “What [these women] want is complete freedom…. Once they have achieved equality, they will be your masters.”

Cato’s message about preserving social hierarchy echoed down the centuries. As trade and economic stability returned to Europe during the High Middle Ages (1000-1300), so did the use of sumptuary laws to keep the new merchant elites in their place. By the 16th century, sumptuary laws in Europe had extended from clothing to almost every aspect of daily life. The more they were circumvented, the more specific such laws became. An edict issued by King Henry VIII of England in 1517, for example, dictated the maximum number of dishes allowed at a meal: nine for a cardinal, seven for the aristocracy and three for the gentry.

The rise of modern capitalism ultimately made sumptuary laws obsolete. Trade turned once-scarce luxuries into mass commodities that simply couldn’t be controlled. Adam Smith’s “The Wealth of Nations” (1776) confirmed what had been obvious for over a century: Consumption and liberty go hand in hand. “It is the highest impertinence,” he wrote, “to pretend to watch over the economy of private people…either by sumptuary laws, or by prohibiting the importation of foreign luxuries.”

Smith’s pragmatic view was echoed by President William Howard Taft. He opposed Prohibition on the grounds that it was coercive rather than consensual, arguing that “experience has shown that a law of this kind, sumptuary in its character, can only be properly enforced in districts in which a majority of the people favor the law.” Mass immigration in early 20th-century America had changed many cities into ethnic melting-pots. Taft recognized Prohibition as an attempt by nativists to impose cultural uniformity on immigrant communities whose attitudes toward alcohol were more permissive. But his warning was ignored, and the disastrous course of Prohibition was set.


Roman emperors and American presidents alike have struggled to deal with sudden economic crashes


On January 12, 1819, Thomas Jefferson wrote to his friend Nathaniel Macon, “I have…entire confidence in the late and present Presidents…I slumber without fear.” He did concede, though, that market fluctuations can trip up even the best governments. Jefferson was prescient: A few days later, the country plunged into a full-blown financial panic. The trigger was a collapse in the overseas cotton market, but the crisis had been building for months. The factors that led to the crash included the actions of the Second Bank of the United States, which had helped to fuel a real estate boom in the West only to reverse course suddenly and call in its loans.

The recession that followed the panic of 1819 was prolonged and severe: Banks closed, lending all but ceased and businesses failed by the thousands. By the time it was over in 1823, almost a third of the population—including Jefferson himself—had suffered irreversible losses.

As we mark the 200th anniversary of the 1819 panic, it is worth pondering the role of governments in a financial crisis. During a panic in Rome in the year 33, the emperor Tiberius’s prompt action prevented a total collapse of the city’s finances. Rome was caught between a bursting real estate bubble, falling property prices and a sudden credit crunch. Instead of waiting it out, Tiberius ordered interest rates to be lowered and released 100 million sestertii (large brass coins) into the banking system to avoid a mass default.

But not all government interventions have been as successful or timely. In 1124, King Henry I of England attempted to restore confidence in the country’s money by having the mint-makers publicly castrated and their right hands amputated for producing substandard coins. A temporary fix at best, his bloody act neither deterred people from debasing the coinage nor allayed fears over England’s creditworthiness.

On the other side of the globe, China began using paper money in 1023. Successive emperors of the Ming Dynasty (1368-1644) failed, however, to limit the number of notes in circulation or to back the money with gold or silver specie. By the mid-15th century the economy was in the grip of hyperinflationary cycles. The emperor Yingzong simply gave up on the problem: China returned to coinage just as Europe was discovering the uses of paper.

The rise of commercial paper along with paper currencies allowed European countries to develop more sophisticated banking systems. But they also led to panics, inflation and dangerous speculation—sometimes all at once, as in France in 1720, when John Law’s disastrous Mississippi Company share scheme ended in mass bankruptcies for its investors and the collapse of the French livre.

As it turns out, it is easier to predict the consequences of a crisis than it is to prevent one from happening. In 2015, the U.K.’s Centre for Economic Policy Research published a paper on the effects of 100 financial crises in 20 Western countries over the past 150 years, down to the recession of 2007-09. Its authors found two consistent outcomes. The first is that politics becomes more extreme and polarized following a crisis; the second is that countries become more ungovernable as violence, protests and populist revolts overshadow the rule of law.

With the U.S. stock market having suffered its worst December since the Great Depression of the 1930s, it is worth remembering that the only thing more frightening than a financial crisis can be its aftermath.

From the ancient Babylonians to Victorian England, the year’s end has been a time for self-reproach and general misery

I don’t look forward to New Year’s Eve. When the bells start to ring, it isn’t “Auld Lang Syne” I hear but echoes from the Anglican “Book of Common Prayer”: “We have left undone those things which we ought to have done; And we have done those things which we ought not to have done.”

At least I’m not alone in my annual dip into the waters of woe. Experiencing the sharp sting of regret around the New Year has a long pedigree. The ancient Babylonians required their kings to offer a ritual apology during the Akitu festival of New Year: The king would go down on his knees before an image of the god Marduk, beg his forgiveness, insist that he hadn’t sinned against the god himself and promise to do better next year. The rite ended with the high priest giving the royal cheek the hardest possible slap.

There are sufficient similarities between the Akitu festival and Yom Kippur, Judaism’s Day of Atonement—which takes place 10 days after the Jewish New Year—to suggest that there was likely a historical link between them. Yom Kippur, however, is about accepting responsibility, with the emphasis on owning up to sins committed rather than pointing out those omitted.

In Europe, the 14th-century Middle English poem “Sir Gawain and the Green Knight” begins its strange tale on New Year’s Day. A green-skinned knight arrives at King Arthur’s Camelot and challenges the knights to strike at him, on the condition that he can return the blow in a year and a day. Sir Gawain reluctantly accepts the challenge, and embarks on a year filled with adventures. Although he ultimately survives his encounter with the Green Knight, Gawain ends up haunted by his moral lapses over the previous 12 months. For, he laments (in J.R.R. Tolkien’s elegant translation), “a man may cover his blemish, but unbind it he cannot.”

New Year’s Eve in Shakespeare’s era was regarded as a day for gift-giving rather than as a catalyst for regret. But Sonnet 30 shows that Shakespeare was no stranger to the melancholy that looking back can inspire: “I summon up remembrance of things past, / I sigh the lack of many a thing I sought, / And with old woes new wail my dear time’s waste.”

For a full dose of New Year’s misery, however, nothing beats the Victorians. “I wait its close, I court its gloom,” declared the poet Walter Savage Landor in “Mild Is the Parting Year.” Not to be outdone, William Wordsworth offered his “Lament of Mary Queen of Scots on the Eve of a New Year”: “Pondering that Time tonight will pass / The threshold of another year; /…My very moments are too full / Of hopelessness and fear.”

Fortunately, there is always Charles Dickens. In 1844, Dickens followed up the wildly successful “A Christmas Carol” with a slightly darker but still uplifting seasonal tale, “The Chimes.” Trotty Veck, an elderly messenger, takes stock of his life on New Year’s Eve and decides that he has been nothing but a burden on society. He resolves to kill himself, but the spirits of the church bells intervene, showing him a vision of what would happen to the people he loves.

Today, most Americans recognize this story as the basis of the bittersweet 1946 Frank Capra film “It’s a Wonderful Life.” As an antidote to New Year’s blues, George Bailey’s lesson holds true for everyone: “No man is a failure who has friends.”


From Saturnalia to Christmas Eve, people have always had a spiritual need for greenery in the depths of winter

Queen Victoria and family with their Christmas tree in 1848. PHOTO: GETTY IMAGES

My family never had a pink-frosted Christmas tree, though Lord knows my 10-year-old self really wanted one. Every year my family went to Sal’s Christmas Emporium on Wilshire Boulevard in Los Angeles, where you could buy neon-colored trees, mechanical trees that played Christmas carols, blue and white Hanukkah bushes or even a real Douglas fir if you wanted to go retro. We were solidly retro.

Decorating the Christmas tree remains one of my most treasured memories, and according to the National Christmas Tree Association, the tradition is still thriving in our digital age: In 2017 Americans bought 48.5 million real and artificial Christmas trees. Clearly, bringing a tree into the house, especially during winter, taps into something deeply spiritual in the human psyche.

Nearly every society has at some point venerated the tree as a symbol of fertility and rebirth, or as a living link between the heavens, the earth and the underworld. In the ancient Near East, “tree of life” motifs appear on pottery as early as 7000 B.C. By the second millennium B.C., variations of the motif were being carved onto temple walls in Egypt and fashioned into bronze sculptures in southern China.

The early Christian fathers were troubled by the possibility that the faithful might identify the Garden of Eden’s trees of life and knowledge, described in the Book of Genesis, with paganism’s divine trees and sacred groves. Accordingly, in 572 the Council of Braga banned Christians from participating in the Roman celebration of Saturnalia—a popular winter solstice festival in honor of Saturn, the god of agriculture, that included decking the home with boughs of holly, his sacred symbol.

It wasn’t until the late Middle Ages that evergreens received a qualified welcome from the Church, as props in the mystery plays that told the story of Creation. In Germany, mystery plays were performed on Christmas Eve, traditionally celebrated in the church calendar as the feast day of Adam and Eve. The original baubles that hung on these “paradise trees,” representing the trees in the Garden of Eden, were round wafer breads that symbolized the Eucharist.

The Christmas tree remained a northern European tradition until Queen Charlotte, the German-born wife of George III, had one erected for a children’s party at Windsor Castle in 1800. The British upper classes quickly followed suit, but the rest of the country remained aloof until 1848, when the Illustrated London News published a charming picture of Queen Victoria and her family gathered around a large Christmas tree. Suddenly, every household had to have one for the children to decorate. It didn’t take long for President Franklin Pierce to introduce the first Christmas tree to the White House, in 1853—a practice that every President has honored except Theodore Roosevelt, who in 1902 refused to have a tree on conservationist grounds. (His children objected so much to the ban that he eventually gave in.)

Many writers have tried to capture the complex feelings that Christmas trees inspire, particularly in children. Few, though, can rival T.S. Eliot’s timeless meditation on joy, death and life everlasting, in his 1954 poem “The Cultivation of Christmas Trees”: “The child wonders at the Christmas Tree: / Let him continue in the spirit of wonder / At the Feast as an event not accepted as a pretext; / So that the glittering rapture, the amazement / Of the first-remembered Christmas Tree /…May not be forgotten.”

The tell-all memoir has been a feature of American politics ever since Raymond Moley, an ex-aide to Franklin Delano Roosevelt, published his excoriating book “After Seven Years” while FDR was still in office. What makes the Trump administration unusual is the speed at which such accounts are appearing—most recently, “Unhinged,” by Omarosa Manigault Newman, a former political aide to the president.

Spilling the beans on one’s boss may be disloyal, but it has a long pedigree. Alexander the Great is thought to have inspired the genre. His great run of military victories, beginning with the Battle of Chaeronea in 338 B.C., was so unprecedented that several of his generals felt the urge—unknown in Greek literature before then—to record their experiences for posterity.

Unfortunately, their accounts didn’t survive, save for the memoir of Ptolemy Soter, the founder of the Ptolemaic dynasty in Egypt, which exists in fragments. The great majority of Roman political memoirs have also disappeared—many by official suppression. Historians particularly regret the loss of the memoirs of Agrippina, the mother of Emperor Nero, who once boasted that she could bring down the entire imperial family with her revelations.

The Heian period (794-1185) in Japan produced four notable court memoirs, all by noblewomen. Dissatisfaction with their lot was a major factor behind these accounts—particularly for the anonymous author of “The Gossamer Years,” written around 974. The author was married to Fujiwara no Kane’ie, the regent for the Emperor Ichijo. Her exalted position at court masked a deeply unhappy private life; she was made miserable by her husband’s serial philandering, describing herself as “rich only in loneliness and sorrow.”

In Europe, the first modern political memoir was written by the Duc de Saint-Simon (1675-1755), a frustrated courtier at Versailles who took revenge on Louis XIV with his pen. Saint-Simon’s tales hilariously reveal the drama, gossip and intrigue that surrounded a king whose intellect, in his view, was “beneath mediocrity.”

But even Saint-Simon’s memoirs pale next to those of the Korean noblewoman Lady Hyegyeong (1735-1816), wife of Crown Prince Sado of the Joseon Dynasty. Her book, “Memoirs Written in Silence,” tells shocking tales of murder and madness at the heart of the Korean court. Sado, she writes, was a homicidal psychopath who went on a bloody killing spree that was only stopped by the intervention of his father, King Yeongjo. Unwilling to see his son publicly executed, Yeongjo had the prince locked inside a rice chest and left to die. Understandably, Hyegyeong’s memoirs caused a huge sensation in Korea when they were first published in 1939, following the death of the last Emperor in 1926.

Fortunately, the Washington political memoir has been free of this kind of violence. Still, it isn’t just Roman emperors who have tried to silence uncomfortable voices. According to the historian Michael Beschloss, President John F. Kennedy had the White House household staff sign agreements to refrain from writing any memoirs. But eventually, of course, even Kennedy’s secrets came out. Perhaps every political leader should be given a plaque that reads: “Just remember, your underlings will have the last word.”


It took centuries for the spud to travel from the New World to the Old and back again

At the first Thanksgiving dinner, eaten by the Wampanoag Indians and the Pilgrims in 1621, the menu was rather different from what’s served today. For one thing, the pumpkin was roasted, not made into a pie. And there definitely wasn’t a side dish of mashed potatoes.

In fact, the first hundred Thanksgivings were spud-free, since potatoes weren’t grown in North America until 1719, when Scotch-Irish settlers began planting them in New Hampshire. Mashed potatoes were an even later invention. The first recorded recipe for the dish appeared in 1747, in Hannah Glasse’s splendidly titled “The Art of Cookery Made Plain and Easy, Which Far Exceeds Any Thing of the Kind yet Published.”

By then, the potato had been known in Europe for a full two centuries. It was first introduced by the Spanish conquerors of Peru, where the Incas had revered the potato and even invented a natural way of freeze-drying it for storage. Yet despite its nutritional value and ease of cultivation, the potato didn’t catch on in Europe. It wasn’t merely foreign and ugly-looking; to wheat-growing farmers it seemed unnatural—possibly even un-Christian, since there is no mention of the potato in the Bible. Outside of Spain, it was generally grown for animal feed.

The change in the potato’s fortunes was largely due to the efforts of a Frenchman named Antoine-Augustin Parmentier (1737-1813). During the Seven Years’ War, he was taken prisoner by the Prussians and forced to live on a diet of potatoes. To his surprise, he stayed relatively healthy. Convinced he had found a solution to famine, Parmentier dedicated his life after the war to popularizing the potato’s nutritional benefits. He even persuaded Marie-Antoinette to wear potato flowers in her hair.

Among the converts to his message were the economist Adam Smith, who realized the potato’s economic potential as a staple food for workers, and Thomas Jefferson, then the U.S. Ambassador to France, who was keen for his new nation to eat well in all senses of the word. Jefferson is credited with introducing Americans to french fries at a White House dinner in 1802.

As Smith predicted, the potato became the fuel for the Industrial Revolution. A study published in 2011 by Nathan Nunn and Nancy Qian in the Quarterly Journal of Economics estimates that up to a quarter of the Old World’s population growth from 1700 to 1900 can be attributed solely to the introduction of the potato. As Louisa May Alcott observed in “Little Men,” in 1871, “Money is the root of all evil, and yet it is such a useful root that we cannot get on without it any more than we can without potatoes.”

In 1887, two Americans, Jacob Fitzgerald and William H. Silver, patented the first potato ricer, which forced a cooked potato through a cast iron sieve, ending the scourge of lumpy mash. Still, the holy grail of “quick and easy” mashed potatoes remained elusive until the late 1950s. Using the flakes produced by the potato ricer and a new freeze-drying method, U.S. government scientists perfected instant mashed potatoes, which require only the simple step of adding hot water or milk to the mix. The days of peeling, boiling and mashing were now optional, and for millions of cooks, Thanksgiving became a little easier. And that’s something to be thankful for.


From Japanese knotweed to cane toads, humans have introduced invasive species to new environments with disastrous results

Ever since Neolithic people wandered the earth, inadvertently bringing the mouse along for the ride, humans have been responsible for introducing animal and plant species into new environments. But problems can arise when a non-native species encounters no barriers to population growth, allowing it to rampage unchecked through the new habitat, overwhelming the ecosystem. On more than one occasion, humans have transplanted a species for what seemed like good reasons, only to find out too late that the consequences were disastrous.

One of the most famous examples is celebrating its 150th anniversary this year: the introduction of Japanese knotweed to the U.S. A highly aggressive plant, it can grow 15 feet high and has roots that spread up to 45 feet. Knotweed had already been a hit in Europe because of its pretty little white flowers, and, yes, its miraculous indestructibility.

First mentioned in botanical articles in 1868, knotweed was brought to New York by the Hogg brothers, James and Thomas, eminent American horticulturalists and among the earliest collectors of Japanese plants. Thanks to their extensive contacts, knotweed found a home in arboretums, botanical gardens and even Central Park. Not content with importing one of the world’s most invasive shrubs, the Hoggs also introduced Americans to the wonders of kudzu, a dense vine that can grow a foot a day.

Impressed by the vigor of kudzu, agriculturalists recommended using these plants to provide animal fodder and prevent soil erosion. In the 1930s, the government was even paying Southern farmers $8 per acre to plant kudzu. Today it is known as the “vine that ate the South,” because of the way it covers huge tracts of land in a green blanket of death. And Japanese knotweed is still spreading, colonizing entire habitats from Mississippi to Alaska, where only the Arctic tundra holds it back from world domination.

Knotweed has also reached Australia, a country that has been ground zero for the worst excesses of invasive species. In the 19th century, the British imported non-native animals such as rabbits, cats, goats, donkeys, pigs, foxes and camels, causing mass extinctions of Australia’s native mammal species. Australians are still paying the price; there are more rabbits in the country today than wombats, more camels than kangaroos.

Yet the lesson wasn’t learned. In the 1930s, scientists in both Australia and the U.S. decided to import the South American cane toad as a form of biowarfare against beetles that eat sugar cane. The experiment failed, and it turned out that the cane toad was poisonous to any predator that ate it. There’s also the matter of the 30,000 eggs it can lay at a time. Today, the cane toad can be found all over northern Australia and south Florida.

So is there anything we can do once an invasive species has taken up residence? The answer is yes, but it requires more than just fences, traps and pesticides; it means changing human incentives. Today, for instance, the voracious Indo-Pacific lionfish is gobbling up local fish in the west Atlantic, while the Asian carp threatens the ecosystem of the Great Lakes. There is only one solution: We must eat them, dear reader. These invasive fish can be grilled, fried or consumed as sashimi, and they taste delicious. Likewise, kudzu makes great salsa, and Japanese knotweed can be treated like rhubarb. Eat for America and save the environment.

Ever since they were worshiped in ancient Egypt, cats have occupied an uncanny place in the world’s imagination

As Halloween approaches, decorations featuring scary black cats are starting to make their seasonal appearance. But what did the black cat ever do to deserve its reputation as a symbol of evil? Why is it considered bad luck to have a black cat cross your path?

It wasn’t always this way. In fact, the first human-cat interactions were benign and based on mutual convenience. The invention of agriculture in the Neolithic era led to surpluses of grain, which attracted rodents, which in turn motivated wild cats to hang around humans in the hope of catching dinner. Domestication soon followed: The world’s oldest pet cat was found in a 9,500-year-old grave in Cyprus, buried alongside its human owner.

Yet as the ancient Egyptians realized, even when domesticated, the cat retains its independence. The Egyptians were fascinated by divine opposites and cosmic symmetries, and they saw this kind of duality in the cat—a fierce predator that was also a loyal guardian. Several Egyptian deities were depicted in part-cat, part-human form, including Bastet, who was a goddess of violence as well as fertility. One of her sacred colors was black, which is how the black cat first achieved its special status.

According to the Roman writer Polyaenus, who lived in the second century A.D., the Egyptian veneration of cats led to disaster at the Battle of Pelusium in 525 B.C. The invading Persian army carried cats on the front lines, rightly calculating that the Egyptians would rather accept defeat than kill a cat.

The Egyptians were unique in their extreme veneration of cats, but they weren’t alone in regarding them as having a special connection to the spirit world. In Greek mythology the cat was a familiar of Hecate, goddess of magic, sorcery and witchcraft. Hecate’s pet had once been a serving maid named Galanthis, who was turned into a cat as punishment by the goddess Hera for being rude.

When Christianity became the official religion of Rome in 380, the association of cats with paganism and witchcraft made them suspect. Moreover, the cat’s independence suggested a willful rebellion against the teaching of the Bible, which said that Adam had dominion over all the animals. The cat’s reputation worsened during the medieval era, as the Catholic Church battled against heresies and dissent. Fed lurid tales by his inquisitors, Pope Gregory IX issued a papal bull in 1233, “Vox in Rama,” which accused heretics of using black cats in their nighttime sex orgies with Lucifer—who was described as half-cat in appearance.

In Europe, countless numbers of cats were killed in the belief that they could be witches in disguise. In 1484, Pope Innocent VIII fanned the flames of anti-cat prejudice with his papal bull on witchcraft, “Summis Desiderantes Affectibus,” which stated that the cat was “the devil’s favorite animal and idol of all witches.”

The Age of Reason ought to have rescued the black cat from its pariah status, but superstitions die hard. (How many modern apartment buildings lack a 13th floor?) Cats had plenty of ardent fans among 19th-century writers, including Charles Dickens and Mark Twain, who wrote, “I simply can’t resist a cat, particularly a purring one.” But Edgar Allan Poe, the master of the gothic tale, felt otherwise: In his 1843 story “The Black Cat,” the spirit of a dead cat drives its killer to madness and destruction.

So pity the poor black cat, which through no fault of its own has gone from being an instrument of the devil to the convenient tool of the horror writer—and a favorite Halloween cliché.


About the Author

Amanda Foreman

Amanda Foreman is the author of the prize-winning best sellers “Georgiana, Duchess of Devonshire” and “A World on Fire: An Epic History of Two Nations Divided.” She is currently a columnist for The Wall Street Journal. Her latest work is the BBC documentary series “The Ascent of Woman.” In 2016, Foreman served as chair of The Man Booker Prize. Her book on the history of women, “The World Made by Women,” will be published in 2019. She is a co-founder of the literary nonprofit House of SpeakEasy Foundation, a trustee of the Whiting Foundation, and an Honorary Research Senior Fellow in the History Department at the University of Liverpool. Amanda lives in New York with her husband and five children.
