By William Burr and Jeffrey P. Kimball

National Security Archive, May 29, 2015

Washington, D.C., May 29, 2015 – President Richard Nixon and his national security adviser Henry Kissinger believed they could compel “the other side” to back down during crises in the Middle East and Vietnam by “push[ing] so many chips into the pot” that Nixon would seem ‘crazy’ enough to “go much further,” according to newly declassified documents published today by the National Security Archive (www.nsarchive.gwu.edu).

The documents include a 1972 Kissinger memorandum of conversation, published today for the first time, in which Kissinger explains to Defense Department official Gardner Tucker that Nixon’s strategy was to make “the other side … think we might be ‘crazy’ and might really go much further” – Nixon’s Madman Theory notion of intimidating adversaries such as North Vietnam and the Soviet Union in order to bend them to Washington’s will in diplomatic negotiations.

Nixon’s and Kissinger’s Madman strategy during the Vietnam War included veiled nuclear threats intended to intimidate Hanoi and its patrons in Moscow. The story is recounted in a new book, Nixon’s Nuclear Specter: The Secret Alert of 1969, Madman Diplomacy, and the Vietnam War, co-authored by Jeffrey Kimball, Miami University professor emeritus, and William Burr, who directs the Archive’s Nuclear History Documentation Project. Research for the book, which uncovers the inside story of White House Vietnam policymaking during Nixon’s first year in office, drew on hundreds of formerly top secret and secret records obtained by the authors as well as interviews with former government officials.

With Madman diplomacy, Nixon and Kissinger strove to end the Vietnam War on the most favorable terms possible in the shortest time practicable, an effort that culminated in a secret global nuclear alert in October 1969. Nixon’s Nuclear Specter provides the most comprehensive account to date of the origins, inception, policy context, and execution of the “JCS Readiness Test” – the equivalent of a worldwide nuclear alert, intended to signal Washington’s anger at Moscow’s support of North Vietnam and to jar the Soviet leadership into using its leverage to induce Hanoi to make diplomatic concessions. Carried out between 13 and 30 October 1969, it involved military operations around the world – in the continental United States, Western Europe, the Middle East, the Atlantic, the Pacific, and the Sea of Japan. The operations included strategic bombers, tactical air, and a variety of naval maneuvers, from movements of aircraft carriers and ballistic missile submarines to the shadowing of Soviet merchant ships heading toward Haiphong.

To unravel the intricate story of the October alert, the authors place it in the context of nuclear threat making and coercive diplomacy during the Cold War from 1945 to 1973, the culture of the Bomb, bureaucratic infighting, intra-governmental dissent, international diplomacy, domestic politics, the antiwar movement, the “nuclear taboo,” Vietnamese and Soviet actions and policies, and assessments of the war’s ending. The authors also recount secret military operations that were part of the lead-up to the global alert, including a top secret mining readiness test – revealed for the first time in this book – that took place during the spring and summer of 1969. The test was a ruse intended to signal Hanoi that the US was preparing to mine Haiphong harbor and the coast of North Vietnam.

Another revelation has to do with the fabled DUCK HOOK operation, initially drafted in July 1969 as a mining-only plan. It soon evolved into a mining-and-bombing, shock-and-awe plan scheduled to be launched in early November, but Nixon aborted it in October, substituting the global nuclear alert in its place. The failure of Nixon’s and Kissinger’s 1969 Madman diplomacy marked a turning point in their initial exit strategy of winning a favorable armistice agreement by the end of 1969. Subsequently, they would follow a so-called long-route strategy of withdrawing U.S. troops while attempting to strengthen South Vietnam’s armed forces, although not necessarily counting on Saigon’s long-term survival.

In researching Nixon’s Nuclear Specter, the authors filed mandatory and Freedom of Information requests with the Defense Department and other government agencies and examined documents in diverse U.S. government archives as well as international sources. Today’s posting highlights some of the U.S. documents, many published for the first time:

A March 1969 memorandum from Nixon to Kissinger about the need to make the Soviets see risks in not helping Washington in the Vietnam negotiations: “we must worry the Soviets about the possibility that we are losing our patience and may get out of control.”

The Navy’s plan in April 1969 for a mine readiness test designed to create a “state of indecision” among the North Vietnamese leadership as to whether Washington intended to launch mining operations.

Kissinger’s statement to Soviet Ambassador Dobrynin in May 1969 that Nixon was so flexible about the Vietnam War outcome that he “was prepared to accept any political system in South Vietnam, provided there is a fairly reasonable interval between conclusion of an agreement and [the establishment of] such a system.”

The top secret warning to the North Vietnamese leadership that Nixon sent through an intermediary, Jean Sainteny: if a diplomatic solution to the war were not reached by 1 November, Nixon would “regretfully find himself obliged to have recourse to measures of great consequence and force. . . . He will resort to any means necessary.”

The Navy’s plan for mining Haiphong Harbor, code-named DUCK HOOK, prepared secretly for Nixon and Kissinger in July 1969.


A telegram from the U.S. Embassy in Manila reporting on the discovery of the mining readiness test by two Senate investigators, including former (and future) Washington Post reporter Walter Pincus. After learning about aircraft carrier mining drills in Subic Bay (the Philippines), the investigators worried about a possible escalation, recalling that Nixon had made such threats during the 1968 campaign.

A report from September 1969 on prospective military operations against North Vietnam (referred to unofficially within the White House as DUCK HOOK) included two options to use tactical nuclear weapons: one for “the clean nuclear interdiction of three NVN-Laos passes” – the use of small-yield, low-fallout weapons to disrupt traffic on the Ho Chi Minh Trail – and the other for the “nuclear interdiction of two NVN-CPR [Chinese People’s Republic] railroads” – presumably using nuclear weapons to destroy railroad tracks linking North Vietnam and China.

A Kissinger telephone conversation transcript in which Nixon worried that, with the 1 November deadline approaching and major anti-Vietnam War demonstrations scheduled for 15 October and 15 November, escalating the war might produce “horrible results” by building up “a massive adverse reaction” among demonstrators.

An October 1969 memorandum from the Joint Staff, part of the White House plan for special military measures to get Moscow’s attention, based on a request from Kissinger for an “integrated plan of military actions to demonstrate convincingly to the Soviet Union that the United States is getting ready for any eventuality on or about 1 November 1969.”

A Department of Defense plan for readiness actions that included measures to “enhance SIOP [Single Integrated Operational Plan] Naval Forces” in the Pacific and for the Strategic Air Command to fly nuclear-armed airborne alert flights over the Arctic Circle.

The thematic focus of Nixon’s Nuclear Specter is Madman Theory threat making, which culminated in the secret, global nuclear alert. But as the Kissinger statement to Dobrynin cited above suggested, a core element in Nixon’s and Kissinger’s overall Vietnam War strategy and diplomacy was the concept of a “decent interval” between the withdrawal of U.S. forces from South Vietnam and the possible collapse or defeat of the Saigon regime. In private conversations Kissinger routinely used phrases such as “decent interval,” “healthy interval,” “reasonable interval,” and “suitable interval” as code for a war-exiting scenario in which the period of time would be sufficiently long that when the fall of Saigon came – if it came – it would serve to mask the role that U.S. policy had played in South Vietnam’s collapse.

In 1969, the Nixon administration’s long-term goal was to provide President Nguyen Van Thieu’s government in Saigon with a decent chance of surviving for a reasonable interval of two to five years following the sought-after mutual exit of US and North Vietnamese forces from South Vietnam. Nixon and Kissinger would have preferred that Thieu and South Vietnam survive indefinitely, and they would do what they could to maintain South Vietnam as a separate political entity. But they were realistic enough to appreciate that such a goal was unlikely and beyond their power to achieve by a military victory on the ground or from the air in Vietnam.

Giving Thieu a decent chance to survive, even for just a decent interval, however, rested primarily on persuading Hanoi to withdraw its troops from the South or, failing that, on prolonging the war to give Vietnamization time to take hold, enabling Thieu to fight the war on his own for a reasonable period after the US exited Indochina. In 1969, Nixon and Kissinger hoped that their Madman threat strategy, coupled with linkage diplomacy, could persuade Hanoi to agree to mutual withdrawal at the negotiating table or lever Moscow’s cooperation in persuading Hanoi to do so. In this respect, Nixon’s Nuclear Specter is an attempt to contribute to a better understanding of Nixon and Kissinger’s Vietnam diplomacy as a whole.

William Burr is a senior analyst at the National Security Archive, where he directs the Archive’s Nuclear History Documentation Project; see the Archive’s Nuclear Vault resources page. Jeffrey Kimball is professor emeritus at Miami University and the author of Nixon’s Vietnam War and The Vietnam War Files.

The Philippines was a turbulent, rapidly changing place during the last decade of the 19th century. Revolutions had gained momentum, Spain was losing its grip on its colonies, and ideas of democracy were spreading quickly (arguably from the West) to other nations. Moreover, America was rising to become another superpower. After settling its conflicts with British and Spanish colonizers and, eventually, a civil war, the country had managed to set up a democratic government and sustain one of the most prosperous economies in the world.

Another significant change was the rapid development of the Philippine press. From the relatively slow output permitted under the presiding Spanish friars and government, Filipinos moved to rapid publication, in line with the revolutionary movement. By the 1890s, papers in both Spanish and Filipino were already in circulation.

Aeon May 22, 2015

For nearly 350 years, anti-Catholic bias was a reliable and powerful presence in the political and religious culture of the United States. Today, when the Louisiana governor Bobby Jindal, for example, insists that Muslim immigrants ‘want to use our freedoms to undermine… freedom’, it can be easy to forget that for most of US history, Catholicism, not Islam, was the bogeyman against which Americans defined themselves as a free, noble and (some have said) ‘chosen’ people.

It was a desire to get away from what the English Puritan Samuel Mather in 1672 called ‘the manifold Apostasies, Heresies, and Schisms of the Church of Rome’ that drove the Puritans to Massachusetts in the 1620s and ’30s. They believed that the Church of England was tainted by the remnants of Catholic theology, and they thought these ‘popish relics’ destroyed the freedom people needed in order to accept salvation from God. Because Americans held onto this Puritan understanding of Catholicism for centuries, the idea that the founding of Massachusetts had been a bold bid for ‘freedom’ became an almost religious truth. Even though people were actually executed and banished in colonial Massachusetts because they held ideas about religion that were considered ‘newe & dangerous’, schoolchildren still learn this myth in US classrooms.

In 1774, John Adams felt sorry for the Catholics he observed at a mass in Philadelphia. The ‘poor wretches,’ the future US president told his wife, were ‘fingering their beads [and] chanting in Latin, not a word of which they understood’. A century later, the cartoonist Thomas Nast was less sympathetic on the pages of Harper’s Weekly. Nast’s Catholics in the 1860s and ’70s were violent and drunk ‘Paddys’ and ‘Bridgets’ too ignorant to think for themselves and dominated by priests who worked to obliterate the separation between church and state.

In 1960, the self-help guru Norman Vincent Peale worried that Catholic voters were theocracy-loving minions who’d put a man, the Catholic John F Kennedy, in the White House who couldn’t ‘withstand the determined efforts of the hierarchy of his church’ to meddle in US politics. So Peale (the original ‘positive thinker’) formed the National Conference of Citizens for Religious Freedom and campaigned for Richard Nixon.

For most of US history, voters, ministers and lawmakers believed that there was something fundamentally un-American about Roman Catholics. They weren’t ‘free’ – and they couldn’t be free so long as they worshipped within the Church of Rome. Catholics were an element in US culture that had to be kept as far away as possible from the centres of political, military, economic and educational power. Letting such an intrinsically enslaved element ‘have its say’, so to speak, would constitute an existential challenge to the US, since at its core, the country was just an idea – the idea of freedom.

Given how long Americans feared Catholicism, the years that have passed since Kennedy’s election in 1960 have been remarkable. Today, six of the nine justices on the US Supreme Court are Catholic. The US hasn’t had another Catholic president since Kennedy, but that’s not because Protestants still fear the corrupting potential of Catholicism. Jeb Bush, a Catholic convert, is already being held up as the frontrunner for the Grand Old Party for 2016. Paul Ryan and Joe Biden both spoke proudly of their Catholic upbringing when competing to be vice-president in 2012. Ryan talked about the Catholic concept of ‘subsidiarity’ when recommending that Medicare be turned into a voucher programme, and Biden pointed to the ‘dignity in every man and woman’ that he’d learned about from priests and nuns when supporting President Barack Obama’s overhaul of the healthcare system.

The former altar-boy John Kerry failed to win the White House in 2004, but it wasn’t because Protestant voters had concerns about his Catholic faith. If anyone had concerns about Kerry’s faith, it was his co-religionists. More than half of the Catholics who voted in 2004 cast their ballots for George W Bush, the evangelical incumbent from Texas, rather than the Catholic senator from Massachusetts, in part because Kerry’s record on abortion didn’t reflect the teachings of his church. In this sense, the senator’s Catholicism might have been a political burden for him – but not in the way it was a burden for Al Smith, the 1928 Democratic nominee for president, who swept the Catholic vote, but lost the election in a landslide because many Americans saw him as being on the side of ‘rum, Romanism, and ruin’.

The story of anti-Catholicism’s dramatic disappearance from the cultural landscape in the US (Dan Brown’s novels notwithstanding) is a complicated one. It would be a mistake, however, to see the story as proof that the destiny of the US is to become a place of complete religious tolerance. Americans no longer consider Catholicism to be a threat because the very idea of ‘freedom’ in the US has changed into something more compatible with the corporate approach to freedom that the Catholic Church has always insisted upon. The Catholic understanding of religious liberty and church-state relations has also changed, becoming more compatible with the US vision and the reality of religious pluralism.

But what hasn’t changed – at least not fundamentally – is a need in the US to oppose religious groups that don’t define freedom in modern liberalism’s terms. Indeed, this need has only expanded in recent years into parts of western Europe where concepts of freedom also contribute to national identity, but immigration has forced native-born people to confront the reality that some don’t understand freedom to be a matter of liberté and égalité.

For example, the status or condition of women in many cultures that define freedom as ‘submission’ to the will of God is repugnant to people who understand freedom to be the exercise of certain individual rights, such as the right to personal expression or a jury of one’s peers. But repugnant, too, has been the condition of some women in Western countries such as the US and Austria, where an overweening respect for the rights to privacy and personal property enabled Ariel Castro and Josef Fritzl to keep women imprisoned in their basements for decades, even though both men had been visited by police officers and had next-door neighbours.

Americans are actually less violent today than they used to be when encountering a religious ‘other’. The protests over the building of an Islamic cultural centre near the World Trade Center site in 2010 did not deteriorate into deadly riots, the way rumours of Catholic efforts to remove Protestant bibles from public schools sparked protests in 1844. At least 15 people were killed that year in the Kensington and Southwark Bible Riots in Philadelphia, and more than 50 were injured. Two Catholic churches and a seminary were destroyed, and when the fighting was finally over, the collective property damage exceeded $150,000, at a time when yearly household incomes averaged less than $900.

Signs and bullhorns were the only weapons anyone brought to the protests in New York in 2010. But make no mistake: the impulse that drove those people to gather in downtown Manhattan, predicting and decrying the implementation of Sharia in New York City, is the same impulse that brought Protestants in Philadelphia to the streets in 1844 – the urge to protect the US from the perceived threat of a religious population that understands freedom in terms that are very different from those of most Americans.

One of the ironies of the American Revolution is that the colonists’ opposition to British rule began as what they thought of as a defense of their rights as Englishmen. England had once been a ‘land of freedom and delight’, according to the Calvinist minister Abraham Keteltas, whose sermons helped to bring skeptical residents on Long Island over to the Patriots’ side. The country had grown corrupt, however, its governmental ministers in London made greedy by the spoils of colonialism. Colonists thought that ministerial corruption threatened the freedom that people on both sides of the Atlantic believed was their ‘birthright’ as Englishmen. Only the colonists, though, were strong and virtuous enough to see this truth and do something about it. It was for the sake of liberty, Keteltas insisted in 1777, that ‘the present civil war is carried on by the American colonies’.

When he justified the revolutionary conflict in this way, Keteltas was essentially saying that there was a new sheriff in town. England’s government was no longer the world’s best protector of the ‘absolute rights of individuals’, which included the rights to property, political representation, personal security and the rule of law. Individual rights had a new set of protectors who lived in North America; their names were Thomas Jefferson and Benjamin Franklin.

Rights that were ‘absolute’ were inalienable ‘gifts of God to man at his creation’, according to Anglo-American jurists such as William Blackstone. They were not, in other words, something governments gave to people; absolute rights were something governments protected for people. History had shown, however, that kings and politicians were more than capable of insulting God’s wishes and becoming tyrants. It had happened in England in the late 1680s, as all colonists knew. And now, nearly a century later, it seemed to be happening again.

The events of 1689 loomed large in the minds of the men and women who pushed for independence in the 1770s. England’s king that year had been James II and, in his short time on the throne, he’d managed to dismiss Parliament, expand the size of the army, and suspend the charters of seven colonies in North America.

The reason he’d done all of this was clear – at least to many members of Parliament. James was a Catholic. He’d converted after fleeing to France at the age of 15, following the execution of his Anglican father, King Charles I, during the English Civil War, the Calvinist-tinged uprising of the 1640s. While his conversion might have been understandable, given the way Protestants in his home country had treated his father, many still felt it was ‘inconsistent with the safety and welfare of this Protestant kingdom to be governed by a popish prince’. Catholicism, after all, was a faith that demanded blind obedience, crushed independent thought, and inculcated the habits of ‘tyranny and arbitrary power’ into its adherents. It was no surprise, really, that James had chosen to disregard the God-given rights of his subjects; he was just treating the English people the way the Pope treated Catholics. And in 1689, the Anglicans and Calvinists in Parliament realised he would have to be stopped.

To that end, they ‘invited the prince of Orange to vindicate their liberties’, according to the well-known history that Keteltas recited to his congregants in the lead-up to the Revolutionary War. William of Orange was the Calvinist stadtholder of several provinces in the Netherlands. He was also James II’s son-in-law, married to the king’s oldest child, Mary, who’d been raised as an Anglican by her mother. Forty years earlier, Anglicans and Calvinists had hated each other enough to fight a civil war that led to the execution of the king. But in a classic diplomatic move, whereby the ‘enemy of my enemy’ becomes ‘my friend’, England’s Protestants united in 1689 to launch a coup that forever linked ‘English’ and ‘Protestant’ identity. To this very day, Roman Catholics are barred from sitting on the English throne – though since 2013, heirs to the throne have been permitted to marry Catholics, provided they don’t convert.

However, what became known as ‘the Glorious Revolution’ did more than just link ‘English’ and ‘Protestant’ identity. It also settled the question of what ‘freedom’ was, and defined the concept for English-speaking people in thoroughly Protestant terms. Freedom became the absence of outside restraint – or ‘the power of acting as one sees fit,’ in the words of Blackstone. Rights were something held wholly and intrinsically by the individual, because without them, individuals could not fulfill the responsibilities given to each person by God. The Protestant emphasis on sola scriptura obliged people to read their Bibles and use their reason to construct a personal piety that began with the Word and the undeniable reality of their sinfulness. Freedom, in this Protestant way of thinking, was something given to human beings by the Creator so that they might choose to receive God’s grace. Governments that respected an understanding of freedom that began with the rights of the individual, then, were thought to be ‘godly’ governments.

Within Catholicism, by contrast, freedom was something to be realised by human beings only with the help of others. This way of thinking reflected the Vatican’s belief that truth was something too complex for any one person to access on his own. It’s not that there was no freedom within early modern Catholicism; rather, freedom was the fulfillment of God’s wishes for humanity, and Scripture and human reason, on their own, were not enough to understand what those wishes were. For this reason, God had created a ‘Brain Trust’ of really smart men to advise Him on the ‘New Deal’. These men studied Scripture and the teachings of earlier theologians to uncover the fullness of God’s grace. Governments that allowed themselves to be guided by the Church, therefore, were the only godly governments for a Catholic before the modernising influence of the Second Vatican Council of 1962.

In the wake of the Glorious Revolution of 1688, the vast majority of English-speaking people believed that freedom did not exist within the confines of the Catholic Church. What they called ‘popery’ (to emphasise Catholics’ servile obedience to the Pope) was synonymous with ‘slavery’. In the decades that followed 1689, the colonists in North America considered the king and parliament to be the best checks against popery the world had ever seen. That’s why they spoke of the rights that God had given to them as ‘the rights of Englishmen’. But following the Seven Years’ War (1754-63), England’s colonists became convinced that parliament was no longer respecting their rights. And even though not a single MP was Catholic, they expressed their fears in the familiar language of anti-Catholicism.

Political leaders predicted that the US would soon be ‘fed with blood by the Roman Catholic doctrines’ and subjected to the kind of ‘tyranny under which Europe groaned for many ages’. Newspapers fretted that ‘the medium of French law and popery’ would soon be ‘established’ in the colonies, ‘the one enslaving the body, the other the mind’. In 1827, one veteran recalled that ‘the real fears of popery… stimulated many timorous people to send their sons to join the military ranks’. The common cry of the Patriots, he recounted, was ‘NO KING, NO POPERY!’

These concerns about ‘Jesuitical designs’ didn’t go away after the war was over, but public expressions of anti-Catholicism did decrease considerably in the years following independence, partly because US Catholics had sided with the Patriots, and partly because there weren’t many Catholics in the US to stimulate the fear. The first US bishop, John Carroll, estimated the country was home to 30,000 Catholics in 1790, the year the nation’s first census put the overall population at nearly 4 million. Catholics were ‘as rare as a comet or an earthquake’ in the US, according to John Adams. But that situation was soon to change.

The first Catholic immigrants to come to the US in large numbers were not the starving and destitute, famine-fleeing Irish who dominate the narrative in survey courses on US history. They were Germans who started coming over in the 1820s to get away from religious violence in their home provinces. Some of these Germans stayed in the coastal cities where they first landed, but many more headed inland to places such as Cincinnati and Chicago, both in sparsely populated territories that had only recently become states.

This mass migration convinced many leaders that a papal conspiracy to undermine US freedom was afoot. In words that could easily be mistaken for those of modern-day, anti-Islamic politicians such as France’s Marine Le Pen and the Netherlands’ Geert Wilders, the prominent Congregationalist minister Lyman Beecher warned in 1835 that Catholics ‘do design the subversion of our institutions’. Theirs was a religion ‘enslaving and terrible in its recorded deeds’, and their numbers in Ohio, Indiana and Illinois were becoming ‘too great and influential for the safety of republican institutions’. Priests, Beecher claimed, were ‘wield[ing] in mass the suffrage of their confiding people’, telling Catholics to vote for laws and leaders who would ultimately destroy democracy.

Beecher’s fears eventually led to the formation of the American Party, which was nativist and anti-immigrant. Popularly known as the ‘Know-Nothing’ Party, its stance on immigration was similar to the Tea Party’s today – though Know-Nothings didn’t have any existing immigration restrictions to appeal to. The American Party never captured the White House, but their candidate in 1856, Millard Fillmore, had spent two and a half years there, since he’d been vice president at the time of Zachary Taylor’s death in 1850. The mayors of Chicago, Boston and Washington, DC were members of the American Party in the 1850s. The party won control of the state legislatures in Massachusetts and Pennsylvania in 1854, and several congressmen and at least one governor, J Neely Johnson of California, also belonged to the American Party.

There is no historical evidence that anything resembling a Catholic conspiracy against US democracy ever existed. At the same time, the fact is that Catholics in the 19th and early 20th centuries did conceive of freedom differently from most Protestant Americans. While Catholics weren’t the patsies that Beecher made them out to be, they were not obsessed with their individual rights the way many Protestants were. They listened to their bishops when those bishops warned them that public schools were a place where ‘all classes, Protestants, Jews, and Infidels meet promiscuously’, and they used their hard-earned money to send their sons and daughters to Catholic schools because – in the words of Rochester’s Bishop Bernard McQuaid – ‘watchful Christian parents would never allow their children to associate with such [people], justly fearing contamination’.

The ‘contamination’ that McQuaid and others feared was the ‘false theory of authority’ underpinning Protestants’ understanding of freedom. That theory, Father James Keogh of Pittsburgh explained in 1862, elevated ‘the principle of private judgment’ above ‘the one power on earth that has the right to decide whether the civil law be in accordance with, or in opposition to, the law of God. That power is the Church of Christ.’

So damaging was this false theory of authority that Pope Leo XIII felt the need to speak out against it in 1899. The word he used to identify the false theory was ‘Americanism’. Manifested in everyday life, Americanism consisted of ‘the passion for discussing and pouring contempt upon any possible subject, [and] the assumed right to hold whatever opinions one pleases upon any subject and to set them forth in print to the world’. Such freedoms, Pope Leo insisted, ‘wrapped minds in darkness’ and fostered a climate of individualism that was dangerous because it caused people to ‘become unmindful of both conscience and of duty’.

Leo XIII’s words captured the attention of the US press. The Boston Daily Advertiser called the Pope’s position ‘a solemn manifestation of the intransigent spirit of Catholicism’. The New York Times Magazine and the Milwaukee Sentinel each printed a syndicated editorial suggesting that Catholicism wasn’t ‘compatible with the virility and independence of the American people’. The Times added a hopeful observation, however: Catholics in the US, it noted, couldn’t ‘escape the atmosphere of liberty in which they live’.

Leo’s disdain for the idea that a person might actually have a right to his opinions is as odious to Americans today as it was in 1899. But his concern that a preoccupation with individual rights could cause people to forget their duty to ‘be solicitous for the salvation of one’s neighbour’ does sound a bit different to a 21st-century audience that has accepted (to varying degrees) the premise behind Theodore Roosevelt’s trust-busting, his cousin Franklin’s ‘New Deal’ reforms, and the programmes that came out of Lyndon Johnson’s ‘Great Society’.

In a broad sense, the history of the 20th century was the history of how Americans came to terms with the reality that individual rights alone could not produce a society that was both free and industrialised. In an age of modern corporations – lawmakers gradually learned – freedom needed some extra help.

In his 1964 State of the Union address, President Johnson declared a ‘war on poverty’ that gave the government the obligation of protecting not just an individual’s rights, but also her potential. The housing, food and educational assistance programs he put forward were designed to ‘give our fellow citizens a fair chance to develop their own capabilities’. Drawing upon FDR’s ‘four freedoms’ address of 1941, Johnson called for the US to be ‘a nation free from want’, expanding the conditions of freedom well beyond the rights to property and political representation outlined by Anglo-American jurists in the 18th century.

Just as the Catholic Church has always taught that reason alone cannot help an individual access truth, US policies and institutions (even many of the conservative ones) now teach that rights alone cannot help individuals access the freedom that is available to them as human beings. This shift in Americans’ understanding of what makes freedom possible is one of the reasons they no longer view Catholicism as an existential threat.

The Catholic Church has also changed its understanding of freedom – specifically religious freedom. Until 1965, the church/state separation enshrined in the US Bill of Rights was anathema to the Catholic Church. ‘Error has no rights’ was the phrase that animated the Vatican’s relations with secular authorities, and as the only earthly institution that contained the fullness of divine truth, the Catholic Church was believed to be a proper partner for any and all states.

But in the early years of the Cold War, Pope John XXIII worried that the world was being threatened by ‘a temporal order which some have wished to reorganise excluding God’. Under such circumstances, any belief in God became preferable to Communism. Therefore, in 1962 the Pope convened the Second Vatican Council to consider several modern questions, including the questions of religious liberty and ecumenism. The result was Dignitatis Humanae (1965), which recognises religious freedom as a social and civil right, grounded in ‘the dignity of the human person as this dignity is known through the revealed word of God and by reason itself’.

The Catholic Church embraced a Protestant understanding of religious freedom in 1965 in response to a perceived threat – the threat of godless Communism. Today, some Protestants in the US are embracing an older, pre-Vatican II understanding of Catholic religious freedom in response to another perceived threat – in this case, the growing number of lawmakers and courts that have insisted gays and lesbians have a fundamental right to marry.

This trend is the reason evangelical voters turned out in droves in 2012 to support the candidacy of Rick Santorum, a traditionalist Catholic who attends a Latin Mass and has insisted he doesn’t ‘believe in an America where the separation of church and state is absolute’. Support for Santorum was strong among evangelical Protestants even before he announced his candidacy – so strong, in fact, that in 2005, Time magazine named him one of the ‘25 Most Influential Evangelicals in America’, in spite of his Catholicism.

The idea that the Church should have ‘no influence or no involvement in the operation of the state is absolutely antithetical to the objectives and vision of our country’, according to Santorum. On this point the Republican from Pennsylvania has something in common with many of the world’s Muslims – though naturally, he’d disagree with them about which religion ought to have an influence. A survey conducted in 2012 by the Pew Research Center found that 98 per cent of the Muslims in Jordan, 97 per cent of them in Pakistan, and 92 per cent of them in Egypt believe that the teachings of Islam should ‘hold sway’ over the laws in their country.

Interestingly, in the US, Muslim immigrants feel differently. Only 28 per cent of US‑born Muslims think that mosque leaders should refrain from politics, but 60 per cent of Muslim immigrants recently told researchers at Pew that mosque leaders should ‘keep out of political matters’. It’s a directive that suggests Muslim immigrants in the US might be more ‘American’ than some of the Catholics and Protestants voting and campaigning in the US today.

Maura Jane Farrelly teaches American studies and journalism at Brandeis University in Massachusetts. She is the author of Papist Patriots (2012), and lives in the Boston area.

Left Out

Boston Review May 21, 2015

Detail from The popular tendency to rail at wealth is not entirely justified (by Samuel Ehrhart) showing “a group of working class individuals complaining about the selfish accumulation of wealth by a small percentage of society.” Image: Library of Congress.

Two new books worry about the unstable lives of the white working class. Both Andrew Cherlin, noted sociologist of the family, and Robert Putnam, of Bowling Alone (2000) fame, warn that the economic insecurity blue-collar workers have faced over the last forty years has disordered the lives of white working-class children. That transformation, in turn, has handicapped their cognitive development, personal ties, community involvement, and economic success.

The basic story is well known. Since about 1970, there has been a gross deterioration in the jobs, wages, and employment stability available to men with no more than a high school degree. A few conservative writers have tried to muddle these facts, but facts they are. And it is not just that the economic fortunes of less-educated men have diverged sharply from those of men with bachelor’s degrees; the latter have been marrying the growing number of prospering women with degrees, too. (Of course, because Americans today have much more schooling, on average, than they did fifty years ago, the less-educated among us make up a much smaller and less academically skilled portion of the whole population. But the basic account stands.)

During roughly the same period, Americans came to accept premarital sex, divorce, single-parent households, and the widespread pursuit of self-fulfillment. The coincidence of these economic and cultural trends has stirred heated debate. Much of the analysis is a recasting—sometimes crudely, sometimes subtly—of the debate over rising rates of single motherhood in black families, which has been with us since the Moynihan Report raised alarms in 1965.

• • •

These books overlap in subject and approach—both authors title their last chapters, “What Is to Be Done?”—but Putnam’s Our Kids: The American Dream in Crisis, enlivened by many personal stories gathered by Jennifer Silva, focuses on unequal opportunity. Putnam argues that poor and working-class youth’s chances of moving up have declined since the 1950s. This is not an original thesis, but Putnam conveys the evidence in his usually compelling way. He presents the story of family disorder and goes beyond it to show how schools, increasingly segregated by parental income, fail to close and may even widen the academic and soft-skills gaps. Putnam is particularly irritated by requirements that families pay for extracurricular activities, an important training ground for success. And he reports on working-class families’ weakening social ties to potentially helpful mentors and references. Their kids are falling farther behind.

Cherlin’s Labor’s Love Lost: The Rise and Fall of the Working-Class Family in America provides a longer historical take on the white working-class family. He also more directly confronts the charge—made most noisomely by Charles Murray, as I discussed in these pages three years ago—that the origin of the economic and social crises of the white working class is cultural: the baleful influence of 1960s hippie hedonism.

Cherlin first shows that the class gap in marriage rates is not new. In 1880 about 65 percent of white, U.S.-born men between ages twenty and forty-nine who worked in the professions or in managerial positions were married, but only 38 percent of similar men in service jobs were married. That is a twenty-seven-point gap. By 1960 more men of all classes were marrying, and the class gap had narrowed to about fifteen points. But by 2010, marriage rates had dropped, most sharply for service workers; the class gap was roughly the size it was before, about 59 versus 30 percent. Cherlin argues that this rise and fall of working-class marriage follows the rise and fall of economic equality.

Some believe that no social program could restore working-class stability.

The stability and lifestyle of the white working-class family followed a similar up-and-down trajectory. In the nineteenth century, married men needed their wives and daughters to earn money mainly at home—doing piecework such as sewing shirt collars and taking in roomers—and needed their sons to work outside the home. The emerging middle-class model—one man fully supports a family while the wife tends to the home and the children to learning—was to most workers a mirage. By the middle of the twentieth century, however, the economy provided enough stable working-class jobs at family wages to often make the mirage attainable. For a brief time, both white middle-class and working-class children typically grew up in male-breadwinner families. A half-century later, such jobs had dwindled; the foundation for the stable working-class family had cracked. Meanwhile, the middle class had moved on to a newer, egalitarian, two-career model of the family.

Today’s economic insecurity is, of course, far less than that of the nineteenth century. Economic downturns used to drive millions of unemployed men to tramp the country’s roads rather than sign up at the unemployment office. Expectations are also quite different. Far more Americans—women especially—expect to find happiness in personal liberty, including sexual liberty, and a fully satisfying marriage. Achieving the latter has become increasingly class-dependent.

The well-educated enjoy the new freedoms. They marry relatively late and stay married, they bear children in marriage, and they intensely prepare those children for success. The poorly educated also value marriage and also delay marrying as they seek financial security and the right partner. For both groups getting married caps rather than begins a successful adulthood.

But for the working class, those expectations are increasingly unrealistic. As a result, Cherlin writes, a critical issue, especially for women, is “what to do about children until one marries.” Unlike a century ago, many do not and cannot wait for the right partner and time. Years pass; young adults have sex, serial attachments, and children. The result is an unstable family life for millions of children. Working-class parents, many of them single mothers or couples together only temporarily, have a hard time giving their children the close attention and stability necessary to succeed. For all the celebrations of family diversity, children do notably better with two stable adults, preferably both their own parents. The child-rearing gap makes the class gap multigenerational.

• • •

Cherlin and Putnam both answer the cultural explanation by arguing that the white working-class crisis is the product of a bad economy and an aspirational culture. Children, self-fulfillment, and a lifelong soul mate depend on stable and well-paid employment. In their “What Is to Be Done?” chapters, both authors report that efforts to reeducate working-class Americans to make more pragmatic choices, along with hortatory programs to foster and preserve marriages, have borne little fruit. Instead, “sustained economic revival for low-paid workers would be as close to a magic bullet as I can imagine,” Putnam writes. To that forlorn wish he adds, as does Cherlin, the basic liberal package for improving the education, work, and income of working-class Americans. (And yet, Jill Lepore criticizes Putnam in The New Yorker for not pushing still more radical ideas.) Some believe that no program could restore working-class stability, but social democratic nations show that policies can protect children even in the new economy.

Each of these authors notes, but does not emphasize, another coincident trend: rising gender equality. The halcyon days for working-class children—the 1950s—may have been ones of quiet despair for housewives. It is unlikely that we will return there. Working-class women today expect more, and because they are catching up or passing working-class men in learning and earning, they demand more. They are more economically independent, sometimes independent with children. Conservatives have attributed women’s independence to welfare, but the post-welfare reform years show that its sources are broader. Middle-class men are forming new kinds of stable families with independent middle-class women. But working-class men are increasingly left out—and so are their children.

Claude S. Fischer is Professor of Sociology at the University of California, Berkeley and author of Made in America. In his bimonthly BR column, Fischer explores controversial social and cultural issues using tools of sociology and history.

BASIC BOOKS, 2015

by LILIAN CALLES-BARGER

New Books in American Studies Network MAY 22, 2015

Kevin M. Kruse

Kevin M. Kruse is professor of history at Princeton University and author of One Nation Under God: How Corporate America Invented Christian America (Basic Books, 2015). Kruse argues that the idea that America was always a “Christian nation” dates from the 1930s. In opposition to FDR’s New Deal, businessmen and religious leaders began to promote the idea of “freedom under God.” The post-war era brought new fears of the advancement of domestic communism. In a decisive turn from an earlier social gospel, these leaders established a Christian ethos based on the ideas of private property, capitalism, and individual economic freedom. Adding “under God” to the Pledge of Allegiance, designating “In God We Trust” as the official motto of the nation, and the controversial attempts to institute prayer and Bible distribution in American schools were all forerunners of the Christian Right at the end of the century. Kruse’s narrative focuses on how American leaders from different powerful sectors of the nation sought, through legislation and public practices, to unify a pluralistic nation under a capitalist-affirming Christian framework. The result was not unity but a more fragmented and divided nation. In unfolding the narrative, Kruse challenges the often-benign public religious images of men like Billy Graham, Dwight D. Eisenhower, and a multitude of recognizable business leaders. The book opens up a timely conversation on the meaning of religious pluralism and the place of religion in American public life.

How the Civil War Became the Indian Wars

The New York Times May 25, 2015

On Dec. 21, 1866, a year and a half after Gen. Robert E. Lee and Gen. Ulysses S. Grant ostensibly closed the book on the Civil War’s final chapter at Appomattox Court House, another soldier, Capt. William Fetterman, led cavalrymen from Fort Phil Kearny, a federal outpost in Wyoming, toward the base of the Big Horn range. The men planned to attack Indians who had reportedly been menacing local settlers. Instead, a group of Arapahos, Cheyennes and Lakotas, including a warrior named Crazy Horse, killed Fetterman and 80 of his men. It was the Army’s worst defeat on the Plains to date. The Civil War was over, but the Indian wars were just beginning.

These two conflicts, long segregated in history and memory, were in fact intertwined. They both grew out of the process of establishing an American empire in the West. In 1860, competing visions of expansion transformed the presidential election into a referendum. Members of the Republican Party hearkened back to Jefferson’s dream of an “empire for liberty.” The United States, they said, should move west, leaving slavery behind. This free soil platform stood opposite the splintered Democrats’ insistence that slavery, unfettered by federal regulations, should be allowed to root itself in new soil. After Abraham Lincoln’s narrow victory, Southern states seceded, taking their congressional delegations with them.

Never ones to let a serious crisis go to waste, leading Republicans seized the ensuing constitutional crisis as an opportunity to remake the nation’s political economy and geography. In the summer of 1862, as Lincoln mulled over the Emancipation Proclamation’s details, officials in his administration created the Department of Agriculture, while Congress passed the Morrill Land Grant Act, the Pacific Railroad Act and the Homestead Act. As a result, federal authorities could offer citizens a deal: Enlist to fight for Lincoln and liberty, and receive, as fair recompense for their patriotic sacrifices, higher education and Western land connected by rail to markets. It seemed possible that liberty and empire might advance in lock step.

But later that summer, Lincoln dispatched Gen. John Pope, who was defeated by Lee at the Second Battle of Bull Run, to smash an uprising among the Dakota Sioux in Minnesota. The result was the largest mass execution in the nation’s history: 38 Dakotas were hanged the day after Christmas 1862. A year later, Kit Carson, who had found glory at the Battle of Valverde, prosecuted a scorched-earth campaign against the Navajos, culminating in 1864 with the Long Walk, in which Navajos endured a 300-mile forced march from Arizona to a reservation in New Mexico.

That same year, Col. John Chivington, who turned back Confederates in the Southwest at the Battle of Glorieta Pass, attacked a peaceful camp of Cheyennes and Arapahos at Sand Creek in Colorado. Chivington’s troops slaughtered more than 150 Indians. A vast majority were women, children or the elderly. Through the streets of Denver, the soldiers paraded their grim trophies from the killing field: scalps and genitalia.

In the years after the Civil War, federal officials contemplated the problem of demilitarization. Over one million Union soldiers had to be mustered out or redeployed. Thousands of troops remained in the South to support Reconstruction. Thousands more were sent West. Set against that backdrop, the project of continental expansion fostered sectional reconciliation. Northerners and Southerners agreed on little at the time except that the Army should pacify Western tribes. Even as they fought over the proper role for the federal government, the rights of the states, and the prerogatives of citizenship, many Americans found rare common ground on the subject of Manifest Destiny.

During the era of Reconstruction, many American soldiers, whether they had fought for the Union or the Confederacy, redeployed to the frontier. They became shock troops of empire. The federal project of demilitarization, paradoxically, accelerated the conquest and colonization of the West.

The Fetterman Fight exploded out of this context. In the wake of the Sand Creek Massacre, Cheyennes, Arapahos and various Sioux peoples forged an alliance, hoping to stem the tide of settlers surging across the Plains. Officials in Washington sensed a threat to their imperial ambitions. They sent Maj. Gen. Grenville Dodge, who had commanded a corps during William Tecumseh Sherman’s pivotal Atlanta campaign, to win what soon became known as Red Cloud’s War. After another year of gruesome and ineffectual fighting, federal and tribal negotiators signed the Treaty of Fort Laramie, guaranteeing the Lakotas the Black Hills “in perpetuity” and pledging that settlers would stay out of the Powder River Country.

The Indian wars of the Reconstruction era devastated not just Native American nations but also the United States. When the Civil War ended, many Northerners embraced their government, which had, after all, proved its worth by preserving the Union and helping to free the slaves. For a moment, it seemed that the federal government could accomplish great things. But in the West, Native Americans would not simply vanish, fated by racial destiny to drown in the flood tide of civilization.

Red Cloud’s War, then, undermined a utopian moment and blurred the Republican Party’s vision for expansion, but at least the Grant administration had a plan. After he took office in 1869, President Grant promised that he would pursue a “peace policy” to put an end to violence in the West, opening the region to settlers. By feeding rather than fighting Indians, federal authorities would avoid further bloodshed with the nation’s indigenous peoples. The process of civilizing and acculturating Native nations into the United States could begin.

This plan soon unraveled. In 1872, Captain Jack, a Modoc headman, led approximately 150 of his people into the lava beds south of Tule Lake, near the Oregon-California border. The Modocs were irate because federal officials refused to protect them from local settlers and neighboring tribes. Panic gripped the region. General Sherman, by then elevated to command of the entire Army, responded by sending Maj. Gen. Edward Canby to pacify the Modocs. A decade earlier, Canby had devised the original plan for the Navajos’ Long Walk, and then later had helped to quell the New York City Draft Riots. Sherman was confident that his subordinate could handle the task at hand: negotiating a settlement with a ragtag band of frontier savages.

But on April 11, 1873, Good Friday, after months of bloody skirmishes and failed negotiations, the Modoc War, which to that point had been a local problem, became a national tragedy. When Captain Jack and his men killed Canby – the only general to die during the Indian wars – and another peace commissioner, the violence shocked observers around the United States and the world. Sherman and Grant called for the Modocs’ “utter extermination.” The fighting ended only when soldiers captured, tried, and executed Captain Jack and several of his followers later that year. Soon after, the Army loaded the surviving Modocs onto cattle cars and shipped them off to a reservation in Indian Territory (present-day Oklahoma).

President Grant’s Peace Policy perished in the Modoc War. The horror of that conflict, and the Indian wars more broadly, coupled with an endless array of political scandals and violence in the states of the former Confederacy – including the brutal murder, on Easter Sunday 1873 in Colfax, La., of at least 60 African-Americans – diminished support for the Grant administration’s initiatives in the South and the West.

The following year, Lt. Col. George Armstrong Custer claimed that an expedition he led had discovered gold in the Black Hills – territory supposedly safeguarded for the Lakotas by the Fort Laramie Treaty. News of potential riches spread around the country. Another torrent of settlers rushed westward. Hoping to preserve land sacred to their people, tribal leaders, including Red Cloud, met with Grant. He offered them a new reservation. “If it is such a good country,” one of the chiefs replied, “you ought to send the white men now in our country there and leave us alone.” Crazy Horse, Sitting Bull and other warriors began attacking settlers. Troops marched toward what would be called the Great Sioux War.

Crazy Horse and his band of Indians on their way from Camp Sheridan to surrender at Red Cloud Agency, 1877. Credit: Library of Congress.

Early in 1876, Lt. Gen. Philip Sheridan, the Army’s commander on the Plains, insisted that all Indians in the region must return to their reservations. The Lakotas and Northern Cheyennes refused. That summer, as the nation celebrated its centennial, the allied tribes won two victories in Montana: first at the Rosebud and then at the Little Bighorn. The Army sent reinforcements. Congress abrogated the Lakotas’ claims to land outside their reservation. The bloodshed continued until the spring of 1877, when the tribal coalition crumbled. Sitting Bull fled to Canada. Crazy Horse surrendered and died in federal custody.

The final act of this drama opened in 1876. When federal officials tried to remove the Nez Perce from the Pacific Northwest to Idaho, hundreds of Indians began following a leader named Chief Joseph, who vowed to fight efforts to dispossess his people. Sherman sent Maj. Gen. Oliver Otis Howard, formerly head of the Freedmen’s Bureau, to quiet the brewing insurgency. As Howard traveled west, the 1876 election remained undecided. The Democrat Samuel Tilden had outpolled the Republican Rutherford B. Hayes by nearly 300,000 votes. But both men had fallen short in the Electoral College. Congress appointed a commission to adjudicate the result. In the end, that body awarded the Oval Office to Hayes. Apparently making good on a deal struck with leading Democrats, Hayes then withdrew federal troops from the South, scuttling Reconstruction.

Less than two months after Hayes’s inauguration, Howard warned the Nez Perce that they had 30 days to return to their reservation. Instead of complying, the Indians fled, eventually covering more than 1,100 miles of the Northwest’s forbidding terrain. Later that summer, Col. Nelson Miles, a decorated veteran of Antietam, the Peninsula Campaign and the Appomattox Campaign, arrived to reinforce Howard. Trapped, Chief Joseph surrendered on Oct. 5, 1877. He reportedly said: “I am tired. My heart is sick and sad. From where the sun now stands, I will fight no more forever.”

One hundred and fifty years after the Civil War, collective memory casts that conflict as a war of liberation, entirely distinct from the Indian wars. President Lincoln died, schoolchildren throughout the United States learn, so that the nation might live again, resurrected and redeemed for having freed the South’s slaves. And though Reconstruction is typically recalled in the popular imagination as both more convoluted and contested – whether thwarted by intransigent Southerners, doomed to fail by incompetent and overweening federal officials, or perhaps some combination of the two – it was well intended nevertheless, an effort to make good on the nation’s commitment to freedom and equality.

But this is only part of the story. The Civil War emerged out of struggles between the North and South over how best to settle the West – struggles, in short, over who would shape an emerging American empire. Reconstruction in the West then devolved into a series of conflicts with Native Americans. And so, while the Civil War and its aftermath boasted moments of redemption and days of jubilee, the era also featured episodes of subjugation and dispossession, patterns that would repeat themselves in the coming years. When Chief Joseph surrendered, the United States secured its empire in the West. The Indian wars were over, but an era of American imperialism was just beginning.

Boyd Cothran is an assistant professor of United States Indigenous and cultural history at York University in Toronto and the author of “Remembering the Modoc War: Redemptive Violence and the Making of American Innocence.” Ari Kelman is the McCabe-Greer Professor of the Civil War Era at Penn State and the author of “A Misplaced Massacre: Struggling Over the Memory of Sand Creek,” which won the Bancroft Prize in 2014, and, with Jonathan Fetter-Vorm, “Battle Lines: A Graphic History of the Civil War.” Cothran and Kelman are both writing books about the relationship between Reconstruction and Native American history.

A look back at some of the illustrations that graced the pages of Puck magazine, America’s first humor magazine, which satirized the political and social issues of the day.

This cartoon, “The Modern Colossus of [Rail] Roads,” dated December 10, 1879, depicts New York Central Railroad President William Henry Vanderbilt at the center as the most powerful tycoon in the U.S. railroad industry. Standing on his feet are two other powerful industry figures, Cyrus West Field (left), who controlled the New York Elevated Railroad Company, and Jay Gould (right), who controlled the Union Pacific Railroad.

(Frederick Burr Opper/Library of Congress)

Belva Lockwood, the first woman to argue a case before the Supreme Court, is pictured here alongside presidential candidate Ben Butler (labeled “B.B.”) of the Greenback/Anti-Monopoly Party. In 1884, Lockwood was chosen by the small California-based Equal Rights Party as their presidential nominee, and the media quickly seized upon the news.

(Louis Dalrymple/Library of Congress)

In this June 23, 1897, illustration, the magazine’s recurring character, Puck (from whose name the word “puckish,” meaning childishly mischievous, derives), is handing a bouquet of flowers labeled “1837” and “1897” to Queen Victoria, who is sitting on a throne, holding a scepter, and leaning forward to accept the flowers.

(Udo J. Keppler/Puck Magazine/Wikimedia Commons)

This cartoon, dated March 30, 1898, depicts Richard “Boss” Croker, the head of New York City’s Tammany Hall, as the sun, with politicians and people from various professions revolving around him. With Tammany Hall, Croker controlled one of the most powerful political institutions of his time.

(Louis Dalrymple/Puck Magazine/Wikimedia Commons)

A cartoon, dated May 11, 1898, supporting the war with Spain over Cuba. A month earlier, the United States had declared war on Spain after the sinking of the battleship Maine in Havana harbor on February 15, 1898. The bottom of the illustration reads, “The duty of the hour: To save her not only from Spain, but from a worse fate.” The Spanish-American War eventually ended with the signing of the Treaty of Paris in December 1898, after Spain lost control of Cuba, Puerto Rico, the Philippines, Guam, and other islands.

(Wikimedia Commons)

A 1901 cartoon depicting business magnate John D. Rockefeller, founder of Standard Oil, one of the world’s first and largest multinational corporations. Rockefeller stands on a podium bearing his company’s name and wears a crown labeled with the names of major railroads. In 1901, the year an anarchist assassinated President McKinley, corporations like Standard Oil were becoming prime targets of antitrust sentiment.

(Udo J. Keppler/Puck Magazine/Library of Congress)


In this illustration from April 1, 1903, Nicholas II, the last czar of Russia, is kneeling on one knee before a pillow, on which rests a scroll of papers labeled “Ukase civil and religious reforms,” with rays of light labeled “Enlightenment” beaming down on him. Nicholas II’s reign marked a turbulent time of immense political change in Russian history, and he and his family were executed on July 17, 1918.

(Udo J. Keppler/Puck Magazine/Library of Congress)

This March 9, 1904, illustration shows steel magnate Charles M. Schwab as Napoleon sitting on a rock in the middle of the ocean, looking back at the setting sun labeled “Business Reputation.” In his hands are papers labeled “Investigation Ship Building Scandal,” and other papers labeled “Steel Trust” are in his coat pocket.


(Frank A. Nankivell/Library of Congress)

A Fourth of July cartoon from 1905 showing a crowd of people celebrating a spinning firework display with the head of Uncle Sam at the center.

(Frank A. Nankivell)

This illustration, dated February 2, 1910, shows banker John Pierpont “J.P.” Morgan clutching to his chest large New York City buildings labeled “Billion Dollar Bank Merger.” In the foreground, a young child puts a coin in a “toy bank,” and Morgan’s left arm reaches around the buildings to grab it for himself. Three years earlier, during the Panic of 1907, Morgan resolved a banking crisis after major New York banks were on the verge of bankruptcy. The U.S. Federal Reserve System was created following the Panic, which the magazine cover alludes to with its title: “The Central Bank—Why should Uncle Sam establish one, when Uncle Pierpont is already on the job?”

(Brynolf Wennerberg)

This illustration, dated July 25, 1914, shows a tall, beautiful woman with red hair, wearing a long green dress and a headband with a feather. She holds up her hands, and perched on her fingers are several diminutive male suitors, who court her with bouquets of flowers and bags of money, serenade her, appeal to her, and even threaten suicide.

(Henry Mayer/Puck Publishing Corporation/Library of Congress)

In this illustration, dated February 20, 1915, a torch-bearing woman labeled “Votes for Women,” symbolizing the awakening of the nation’s women to the desire for suffrage, strides across the Western states, where women already had the right to vote, toward the East, where women reach out to her.

(Rolf Armstrong/Library of Congress)

In this February 20, 1915, illustration, Puck is pictured with a pencil in his hand, next to a woman wearing a uniform and a sash labeled “Votes for Women.”

(Library of Congress)

This cartoon, dated October 9, 1915, “I Did Not Raise My Girl To Be a Voter,” is a parody of the anti-World War I protest song “I Did Not Raise My Boy To Be A Soldier,” with the context altered to women’s suffrage. A conductor labeled “political boss” leads a lone female soloist surrounded by a male chorus with various labels, including “procurer,” “child-labor employer,” and “sweat-shop owner.” Arguments in favor of granting women the right to vote included the contention that female voters would support laws that reduced prostitution, labor abuses, and other social evils.

Annum Masroor

Audience Engagement Fellow

Annum Masroor is an audience engagement fellow at National Journal. Previously, she was an intern at The Nation, Salon.com, and Democracy Now!. She graduated from the University of Georgia with a B.A. in international affairs and Arabic, and from Columbia University Graduate School of Journalism with an M.S. in journalism. She is from Savannah, GA.