February 8, 1996 – Clinton signs the Communications Decency Act into law

In July 1995, Time Magazine published a cover story which gave voice to a contemporaneous moral panic regarding internet pornography. The article was riddled with faulty research, sloppy journalism and salacious fear-mongering (so basically it was a Time cover story). The heart of the piece parroted the specious findings of a discredited Carnegie Mellon study which claimed that 83.5 percent of online images were pornographic. The crux of the story, as it was so elegantly expressed by “journalist” Philip Elmer-DeWitt, was: “What the Carnegie Mellon researchers discovered was: THERE’S AN AWFUL LOT OF PORN ONLINE.” (caps in original)

As Time was raising the spectre of cyberporn, the US Congress was debating the Communications Decency Act (CDA), an amendment to the Telecommunications Act which would make it a federal crime to facilitate the availability of pornographic materials online where they could be accessed by children. The day after the article was published, long-serving Senator Chuck Grassley (R-IA) invoked the findings of the study on the floor of the U.S. Senate:

Eighty–three point five percent of all computerized photographs available on the Internet are pornographic. Mr. President, I want to repeat that: 83.5 percent of the 900,000 images reviewed — these are all on the Internet — are pornographic, according to the Carnegie Mellon study. Now, of course, that does not mean that all of these images are illegal under the Constitution. But with so many graphic images available on computer networks, I believe Congress must act and do so in a constitutional manner to help parents who are under assault in this day and age. There is a flood of vile pornography, and we must act to stem this growing tide, because, in the words of Judge Robert Bork, it incites perverted minds.

Grassley was so enamoured of the Time story that he had its full text, along with that of similar Newsweek and Spectator pieces, entered into the Congressional Record. During the same session, Senator James Exon (D-NE), the author of the CDA, asked Grassley:

Mr. EXON: May I inquire of my friend from Iowa, did he have printed in the Record that portion of the Time magazine article from this morning’s Time magazine?

The PRESIDING OFFICER: The Chair will observe he did.

Mr. EXON: I thank the Chair.

If it was not referenced, I would reference the graphic picture on the front of Time magazine today, which I think puts into focus very distinctly and directly what my friend from Iowa and this Senator has been talking about for a long, long time.

The CDA, also known as the “Exon Amendment,” had come under attack from civil libertarians and free speech activists for violating the First Amendment and limiting Internet speech. The CDA was an attempt to regulate both obscenity and indecency on the Internet, using the “contemporary community standards” measure to determine what would be deemed “indecent.” Any operator who took “good faith” measures to restrict access by minors, such as requiring a credit card or an adult identification number, would not have been liable under the law. The CDA would have imposed a fine or up to two years in jail on any person who:

… knowingly (A) uses an interactive computer service to send to a specific person or persons under 18 years of age, or (B) uses any interactive computer service to display in a manner available to a person under 18 years of age, any comment, request, suggestion, proposal, image, or other communication that, in context, depicts or describes, in terms patently offensive as measured by contemporary community standards, sexual or excretory activities or organs, regardless of whether the user of such service placed the call or initiated the communication. (Title V of the Telecommunications Act of 1996)

The cyberporn scare story and the Carnegie Mellon study were used as concrete evidence for Exon’s claims that pornography ran rampant on the Internet, was readily available to children, and thus needed to be tightly controlled. The Time magazine story spawned nationwide media interest in the topic, and the Telecommunications Act with the Exon Amendment passed the Senate 84–16. The Act was signed into law by President Bill Clinton on February 8, 1996, and subsequently inspired a slew of legal challenges and online activism, including the Black World Wide Web protest. The CDA was struck down a year later in the landmark cyberlaw case of Reno v. ACLU, in which the US Supreme Court unanimously (9–0) deemed the indecency provisions unconstitutional.

Researching the history of America’s men’s movement(s) requires reading related masculinist manifestos in at least a cursory fashion. While the uber icky and celebratory misogyny of today’s MRAs wasn’t really that fashionable in the late 1970s and early 1980s when the early men’s movement(s) originated, these earnest and often pitiful ruminations on how difficult it is for men to be trapped by (and/or) entrusted with the perpetuation of the patriarchy are making me rue the decision to take this particular research trajectory. With the exception of some of the more odious texts like Richard Doyle’s The Rape of the Male (which is not actually about rape, but rather uses rape as an analogy for the treatment of men by the family court system), many of them actually employ a feminist framework to construct their tales of beleaguered man-woe, and are thus not overtly offensive to me. I’m just finding them kind of annoying and tedious, and I couldn’t really figure out why.

This observation from Barbara Kingsolver clarified it for me:

…[Men] face some cultural problems that come to them solely on the basis of gender: They are so strictly trained to be providers that many other areas of their lives are neither cultivated nor validated… They struggle with guilt and doubts associated with a history of privilege.

Women struggle with the fact that they are statistically likely to be impoverished, worked to the bone and raped.

… The men’s movement and the women’s movement aren’t salt and pepper; they are hangnail and hand grenade.

It’s not so much that men don’t have legitimate complaints about the proscriptive nature of the patriarchy. But you simply can’t equate the spiritual oppression of men with the physical oppression of women. To do so is a perfidious mechanism which actually serves to undercut the quest for gender equality.

American historian Charles Akers has characterised John Hancock as, “The chief victim of Massachusetts historiography” insofar as, “He suffered the misfortune of being known to later generations almost entirely through the judgments of his detractors.”

Hancock served as president of the Second Continental Congress and was the first and third Governor of the Commonwealth of Massachusetts. When the merchant, statesman, and prominent Patriot of the American Revolution died at the age of 56, he was honored with a lavish state funeral, but soon faded from popular memory, only to be remembered for his oversized signature on the United States Declaration of Independence.

Compared to prominent Founding Fathers like Jefferson and John Adams, Hancock left relatively few personal writings for historians to examine or utilise. Consequently, early examinations of Hancock relied on the writings of his political opponents, who were, unsurprisingly, scathing in their critiques. In a particularly unflattering portrait published in Harper’s in 1930, historian James Truslow Adams portrayed Hancock as shallow and vain, an “empty barrel” suffering from a “weakness of character.” According to Adams, Hancock could be easily dismissed because there “was no John Hancock”; he was merely a “puppet” of fellow revolutionaries who was considered useful because of his vast personal wealth, yet “displaying no clear ability of his own.”

The service thus rendered may have been very useful, but it is difficult to take deep interest or consider seriously the jerked motions of a puppet.

Ouch!

In the decades that followed, historians have been a bit kinder to poor John Hancock. In the 1970s, for instance, Donald Proctor was critical of Adams’s assertions, accusing him of parroting the negative views of Hancock’s political opponents without doing any serious research. According to Proctor, Adams:

presented a series of disparaging incidents and anecdotes, sometimes partially documented, sometimes not documented at all, which in sum leave one with a distinctly unfavorable impression of Hancock

Both Proctor and Charles Akers called for scholars to evaluate Hancock based on his merits, rather than on the views of his critics. Yet, since this entreaty was made in the 1970s, there has still been relatively little scholarly attention paid to examining the life of John Hancock. Although Slate did recently publish a piece examining whether or not his “John Hancock” on the Declaration of Independence really was too big.

President William Henry Harrison, the ninth president of the United States, holds a number of distinctions: He was the last president to have been born a British subject. At the age of 68, he was the oldest man elected to the presidency until Reagan was elected in 1980. He delivered the longest inaugural address, which, at just shy of 9,000 words, took him two hours to deliver. He was also the first president to die in office, and his 32 days as president remain the shortest term ever served.

His inauguration in 1841 was held on a cold and wet March day. The poor weather, coupled with the fact that the 68-year-old Harrison refused either to wear an overcoat whilst delivering his marathon address or to travel to the inauguration in a covered carriage, was thought to be a causal factor in the pneumonia which eventually killed him. Truthfully, Harrison didn’t get sick until three weeks after his inauguration. So while one certainly can’t rule out a link between his inauguration and his death, the correlation is probably not as obvious as people tend to mistakenly believe. So the whole long-winded-speech, stupidly-refusing-to-wear-a-coat thing is not the Historical Douchebaggery to which I’m referring.

William Henry Harrison was kind of a douche because he completely misrepresented himself during his campaign. He had campaigned as a candidate of humble beginnings, touting that he was born in a log cabin and making repeated references to his “log cabin home.” In reality, Harrison was born in a three-storey, 22-bedroom brick mansion on Berkeley Plantation – one of the oldest estates in Virginia. The Harrisons were one of Virginia’s most prominent political families. His father, Benjamin Harrison V, had been a delegate to the Continental Congress and a signatory to the Declaration of Independence, a member of the Virginia legislature, and the State Governor. His father-in-law, John Cleves Symmes, had also been a delegate to the Continental Congress, and a Justice of both the New Jersey and Northwest Territory Supreme Courts. His older brother had been a state legislator and a US congressman, and his cousin was also a Governor.

In sum, William Henry Harrison was no son of a lumberjack. He was the heir of a political dynasty, and the fact that he marketed himself as a home-spun man of the people makes him kind of a douche.

In the post-Vietnam era, America was experiencing an ambivalence about how it should conduct itself on the global stage.

The country had emerged from the Second World War with a sense of omnipotence; a confidence that America could achieve anything it set its mind to. It was this supreme confidence in American military strength, and perhaps more importantly, in the righteousness of America’s geo-political mission, which had underpinned the rampant globalism of the Kennedy and Johnson era. Specifically, it had underpinned the war in Vietnam.

But Vietnam effectively revealed the limits of American globalism. It revealed that technological superiority did not necessarily determine foreign policy success. It also revealed that the American approach to foreign policy, was perhaps not as benevolent as people had previously believed that it was.

The trauma of the Vietnam experience made the American people deeply ambivalent about foreign policy.

On one hand they still saw themselves as a global superpower, and didn’t want to appear as a weakened nation. They wanted their leaders to be resolute and tough. But on the other hand they understood all too well the dangers of that toughness. They understood the dangers of American hubris, and the humiliation that accompanies defeat.

Jimmy Carter’s approach to foreign policy really typified the ambivalence of the post-Vietnam era, insofar as it was an awkward and almost schizophrenic combination of toughness and restraint, and of idealism and pragmatism.

When Ronald Reagan assumed the presidency in 1981, he seemingly had none of the ambivalence or uncertainty that had plagued Carter. His certainty, his profound faith in American strength and American mission positioned him as a man who would shake America out of the “Vietnam Syndrome.”

Whenever I talk to students (or anyone, really) about Ronald Reagan, I can’t not talk about Rocky 4 and Rambo 2. Given I’ve already written about Rocky 4 and the Reagan Revolution in a previous post, I’ll just stick to Rambo 2.

First, let me start by saying that it’s a really dreadful film – like, truly awful. But it’s also a really potent symbol of the American zeitgeist in the 1980s.

Sylvester Stallone plays John Rambo, a highly damaged Vietnam vet who, in First Blood, the original film, comes into conflict with the police in a small town and becomes totally unhinged. First Blood is actually more of a psychological thriller than an action movie. It’s not a great film, but it is a somewhat serious commentary on the psychological damage that the war inflicted upon those who fought it, and how poorly vets were treated when they returned. In contrast, Rambo: First Blood Part II is just a full-on hypermasculine action fantasy.

In the film, Rambo is asked to return to Vietnam to liberate US POWs. Essentially being given an opportunity to fight the war over again, he asks, “Do we get to win this time?” It ties into the idea, which gained prominence during the 1980s, that the loss in Vietnam was due to the incompetence of government bureaucrats and politicians, the inefficacy of American strategy, and betrayal on the home front… not because there was something inherently wrong with the war itself. Reagan, in fact, had called the war a “noble cause” and was super critical of those who employed a discourse of shame in terms of America’s involvement in the war in Vietnam. During his 1980 campaign he stated: “We dishonour the memory of 50,000 young Americans who died in that cause when we give way to feelings of guilt as if we were doing something shameful, and we have been shabby in our treatment of those who returned.”

Historian Michael Klare has defined the “Vietnam syndrome” as “the American public’s disinclination to engage in further military interventions in internal Third World conflicts.” Since Vietnam, the foreign policy community, and legislators in particular, had begun to move away from the view that all upheaval in the Third World was the result of some great communist conspiracy. However, Reagan and his advisors rejected the idea that civil strife in parts of Latin America and elsewhere was due to indigenous issues such as economic instability, poverty and class oppression. Reagan blamed most Third World troubles on the Soviet Union and thought that revolutionaries took their orders from Moscow.

In 1985, the president put forth the Reagan Doctrine, declaring that the United States would openly support anti-communist movements. It would fund and train anti-communist “freedom fighters” wherever they were battling Soviet-backed governments. Under this doctrine, the CIA funnelled aid to insurgents in Angola, Nicaragua, Ethiopia and Afghanistan.

In Afghanistan for instance, under the auspices of Operation Cyclone the US supplied arms, finance and training to the anti-Soviet freedom fighters, the Mujahedeen.

The Soviet occupation of Afghanistan was effectively their Vietnam. They spent years trying in vain to bolster a friendly regime while facing a tenacious guerrilla insurgency that just wouldn’t quit. After a decade, the Soviets gave up and went home. After years of civil war, the pro-Soviet regime fell, and power was eventually seized by the Taliban, a movement that had emerged from the ranks of the former Mujahedeen.

The Taliban turned Afghanistan into a repressive Islamist state based on Shari’a law. Life under the Taliban was especially difficult for women, who were prohibited from going to school, from working, from driving, and from leaving the house unless accompanied by a male relative.

Afghanistan under the Taliban also hosted training camps for jihadists, including Al Qaeda, the terrorist network responsible for the attacks of September 11, 2001.

So in attempting to frustrate their present enemy, the US inadvertently facilitated the rise of an entity that would prove to be a future enemy.

Another one for the file marked The Unintended (but totally foreseeable) Consequences of US Intervention.

On Monday, Anita Sarkeesian posted the latest installment of her Tropes Vs. Women in Video Games series on YouTube, a half-hour examination of the ways in which video game makers use sexualized violence against women as a cheap way to spice up their narratives and appeal to straight male gamers.

Her tone was measured, her analysis clear and logical and supported by dozens of clips from a wide assortment of games.

The North Atlantic Treaty was signed by its original 12 member states in April 1949, and came into force in August of the same year after being ratified by all signatories.

The nations of Western Europe were pretty concerned about the threat of Soviet expansion in the post-war years. The Soviets made no secret of their determination to develop a buffer zone of friendly states between themselves and Germany – and moreover, they felt pretty entitled to this “anti-fascist zone” given the tremendous sacrifice the nation had endured during the Second World War. Soviet determination to create and maintain a bloc of friendly states on their western border was seen as somewhat threatening by the nations of Central and Western Europe.

The spectre of the Soviets’ nefarious intentions certainly wasn’t dispelled by the Soviet-sponsored overthrow of the democratic government in Czechoslovakia in February 1948. Moreover, the Berlin Blockade (March 1948 – May 1949), in which the Soviet Union severely restricted access to the Allied sectors of Berlin, was a really bad PR move. Even though, critically speaking, the Soviet Union had some pretty good reasons to be pissed off at the Allies, who had consistently (and fairly shamelessly) treated the Soviet Union like the red-headed stepchild. They had shown a complete disregard for the Soviet post-war position and contempt for Soviet concerns for strategic security: Churchill had basically called the Soviets out with his infamous “Iron Curtain” speech, the US had articulated a formal doctrine of “containing” the Soviet Union, and if all that wasn’t provocative enough, the Soviets were shamelessly marginalised in the Allied Control Council. Finally they said, “Screw you guys, I’m going home.” And they took their ball with them… which in this case was West Berlin… admittedly, not a perfect analogy…

Anyway, in hindsight, the Berlin Blockade was not the best strategic move, insofar as it actually made the Soviet Union look like total assholes, and gave the Western Allies (particularly the Americans) the chance to be the heroes who saved the besieged people of West Berlin via the Berlin Airlift. Moreover, it prompted the nations of Western Europe to align even closer with the United States, who were able to craft a fairly persuasive argument, that a strategic alliance with the United States was the only way to dissuade Soviet aggression.

The formation of NATO was an important shift in the trajectory of the Cold War. I would argue (and this is, admittedly, a rather crude summation) that in many respects the establishment of a formal strategic alliance represented the militarisation of the Cold War – a hitherto rhetorical and philosophical conflict. When NATO members began pushing for the inclusion of West Germany in the alliance in the mid-1950s, the Soviets warned that this would be a bridge too far; it would be interpreted as a provocative act which would force them into formalising their own strategic defensive alliance. This is precisely what happened when West Germany joined NATO in May 1955. Less than two weeks later, the Soviet Union, East Germany, Czechoslovakia, Poland, Hungary, Romania, Albania and Bulgaria signed the Warsaw Pact. On the few occasions when Eastern Bloc countries endeavoured to extricate themselves from the yoke of Soviet-style communism, it was the threat of Warsaw Pact intervention (or actual intervention, as was the case in Hungary in 1956 and Czechoslovakia in 1968) that brought them back into the fold.

The formation of NATO also represented a marked shift in the trajectory of US foreign policy. For its entire history, US foreign policy had been guided by an ethos of “independent isolationism” – economic interests and hegemonic ambitions in the American hemisphere had precluded pure isolationism, but the US had eschewed formal military alliances with Europe. Europe was viewed as a basket case of ancient rivalries – the flashpoint for two World Wars into which the US had been reluctantly drawn. For the first time, the US was now yoking itself to Europe. This would have profound implications for the nature of American globalism in the post-war era.

Interestingly, while NATO was conceived as a defensive alliance that would deter or contest Soviet aggression against any of its member states, Article 5, the mutual self-defence clause of the North Atlantic Treaty, was never invoked during the Cold War. In fact, the first decision to operationalise Article 5 was the AWACS air defence deployment Operation Eagle Assist, launched in response to the terrorist attacks against the United States on September 11, 2001.

When Czechoslovakia was invaded by its Warsaw Pact allies it signified more than just the end of the nation’s quest for “socialism with a human face.” It also effectively represented the end of socialism as a viable and legitimate alternative to liberal capitalism. Moreover, it signified that any attempt to break free from the increasingly oppressive yoke of Soviet-style socialism, would not be tolerated by Moscow, thereby fundamentally eroding the idea that the Soviet Union’s sphere of influence was maintained by anything other than force or the threat thereof.

The Prague Spring had begun months earlier when reformer Alexander Dubček was elected head of the Communist Party of Czechoslovakia (KSČ), but its origins lay in the economic reforms of his predecessor, Antonín Novotný. The Soviet model of industrialisation had applied poorly to Czechoslovakia, which was one of the most developed economies in the Eastern Bloc. In response to an economic downturn in the early 1960s, Novotný had launched a New Economic Model in 1965 aimed at restructuring the economy. This period of economic liberalisation had spurred calls for political liberalisation.

Dubček and the other reformist party members positioned their Action Programme as the next stage of Czechoslovakian socialism, rather than a repudiation of the post-war Soviet model. Dubček’s reforms were designed to help transform Czechoslovakian socialism into an economic and state model which “corresponds to the historical democratic traditions of Czechoslovakia.”

Fearing not just the loss of Czechoslovakian commodities and industrial resources, but also the precedent that may be set by the country’s flirtation with defiance, the Soviet Union rallied her allies and ended the rebellion.

“So wait, I thought that the south losing the civil war meant that African Americans were free and had the same legal rights and stuff, why did there need to be another law? I saw it in that Lincoln movie- there was that debate over some law and Tommy Lee said something about ‘equality before the law’ and he won, right? Actually I’m not totally sure because I think I fell asleep in a few parts- that movie was really long. But anyway, why didn’t they just enforce that law instead of making new ones?”

Sometimes you receive a question from a student that feels like that moment in Raiders when the Nazis open the Ark of the Covenant. Full-on face-melting word deluge. Nonetheless, somewhere in that largely unintelligible word scattershot is a rather astute question that’s certainly worthy of being addressed.

The 13th Amendment, which effectively outlawed slavery in the United States, was adopted on December 6, 1865, after passing both houses of Congress by a two-thirds majority and being ratified by three-quarters of the states (as is required for all constitutional amendments). It was the first of the three so-called Reconstruction Amendments, which were intended to guarantee freedom to former slaves, establish their civil rights, and prevent discrimination. In fact, it was the 15th Amendment that specifically addressed voting rights, stating that:

The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.

So the premise of the question is basically correct. There were substantial efforts in the decades following the Civil War to guarantee “equality before the law.” (Although Radical Republican congressman Thaddeus Stevens was played by Tommy Lee Jones in Steven Spielberg’s Lincoln, not Tommy Lee. That would have been a very different interpretation I’m sure.)

During the period of Reconstruction (1865–1877), the federal government effectively occupied the defeated Confederate states, and the rights of freed slaves were protected by force. As you can imagine, Southerners didn’t take too kindly to radical Reconstruction, and as soon as federal troops were removed in 1877, the South began “normalising” race relations, i.e. putting Blacks “back in their place.” This is what underpinned the policy of segregation which was employed by many southern (and northern) states. Segregation, specifically the principle of “separate but equal,” was upheld by the US Supreme Court in the 1896 Plessy v. Ferguson case, and deemed not to be in violation of the 14th Amendment. This decision would be reversed half a century later, in 1954, with the Brown v. Board of Education decision, which essentially said that separate could never be truly equal.

Similarly, states were able to work within the confines of the 15th Amendment to disenfranchise southern Blacks. Some of the tactics included literacy tests, poll taxes and, in some cases, outright intimidation or violence. The Voting Rights Act of 1965 specifically prohibited jurisdictions from changing voting regulations in any way that might result in discrimination against minority voters. Moreover, certain jurisdictions with a history of voter discrimination had to seek special approval from the US Attorney General before they could make any changes to the way they administered elections. There were also a number of northern districts that were subject to this special provision, which was called “preclearance.” I use the past tense because the preclearance requirement was recently shot down by the US Supreme Court. The rationale was basically that the provision had been so successful in preventing the implementation of discriminatory election practices that it was no longer required. In her scathing dissenting opinion, Justice Ruth Bader Ginsburg characterised the “sad irony” of throwing out an effective piece of legislation thusly:

Throwing out preclearance when it has worked and is continuing to work to stop discriminatory changes is like throwing away your umbrella in a rainstorm because you are not getting wet.

Forty-five years ago, the Apollo 11 lunar module landed on the moon’s surface and Neil Armstrong ventured forth to flub the immortal words, “one small step for [a] man…”

John F. Kennedy had declared in 1961 that man would walk on the moon by the end of the decade. The American mission to land on the moon was highly publicised, just as the Soviets lauded every development in their own space program. The space race was a manifestation of Cold War rivalries, and was exploited by both powers for propaganda purposes.

But it was also really cool. Moreover, the space race served as a catalyst for a technological revolution that formed the foundation of the information age we live in today.

But there were also some highly secret, and arguably more nefarious, dimensions to the US and Soviet efforts to land on the moon. The National Security Archive has recently compiled a collection of declassified documents which give some startling insights into the deliberations among US policy makers and the national security establishment concerning the potential militarisation of the moon.

There are a number of US Army and Air Force studies from 1951–1964, such as Project Horizon and the LUNEX (lunar expedition) Plan, which review the possibility of using the moon as a military base for both surveillance and a “Lunar Based Earth Bombardment System.”

In June 1959, the DoD conducted a study on the implications of detonating a nuclear device on or in the vicinity of the moon.

There is also a declassified report detailing the efforts of intelligence operatives to “borrow” a Soviet space capsule during an exhibition tour and return it before Soviet authorities were any the wiser. This report actually reads like a heist movie. You can tell the writer is very proud of himself.

There are also a number of intelligence assessments of the Soviet Luna program, including a 1963 CIA estimate of Soviet intentions with regards to a manned moon landing.

The briefing book is edited by National Security Archive senior fellow Jeffrey T. Richelson and features a useful bibliographical essay.