On January 30, 1989, an article appeared in the student-run Stanford Daily under the headline “Racial slurs cause University to shut down bulletin board.”
The bulletin board in question, rec.humor.funny, was one of hundreds of
so-called newsgroups—glorified mass e-mails organized around specific
interests—that streamed onto the school’s computer terminals via Usenet,
an early precursor to today’s Internet forums. Rec.humor.funny was
conceived as a place to share jokes, many of them crude and off-color, and one
in particular, the Daily explained, had caught the eye of Stanford’s
nascent I.T. department. Though decidedly stale and not nearly as
offensive as some of the other material in the newsgroup, it relied on
ethnic stereotypes: “A Jew and a Scotsman have dinner. At the end of the
dinner the Scotsman is heard to say, ‘I’ll pay.’ The newspaper headline
next morning says, ‘Jewish ventriloquist found dead in alley.’ ” Upon
reading those words, a student at M.I.T. had complained, and the
attention had led a Canadian university to stop hosting rec.humor.funny.
Eventually—most likely thanks to Usenet—word reached Stanford.

I.T. administrators soon decided to block the group. “Jokes based on
such stereotypes perpetuate racism, sexism, and intolerance,” they wrote
in a note that appeared on terminals campus-wide. “They undermine an
important University purpose: our collective search for a better way,
for a truly pluralistic community in which every person is acknowledged as an individual, not a caricature.” Carefully stressing the value of
freedom of expression, the note nevertheless concluded that “our respect
for the dignity and rights of every individual” was more important. This
was a notably early attempt to clean up the Internet—occurring at
Stanford, no less, the epicenter of Silicon Valley—and the reactions to
it established a pattern of toxic rhetoric and hypocritical
argumentation that, nearly three decades later, remains discouragingly
familiar.

Even before the I.T. department announced its decision, the atmosphere
at Stanford had been politically fraught. In many ways, it resembled
America in 2017. Women and minority students, spurred on by the Reverend
Jesse Jackson’s “rainbow coalition,” had been demanding new, more
inclusive curriculum requirements and greater diversity, while a
reactionary movement had sprung up among their conservative peers. One
of the leaders of the right-wing insurrection was Peter Thiel,
who would go on to co-found PayPal and the software company Palantir and
make millions of dollars as an early investor in Facebook. At the time,
he was an undergraduate philosophy major and the editor of the Stanford
Review, a sort of collegiate Breitbart News for the late eighties,
dedicated to bemoaning what it saw as political correctness run amok.
The Review, with Thiel at its helm, yearned to make Stanford great
again. As he observed in “The Diversity Myth,” his 1995 polemic
co-written with David Sacks, another Review editor who later became a
Silicon Valley bigwig, “Multiculturalism caused Stanford to resemble
less a great university than a Third World country, with corrupt
ideologues and unhappy underlings.”

Banning rec.humor.funny was the Stanford I.T. team’s attempt to calm
campus nerves; only a few months earlier, there had been a polarizing
case of two white freshmen drawing racist graffiti on a poster of
Beethoven. But the backlash was immediate and extreme, and it went well
beyond Thiel. When the team decided to act, it had sought technical
advice from a graduate student, who, quite predictably, told one of
the eminences of the computer-science department, John McCarthy, what was
going on. Before the ban had taken effect, McCarthy, a pioneer in
programming and artificial intelligence, spearheaded a free-speech
crusade. He took to the department’s own electronic bulletin board to
make the case against what he saw as censorship, strengthened by his
conviction that computers were destined to be crucial to how we lived.
“Newsgroups are a new communication medium just as printed books were in
the 15th century,” McCarthy wrote. “I believe they are one step towards
universal access through everyone’s computer terminal to the whole of
world literature.” In what must have been one of the first online
petitions, McCarthy gathered a hundred digital signatures of support
from his colleagues.

Throughout this campaign, McCarthy never acknowledged the racial
tensions that had so clearly informed the university’s actions. Rather,
he offered an engineer’s systemic analysis of how information ought to
be distributed, without regard for cultural or political context—a
species of reasoning that, decades later, has become ingrained in
Silicon Valley. But context intruded on the bulletin board anyway,
through the posts of William Augustus Brown, Jr., an African-American
medical student who was participating in the department’s research on
using A.I. to treat patients. Brown was the lone voice among Stanford’s
computer scientists to support the ban.

“Even if I can’t force the presentation of other cultures—and I DO NOT
assume this is impossible—I will ALWAYS protest the stereotyping of my
culture,” he wrote. “For once the University acted with some modicum of
maturity. I sincerely hope it maintains this status by refusing to
reverse its decision.” Brown framed the debate in terms quite different
from McCarthy’s. “Whether disguised as free speech or simply stated as
racism or sexism, such humor IS hurtful,” he wrote. “It is a
University’s right and RESPONSIBILITY to minimize such inflammatory
correspondence in PUBLIC telecommunications.”

McCarthy never responded, directly or indirectly, to Brown, but others
in his department did. Their rhetoric offers an early glimpse of how
alternative opinions would be shouted down or patronized online from
that point onward. (Terms such as “social-justice warrior” and
“whitesplaining” had yet to be coined, but they would have been right at
home.) One graduate student replied to Brown, “I am a white male, and I
have never been offended by white male jokes. Either they are so
off-base that they are meaningless, or, by having some basis in fact
(but being highly exaggerated) they are quite funny. I feel that the
ability to laugh at oneself is part of being a mature, comfortable human
being.” A second grad student patiently explained that Brown didn’t
understand his own best interests. “The problem is that censorship costs
more than the disease you’re trying to cure,” the student wrote. “If you
really believe in the conspiracy, I’m surprised that you want to give
‘them’ tools to implement their goals.”

The reactions against Brown were so uniformly critical that he chose a
different tack, opening up to his fellow-students about the difficulties
of being a black man at Stanford. “Having received most of my
pre-professional training in the Black American educational system, I
have a different outlook than most students,” Brown wrote. “I certainly
didn’t expect the kind of close, warm relationships I developed at
Hampton University, but I was not prepared for the antagonism.” He
continued, “I don’t really mind the isolation—I can still deal, and it
gives me PLENTY of time to study. But I really don’t like the cruel
humor. Once you come down from the high-flying ideals, it boils down to
someone insisting on his right to be cruel to someone. That is a right
he/she has, but NOT in ALL media.”

Again, no one responded directly. The closest thing to a defense of
Brown came from another grad student, who said that, while he was
opposed to the rec.humor.funny ban, he worried that many of his peers
believed that “minority groups complain so much really because they like
the attention they get in the media.” He added that people rarely “try
to understand the complaints from the minority point of view.” Then he
ended his post to the bulletin board by asking, “Do people feel that the
environment at Stanford has improved for minority students? Worsened?
Who cares?” Judging from the lack of replies, “Who cares?” carried the day.

McCarthy wasn’t persuadable on the matter, and certainly not through
personal testimony. To his way of thinking, there was no such thing as
inappropriate tech or inappropriate speech. Besides, who could be
trusted to decide? One post, which McCarthy endorsed, suggested that
letting I.T. administrators determine what belonged on the computers at
Stanford was like giving janitors at the library the right to pick the
books.

McCarthy’s colleagues instinctively shared his anti-authoritarian
perspective; they voted unanimously to oppose the removal of
rec.humor.funny from Stanford’s terminals. The students were nearly as
committed; a confidential e-mail poll found a hundred and twenty-eight
against the ban and only four in favor. McCarthy was soon able to win
over the entire university by enlisting a powerful metaphor for the
digital age. Censoring a newsgroup, he explained to those who might not
be familiar with Usenet, was like pulling a book from circulation. Since
“Mein Kampf” was still on the library shelves, it was hard to imagine
how anything else merited removal. The terms were clear: either you
accepted offensive speech or you were in favor of destroying knowledge.
There was no middle ground, and thus no opportunity to introduce
reasonable regulations to insure civility online. In other words, here
was the outline for exactly our predicament today.

McCarthy, who died in 2011, considered his successful campaign against
Internet censorship the capstone to a distinguished career. As he
boasted to a crowd gathered for the fortieth anniversary of the Stanford
computer-science department, on March 21, 2006, his great victory had
been to make the school understand that “a faculty-member or student Web
page was his own property, as it were, and not the property of the
university.” At the time, almost as much as in 1989, McCarthy could
safely see this victory as untainted; the Internet still appeared to be
virgin territory for the public to frolic in. Facebook wouldn’t go
public for another six years. The verb “Google” had yet to enter the
Oxford English Dictionary. The first tweet had just been sent—the very same day, in fact.

Today, of course, hateful, enraging words are routinely foisted on the public by users of all three companies’
products, whether in individual tweets and Facebook posts or in flawed Google News algorithms.
Championing freedom of speech has become a business model in itself, a
cover for maximizing engagement and attracting ad revenue, with the
social damage mostly pushed aside for others to bear. When the Internet
was young, the reason to clean it up was basic human empathy—the idea
that one’s friends and neighbors, at home or on the other side of the
world, were worth respecting. In 2017, the reason is self-preservation:
American democracy is struggling to withstand the rampant, profit-based
manipulation of the public’s emotions and hatreds.

William Brown, who ended up leaving Stanford for Howard University
Medical School and is now the head of vascular surgery at the Naval
Medical Center in Portsmouth, Virginia, told me recently that he wishes
his fellow computer scientists had heeded his warnings. “Compassion and
equity and humanity matters more than your right to say whatever comes out of your mouth,” he said. “That environment sort of sparked the
attitude that yes, if you came from a refined enough background, you
could say whatever you wanted. Somehow the First Amendment was unlimited
and there was no accountability.” The problem, Brown added, remains all
too pervasive. “I see that attitude today,” he said. “It doesn’t matter
whether it’s Stanford or the alt-right.”

Parts of this essay were adapted from Noam Cohen’s book “The Know-It-Alls,”
which was released earlier this month by the New Press.