When Prescriptivists Overprescribe

Robert Lane Greene 4:43 PM

Welcome to another round of the Language Wars. By now we know the battle lines: As a “descriptivist,” I try to describe language as it is used. As a “prescriptivist,” you focus on how language should be used. If we each stood at one of the two extremes, I would open fire by saying that you preach stodgy nonrules that most people don’t obey, and that people like you don’t understand that language must grow and change. You would then call me a permissivist who ignores the fact that people can use language incompetently or well, and that people want to write and speak well.

But I believe that we’re both reasonable moderates, and have something more interesting to say than that old pantomime. Your excellent guide, “Garner’s Modern American Usage,” shows you to be, in your words, a “descriptive prescriber.” You give not just “right” or “wrong” rulings on usage, but often a 1-5 score, in which a given usage may be a 1 (definitely a mistake), 3 (common, but …) or 5 (perfectly acceptable). This notion of correctness as a scale, not a binary state, makes you different from many prescriptivists.

For my part, I glory in the real-world mess of dialects and slang, and think that some popular prescriptivists have imposed some bogus nonrules on too many schoolchildren. But as I have written before: “There is a set of standard conventions everyone needs for formal writing and speaking. Except under unusual circumstances, you should use the grammar and vocabulary of standard written English for these purposes.”

We still disagree both on some points of usage and on the underlying sources of authority. The usage books of the past hundred years, written by prescriptivists, very often prescribe rules that I don’t believe are part of standard English.

Where they insist “hopefully” should not be used to mean “I hope,” you differ with them. So why does the campaign against this “error” live on? Probably in large part because a few writers — E. B. White, half of Strunk and White, among them — thought that using “hopefully” in this way is a mistake. Strunk and White’s “Elements of Style,” one of the most popular usage books of all time, is short and pithy and by and large a good guide to good writing. But where Strunk and White went wrong was in prescribing from their own intuitions rather than — what else am I going to say? — accurately describing standard English. Sometimes you join them.

For example, to pick on Strunk and White again, they prescribe a rule on “which” and “that” to introduce relative clauses: “which” must introduce a “nonrestrictive” relative clause (a mere extra bit of information); only “that” can introduce a “restrictive” clause (a crucial bit of definition). You agree with White that “The lawn mower which is broken is in the garage” should have “that,” not “which.” However, even White doesn’t agree with White. As the linguist Geoff Pullum noticed, White used “which” in the “wrong” way in his essay “Death of a Pig”: “the premature expiration of a pig is, I soon discovered, a departure which the community marks solemnly on its calendar.” White would probably say he slipped. I’d console him: no, he didn’t. It’s a fine sentence from a fine American writer.

British English, including in my publication, The Economist, mostly ignores this rule, as you acknowledge. It was still common enough a century ago that the great H. W. Fowler, who preferred your rule, conceded that “The relations between that, who & which, have come to us from our forefathers as an odd jumble.” White even broke the rule in his own prose, probably unconsciously.

The bottom line for me would be this: You’re free to prefer different relative pronouns for different kinds of clauses, on the principle of “one word for one function, wherever possible.” I just wouldn’t use the word “error” in this case. The history of English usage tells us that the restrictive “which” is at the very heart of the language. Even the King James translators gave us “Our Father which art in heaven.” (In the 18th century, Bishop Robert Lowth, a godfather of prescriptivists, sought to correct this to “who art in heaven,” but that’s a topic for another day.)

The Labels Are Blurring

Bryan A. Garner

It was telling, I think, that when we began talking about this exchange, you referred to me as a “successful descriptivist” — until I suggested that you re-examine your wording.

The labels “prescriptivist” and “descriptivist” are increasingly unhelpful. One could defensibly call me a descriptivist. I just describe something that dogmatic egalitarians don’t want described: the linguistic choices of a fully informed, highly literate but never uptight user of language. It’s a rational construct — rather like the law’s “reasonable person” — and a highly useful one at that. The moment one says, “If you want to be such a person — a fully informed, highly literate but never uptight user of language — then here’s how to do it,” one is prescribing.

But that’s all that the reputable usage experts were ever doing. I count among these writers H. W. Fowler, Eric Partridge, Margaret Nicholson, Theodore M. Bernstein, Ernest Gowers and Wilson Follett. I’d even add William Strunk and E. B. White. All of them were extraordinarily well read, sensitive to language, alert to nuance, versed in literary history and highly observant. They were describing the ideal writer or speaker, and they did it well — not infallibly, but well.

Over the past three decades, linguists have become accustomed to using “prescriptivist” as a snarl-word, essentially equivalent to “linguistic ignoramus.” The positions attributed to prescriptivists, even in your own work, almost never align with positions taken by Fowler, Partridge, Nicholson, Bernstein, Gowers, Follett or me. Instead, this “prescriptivist” is supposed to be someone who forbids sentence-starting conjunctions and sentence-ending prepositions. Yet no reputable prescriptivist — not even the 18th-century grammarians — took such a position.

You’d object, I assume, if I were to define descriptivists as quantitative social scientists with no interest in literary style who nevertheless study language, reporting all findings in maladroit, leaden prose, fallaciously insisting, through a misguided relativism, that all forms of language are equal and berating anyone who dares to say that the nonstandard use of a word or phrase is “incorrect.”

The fact that this definition doesn’t fit you and many other modern writers on linguistics merely shows that descriptivists have moderated the indefensible positions they once took. The linguists have switched their position — without, of course, acknowledging that this is what they’ve done.

It sounds like wishful thinking when you say that “no real-world descriptivist” still accepts the dogma that a native speaker can’t make a mistake:

· “In language, what is used is right — and has to be.” (Ellsworth Barnard, 1979)

· “We believe, as do most linguists, that native speakers do not make mistakes.” (Peter Trudgill & Lars-Gunnar Andersson, 1990)

· “When we consider variation in language, we must give up the idea of errors.” (Donna Jo Napoli, 2003)

This is a thoroughly wrong-headed dogma that took many years to debunk — and still it persists. Let me reiterate that I am not alone in seeing this deep-rooted weed: “During the period of American structuralism a myth became well established that the native speaker cannot make a mistake” (Charles A. Ferguson, 1984).

The fact that you and other linguists are now embracing the prescriptive tradition is cause for celebration. Nowhere is the flip-flop more apparent than in the work of Steven Pinker. In “The Language Instinct” in 1994, Pinker argued the “no native speaker can make a mistake” position with a bizarre metaphor: “To a linguist or psychologist, language is like the song of the humpback whale.… Isn’t the song of the humpback whale whatever the humpback whale decides to sing?” A few pages later in that bestselling book, Pinker referred to linguistic guidance about standard English as surviving “by the same dynamic that perpetuates ritual genital mutilations.”

Inflammatory words. Guidance about good English gets equated with genital mutilations. But now, in this topsy-turvy world of ours, the same Steven Pinker who once likened prescriptive rules to genital mutilations has been newly appointed chairman of the usage panel of the American Heritage Dictionary. He is now a guardian of the language. At least he has modified his views, tacitly acknowledging the criticism that David Foster Wallace and I and others laid at his doorstep.

His stance today? As usage-panel chairman, Pinker now says that it is “well worth preserving” the standard, traditional uses of “enervate,” “flaunt,” “fortuitous,” “fulsome,” “reticent,” and “untenable.” Bully for him. He says it’s “almost a miracle” that we continue to distinguish between “affect” and “effect.”

Rarely have I seen a more agreeable intellectual about-face. But of course he doesn’t acknowledge that he now takes a position that reputable prescriptivists have taken for over a century.

You, Lane, got into the linguistic game late enough to join the wave of descriptivists flocking to the position of enlightened prescriptivists. But in your book, “You Are What You Speak,” you tendentiously call prescriptivists “language cranks,” “oddballs,” “declinists,” “self-appointed language guardians,” and “scolds” who habitually fly into “spittle-flecked fury.” A little of that stuff is good-natured fun, but the condescending haughtiness is unrelenting.

As for “that” and “which,” you’re simply disagreeing with my description of how an ideal, fully informed speaker or writer of American English uses these relative pronouns. I can live with that disagreement, but I stand by my words.

Further, as a descriptive matter, you are quite wrong to call “restrictive which” part of the “heart of our language,” and the King James passage you cite isn’t at all pertinent, since it’s nonrestrictive — as you’d surely notice if you took a moment to analyze it.

The real point is this: We could go a long way toward reconciling the language wars if linguists and writers like you would stop demonizing all prescriptivists and start acknowledging that the reputable ones have always tried to base their guidance on sound descriptions.

Rules and Nonrules

Robert Lane Greene

Thanks for an engaging response. There’s so much in it that I’ll work from the specific to the general, because one example can sometimes illuminate a lot. I hope readers less interested in subordinate clauses than we are will bear with this point for a moment. It widens out, I promise.

You think I misunderstood the clause “which art in heaven” in the Lord’s Prayer. That clause begins with “which,” and you hold that such clauses should add only extra information about a referent already identified. In this case you might repunctuate it: “Our Father (which art in heaven), hallowed be thy name.”

But that interpretation isn’t the right one. The writer of Matthew was in fact obsessed with the opposition between heaven and earth, so the phrase “father which is in heaven” appears 14 times in that book, and only once elsewhere in the New Testament. Matthew’s Jesus even tells his disciples at one point (23:9) “to call no man on earth your father” and to serve only their heavenly father. This is why “father which is in heaven” appears without commas so many times in the King James translation of Matthew: Matthew’s Jesus is constantly reminding the reader which father he is talking about: not dad back at home, but a new kind of father. If you think this clause is “nonrestrictive,” try bracketing off all 14 instances of “father which is in heaven” with commas: “father, which is in heaven, …” The results are often nonsensical.

Which makes the point: in 1611, a committee of experts in fine English writing thought “which” could be used as a restrictive relative pronoun. If there were a native English “rule” against this, one of those scholars would surely have pointed it out. None did, and so the scholars agreed: “which” can be restrictive. Great English writers went on for centuries using the restrictive “which.”

Around the end of the 19th century, though, a few usage commentators decided to “correct” this, going against the actual usage up to that point. They thought it was best to reserve “which” for nonrestrictive clauses only. The best-known proponent of this rule in that era was the great Fowler, as mentioned. But he didn’t succeed in establishing it, either. It was not until E. B. White repeated his old teacher Will Strunk’s “which”/“that” rule that it really took hold in American usage discussion. Fifty years later still, you (like many other commentators, and the Microsoft Word software I’m typing this on) recommend the “rule.”

For those readers who have stuck with me, here is the point: the rule has no root in great English usage. It appears to have appealed to a few usage-book writers, among them Fowler, Strunk, White and you, for its logical simplicity: one relative pronoun for this role, another for that role. But it is simply not what great writers have consistently done, now or ever.

What do we do when what we want the rule to be conflicts with what great writers actually do? I submit a meta-rule: When a proposed rule and actual usage conflict, the proposed rule is false, and actual usage should be our guide.

This takes some unpacking. Whose usage, in particular, should constitute our rules? But this is not so hard: when we’re describing standard edited written English, we look at standard edited written English to derive the rules. If a usage appears only very rarely, and is widely condemned, we call it a mistake. If it appears again and again from the pens of great writers and is printed after oversight by professional editors, the usage must be accepted.

Sometimes, a usage will spread that is new, illogical and strikes commentators as tasteless. But if, over time, it becomes widespread among a critical mass of good writers and is accepted by many good editors, we must acknowledge a new rule. We must be descriptivists, in other words.

And this is exactly what you are. You tell people which usages they should prefer, but when a battle has been lost over several decades, you call it lost and suggest they move on. The label of “descriptive prescriber” is one you wear proudly, and you should.

So why are academic linguists, and others like me, so down on “prescriptivism”? Because of a curious asymmetry. Systematic description of actual language is mostly undertaken by academics, a relatively small group. But the masses engage in prescriptivism. “The Elements of Style” is not the only language-usage book to sell in the millions; so has Lynne Truss’s screed “Eats, Shoots and Leaves.” If you do a radio show on language (I know you’re a veteran of many more than I), no matter what your original topic, you’ll get call after call from people asking you to rule on their pet peeves. Split an infinitive in print and you will get angry letter after letter. In fact, the style guides of The Economist and The New York Times say that split infinitives are fine, but should be avoided because they annoy many readers.

You say that for a century the best prescriptivists have dismissed nonrules like “don’t begin a sentence with a conjunction,” “don’t split an infinitive” or “don’t end a sentence in a preposition.” But all three nonrules are incredibly widespread; all of them were enforced by college professors of mine in the mid-1990s.

Facing these prescriptivist masses, linguists and descriptivists are on the back foot in many ways. They defend unfashionable propositions like the idea that African-American vernacular English is logical, expressive and grammatical. They work with a vocabulary (“determiner,” “x-bar,” “right-node raising”) that turns off even the best lay grammarians. And then they must defend their work against two sets of critics: the popular, often incorrect prescriptivist masses and the well-read, reasoned prescriptivists. In books aimed at a mass audience, like the 1994 text you mention, of course Steve Pinker aims at debunking mass prescriptivism. But it didn’t take Pinker a decade to convert to the idea of rules and correct English; in 1999, in “Words and Rules,” Pinker wrote about how children learn to use irregular verbs correctly.

Pinker saw no conflict in being a descriptivist and speaking of “correct” grammar. I consider myself a “prescriptive descriptivist,” and have no qualms about the word “error.” Even the “no such thing as an error” linguists whom you cite ring-fence their statements with qualifiers like “for the most part” (Trudgill and Andersson, 1990). They mean that, when expressing themselves as they intend to (not hurried, tired, distracted or drinking), native speakers do not make mistakes. Instead, they would say that those speakers constitute their own idiolects (individual ways of speaking); when their speech patterns line up, they constitute stable dialects, and when enough dialects overlap, they constitute languages. I would never say “native speakers can’t make an error,” but I do see what they’re aiming for: a correction of the centuries-old view that error is everywhere because most people are ignorant.

That is why I sometimes use language like “cranks,” “curmudgeons” and “sticklers” to describe many prescriptivists. Descriptivists are not only fighting their own intellectual battles but also responding to centuries of peeving. Cicero, who thought improper Latin was “disgraceful,” was quoted by Robert Lowth (the bishop who wanted to “correct” the Lord’s Prayer) on the title page of his influential 1762 English grammar. Jonathan Swift called new pronunciations “barbarous.” The thoughtful Fowler occasionally stooped to calling usages he disliked “ignorant.” Lynne Truss jokes that people who misuse apostrophes should be “struck by lightning, hacked up on the spot and buried in an unmarked grave.” Simon Heffer, a more serious recent English prescriptivist, likes to throw around “illiterate.” If you still think I’m overstating the peevish streak within prescriptivism, read the comments on any online article about grammar and usage. I wish you, Bernstein, Partridge, Fowler and the other intellectuals were fully representative of prescriptivism. I truly do. But this doesn’t seem to be the case.

Finally, when descriptivists fight back, we also do so on behalf of others: black Americans (whose distinctive dialect has long been sneered at by whites), Southern whites (like my Dad, who used “might could,” “ain’t” and “y’all”), Eliza Doolittle (the poor thing was no fool) and many who were just unlucky not to get a great education. When we see prescriptivists call such people “ignorant” and “illiterate,” it can set the blood to boiling. I wish all commentators saw the world on a scale like yours and could acknowledge “nonstandard, but rule-bound, dialect.” But most do not.

For too long, the so-called descriptivists and prescriptivists have talked past each other. I hope this conversation helps narrow the gap. I hereby promise, as you ask, to “stop demonizing all prescriptivists and start acknowledging that the reputable ones have always tried to base their guidance on sound descriptions.” This should come naturally: I never demonized “all” prescriptivists, and I praise Fowler every chance I get.

I hope that you and other arbiters of standard English will publicly take on the mass prescriptivists and nonrules with the same verve and vigor with which you take on real solecisms and slip-ups.

Some Biases Are Unfounded, and Some Aren’t

Bryan A. Garner

Maybe we’re getting somewhere. If so, it’d be great if future writers were to take account of what we’re saying here. For many decades now, the needle in the prescriptive/descriptive long-playing record has been stuck on a scratch. (I hope that metaphor is still comprehensible to our readers.)

The only thing that has kept me from being a “mass prescriptivist” is that there haven’t been more people buying “Garner’s Modern American Usage.” For my work in that book, a writer in The Economist recently called me a “highbrow prescriptivist.” But the sad fact is that there’s little call for usage guidance, whether highbrow or lowbrow.

People cling to their uninformed linguistic prejudices. I’ve countered them in print, at length, in such entries as the “Superstitions” essay in “Garner’s Modern American Usage.” Let us both continue to smite ignorance.

You should acknowledge, however, that there’s often good reason for peeves. Some language is indeed “disgraceful” (Cicero’s word), “barbarous” (Swift’s), “ignorant” (Fowler’s) and “illiterate” (Heffer’s) — as judged against educated speech. (Some readers will suspect that the phrase “educated speech” is illogical — until they consult “Hypallage” in a dictionary or usage guide.) I avoid the Lynne Truss school of supercharged, hyperbolic sensationalism — partly because it’s lowbrow and partly because the sensationalists themselves are typically ill-informed. (See “Garner on Language and Writing,” pages 637–47.)

So you and I are getting closer together. But we’re not there yet. Your “meta-rule” is flawed. You say: “When a proposed rule and actual usage conflict, the proposed rule is false, and actual usage should be our guide.” You can always find actual usage that contradicts any proposed linguistic ruling — and actual usage that contradicts other actual usages. The big problem with traditional descriptivism is that any evidence validates the usage. But descriptivists like you are (rightly) retreating from that position.

The better view, I submit, is the one I set forth in “Garner’s Modern American Usage”:

In the end, the actual usage of educated speakers and writers is the overarching criterion for correctness. But while actual usage can trump the other facts, it isn’t the only consideration.

There are several other factors to be accounted for, like the degree to which distinctions are being blurred, the age of an error that is becoming prevalent, and the extent to which a questionable word or phrase defies logic. For example, in my classification, “could care less” in place of the correct phrase “couldn’t care less” remains a stage-3 misusage.

If descriptivists believe that any linguistic evidence validates usage, then we must not be descriptivists. Hardly anyone wants to be a nonjudgmental collector of evidence. It’s far more interesting and valuable to assemble the evidence and then to draw conclusions from it. Judgments. Rulings. To the extent that “the masses” want such reasoning — as one could only wish — it’s because they want to use language effectively.

As for “that” vs. “which,” I’ll never hear the Lord’s Prayer again in quite the same way — given your convincing argument. But my basic point stands: In American English from circa 1930 on, “that” has been overwhelmingly restrictive and “which” overwhelmingly nonrestrictive. Strunk, White and other guidebook writers have good reasons for their recommendation to keep them distinct — and the actual practice of edited American English bears this out.

My most recent writing on “that” vs. “which” appears in “Reading Law: The Interpretation of Legal Texts.” My co-author, Justice Antonin Scalia, softened my words there because he sometimes (when I’m not around) uses “which” restrictively. When I tell him that’s a literary failing, he harrumphs. Fortunately, he has allowed me, in both our books, to change all his restrictive “whiches” to “thats.” It makes the style so much better.

I concede, though, that “mistake” is too strong a word for a “that”/“which” blemish. I maintain, however, that the practice in the best-edited American English is to confine “which,” as a relative pronoun, to either nonrestrictive uses or uses that follow prepositions (“by which,” “for which,” “in which,” and the like). I’m happy to live in disagreement with you on that tiny point — given that we have agreed on so much else.

Please, Lane, get the folks on your side of the fence to do something about that stuck needle. It’s hard on the ears and bad for one’s spirit to hear the same old epithets over and over. It’s time to move forward to a new track.

Once before I proposed a truce in the Language Wars, but only one linguist accepted my terms. If I had the power, I’d now declare the Language Wars officially at an end. It’s 3:43 p.m. Central Time on Sept. 27, 2012. The fighting must stop.

An earlier version of this article misstated the grammatical role of “which” and “that.” They were being discussed as relative pronouns, not subordinating conjunctions.

Introduction

Here’s a chilling thought: What if our English teachers were wrong? Maybe not about everything, but about a few memorable lessons. So many millions of writers have needlessly contorted their prose to avoid ending a sentence with a preposition. So many well-intentioned editors have fought to change “a historic” to “an historic.” If it turns out that the guidelines we cling to (“to which we cling”?) are nonsense, maybe the texters have the right idea when they throw out the old rules and start fresh.

But if you aren’t ready to give up — if the “flaunt” in that headline raised your blood pressure — then how can you tell the difference between a sound rule of English and a made-up shibboleth? Where do good rules come from, and how do bad ones catch on?

Room for Debate invited two authors to answer and argue: the journalist Robert Lane Greene and the usage expert Bryan A. Garner. (Their responses, conforming to “The New York Times Manual of Style and Usage,” may not represent their positions on style issues like hyphenation and serial commas.)