It’s a dark night; you’re in an unfamiliar city, slightly lost, but pretty sure you’ll know where you are if you just get to the next corner. The streets are quiet. A stranger steps out of the gloom in front of you, and announces that certain words don’t mean what you think they mean. They’re words that you use but have never really felt comfortable with, words that you use mostly because you’ve heard them in set phrases, words like plethora.

Plethora, you wonder, could it be I’m using it wrong? That niggling uncertainty kicks in, the same niggling uncertainty that’s pushed you to educate yourself all these years. It creeps further, darkening your mind. Have I been using words wrong? Your breath quickens — how many others have heard me say them before this stranger came up and told me I was wrong? Have I used one of them lately? Have I been judged? Your pulse races. Did I just say one? — is, is that why this stranger materialized to announce it was wrong?

The stranger says more words are being used wrong, by others, by you. These words are more common, common enough to be known but not common enough to be well-known: myriad, enormity. Oh God, you think, I’ve used those words in business writing! The uncertainty changes into certainty, certainty that you are wrong, and worse, that people know it. Important people know it. That’s why you haven’t been promoted, it’s why your friends were laughing that one time and didn’t say why. The stranger has you now. The stranger knows the dark spots on your soul. The stranger is almost touching you now, so close, so close. Your eyes meet. The stranger’s eyes widen; this is it, the final revelation. Do you dare listen? You can’t listen, you must listen:

“And you’re using allow wrong, too!”

At which point the spell is broken — because c’mon, you’re not using allow wrong. You’d definitely have noticed that. You push the stranger out of the way, and realize your hotel’s just on the next block.

In the unfamiliar city of the Internet, I encountered such a stranger: Niamh Kinsella, writer of the listicle “14 words you’ve been using incorrectly this whole time”. Kinsella argues that your usage doesn’t fit with the true definition of these words, by which she usually means an early, obsolete, or technical meaning of the word.

Her first objection is to plethora, which she defines as “negative word meaning a glut of fluid”. And so it was in the 1500s, when it entered the language as a medical term. This medical meaning persists in the present day, but additional figurative meanings branched off of it long ago — so long ago, in fact, that one of the meanings branched off, flourished for 200 years, and still had enough time to fade into obsolescence by now. The extant figurative meaning, the one that most everyone means when they use plethora, is antedated to 1835 by the Oxford English Dictionary, at which point it was usually a bad thing (“suffering under a plethora of capital”, the OED quotes). But by 1882 we see the modern neutral usage: “a perfect plethora of white and twine-colored thick muslin”.

The second objection is to myriad, and here Kinsella deviates by ignoring the early usage. She hectors: “It’s an adjective meaning countless and infinite. As it’s an adjective, it’s actually incorrect to say myriad of.” But in fact myriad entered English as a noun, either as a transliteration of the Greek term for “ten thousand”, or as an extension of that very large number to mean “an unspecified very large number” (both forms are antedated by the OED to the same 1555 work). The adjectival form doesn’t actually appear until two centuries later, in the 1700s. Both nominal and adjectival forms have been in use from their inception to the present day; claiming that one or the other is the only acceptable form is just silly.*

There’s no point in continuing this after the third objection, which is to using allow in cases that do not involve the explicit granting of permission. To give you an idea of what folly this is, think of replacements for allows in a supposedly objectionable sentence like “A functional smoke alarm allows me to sleep peacefully.” The first ones that come to my mind are lets, permits, gives me the ability, and enables. That’s the sign of a solid semantic shift; four of my top five phrasings of the sentence are all verbs of permission with the permission shifted to enablement. Kinsella herself has no beef with it when she isn’t aiming to object, judging by her lack of objection to an article headlined “Are we allowed optimism now?”.

This enablement usage isn’t new, either; the OED cites “His condition would not allow of his talking longer” from 1732. (Permit without permission is antedated even further back, to 1553.) This oughtn’t even to be up for debate; even if it were completely illogical — which, as an example of consistent semantic drift, it’s not — the fact that it is so standard in English means that it is, well, standard. It is part of English, and no amount of insisting otherwise makes a difference. It’s similar to the occasional objection I see to Aren’t I?: even if I agreed it didn’t make sense, virtually every (non-Scottish/Irish) English speaker uses it in place of amn’t I?, so it’s right. End of discussion.

Why do we fall for this over and over again? Why do we let people tell us what language is and isn’t based on assertions that never have any references (Kinsella cites no dictionaries) and rarely hold up to cursory investigation? I don’t know, but my guess is that it appeals to that universal mixture of insecurity and vanity that churns inside each of us.

We are convinced that we must be doing everything wrong, or — and perhaps worse — that we’re doing most things right but there’s some unexpected subset of things that we have no idea we’re doing wrong. So if someone tells us we’re wrong, especially if they candy-coat it by saying that it’s not our fault, that everyone’s wrong on this, well, we just assume that our insecurities were right — i.e., that we were wrong. But then, aware of this new secret knowledge, these 14 weird tricks of language use, our vanity kicks in. Now we get to be the ones to tell others they’re wrong. Knowing these shibboleths gives you the secret knowledge of the English Illuminati. Between our predisposition to believe we’re wrong, our desire to show others up by revealing they’re wrong, and our newfound membership in this elite brotherhood, what incentive do we have to find out that these rules are hogwash? All that comes out of skepticism is, well, this: me, sitting at my laptop, writing and rewriting while the sun creeps across a glorious sky on a beautiful day that I could have been spending on the patio of my favorite coffee shop, approaching my fellow patrons, dazzling them with my new conversation starter: “I bet you use plethora wrong. Allow me to explain.”

—

*: In fact, Kinsella undermines her own definition of “countless and infinite” in her supposedly correct example by using “countless and infinite” to describe the finite set of stars in the universe, so maybe she’s just in love with the sound of her own hectoring.

I’d presumed it’s trivial to show that good grammar can improve your chances of success — not that good grammar is an indication of ability, but merely that having good grammar skills lends an appearance of credibility and competence that may or may not be backed up with actual skills for the task at hand. I strongly suspect, for instance, that a resume written in accordance with the basic rules of English grammar will be more likely to bring its writer an interview, all else being equal. Rather like legacy status in an application to an Ivy League school — except with an at-least-tenuous link to ability — I’ve imagined it serves as a little bonus.*

But having recently seen a few ham-handed attempts at this yield results approximately as convincing as a child’s insistence that their imaginary friend was the one who knocked over the vase, I’m beginning to re-think my presumption.

For instance, I’ve recently found this terrible post and infographic from Grammarly that purports to show that — well, it’s a little hard to say, because they’ve managed to write 500-some words without ever having a clear thesis. The infographic reports the grammatical error rates for three pairs of competing companies, and juxtaposes this with corporate data on the three pairs, presumably to look for correlations between the two.

I believe their claim is that fewer grammar mistakes are made by more successful companies. That’s a pretty weak claim, seeing as it doesn’t even require causation. We’d see this pattern if greater success led to improved grammar, perhaps by having money to hire editors; we’d see it if better grammar increased the company’s performance; we’d see it if the two were caused by an unobserved third variable. That said, the study won’t even find evidence for this tepid claim, and perhaps that is why they carefully fail to make the claim explicit.

The post tells the reader that “major errors undermine the brand’s credibility” and that investors “may judge” them for it, but even these weak statements are watered down by the concluding paragraphs. This restraint from overstating their case is hardly laudable; it’s clear that the reader is intended to look at these numbers and colors, this subtle wrinkled-paper background on the infographic, and draw the conclusion that Grammarly has stopped short of: you need a (i.e., their) grammar checker or you will lose market share!**

The only testable claim in the infographic’s conclusion (“they must demonstrate accurate writing!”) isn’t borne out by the 1500 pixels preceding it.

It might not seem worth bothering with a breakdown of the bad science going on in this infographic. Alas, the results were uncritically echoed in a Forbes blog post, and the conclusions were only strengthened in the re-telling. So let’s look at exactly why this analysis fails to establish anything more than that people will see proof of their position in any inconclusive data.

Let’s start by looking at the data underpinning the experiment. The company took 400 (!) words from the most recent LinkedIn postings (!) of three (!) pairs (!) of competing multinational corporations. We’re not even looking at the equivalent of a single college admission essay from each company, in an age where companies are producing more publicly consumable text than ever before.

Not to mention, I looked at the LinkedIn posts from Coke, one of the companies tested. Nine of their last ten posts were, in their entirety: “The Coca-Cola Company is hiring: [position] in [location]”. The tenth was “Coke Studio makes stars out of singers in India [link]”. How do you assess grammaticality from such data?

Awesome Data, Great Jobs!

Well, let’s suppose the data is appropriate and see what results we get from it. Remember: the hypothesis is that lower error rates are correlated with higher corporate success (e.g., market share, revenue). Do we see that in the head-to-head comparisons?

The first comparison is between Coke and Pepsi. Pepsi has more errors than Coke, and, fitting the hypothesis, Coke has a higher market share! But Pepsi has higher revenues, as the infographic notes (and then dismisses because it doesn’t fit the narrative). So we start with inconclusive data.

The second comparison is between Google and Facebook. Google makes fewer errors and has higher corporate success. Let’s take this one at face value: evidence in favor.

The third comparison is between Ford and GM. Ford makes fewer errors but is worse on every financial metric than GM. “However, these numbers are close”, the infographic contends. Evidence against.

So we have three comparisons. In one, which company is more successful is ambiguous. The two “decisive” comparisons are split. The data is literally equal in favor and in opposition to the conclusion. It is insulting that anyone could present such an argument and ask someone to believe it. If a student handed this in as an assignment, I would fail them without hesitation.***
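To make the tally explicit, here is a minimal sketch in Python. The outcomes are coded as I read them off the infographic (the Coke/Pepsi split between market share and revenue is marked "ambiguous"); this is my own scoring, not Grammarly's data:

```python
# Each comparison: (pair, company with fewer errors, company more successful).
# "ambiguous" marks the Coke/Pepsi case, where market share and revenue disagree.
comparisons = [
    ("Coke vs. Pepsi", "Coke", "ambiguous"),
    ("Google vs. Facebook", "Google", "Google"),
    ("Ford vs. GM", "Ford", "GM"),
]

tally = {"for": 0, "against": 0, "inconclusive": 0}
for pair, fewer_errors, more_successful in comparisons:
    if more_successful == "ambiguous":
        tally["inconclusive"] += 1
    elif fewer_errors == more_successful:
        tally["for"] += 1       # fewer errors and greater success line up
    else:
        tally["against"] += 1   # fewer errors but less success

print(tally)  # {'for': 1, 'against': 1, 'inconclusive': 1}
```

One for, one against, one toss-up: exactly the null result described above.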

What’s richest about this to me is that the central conceit of this study is that potential consumers will judge poor grammar skills as indicative of poor capability as a company. I’ve never found convincing evidence that bad grammar is actually indicative of poor ability outside of writing; the construction crew that put together my house probably don’t know when whom can be used, but my house is a lot more stable than it would be if Lynne Truss and I were the ones cobbling it together. But for all those people out there saying that good grammar is indicative of good logic, this clearly runs counter to that claim. Grammarly’s showing itself incapable of making a reasoned argument or marshalling evidence to support a claim, yet their grammar is fine. How are poor logic skills not a more damning inability than poor grammar skills, especially when “poor grammar” often means mistakenly writing between you and I?

The Kyle Wienses out there will cluck their tongues and think “I would never hire someone with bad grammar”, without even thinking that they’ve unquestioningly swallowed far worse logic. Sure enough, the Forbes post generated exactly the comments you’d expect:

“I figuratively cringe whenever grammar worthy of decayed shower scum invades my reading; it makes you wonder just how careful the company is of other corporate aspects (oh, gee, I don’t know, say, quality as well)”

With comments like that, maybe these people are getting the company that best reflects them: superficial and supercilious, concerned more with window-dressing to appear intelligent than with actually behaving intelligently.

—
*: I, of course, don’t mean that being obsessive about different than or something is relevant, but rather higher-level things like subject-verb agreement or checking sentence structures.

**: Though Grammarly makes an automated grammar checker, it wasn’t used to assemble this data. Nor was it run on this data, so we don’t know if it would even provide a solution to help out these grammatically deficient brands.

***: I don’t mean to imply that this would be convincing if only the data were better and all three comparisons went the right way. There’s no statistical analysis, not even a whiff of it, and there’s no way you could convince me of any conclusion from this experiment as currently devised. But at least if the comparisons went the right way, I could understand jumping the gun and saying you’ve found evidence. As it is, it’s imagining a gun just to try to jump it.

People pop in fairly regularly to complain about “one of the only”, which I’m just really not that interested in. Usually the complaints are in response to my argument a few years ago that it was perfectly grammatical and interpretable (specifically rebutting Richard Lederer’s silly claim that only is equivalent to one and therefore is inappropriate for referring to multiple items). I haven’t gotten as many only=one complaints lately, but I’ve now received a new objection, presented as part of a comment by Derek Schmidt:

When [only] precedes a noun used in plural, it implies that there are no other similar items that belong to the list. “The only kinds of writing utensils on my desk are pencils and pens and highlighters.” […] But I have many of those pens, so if someone asked if they could borrow a pen, and I said, “No, that’s one of the only writing utensils on my desk!” that would be a little disingenuous and if someone was standing at my desk and saw the number of writing utensils, they would be baffled and think me a fool. Rightly so. Because they would understand it (logically, even) as meaning “that’s one of the few”, which is very false. So… “one of the only” means about as much as “one of them”.

To buttress his point, he referred me to a grammar column in the Oklahoman, which I never grow tired of noting was once called the “Worst Newspaper in America” by the Columbia Journalism Review. That was 14 years ago now, and I sometimes wonder if it is fair to keep bringing this up. Then I read Gene Owens’s grammar column in it and I wish the CJR had been harsher.*

“Now I can understand if he were the only English speaker or if he were only one of a few English speakers,” Jerry said, “but I don’t know how he could be one of the only English speakers.” That’s easy, Jerry. If he was any English speaker at all, he was one of the only English speakers in the area. In fact, he was one of the only English speakers in the world. […] The TV commentator probably meant “one of the few English speakers in the area.” But even if the colonel was “one of the many English speakers in the area,” he still was one of the only ones.

It continues in this vein for a while, but his point seems to be approximately the same as Schmidt’s, boiling down to the following statements:

1. It is grammatical to say “one of the only”.
2. It is used regularly in place of “one of the few”.
3. Examining it literally, one could say “one of the only” to describe something that there are many of.
4. This would be a strange situation to use it in.
5. Therefore “one of the only” oughtn’t be used in the case where it wouldn’t be strange.

Up till the last sentence, I agree. In fact, I don’t think any of those points are controversial.** But the last sentence is a big leap, and one that we demonstrably don’t make in language. Would it be silly of me to say:

(1) I have three hairs on my head.

Thankfully I’m still young and hirsute enough to have many more than three hairs on my head, and I think we’d all agree it would be a silly statement. But, parsing it literally, it is true: I do have three hairs on my head, though in addition I have another hundred thousand. In case this is such a weird setting that you don’t agree it’s literally true, here’s another example:

(2) Some of the tomatoes I purchased are red.

If I show you the bin of cherry tomatoes I just bought, and they’re all red, am I lying? No, not literally. But I am being pragmatically inappropriate — you expect “some” to mean “some but not all”, just as you expect “three” to generally mean “three and no more”. These are examples of what’s known as a scalar implicature: we expect people to use the most restrictive form available (given their knowledge of the world), even though less restrictive forms may be consistent too.***
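To make the implicature concrete, here is a toy sketch (my own illustration, not a real pragmatic model): on the quantity scale ⟨some, most, all⟩, a cooperative speaker picks the strongest term that is literally true, which is why hearing “some” implicates “not all”:

```python
# Toy Gricean speaker: choose the strongest literally-true term
# on the quantity scale <some, most, all>.
def describe(red: int, total: int) -> str:
    if red == total:
        return "all"
    if red > total / 2:
        return "most"
    if red > 0:
        return "some"
    return "none"

# All 20 cherry tomatoes are red: "some" would be literally true,
# but a cooperative speaker says "all".
print(describe(20, 20))  # all
print(describe(3, 20))   # some
```

The listener, assuming the speaker follows this rule, reasons backwards: if “all” had been true, the speaker would have said so, so “some” must mean “some but not all”.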

To return to Schmidt’s example, it may be truthful but absurd to protest that one of 30 pens on my desk is “one of my only pens”. But just because the truth value is the same when I protest that one of two pens on my desk is “one of my only pens”, that doesn’t mean the pragmatic appropriateness is the same. Upon hearing “one of the only”, the listener knows, having never really heard this used to mean “one of many”, that pragmatically it will mean “one of the (relatively) few”.

There is, perhaps, nothing in the semantics to block its other meanings, but no one ever uses it as such, just as no one ever says they have three hairs when they have thousands. This is a strong constraint on the construction, one that people on both sides of the argument can agree on. I guess the difference is whether you view this usage restriction as evidence of people’s implicit linguistic knowledge (as I do) or as evidence of people failing to understand their native language (as Schmidt & Owens do).

Finally, and now I’m really splitting hairs, I’m not convinced that “one of the only” can always be replaced by “one of the few”, as the literalists suggest. If we’re being very literal, at what point do we have to switch off of few? I wouldn’t have a problem with saying “one of the only places where you can buy Cherikee Red”, even if there are hundreds of such stores, because relative to the number of stores that don’t sell it, they’re few. But saying “one of the few” when there’s hundreds? It doesn’t bother me, but I’d think it’d be worse to a literalist than using “one of the only”, whose only problem is that it is too true.

Summary: If a sentence could theoretically be used to describe a situation but is never used to describe such a situation, that doesn’t mean that the sentence is inappropriate or ungrammatical. It means that people have strong pragmatic constraints blocking the usage, exactly the sort of thing that we need to be aware of in a complete understanding of a language.

—
*: I am being unfair. Owens’s column is at least imaginative, and has an entire town mythos built up over the course of his very short columns. But I never understand what grammatical point he’s trying to make in them, and as far as I can tell, I’d disagree with it if I did. As for the “worst newspaper” claim, this was largely a result of the ownership of the paper by the Gaylord family, who thankfully sold it in 2011, though the CJR notes it’s still not great.

**: Well, it might be pragmatically appropriate to use “one of the few” in cases where the number of objects is large in absolute number but small relative to the total, such as speaking about a subset of rocks on the beach or something. I’m not finding a clear example of this, but I don’t want to rule it out.

***: Scalar implicatures were first brought to my attention when one of my fellow grad students (now a post-doc at Yale), Kate Davidson, was investigating them in American Sign Language. Here’s an (I hope fairly accessible and interesting) example of her research in ASL scalar implicature.

If you believe the grammar doomsayers, the English subjunctive is dying out. But if this is the end of the grammatical world, I feel fine — and I say that even though I often mark the subjunctive myself.

The most talked about use of the subjunctive is in counterfactuals:

(1) Even if I were available, I’d still skip his party.

For many people, marking the subjunctive here is not required; either they never mark it, using the past indicative form was instead, or they (like me) sometimes mark it with were, and sometimes leave it unmarked with was. For this latter group, the choice often depends on the formality of the setting. I’m calling this “not marking” the subjunctive, rather than “not using” it, because it seems less like people making a choice between two moods for the verb and more like a choice between two orthographic/phonemic forms for it.

It’s similar to the alternation for many people (incl. me) of marking or not marking who(m) in the accusative case, discussed by Arnold Zwicky here and here, and Stan Carey here. That said, I believe that (at least some) people who never use were in (1) do not have a grammatical rule saying that counterfactuals trigger the past subjunctive, and I’m not worried about that either.

[Image caption: For being such a foolish war, World War I did generate some artistic propaganda.]

This blitheness about the subjunctive does not go unmourned. I recently found myself being Twitter-followed by someone whose account just corrects people who fail to use the subjunctive in sentences like (1).* And Philip Corbett, associate managing editor for standards at the New York Times, annually rants about people failing to mark the subjunctive. Consider one of Corbett’s calls to man the ramparts, which he begins by quoting, in its entirety, a 90-year-old letter complaining that the subjunctive must be saved from impending destruction.** Corbett continues:

“[…] despite my repeated efforts to rally support for [the subjunctive] the crisis has only grown. For those few still unaware of the stakes, here is a reminder from The Times’s stylebook”

What are the stakes? What would we lose without the subjunctive? Corbett cites sentences such as these:

The mayor wishes the commissioner were retiring this year.
If the commissioner were rich, she could retire.
If the bill were going to pass, Secretary Kuzu would know by now.

If these were the stakes, I’d ditch the subjunctive. Corbett points out that in each of these we’re referring to a counterfactual condition, which should trigger the subjunctive. But note that using the indicative/unmarked was doesn’t make that any less clear. There is nothing to be gained from using the subjunctive in these cases but a sense of superiority and formality. (Not that I’m against either of those.)

But here’s the weird thing: all this defense of the subjunctive, all these worries — they’re all only about the past subjunctive. And the past subjunctive is weird, because it’s only marked on be, and it’s just a matter of using were for singular as well as plural. For everyone worrying that this is some crucial distinction, please note these sentences where it is insouciantly the same as the indicative form:

(2a) The mayor wishes the commissioners retired last year.
(2b) If the commissioner wanted to, she could retire.
(2c) If the bills were going to pass, Sec. Kuzu would know by now.

If anything, the loss of past subjunctive were strikes me as regularization of English, the loss of the last remaining vestige of what was once a regular and widespread marking system. Losing the past subjunctive makes English more sensible. I don’t see that as a bad thing.

And anyway, the subjunctive probably isn’t going to disappear, not even the past subjunctive. The past subjunctive is, to my knowledge, necessarily marked in Subject-Auxiliary Inversion constructions:

(3) Were/*Was I a betting man, I’d say the subjunctive survives.

A quick look at Google Books N-grams makes it look like were subjunctive marking has been relatively constant over the last 40 years in written American English, so maybe this is all just a tempest in a teacup.

Plus all of this worry about the subjunctive ignores that the present subjunctive is going strong.*** I’ve written about sentences where the present subjunctive changes the meaning (though I wrote with a dimmer view of the subjunctive’s long-term prospects), and Mike Pope supplied an excellent example:

(4a) I insist that he be there.
(4b) I insist that he is there.

In cases where marking the subjunctive is important, it’s sticking around. In cases where it isn’t important, and the subjunctive follows a strange paradigm, identical to the indicative for all but one verb, it may be disappearing. This is no crisis.

Summary: People who write “if I was” instead of “if I were” aren’t necessarily pallbearers of the English subjunctive. It may be regularization of the last remaining irregular part of the past subjunctive, with the present subjunctive remaining unscathed. And if the past subjunctive disappears, there will be, as far as I can tell, no loss to English. Go ahead and use it if you want (I often do), but to worry that other people aren’t is wrinkling your brow for nothing.

—
*: I do respect the tweeter’s restraint in seemingly only correcting people who’re already talking about grammar.

**: That this destruction has been impending for 90 years has somehow not convinced the ranters that their panic may be misplaced. Also, Corbett keeps titling his posts “Subjunctivitis”, which I think sounds great, but not in the same way he probably does. -itis usually means an unwelcome inflammation of the root word, and I can’t help but see all this as an unhelpful inflammation of passions over the subjunctive.

***: In fact, and I think this is pretty cool, (Master!) Jonathon Owen directed me to a classmate’s corpus work suggesting that for at least some verbs, marked subjunctive usage is increasing.

About The Blog

A lot of people make claims about what "good English" is. Much of what they say is flim-flam, and this blog aims to set the record straight. Its goal is to explain the motivations behind the real grammar of English and to debunk ill-founded claims about what is grammatical and what isn't. Somehow, this was enough to garner a favorable mention in the Wall Street Journal.

About Me

I'm Gabe Doyle, currently a postdoctoral scholar in the Language and Cognition Lab at Stanford University. Before that, I got a doctorate in linguistics from UC San Diego and a bachelor's in math from Princeton.

In my research, I look at how humans manage one of their greatest learning achievements: the acquisition of language. I build computational models of how people can learn language with cognitively-general processes and as few presuppositions as possible. Currently, I'm working on models for acquiring phonology and other constraint-based aspects of cognition.

I also examine how we can use large electronic resources, such as Twitter, to learn about how we speak to each other. Some of my recent work uses Twitter to map dialect regions in the United States.