
I’ve always basically agreed with this position, but I’ve never heard it expressed so starkly as the BBC does here:

Years ago, the inaugural post on this blog was about precisely this problem: should you follow common prescriptivist norms when editing, for a quiet life and to save your writers from the peevers? Or should you assist in the debunking of language myths by allowing new or common usages into print?

I thought the decision was an uncomfortable one then and still do. But there’s no agonising about it here. Although there’s a certain amount of rhetorical loading – by “good English”, the writer means “formal English”, and “bad” means “informal” – this doesn’t seem to be an argument based on conservatism. Rather, it’s the raw pragmatism that’s so arresting. The argument is simple: “Some listeners are pedants. Some are not. Only pedants complain. So write for the pedants.” It even uses the word “appease”.

And if that were not clear enough on its own, the entry in the accompanying style guide for “enormity” removes all doubt:

It should be said that this is from a guide to writing radio news that dates back to 2002. It’s still on the BBC website, but it’s not clear that it’s still the current advice. The BBC Academy, where many resources for the corporation’s journalists are now held, appears to have no equivalent passage on tone, and the latest style guide, although still prohibiting “enormity” meaning “size”, contains no observations about pedantry.

But it wouldn’t surprise me if the BBC’s underlying approach to language was still just as cautious. For an organisation that gets trapped in the middle of every political and cultural row in Britain, it probably doesn’t take long to decide that there’s no point getting shouted at over “decimate” as well.

Ten Minutes Past Deadline is five! I’d like to say “five today”, but in fact it was five last Friday: the first post on this site went up on 30 March 2013.

Although many subjects have attracted its attention, including baseball, cartoons and the rise of IMDb’s formidable robot copydesk, this blog has all too frequently returned to the subject that first inspired it: prescriptivism and formal English. The first post that ever appeared here arose from years of reading two inspiring blogs – You Don’t Say and HeadsUp – and, through them, becoming increasingly engaged with editing’s big issues: ethics, grammar, ambiguity, statistics, and, above all, language change.

Written in response to a debate on how forward-thinking one should be when editing someone else’s writing, that post was motivated by a slightly defensive sense that although formal English was indefensible, it was somehow important too: and that, even though the case against prescriptivist crotchets was unanswerable, deadline was not the right moment to get into an argument with a writer over notional agreement.

Five years later, that debate is as hard to resolve as ever, but the advice, tips and ideas readers have offered over that time have helped move the blog forward immeasurably. Thank you to everyone who’s read, commented, shared, liked, quoted, linked to, disagreed with and retweeted it over the past half-decade. And, by way of celebration, here is a distillation of what Ten Minutes Past Deadline now thinks it thinks (at least currently) about formal English:

Formal English is absurd, but unmistakable

There is no academic justification for the ban on split infinitives, or the stricture that forbids qualifying a sentence with “hopefully”, or the objection to ending a sentence with a preposition, or for many of the other rules taught or followed as being “good English”. And yet, taken together, those rules have come to create a recognisable register: a tone, a rhetoric, a voice. However baseless its antecedents, when formal English is spoken, everyone recognises it for what it is: the language in which power speaks and expects to be addressed.

Formal English is not imposed from above

The English language has no central authority, not even an ineffectual one like the Académie Française. Everyone who has tried to suggest usage changes, or best practice, or new words, has had to do so from a position as a private citizen – or, at best, as part of a self-appointed body. None of them has had the power to compel correct usage. The mechanism by which, say, a language commentator’s suggestion becomes a teaching point in primary-school English, which is then carried forward into the solicitors’ letters and leading articles of a generation of adults, is an achievement of influence, not enforcement. Prescriptivism in English has to win hearts and minds; there is no state imprimatur to reinforce the message. Which leads us to a surprising conclusion:

Formal English is a descriptivist phenomenon

In Modern English Usage, Fowler suggested dozens of improvements to written English, some of which caught on: some, but not all. In the 1930s, a BBC committee invented dozens of words to describe new phenomena in modern life, some of which caught on: some, but not all. Proposing, it appears, is not enough: every piece of language change, from the accidental to the intentional, has to pass the test of usage.

Some of Fowler’s ideas were terrible, but some – such as his forgotten proposals for punctuating parentheses – were just as useful as his “which/that” distinction, which has become a staple of legal English. Similarly, the BBC committee failed in its primary task of inventing a new word for one who watches television (the corporation rejected “auralooker” and went for “viewer”), but it did successfully popularise the term “roundabout” for the road junction. The unpredictability of these successes and failures suggests that prescription, just like natural language change, is subject to the mysterious processes of acceptance by which English is ultimately formed. That means many prescriptivist initiatives are doomed to failure: but it does suggest that the ones that have survived to create what we now call “formal English” have passed the stern test of public approval.

I fear the people who don’t like sentence adverbs are not going to like this:

And, although I don’t normally have a problem with “hopefully”, for once I might agree.

Sentence adverbs – or, as linguists call them, “modal adjuncts” – are adverbs that, rather than modifying the verb in a sentence, express an attitude towards the sentence itself. They frequently appear at the start of the sentence, set off by a comma: “Hopefully, I’ll find them”; “Honestly, you may not”; “Frankly, my dear, I don’t give a damn.” Although all such words can operate as standard adverbs* – “she looked up hopefully”; “he spoke honestly for the first time”; “his eyes gazed frankly into hers” – when placed in certain contexts they take on a higher function: one of commenting on the thought being expressed.

Of all the common sentence adverbs, “hopefully” is the one that resonates with editors most, because it became the subject of a brief but heated usage debate about 50 years ago, as Geoffrey Pullum recounts in a blogpost on Lingua Franca:

The 1960s saw an increase in the frequency of modal-adjunct use for another adverb: hopefully. Alongside They’ll wait hopefully (“They’ll wait with hope in their hearts”), it became increasingly popular to use sentences like Hopefully they’ll wait (“It is to be hoped that they’ll wait”).

This unremarkable little piece of linguistic evolution might have gone unnoticed, if the aging usage specialist Wilson Follett had not bristled. It is “un-English and eccentric” to use the word that way, he asserted dogmatically (Modern American Usage: A Guide (1966), page 170), even though (as he said) the German equivalent “hoffentlich” is fine in modal-adjunct use.

Follett was dead by 1963 (his posthumous usage book was completed by Jacques Barzun and others), but he left a legacy: By the late 1960s, using hopefully as a modal adjunct was widely taken to be a grammatical sin.

As John McIntyre observes in You Don’t Say, Follett’s language was ferocious enough to have quite an impact – “how readily the rotten apple will corrupt the barrel”, he says at one point – and the disapproval spread to other style manuals. But it proved to have shallow roots: faced with popular usage and the existence of other unproblematic sentence adverbs in English, such as “mercifully”, people began to retreat from their positions. As Prof Pullum says:

For a few years, battles raged and peevers fumed. But the opposition peaked when disco was young, and Barry White and the Love Unlimited Orchestra were hot. By 1979, [conservative language columnist] William Safire had accepted the modal-adjunct use of hopefully … The dispute was basically over.

It was, having started and finished in less than two decades – although Associated Press, out of an abundance of caution, prohibited the usage until 2012 before finally caving in.

But although the acceptability of “hopefully” as a sentence adverb is now settled, that does not mean that it succeeds as one in all situations. While it is certainly not true that modal adjuncts always need to be at the start of a sentence, or even set off with commas, to work, as Prof Pullum shows in the following example –

Compare “He was flirting with her too obviously”, which comments on the manner of the flirting, and “He was obviously flirting with her”, which doesn’t.

– there is nonetheless something amiss with the Gary Younge standfirst that prevents “hopefully” from functioning as intended.

The sentence is an intricate one: the main subject and verb, “I decided”, are then followed by a long, comparative construction: “ignoring a feted white supremacist was more dangerous than hopefully exposing him”. In fact, the comparative construction functions as a complete sentence on its own; the main verb is “was”, and the subject of the sentence is “ignoring a feted white supremacist” – a verb phrase functioning as a noun, or, in other words, a gerund.

The object in the more/than construction is also a gerund – “exposing him” – and it is this idea of exposure that “hopefully” is trying to comment on, rather than directly modify. But, if anything, it really only succeeds in doing the latter and creating the idea of “exposing in a hopeful manner”.

It is possible to use modal adjuncts with gerundive constructions – “Hopefully, going to the coffee shop won’t make me late” – but I can’t think of an example where they succeed other than when placed at the start or the end of a simple sentence. In this standfirst, however, we have a “sentence adverb” that is intended neither to modify the verb it sits next to, nor the sentence as a whole, but instead to act as a comment on one of two gerunds contained in an independent clause. Setting it off in commas might help a bit, but, I fear, not enough. Sentence adverbs can do a lot, but I don’t think they can do that much.

Several of you have been asking for a definitive style ruling in recent weeks about the now-perennial “cannot be underestimated/cannot be overestimated” debate. I know feelings have run high on the issue, and until now we have tried to preserve the traditional distinction in meaning in our pages, even though the interchangeability between the two phrases in spoken English is now almost total.

Historically, it is true that – as recently as the early 21st century – the correct use of the phrases was highly dependent on context, and to say then that the prime minister’s intellectual capacity “cannot be underestimated”, when the opposite was meant, would have been to cause considerable offence. But the error has now become such a common one that it is time to seriously address the question of whether it is an error at all.

Of course I am aware, as some of you have kindly pointed out, that under and over “mean completely opposite things” and that the distinction is “perfectly obvious to those who are prepared to think about it”. Of course it is, but the everyday rough-and-tumble of language has a way of wearing fine distinctions – even useful ones like these – smooth. Look, for example, at how the similar (and now vanishing) terms “biennial” and “biannual” became so confused in the 1900s that the following definition once appeared in Chambers’s 20th Century Dictionary:

biannual (bi-an’-ū-əl) adj. two-yearly: also half-yearly.

And consider “head over heels” – a phrase universally understood in its metaphorical sense, but which, parsed logically, says the exact opposite of what it means.

I am reluctantly coming to the conclusion that “cannot be over/underestimated” have, through widespread usage, fallen into the same category of phrase as “head over heels”: those that can only be understood in the round, and not by parsing every word individually.

I am aware this decision will disappoint many of you, especially those of you who have pointed me to a significant strand of linguistics scholarship that disagrees with me. Writing in the early 2000s, eminent figures on the influential website Language Log contended against the acceptability of what was then called “misnegation”. Comparing “cannot be underestimated” in relation to the (now-uncontroversial) phrase “could care less”, Professor Mark Liberman wrote:

I’ve argued that “could care less”, where modality and scalar predication seem similarly to point in the wrong direction, has simply become an idiom. Shouldn’t the same be said for “cannot underestimate the importance”?

Whatever is happening with “cannot underestimate” applies equally to “cannot understate”, “impossible to underestimate/understate”, “hard to underestimate/understate”, “difficult to underestimate/understate”, “cannot be underestimated/understated”, “hard to underrate”, “cannot be undervalued”, and many other common ways to re-express the same idea.

In contrast, alternative formulations of “could care less” are rare, and can only be understood as bad jokes, to the extent that they’re not simply puzzling. Thus one semantic equivalent to “could not care less” might be “could not possibly have less concern” — and we find this in a published translation of Montaigne…

“However, if my descendants have other tastes, I shall have ample means for revenge: for they could not possibly have less concern about me than I shall have about them by that time.”

But in this case, Montaigne means to imply that his concern-meter will be pegged at zero, not at its maximum value. And more generally, we don’t see things like “I could possibly have less concern” used with the meaning idiomatically assigned to “I could care less”. This is the behavior that we expect from an idiom; and the different behavior of “cannot underestimate/understate/underrate/undervalue” is what we expect from a psychologically probable error.

Other scholars at the time contended that “cannot be under/overestimated” was indeed an idiom; but even if they and I are wrong and it is a mistake, it seems to be a mistake that English-speakers are never going to stop making. And, as we all know to our frustration, appeals to reason over usage rarely succeed in these matters because language doesn’t listen to reason.

Therefore, henceforward, “should not be underestimated” and “should not be overestimated” shall in all cases be deemed to be equally correct ways of saying the same thing, which is something to the effect of “should not be evaluated incorrectly”. The style guide will be updated accordingly.

Believe me, it gives me no pleasure to come to this conclusion. But our language has changed around us: and with the 22nd century just over a decade away, we have better and more significant things to do with our editorial resources than enforcing a distinction that, to our readers, is increasingly becoming inaudible.

I’d have gone for “visionnaire” myself. I’m glad we didn’t get “auralooker”:

Historian Nick Kapur’s fascinating Twitter thread about the BBC’s Advisory Committee On Spoken English and its influence on modern speech reveals just how close we came to referring to anticyclones as “halcyons”, but also offers an illuminating insight into what prescription in language really means.

Because of course, there is not one kind of linguistic prescriptivism: there are two. One opposes all language change and all neologism, and attempts to conserve current norms as an eternal standard. But the other seeks to deliberately modify language: not to reject new words, but to invent them, and to influence speech and writing to go in new directions – such as the campaigns to popularise Ms and Mx as neutral honorifics. It is this second kind of prescriptivism, which one might call activist or progressive prescriptivism, that Kapur is tweeting about here.

The story begins, he relates, in 1926, when Lord Reith sets up a committee to help resolve one of the many problems a pioneer national broadcaster has to solve: how should you pronounce certain words on air? (This group, the Advisory Committee On Spoken English, still exists today, doing very similar work to help BBC broadcasters.) Then in 1935, faced with the question of what to call users of the new medium of the day – television – a new sub-committee was set up, not just to advise on pronouncing words, but to invent some new ones. Led by the Anglo-American man of letters Logan Pearsall Smith – an eager language reformer – the Sub-Committee on Words generated the alternatives listed above to start the debate (although it eventually rejected all of them and recommended “televiewer”, subsequently shortened to “viewer”).

After that, the sub-committee remained active, and widened its remit to mass-produce new words for broadcast far beyond the new industry’s immediate needs, eventually becoming so extravagant and implausible in its inventions that an exasperated chairman of governors closed it down in 1937. But by then it had created several terms – “roundabout” for the road junction, “serviceman” for members of all the armed forces, “art researcher/art historian” to replace the German word “kunstforscher” – that are now commonplace in modern English.

Today we speak of "BBC English" as a standard form of the language, but this form had to be invented by a small team in the 1920s & 30s. 1/

The impression descriptivist scholarship frequently gives is that language is an unknowable stew of errors, localisms, homophone confusions and misreadings, prone to unpredictable change. The emphasis, or the cultural preference, often seems to be bestowed on the unwilled variations to language, not the willed ones. But Kapur reminds us that English is also highly susceptible to the approaches of those who have a design on it, from Edwardian grammarians like Fowler to equalities campaigners to spelling reformers like McCormick at the Chicago Tribune. There are words and conventions in many registers of modern English that were created deliberately by people who wanted to see them catch on and took the opportunity to make it happen.

Sometimes, of course, prescriptivism is institutional, and benefits from that privilege. It might be justifiably argued that the BBC’s committee, as a quasi-official body proposing usage for the nation’s only broadcaster, was in a very strong position to succeed, particularly as it was inventing terms for then-unnamed phenomena. But the Académie Française, which is attempting to do for French today almost exactly what the BBC committee did for English in the 1930s – and from a similarly state-sanctioned position – is greeted with widespread indifference and derision for its efforts.

And in any case, innovative prescription does not need an official platform to succeed. This blog has discussed at length the extent to which Fowler’s suggestions have influenced modern formal and legal English, but Fowler himself was no state official, nor did his books bear any government imprimatur (although Churchill is said to have recommended Modern English Usage to his staff after it came out). His books were a success because, then as now, there is a sustained public appetite for advice on how to engage with formal English. (Indeed, given the existence of a generation of professional linguists who consider it their role to observe rather than advise, the field for such material is possibly clearer today than it was then.)

This is not to say the process is easy: frequently, big innovations just don’t catch on. There is no doubt that some of the committee’s ideas, like some of Fowler’s, are much worse than others: for example, one member apparently felt it desirable to create a shorter term for “inferiority complex” (“inflex”), and another proposed “yulery” as a collective term for Christmas festivities. The point is not that Fowler or the committee were always “right” about what they proposed; the point is – at least sometimes – that they were successful.

Usage remains the timeless, and the only, judge of current English. But usage does not simply adjudicate on terms that have risen up unbidden from the demos; it also sits in judgment on peri-statal prescriptions and private linguistic entrepreneurialism. Due process is afforded to all new words, whether they are accidents or designs. Linguists say that language is a democracy, and it is: a democracy in which, among other things, anyone is free to prescribe and see what happens.

“Most writers I know have tales to tell of being mangled by editors,” writes the esteemed academic John Gross,*

“… and naturally it is the flagrant instances they choose to single out – absurdities, outright distortions of meaning, glaring errors. But most of the damage done is a good deal less spectacular. It consists of small changes (usually too boring to describe to anyone else) that flatten a writer’s style, slow down his argument, neutralise his irony; that ruin the rhythm of a sentence or the balance of a paragraph; that deaden the tone that makes the music.”

Here at the Tribune, we are a “writer’s paper”: that is to say, we allow our senior writers – and especially our columnists – not just their own opinions, but their own style as well. Of course, in theory we edit everything perfectly – we intervene whenever it is required, and keep clear whenever it is not – but to the extent there is an institutional bias, it is to be hands-off: not to flatten a style or ruin an argument for the sake of enforcing “good English”. So we are, one would hope, less likely than some of Gross’s targets to “pounce mercilessly on split infinitives … and all the other supposed offenses that are often no offense at all”.

But hands-off editing comes with its own set of hazards. Specifically, it can create a culture of under-intervention: we do basic editing, correcting spellings and checking dates, but perhaps decline to step in when a columnist has mixed a metaphor, or written a sentence so long that it provokes amusement on Twitter. In the worst cases, faced with something notably angry, funny, colloquial or emotional, we can become paralysed: confronted by a confessional tour de force or celebrity stream of consciousness, we freeze, run a spellcheck and send it through without doing the whole job.

So, bearing the countervailing risks in mind, where would you step in, and where would you step back, here?

This is Laura Craik’s “Upfront” column in the Evening Standard’s ES magazine. She is a fashion and trends commentator who writes in a chatty, informal style typical of that genre: even if you don’t know her, that much becomes immediately apparent when you read the copy. The tone and register are easy to grasp, and so are the editing parameters: you instinctively allow “mahoosive”, “yada yada”, the sentence fragments, or “Soz” in a way that you wouldn’t if they cropped up in a Telegraph editorial.

But I’m not so sure about “pontificating”. Given the context (“I say ‘rushed’, but really I’d been pontificating since May”), I strongly suspect what’s meant is “prevaricating”. Even if the intended sense is something closer to “I’d been talking about it to everyone for months”, “pontificating” still isn’t quite right: it carries the sense of speaking (like a pontiff) from a sense of real or imagined authority, and the whole point of the piece is that the author didn’t know what to do. In a piece where nearly everything should be allowed to stand, this is something that needs to be changed: the one reason in 600 words not to step back and wave the copy through.

Intentional malapropisms are funny. Unintentional ones on the way to making a different kind of joke are just distracting. That’s where the kind of invisible mending that broadsheet subs do comes in. Tone is exclusively the province of the writer – there is a lot of truth in the columnist’s weary complaint that “it’s my column, not yours” – but sense and cogency are the business of the newspaper as a whole, and particularly the copydesk. Making a change like that doesn’t “flatten the writer’s style” but enhances it, by removing a distraction over which a literate reader might trip. Editors shouldn’t do too much, but we usually have to do something.

* “Editing and Its Discontents”, in The State of the Language, edited by Christopher Ricks and Leonard Michaels (University of California Press, 1990)

Ouch. Owned. Or – to use the correct spelling of the word in this context – “pwned”. As a rueful Roth wrote later, “I find myself wistfully remembering the days when tweeting at brands was a safe, innocuous pastime”. And other responses to M-W’s intervention have been broadly favourable: the tweet was rude, yes, commenters thought, but also uncompromisingly truthful about the ineluctable nature of language change.

However, scrolling down through M-W’s Twitter feed, it emerges that this is not the only time it’s taken a bold line in such matters. Five days earlier, in similarly lively terms, it made the following observation:

Well, hang on. Yes, “enormity” can indeed mean “great size”, and has done for centuries. But, no, it’s not “fine”: currently, as a word, it’s totally skunked. As we discussed last month, “enormity” is hovering uneasily on the brink of a permanent change in meaning, but is still tending to drag its other meaning of “moral horror” into simple discussions about size. It’s a very tricky word to be employing at the moment; a while ago, for example, we saw fit to remove it from a news story about the heated subject of the Scottish referendum because of its overtone of opprobrium. It’s far from clear that, in these circumstances, a major dictionary should be recommending it quite so breezily. Authorities are looked up to; these things get taken seriously.

As this blog has had occasion to remark before, people don’t require help with informal English. They speak it well. They do not seek the assistance of their editor friends when composing a tweet or posting on Instagram; but they do, sometimes, when updating their CV or writing to a solicitor. What they want is help with formal English: a register whose social significance they grasp, but one in which they perceive themselves not to be fluent.

This is when they turn to the dictionary: to be briefed on the meaning of a legal idiom, or the appropriate use of a word in their own reply: to find out, perhaps, whether “enormity” means what they think it means. But they are doing this at a time when one of the prime objectives of linguistics is the debunking of the prescriptive maxims about language that have been taught during the last two centuries. An unsatisfactory dialogue has therefore developed between linguists and the public in which queries about the niceties of formal English are met only with assurances about the validity of informal English. For the last several decades, it seems, lexicographers have been talking about what’s changed in the language while their readers have been asking about what hasn’t.

The spirit behind this objective is democratic to a fault, and the efforts to expose the frailties of formal English are intellectually impeccable. But nonetheless, they are starting to amount to the total deconstruction of a dialect that many people still have no choice but to speak.

The ghosts of Fowler, Strunk and White still haunt the sphere of formal discourse. It is highly commendable that more modern authorities like Merriam-Webster should be getting involved in the conversation about usage. But burning a grumpy prescriptivist on Twitter? Waving off debate about a word in difficult transition? That isn’t advice; it’s advocacy. Roth is right: counsel as blasé as this is just a little too chill for comfort.