Publications

Numerous papers have been written over the years by Abelard consultants, each on some aspect of technical or scientific writing. Some of these are listed below and can be read (and printed) by clicking the associated link (coloured blue).

Further, between 2009 and 2011, Abelard Consulting published Words, a free, quarterly e-journal on technical writing and communication (with contributions from technical writers all round the world). The twelve issues of Words can be accessed by clicking here.

Note that the search facility (at the left of this page) will search through these publications as well as through the Abelard website.

This book explores the varieties of the English language (both historical and contemporary) and ponders what such variety means for those who think that usage can be correct or incorrect. A philosophical analysis of the concepts of incorrect and wrong follows, with the conclusion that it is a logical error (what philosophers call a category mistake) to claim that any particular way of writing can be incorrect or wrong. This does not mean, though, that writing cannot be judged poor. But it does mean that a new attitude towards language use is needed, one that is more tolerant than that of pedants and sticklers, and anyone else who wants to deny humans their innate creativity and lust for novelty. What this attitude should be is the major theme of the book. Read more.

Are the creators of intelligence tests intelligent enough to create questions that admit of only one answer? The answer appears to be no. A number of common IQ test questions are considered in this paper and their lack of a single answer noted. For example, the number-sequence question common on IQ tests—such as {2, 4, 7, 11, 16, ?}—has an infinite number of answers. To psychologists, the correctness of any answer doesn't matter. All that matters is that intelligent testees give the same answers as others who have been independently shown to have high intelligence. The illogicality, and immorality, of this approach is questioned.
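The claim that such sequences have infinitely many valid continuations can be demonstrated directly (this sketch is illustrative only, not taken from the paper): for any value we choose as the sixth term, Lagrange interpolation yields a polynomial rule that reproduces the first five terms exactly and then continues with that chosen value.

```python
from fractions import Fraction

def lagrange_value(points, x):
    # Evaluate, at x, the unique lowest-degree polynomial passing
    # through all of the given (xi, yi) points (Lagrange form).
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

seq = [2, 4, 7, 11, 16]
# The "expected" continuation (differences 2, 3, 4, 5, 6) is 22, but a
# polynomial rule exists for ANY chosen next term:
for next_term in (22, 99, -7):
    pts = list(enumerate(seq, start=1)) + [(6, next_term)]
    # The polynomial through these six points reproduces the original
    # five terms exactly...
    assert all(lagrange_value(pts, n) == seq[n - 1] for n in range(1, 6))
    # ...and then continues with whichever sixth term we picked.
    assert lagrange_value(pts, 6) == next_term
```

Since every candidate answer is generated by some lawful rule, the question cannot single out one answer as uniquely correct.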

It is a widespread view among language commentators that the number of words in a sentence is an important consideration in determining its readability. Many such commentators aver that a sentence should never go above a certain number of words. Indeed, common readability formulas (such as the Flesch Reading Ease Score found in Microsoft Word) calculate readability partly as a function of word count: the more words, the less readable a sentence. This paper presents the results of a simple experiment that proves that this view of readability is mistaken. It is not the number of words in a sentence that determines its readability, but the number of distinct concepts (or chunks of information) in it.
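The word-count dependence the paper criticises is easy to see in the published Flesch Reading Ease formula itself (the sketch below is illustrative, not part of the paper's experiment): holding everything else fixed, adding words to a sentence mechanically lowers its score.

```python
def flesch_reading_ease(words, sentences, syllables):
    # The standard Flesch Reading Ease formula; higher scores
    # indicate text the formula deems easier to read.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# A single sentence of monosyllabic words: the only variable is word
# count, yet the score falls steadily as the sentence grows, regardless
# of how many distinct concepts the sentence actually contains.
for n in (5, 15, 30):
    print(n, round(flesch_reading_ease(words=n, sentences=1, syllables=n), 2))
```

The formula thus treats "The big old cat sat on the soft warm mat" as harder than "The cat sat", even though neither taxes the reader's capacity for holding concepts in mind.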

Time thievery
[First published in Dissent, edition 43, Summer 2013/2014.]

Time is everyone’s most valuable asset. We cannot, obviously, live without it. But more than that, time is a birthright. Now if the theft of a tangible asset, such as a car or a painting, is a moral issue, then so must be the theft of an even more valuable asset: time. This is not appreciated by those many manufacturers whose products are sold with scant or no user instructions. The indiscriminate sale of such products is tantamount to selling a product knowing that some items are faulty. For if a purchaser cannot understand how to use the product, ipso facto they cannot use it, and, from the perspective of customer utility, this is no different to the product being faulty. The causes might be different, but the outcome is the same: despite money changing hands, the customer cannot use the product as they rightly expected to be able to. To sell products knowing that many don’t work is to take money under false pretences. This is morally reprehensible and recognised as such in law. If so, then it is morally reprehensible to sell products knowing that many purchasers will not be able to use them. Thus honesty and fairness impose on manufacturers of complex, non-intuitive products marketed indiscriminately an obligation to provide customers with usable instructions. Morality is on the side of consumers here, not manufacturers. It is time for the law to catch up.

The Plain English movement, and legal challenges to organisations publishing indigestible public documents, have fuelled a resurgence of interest in readability and its measurement. Sentential measures of readability (based on sentence length and syllable count) have many supporters. The readability statistics that Microsoft Word gives are based on sentential measures. This paper argues that sentential measures cannot define readability, nor can they be reliably used as indicators of readability. Numerous examples are given of texts that score well on sentential measures of readability but which are of dubious readability. This is followed by an analysis of the purported correlation between readability and sentential measures, and by a critique of the methods commonly used to validate readability formulas.

In defence of the passive voice
[First published in Words, volume 1, issue 3, August 2009. Reprinted, with minor changes, in Offpress: Newsletter of the Society of Editors (Queensland), February 2012, pp. 1–6.]

Many modern language handbooks (and Instructions for Authors) advise writers to choose the active voice wherever possible. Indeed, preferring the active has become a mantra of the Plain English movement. But many of the arguments put forward against choosing the passive voice are misguided or poorly grounded. There are numerous circumstances where the careful writer should choose the passive voice (and some circumstances where they have no other option). This paper looks at some of those circumstances (and challenges some of the arguments put forward for avoiding the passive voice).

Many writers feel that information needs to be presented to readers in small quanta or chunks. An industry has grown up around Information Mapping, a methodology that insists on limiting the chunks presented to readers to 7 plus or minus 2. The Information Mapping methodology is supposedly based on research done by American psychologist George Miller. This paper looks at Miller's research and concludes that it does not support a 7 plus or minus 2 chunking limit. Moreover, more recent research—and logic—show that a chunking limit, whatever its size, is irrelevant in determining whether a reader will comprehend what they read. We may have reason to chunk the material we present to readers, but it cannot be based on a fixed limit (such as 7 plus or minus 2).

The boom in publication of books and magazines on popular science owes some of its impetus to a desire to arrest the decline of interest in studying science and mathematics. At a time when the world needs more science, students are increasingly being wooed to other disciplines, a trend that could, in the long term, prove catastrophic. Thus the need to woo back the wavering student. But science is hard. It is difficult to popularise. Make it too simple and it will inspire few; make it too difficult and eyes will glaze over.

Simplification comes in many hues. A fairly harmless variety is the making of an unqualified statement when its truth is known to be conditional. More worrisome forms include shortcuts in logic and definitional sleights-of-hand. In this paper, I consider claims made in recent popular books, one by an eminent scientist (Stephen Hawking) and the other by an eminent mathematician (Marcus du Sautoy). Hawking gives a number of arguments that appear to be simple logical fallacies. Du Sautoy redefines some common notions in order to make claims that seem somewhat sensational or to explain, perhaps too easily, certain natural events.

Such simplifications might seem harmless, at least to those who look favourably on the sciences. But there is no reason to assume that only such folk will read these books. Science-agnostics and science-deniers might read them too, and might well find in the over-simplification evidence that science is lacking and that scientists are not to be trusted. The radio airwaves, and the opinion pages of newspapers, are full of the ranting of science-deniers. And there can be no doubting that they have influence. If popularisers' simplifications are seen as sloppy logic or definitional fiat, they risk alienating the very folk they are trying to attract.

Should writers and editors follow trends in language use if there is no strong convention one way or the other? The bible of Australian English—Style manual for authors, editors and printers—says we should. But some trends are clearly not benign. They get in the way of readability. This is demonstrated in a critical review of the use of quotation marks recommended in Style manual.

Technology is not always all it's cracked up to be. Many a silver lining harbours a cloud. Consider, for instance, the much vaunted simplicity of DITA and structured authoring, and then note how it shackles instructional creativity and denies the writer a chance to exploit the communicative power of textual appearance. Think of the customer-created wiki with its democratic invitation to Everyperson to share their wisdom: but then think too of the distracting anarchy of styles, the unexamined guesses camouflaged as truth, the un-thought-of gaps, the shaky reliance on the generosity of others, none of which is found in texts meticulously sculpted by technical writers. And think too of the power of the modern search engine and how it is supplanting the humble back-of-book index. And yet, as this paper shows, the humble index surpasses a search engine on any reasonable measure of usability.

Language in motion
Presented at the Technical Communications Association of New Zealand conference, Wellington, NZ, November 7–9, 2007. Published in the conference proceedings.

There are growing signs of a swing back towards linguistic puritanism, to the view that there are correct and incorrect ways of writing and speaking. Proponents of this view must, however, accommodate the rich variability that the language has shown over the centuries (much of which is of respectable pedigree). This paper describes some of that variability and then challenges a number of arguments for linguistic prescriptivism (the view that, despite linguistic variability, some usages are right and some wrong, come what may). The paper ends with an exploration of how writers can continue to embrace the goal of writing for maximal communicative efficiency while still accepting that change is inevitable, and even continuous.

The ISO definition of usability in documentation ties it to effectiveness, efficiency and user satisfaction. Though clearly on the right track, this definition is too general to be of practical use. Another view ties usability to the notion that information must be easy to find, easy to understand and easy to apply. This is a practical definition, and its ramifications are discussed.

The notion of ease-of-understanding is given in-depth treatment. Attempts to equate understandability with the sort of readability measured by readability formulas are dismissed in favour of a view that equates it with communicative efficiency: writers must get their message across with the least effort and distraction on their readers’ part. This cannot be done without pre-eminent regard for words and language: thus the unbreakable link between usability and language.

Finally, the drift of focus in our profession away from language and towards tools and methodologies is discussed in light of the profession’s numerous, and not entirely understandable, name-changes. That writing is our primary task was once evident in the name of what we did: technical writing. What we call ourselves now obscures the fact that what most of us do most of the time is still writing. Language is still our core business. Cementing the link between it and usability should help return the spotlight to where it needs to be. For what good is expertise in tools and methodologies if we cannot get our message across to our users?