Sonority Sequencing Principle

To those without any knowledge of linguistics, let me apologize. A post about the sonority sequencing principle may seem wilfully obscure. It certainly isn’t engaging directly with the text of the Voynich manuscript, which should be our first goal. But when I set out my thoughts behind an agnostic approach to reading the manuscript, the sonority sequencing principle was the kind of tool I had in mind. It is a universal principle which applies to all languages and lets us see part of the linguistic structure even if we do not know the language behind the characters. So long as the Voynich text is a natural language* the sonority sequencing principle should be able to tell us something.

With that out of the way, you may be wondering exactly what the sonority sequencing principle is (we’ll call it SSP from now on, for the sake of space). In short, SSP is a rule which makes sounds within a syllable abide by a set sequence. The sequence in question is governed by sonority, a phonological term we can think of loosely as relative loudness. The details don’t concern us, only that some sounds are more sonorous than others, and that the sequence of sonority orders the sequence of sounds in a syllable.

The sounds with the highest sonority are vowels. These are always the peak sonority in every syllable, and the SSP broadly states that sonority must decrease away from a vowel. So the sounds before and after vowels must be less sonorous, and in turn the sounds before or after those must be less sonorous again. There can be exceptions, and sometimes languages have specific quirks as to how the SSP works, but it is a linguistic universal.

For example, think of the syllable /pla/, which is common in English and many other languages. We know that /a/, as a vowel, is the most sonorous, and the SSP tells us to expect that /l/, being nearer to the vowel than /p/, is the more sonorous of the two; this is in fact the case. The alternative syllable /lpa/ does not occur in English, nor in most (or even any) languages, because it violates the SSP. It is not simply that /p/ must come before /l/, but rather that /l/ must be nearer the vowel: the syllable /alp/ is quite normal, but /apl/ much less so. Sonority grows to the vowel peak and then falls away.
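As a sketch of how the SSP can be checked mechanically, the toy scale below assigns illustrative sonority values (vowels highest, then liquids, nasals, fricatives, and stops lowest; the exact numbers are assumptions, not a standard scale) and tests whether a syllable rises to a single peak and falls away:

```python
# Illustrative sonority values: vowels > liquids > nasals > fricatives > stops.
SONORITY = {
    "a": 5, "e": 5, "i": 5, "o": 5, "u": 5,  # vowels
    "l": 4, "r": 4,                          # liquids
    "m": 3, "n": 3,                          # nasals
    "s": 2, "f": 2,                          # fricatives
    "p": 1, "t": 1, "k": 1, "b": 1,          # stops
}

def obeys_ssp(syllable: str) -> bool:
    """True if sonority strictly rises to a single peak, then strictly falls."""
    values = [SONORITY[sound] for sound in syllable]
    peak = values.index(max(values))
    rising = all(a < b for a, b in zip(values[:peak], values[1:peak + 1]))
    falling = all(a > b for a, b in zip(values[peak:], values[peak + 1:]))
    return rising and falling

for syl in ["pla", "lpa", "alp", "apl"]:
    print(syl, obeys_ssp(syl))
```

As expected, /pla/ and /alp/ pass while /lpa/ and /apl/ fail. Real languages tolerate plateaus and some exceptions (English /s/-clusters are a famous one), which this strict version deliberately ignores.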

The first researcher to find a pattern in the Voynich text which could fit the SSP was Jorge Stolfi. His ‘crust–mantle–core’ paradigm proposed that characters fell into three different classes—which he called layers—and that by assigning numerical values to each layer he could show that within typical words those values were highly ordered.
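Stolfi’s ordering claim can be illustrated with a toy check. The layer values and class memberships below are simplified assumptions loosely following his paradigm (gallows as core, benches and <e> as mantle, the rest as crust, with <a, o, y> left unclassified), not his actual assignments:

```python
# Toy version of the crust-mantle-core paradigm. Class membership here
# is simplified and partly assumed; <a, o, y> are deliberately left out,
# as Stolfi's model handled them poorly.
LAYER = {
    "d": 1, "l": 1, "r": 1, "s": 1, "n": 1, "m": 1, "i": 1,  # crust
    "ch": 2, "sh": 2, "e": 2,                                # mantle
    "k": 3, "t": 3, "f": 3, "p": 3,                          # core (gallows)
}

def parse(word: str) -> list:
    """Greedily split an EVA word into known character groups."""
    out, i = [], 0
    while i < len(word):
        if word[i:i + 2] in LAYER:
            out.append(word[i:i + 2]); i += 2
        else:
            out.append(word[i]); i += 1
    return out

def fits_paradigm(word: str) -> bool:
    """True if layer values rise to one peak and fall, like sonority."""
    values = [LAYER.get(c, 0) for c in parse(word)]
    peak = values.index(max(values))
    return (all(a <= b for a, b in zip(values[:peak], values[1:peak + 1]))
            and all(a >= b for a, b in zip(values[peak:], values[peak + 1:])))

print(fits_paradigm("chkedy"))  # rises to the gallows <k>, then falls
```

Note that in this scheme the “peak” of a word is a gallows character, which is exactly the feature criticised below.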

However, Stolfi’s model did not account well for the characters <a, o, y>, even though they are the most common letters. Also, the characters with the highest numerical value—those which, like vowels, formed the ‘peaks’ of words—were the gallows. Not only did that mean many words had no sonority peak—and therefore no vowel—but also that eight whole characters would have to be assigned vowel or vowel-like qualities, despite the Voynich script being fairly small.

I believe that my section/syllable proposal for the low level word structure is an advance on Stolfi’s work. It integrates all the most common characters and places <a, o, y> at the heart of syllables. By seeing <a, o, y> as potential vowels it also provides a location for peak sonority which occurs in most words and shows a clear change in pattern before and after. From this beginning we can build a rough sonority sequence based on what we understand of syllable structure.

Let us take separately the characters which come before and after a vowel. Even though some are the same we do not know enough at the moment to integrate them into one sequence.

For those characters which come before, we know that should an <e> sequence appear it will always be immediately before <a, o, y> (taking into account the deletion of <y>). Before that comes <ch, sh>, which in a sequence such as <kche> must be somewhere between <k> and <e> in sonority. This is not undermined by a sequence such as <chk>, as when we discussed high level word structure it made more sense to see the initial <ch> as a separate section like many other occurrences of <cho, cha, cheo, che> and so on.

The position of the rest of the characters before a vowel is uncertain. Certainly <k, t, f, p> do not regularly let any other character come before them, but it may be that <ckh, cth, cfh, cph> are more sonorous still, as they do not even let <ch, sh> before them. The characters <d, s, r> must rank roughly with <k, t, f, p>, as they regularly come neither before nor after them, but <l> must be less sonorous as it can come before gallows in some cases (leaving aside whether we believe those to be digraphs or not). So our sequence might be something like:

[a, o, y] > [e, ee, eee] > [ch, sh] > ?[ckh, cth, cfh, cph]? > [k, t, f, p, d, s, r] > [l]

For those characters which come after a vowel the sequence is simpler yet more ambiguous. Certainly <i> sequences always come straight after <a, o, y>, and <m, n> always after those. But while <r> also comes after <i> sequences, <d, s, l> hardly ever do. Syllable tails with two characters from <d, s, l, r> are seldom seen, though they could give us more information about sonority. Only <ls> is at all common (106 tokens), though paradoxically it would make <l> more sonorous than <s>, which is not in line with the character sequence before vowels.

We can make up a rough sequence for characters after vowels, but bear in mind that it is much less sure:

[a, o, y] > [i, ii, iii] > [m, n, r] > ?[d, s, l]?
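A small sketch can make this rough sequence concrete. The rank values below are assumptions read directly off the sequence above, with the doubtful <d, s, l> placed lowest only tentatively:

```python
# Ranks taken from the rough after-vowel sequence; the placement of
# <d, s, l> is tentative, as discussed above.
RANK = {
    "a": 3, "o": 3, "y": 3,
    "i": 2,
    "m": 1, "n": 1, "r": 1,
    "d": 0, "s": 0, "l": 0,
}

def tail_in_sequence(tail: str) -> bool:
    """True if ranks never increase left to right after the vowel.
    Runs of <i> (ii, iii) share one rank, so repeats are allowed."""
    values = [RANK[c] for c in tail]
    return all(a >= b for a, b in zip(values, values[1:]))

for tail in ["aiin", "ar", "als", "ail", "ali"]:
    print(tail, tail_in_sequence(tail))
```

Common tails such as <aiin>, <ar>, and even the awkward <als> pass this check, while an unattested ordering like <ali> fails; with a real transcription in hand, the same function could count how many word tails actually conform.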

Rather than discuss possible implications of these two sequences here, I will make a new post in due course to suggest what we could learn.

*Really it doesn’t even have to be a natural language. Any language, even constructed, would have to abide by the same phonological principles simply because it is spoken by humans.