This week's TES magazine contains an article by Usha Goswami on 'dyslexia'. The article includes a couple of colourful brain scan pictures, one marked 'normal' and the other 'dyslexic'.

I sent the article to Prof. Diane McGuinness because I had serious worries about the accuracy of the content. She kindly agreed to write a critique – see message 2 of this thread.

Times Educational Supplement magazine. 17/08/07

The language barrier.

Why do Italians with dyslexia have an inbuilt advantage compared with English children? Usha Goswami explains.

Does dyslexia really exist? Of course. All over the world, it is recognised as a specific learning difficulty intimately linked to the way we process language. Recent scientific research has found that dyslexia reflects atypical development in learning the sound structure of language – its "phonology".

Modern brain imaging helps to show where problems may lie. In skilled readers, brain activity in the left hemisphere's network of spoken language areas increases as they read. In children with developmental dyslexia, this network activity is reduced and there is more activity in right hemisphere networks.

Particularly crucial is an area in the left hemisphere that turns print into sound. It is called the posterior superior temporal cortex. All children with dyslexia find it difficult to count syllables in spoken words, to judge whether spoken words rhyme and to retain speech-based information in short-term memory.

The neural inefficiencies which result in dyslexia are shared across languages, with a similar prevalence of 5 to 7 per cent. Dyslexic readers of Chinese, French and Italian show similar characteristics. Nevertheless, its manifestation differs from language to language, because of differences in syllable structure and spelling systems.

Children with dyslexia learning to read languages such as Italian and Greek are best off developmentally. Syllable structure is simple: mostly consonant-vowel pairings, as in mama. There is a consistent, one-to-one correspondence between letters and sounds. In these languages, dyslexics show slow, effortful but accurate reading and poor spelling.

Children with dyslexia find it more difficult learning to read in languages such as English. The syllable structure is complex. Correspondence between letters and sounds is inconsistent (for instance, "a" makes a different sound in make, man, mark and mall). English dyslexic children show inaccurate reading, slow decoding and poor spelling characteristic of dyslexia in other languages.

Studies in psychology and neuroscience reveal important new information about how the brain builds a language system. Before they produce words, infants learn the basic sounds (called "phonemes"), the order they occur, and how to segment the stream of sound into separate words and syllables. Segmentation depends on speech rhythm and stress (called "prosody").

Babies between one and four days old can distinguish between languages such as Dutch and Japanese using rhythmic cues. This is a basic mammal skill: research has shown that rats and monkeys can also distinguish Dutch from Japanese. Early babbling also reflects rhythmic differences between languages. Adults who are played taped babble from French, Cantonese and Arabic infants can distinguish each "language".

In humans, the way carers speak to babies is important. This "motherese" (although fathers do it, too) uses higher pitch and increased syllable length for emphasis.

Cognitive neuroscience has shown there are populations of neurons in the brain that oscillate at the syllabic rate of speech. These neurons align their intrinsic rhythmic activity to the start of each spoken syllable. Children with developmental dyslexia find it hard to tell when syllables start. Syllables with abrupt onsets, such as "ba", are more difficult to distinguish from those with extended onsets, such as "wa". This is true in a variety of languages - including French, Hungarian and Finnish. Brain imaging studies also show this.

In languages with consistent spelling, children with dyslexia can use the written word to sharpen up their phonological system. In English, spelling is less helpful.

Structured teaching of how sound and spelling are linked is the best way to help. In English, this is challenging, because spelling-sound consistencies occur at two levels, rhyme and phoneme. One useful scheme that trains children to make the link at both levels is Sound Linkage.

Usha Goswami is Professor of Education and director of the Centre for Neuroscience in Education at the University of Cambridge.

“All over the world, it is recognized as a specific learning difficulty intimately linked to the way we process language.”

But what does this mean? “Dyslexia” – Greek for “poor reading” – means nothing more nor less than it says. If you are a poor reader, this could be due to a variety of problems, the most common of which is inadequate instruction.

This statement implies that reading is a biological imperative rather than a human invention that has to be taught. One is no more likely to find “reading neurons” or “reading modules” in the brain than brain cells devoted to deciphering electronic circuit diagrams, algebraic symbols, or musical notation. Inventions are objects or symbol-sets created by the human brain to help us do what we can’t normally do. If we could memorize verbatim everything that was ever said to us, we might not need a writing system. All codes – like those listed above – make it possible to link a biologically based aptitude to abstract symbols that stand for units of that aptitude, like spoken language or music. Codes have to be taught – people do not come equipped with them at birth.

Some codes are easy to learn: the Italian alphabet code is almost entirely transparent (one sound, one symbol) and thus completely reversible. This means that reading and spelling are instantly connected and reinforce one another. Some codes are hard to learn. The English alphabet code is one of the most opaque writing systems in the world, with multiple spellings for almost every sound and multiple ways to decode the same symbol. The fact that nearly every child in Italy can read, write, and spell after the first term in school, but 30% or more of children in England (or any English-speaking country) can scarcely read or spell anything after 4 or 5 years of school, tells us a lot about our writing system and the way it is taught. It tells us nothing about the human brain – unless one wanted to argue that Italians have entirely different brains to English-speaking people.

For these reasons and others, it is puzzling why Goswami prefers to believe that “dyslexia” is real, a property of the brain, and constitutes a primary disorder, rather than that language is a property of the brain and language problems or other problems (visual tracking, visual and verbal memory) may make it difficult for a child to master the symbol-to-sound correspondences of a writing system. Language is a biological imperative. Reading is not. We know a lot about the anatomical locus of language-related brain systems. But you will never find a brain module (or central locus) for translating sets of symbols created by humans to assist learning and memory for specific tasks.

Goswami continues: “Particularly crucial is an area in the left hemisphere that turns print into sound. It is called the posterior superior temporal cortex.”
Having worked in a brain research lab for 10 years, and taught neuropsychology for 20, I know enough to state unequivocally that this statement makes no sense. First, no area of the brain is designated to turn print into sound! Print is a human invention, and linking print to segments of speech is a complex cognitive task that has to be taught. Areas in the brain process what they are biologically (evolutionarily) primed to do. In complex brains, these areas can “gang up” to produce quite sophisticated learning and behaviour. Symbolic thought, for example, is very late in evolution.

To decode a writing system, various regions of the brain combine to make this possible. The frontal eye-fields (part of the frontal lobes) are entrained to scan print from left to right, both eyes in focus, and other frontal areas are engaged to keep attention on the task. The visual system receives this input directly and processes it (or should) as individual letters or multi-letter units (digraphs), and transmits this elsewhere. The auditory system (superior temporal cortex), which has great facility in discriminating phonemes, links this input to its auditory representation. Cross-modal signal processing – largely a function of the parietal lobes (left and right hemisphere) – helps make this connection, and the left-hemisphere motor systems output subvocal or vocal speech, which they do spontaneously once a word is “decoded.”

A developmental delay or difficulty in any one of these brain systems could create problems learning to read. Thus, children with general language delays, weak auditory or verbal short-term memory, or other perceptual and cognitive deficits could have problems learning to read and spell. But these are language and memory problems, not “reading disorder” problems. These children are few and far between, constituting less than 5% of the population, and this cannot account for the 30-40% poor or non-readers in English-speaking schools.

If we look at the last 20 years of research on speech and language development, we find that there is little correspondence between what speech and hearing scientists, developmental psychologists, linguists, psychophysicists, etc. have learned about language development, and what Goswami identifies in her report as causal links to reading problems. For example, she argues that one reason English children have more difficulty learning to read is because the English “syllable structure is complex,” and that “Children with developmental dyslexia find it hard to tell when syllables start.”

But dyslexia is not a ‘developmental’ disorder (implying a biological basis for reading), nor do children anywhere in the world have difficulty segmenting syllables, a fact that has been repeatedly demonstrated in research over the last 20 years. If Goswami’s thesis were correct, English children would take much longer to learn to talk and understand speech than children in other countries, and we know this isn’t true from scores of cross-cultural studies on how infants hear and process speech. They do not, as she claims, have any more trouble hearing syllable onsets than children learning any other language. If a child couldn’t segment words out of the speech stream, he would never learn to understand or produce speech!

There is a large scientific literature on infants’ auditory processing skills, which dates back to 1971 with Eimas’ startling discovery that infants from 1 to 4 months old exhibit the same categorical perception for consonant-vowel contrasts (‘ba’–‘da’) as adults. In 1998, Aslin and his colleagues revealed 8-month-old infants’ astonishing skill in analyzing the phoneme cues that help wrench words out of the speech stream. And they can do this even when all rhythmic cues are eliminated.

Scores of developmental studies show that phonemic processing is one of the most “buffered” language skills humans possess, and is least susceptible to disruption and malfunction. Chaney showed that by age three, children are highly sensitive to the phoneme level of speech. Nearly all of the 87 three-year-olds in her study could listen to isolated phonemes (/b/ – /a/ – /t/), blend them into a word, and point to a picture representing that word, with nearly 90% scoring well above chance. Of the 22 tasks that she administered, this was the second easiest. And contrary to Goswami’s assertion, reproducing rhyming endings or alliteration proved the most difficult, with the vast majority of the children failing these tasks.

Despite results like these, Goswami persists in holding to her theory that “rhyme” is as important as phonemes in learning to master an alphabetic writing system. She even claims that rhyme is relevant to our spelling system: “spelling-sound consistencies occur at two levels, rhyme and phoneme.” The notion that the rhyme (word endings that sound alike) is relevant to learning an alphabetic writing system (which is entirely based on phonemes) has been largely discredited. When the National Reading Panel in the US published their landmark survey of reading research in 2000, results showed that rhyme-based teaching methods were singularly ineffective, whether used alone or in combination with other methods. By contrast, the programmes which were highly successful all shared these features:

- Teach the 40+ phonemes in English as the basis for the code (and NO OTHER UNITS).
- Teach children to decode and encode in sequence from left to right (segmenting and blending).
- Introduce letters as soon as possible (don't teach phoneme awareness independently of print).
- Include lots of copying and writing to link visual, auditory, and motor systems.
- Avoid letter names.
- Never allow or encourage children to "guess" words on the basis of partial cues or pictures on the page.

Diane McGuinness
Emeritus Professor in Psychology
University of South Florida

Support for the above comments and analysis of the scientific literature is set out in depth in Early Reading Instruction (2004) and Language Development and Learning to Read (2005), both MIT Press. These books review a vast literature on these topics dating from the mid-1960s to the 21st century.

One statement that immediately struck me in Goswami's article was this:
'In languages with consistent spelling, children with dyslexia can use the written word to sharpen up their phonological system. In English, spelling is less helpful'.

If 'consistent spelling' helps to 'sharpen' children's awareness of phonology, why not just start English-speaking children off on words with only the simplest mappings between spellings and sounds, as in synthetic phonics? This surely puts them on a level playing field, at least in the early stages (and, I would argue, even in the longer term as far as phonemic awareness is concerned), with children learning to read in other languages.

This study is part of the continuing debate about the theory that beginning readers can work out a pronunciation for an unfamiliar printed word by seeing that its spelling, or orthography, is similar to the spelling of a familiar word. The study shows that children are not really seeing orthographic similarities but relying on ‘phonological priming’ – i.e. it is hearing a ‘clue word’ pronounced by an adult, rather than seeing it printed, which causes them to produce a similar-sounding word. Nation et al. ran some analogy experiments with children whose average age was 6.0 years. They found that ‘an equivalent number of “analogy” responses were made regardless of whether the clue word was seen or just heard’. These findings are yet another challenge to the view that young children make analogies in a way that is useful for reading: the analogy strategy is not useful as a way of reading unfamiliar words if it requires that an adult is on hand to pronounce the clue word for the child. Nation et al. conclude that ‘the extent to which beginning readers make orthographic analogies is overestimated and as a consequence, theories that emphasise the importance of orthographic analogy as a mechanism for driving the development of early reading skills need to be questioned’.
Nation, K., Allen, R., & Hulme, C. (2001). The limitations of orthographic analogy in early reading development: Performance on the clue-word task depends on phonological priming and elementary decoding skill, not the use of orthographic analogy. Journal of Experimental Child Psychology, 80, 75-94.

Bonnie Macmillan carried out a meticulous examination of the research evidence behind the influential claims that rhyme awareness promotes reading ability. Much of the article is very technical, but the first three and last three pages are quite accessible even to non-academics. A major point made by Macmillan is that many of the research studies, while claiming to have found a clear causal link between rhyming ability and reading ability, are equally open to the interpretation that the really crucial factor is alphabet knowledge – the researchers have often simply overlooked this possibility. Another important point is that ‘The [rime analogy] strategy cannot, in fact, be considered a beginning reading strategy because some letter-sound decoding skill and a considerable sight vocabulary are needed first, in order to use it’. In the closing section of the article, Macmillan gives a very clear and simple account of what is necessary in order to read a CVC word: ‘letter-shape recognition, the left-to-right, letter-to-sound translation of each letter in turn, and the blending together of the three letter-sounds to pronounce the word’. This study raises some very serious questions about the thinking behind much of the National Literacy Strategy.

There is debate over whether children’s early rhyme awareness has important implications for beginning reading instruction. The apparent finding that pre-readers are able to perform rhyme tasks much more readily than phoneme tasks has led some to propose that teaching children to read by drawing attention to rime units within words is ‘a route into phonemes’ (Goswami, 1999a, p. 233). Rhyme and analogy have been adopted as an integral part of the National Literacy Strategy (DfEE, 1998), a move which appears to have been influenced by three major research claims: 1) rhyme awareness is related to reading ability, 2) rhyme awareness affects reading achievement, and 3) rhyme awareness leads to the development of phoneme awareness. A critical examination of the experimental research evidence from a methodological viewpoint, however, shows that not one of the three claims is sufficiently supported. Instructional implications are discussed.
Macmillan, B.M. (2002). Rhyme and reading: A critical review of the research methodology. Journal of Research in Reading, 25(1), 4-42.

This is a bit behind the times but we thought we’d like to have our two penn’th on Goswami’s piece, which is, it has to be said, very loosely written indeed. There are so many points one could take up and challenge. For example, she says that ‘… the way carers speak to babies is important. This “motherese” … uses higher pitch and increased syllable length for emphasis.’ Of course, she’s quite right that in many societies people do speak to young children using child-directed speech (CDS). However, CDS is by no means universal. When studying the Quiché people of Central America, Clifton Pye found that they did not modify their speech in the way that many western caregivers do. Such language practices as CDS may indeed contribute to a child’s speech development but without being essential to it. We need a lot more studies and more evidence before taking on trust the unqualified assertions made by Goswami and others.

However, we wanted to add one or two points to what Diane McGuinness has already cogently argued. One of the fundamental misunderstandings entertained by Goswami is betrayed in her assertion that ‘a’ ‘makes’ a different sound in ‘make’, ‘man’, ‘mark’ and ‘mall’. As Diane has pointed out repeatedly, letters don’t make sounds: ‘Letter symbols in alphabetic writing systems represent speech sounds. Speech sounds are the basis for the code, and letters are the code. Letters do NOT “have” or “make” sounds. People have sounds’ (Early Reading Instruction, p.13).
After explaining how the structure of the English sound-spelling system works, Diane points out that there is a difference ‘in outcome between teaching the code from letter to sound (visual strategy) versus from sound to letter (phoneme strategy)’ (Early Reading Instruction, p.65). The outcome of the former strategy is usually a ‘haphazard organization … notable for the total disregard for the sounds of the language’ (Ibid., p.67). If you teach from sound to print, you always keep the logic of the code straight. When the trajectory is reversed, as soon as the many-to-one and the one-to-many aspects of the code are encountered, the whole system collapses into chaos.

Goswami’s approach is patently graphemic and, with its insistence on the importance of rhyme, has been shown by Diane not to be well founded. However, it has exercised an appeal to teachers all over the English-speaking world, many of whom are seduced by the seeming plausibility of onset and rime. What we need, as Diane points out later in her book, are ‘better descriptors for the different types of phonics’ (p.129). The present classificatory system ‘is unsatisfactory because it does not identify the critical difference in logic between programs that teach the code backward from print to sound, and those that teach it forward from sound to print (linguistic phonics)’ (p.129). Not only does Goswami not understand the trajectory, she doesn’t understand the way the code works. And neither, in our opinion, do the government experts who have produced Letters and Sounds, its title alone making clear its orientation. Again, as Diane has repeatedly said, we need real evidence rather than personal opinions.


The logic of onset-rime thinking has always struck me as faulty. However true it may be that it’s easier for children to identify onsets and rimes in spoken words than to identify all the individual phonemes, any practical act of reading starts with the print on the page, not with the spoken word. Children who are good at splitting spoken words into onset and rime would have to learn the corresponding print units (e.g. ’s – ip’, ‘sl – ip’, ‘st – op’, ‘sp – ot’, ‘str – ip’ etc.) in order for their onset-rime skill to help them with reading, and then the drawbacks become clear – one is that they have to learn some units of more than one letter before becoming confident just with single letters, and another is that there are many more onset and rime units than phonemic units.

Jonathan Solity put it well in a paper for a 1999 Ofsted seminar:

He wrote: ‘While early phonological awareness may be at the level of onset and rime, low-level correspondences (i.e. grapheme–phoneme) are effectively “given” by the orthography.’

An article published in 2002 suggested that Goswami had finally understood that it’s easier for children to start with single letters and the sounds they represent, but I don’t know whether she has stuck with this.

The multimillion pound investigation, which is jointly funded by the Wellcome Trust and Education Endowment Foundation (EEF), comprises six studies, one of which is the GraphoGame Rime project, led by Professor Usha Goswami, who is Director of the Centre for Neuroscience in Education, and Professor of Cognitive Developmental Neuroscience in the Department.
The GraphoGame Rime project will test how the GraphoGame Rime computer game can affect how children learn to read. The GraphoGame Rime game was created with the aim of helping children learn to read by developing their phonological awareness through rhyme analogy.

The English version of GraphoGame Rime was developed by the lead grantee, the educational neuroscientist Usha Goswami, building on research into “rhyme analogy”. This is the notion that pupils learning to read in English learn not just through phonemes (“a”, “t”) but also rimes (“at”). Pupils sit at a computer, laptop or tablet with headphones on, and play the game for around 10 minutes a day. Instruction is focused on helping children to match auditory patterns with groups of letters (e.g. rimes) displayed on the screen. The game first focuses on rimes that are most common in English. But each child has a personal log-in, and the game offers increasingly challenging levels as they improve their skills.

Goswami concerned herself largely with normal reading development at first, but my impression is that she then started concerning herself more with 'dyslexia'. I suppose it would be possible for her to think that it's normal for beginners to start with single letters and the sounds they represent while also thinking that children who start showing signs of 'dyslexia' may benefit from onset-rime teaching. I haven't checked this in detail, however.

“The way language is heard by children with dyslexia is subtly different to the way the rest of us hear language.”

It is the process by which the brain turns sounds into words that causes the problem. “When a speech signal comes into the brain, the sound is a pressure wave, and the energy in that pressure wave fluctuates,” Goswami says.

“Those fluctuations in energy carry speech rhythms – every time [you] stress a syllable, there will be more intensity coming into the brain; a word like 'baby’ has a strong first syllable and a weaker second syllable.”

And what happens is that those “modulations and changes in energy” fail to sync up with the neurons that send electrical signals in the brain. “The brain rhythms line themselves up with the speech rhythms and code the signal, and that process doesn’t work properly in dyslexia.”

“In nursery rhymes you have deliberate patterning, and that patterning which we find aesthetically appealing is carried by energy fluctuations.”

From this it follows that one way of helping potentially dyslexic children is to pump useful language patterns into the early years.

“Having a rich early repertoire of poetry, singing and musical remediation that’s always linked to language,” Prof Goswami says, is helpful, because “you’re matching syllable beat patterns to language before they start learning to read, to get their language systems in the same place as all the other children who are coming to school without this handicap.”

It is not, as one may suspect, about doing “more phonic drills – it’s actually in the stressed syllable patterning level which is reflected in speech rhythm”.

But we know that children who show signs of difficulty with reading when they are four to six years old can learn to read by eight if they are given extra help with phonics early and consistently by a well-trained adult using an effective programme that is the same as that used by the rest of the class.

Children who are slow-to-start, for a variety of possible reasons, can be identified early and are responsive to catch-up intervention in small groups, also using synthetic phonics teaching. These early strugglers were shown to close the gap and to keep up with both reading and spelling.

A child with extreme difficulties may take longer, but intensive help with synthetic phonics, combined with other interventions like speech therapy, works.

Not only is AF’s reading age now 9 months above his chronological age but for each category of the nonword diagnostic reading test, he scored 100%.

There was a Jolly Phonics study in Australia that showed that children who appeared to have symptoms of dyslexia before they began learning to read no longer had those symptoms after learning with Jolly Phonics. I cannot find the study or remember the exact details, so if anyone can, please give me a link.

I am never clear how Goswami selects her subjects for her studies. How does she define and diagnose dyslexia? How are these youngsters any different from other youngsters with persistent difficulty in 'acquiring' literacy skills?