Here's a very interesting blog post by Prof. Dorothy Bishop (hat tip to Liz Ditz) on the subject of using medical-sounding labels for 'behaviourally defined conditions that can seldom be pinned down to a single cause', such as dyslexia, dyspraxia and ADHD.

So, what does the science say? Are these valid disorders? I shall argue that these medical-sounding labels are in many respects misleading, but they nevertheless have served a purpose because they get developmental difficulties taken seriously. I’ll then discuss alternatives to medical labels and end with suggestions for a way forward.

This looks well worth a read. I've only had time to read the first part but this sentence was particularly arresting:

in 1976, Bill Yule concluded: “The era of applying the label 'dyslexic' is rapidly drawing to a close. The label has served its function in drawing attention to children who have great difficulty in mastering the arts of reading, writing and spelling, but its continued use invokes emotions which often prevent rational discussion and scientific investigation.”

I’ve been communicating with Hugo Kerr (a retired vet) about Dorothy Bishop’s blog post, ’What’s in a name?’ (http://deevybee.blogspot.com/2010/12/whats-in-name.html). The following paragraph caused me particular problems, and I hoped that Hugo, with his medical background, would be able to put into words why it made me feel uneasy. He came up trumps! Thank you, Hugo.

Prof. Dorothy Bishop wrote:

It follows from what I’ve said above, that the boundary between disability and no disability is bound to be fuzzy: most problems fall on a scale of severity, and where you put the cutoff is arbitrary. But in this regard, neurodevelopmental disability is no different from many medical conditions. For instance, if we take a condition such as high blood pressure: there are some people whose blood pressure is so high that it is causing them major symptoms, and everyone would agree they have a disease. But other people may have elevated blood pressure and doctors will be concerned that this is putting health at risk, but where you actually draw the line and decide that treatment is needed is a difficult judgement, and may depend on presence of other risk factors. It’s common to define conditions such as dyslexia or SLI in terms of statistical cutoffs: the child is identified as having the condition if a score on a reading or language test is in the bottom 16% for their age. This is essentially arbitrary, but it is at least an objective and measurable criterion. However, test scores are just one component of diagnosis: a key factor is whether or not the individual is having difficulty in coping at home, work or school.
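As an aside, the 'bottom 16%' figure is not accidental: on a normal distribution it corresponds almost exactly to one standard deviation below the mean. A quick sketch shows this (the mean-100, SD-15 standardised-score scale is my illustrative assumption, not something taken from Bishop's post):

```python
from statistics import NormalDist

# Hypothetical reading-test scale: standardised scores, mean 100, SD 15
# (a common convention; the figures are illustrative only).
test = NormalDist(mu=100, sigma=15)

# A "bottom 16%" cutoff falls almost exactly one standard deviation
# below the mean on a normal distribution.
cutoff = test.inv_cdf(0.16)
print(round(cutoff, 1))  # ~85.1, i.e. roughly mean - 1 SD

# The arbitrariness is easy to see: nudge the percentile and the number
# of children "diagnosed" changes, with no change in any actual child.
for pct in (0.10, 0.16, 0.20):
    print(pct, round(test.inv_cdf(pct), 1))
```

Shifting the percentile moves the cutoff, and therefore who 'has the condition', without anything changing in any child - which is the arbitrariness Bishop herself concedes.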

Here is what Hugo had to say:

The problem you have with the piece you sent me is that much of it is perfectly good, even common, sense. The issue is that she applies said sense across from known concepts grounded in observable and concrete realities to improbable ones (like dyslexia, or even perhaps SLI) where they tend to steadily generate consequences which get less and less credible the more work is done with, or upon, them. The opening remarks people make about dyslexia are usually apparently banal enough, but because they are applied to an unlikelihood underpinned by beliefs rather than scientific understandings they tend to grow bizarre when developed.

Many medical conditions (like not spelling particularly well!!!!) come in categories arranged along a spectrum, often alongside many others with overlapping symptomatology, as you already know of course. The normal range (i.e. no clinical indication at all of a need to intervene) can be wide. At the potentially pathological end of the normal range we begin to think in terms of abnormality, but of course there is also a range of abnormality (from let’s wait and see to let’s do something soon, or even let’s do something right now). It’s a smooth spectrum of course in most cases and conditions – and she insists dyslexia is a neurological condition like any other, so it will have to obey this iron law of clinical subjectivity. This frequent, and very often inherent, clinical subjectivity is why diagnosis is an activity which demands such expertise but also such humility. Making a diagnosis is very often, as she admits, a “difficult judgement” and to exactly this extent it demands an expert – one whose expertise lies precisely in the sensitive appreciation of signs and the appropriate weighing and prioritising of these - and the “other factors” she invokes.

It’s interesting that she mentions SLI in this regard. I read a bit about SLI at one time. It was fascinating to observe. Researchers invariably, and subjectively, decided that x% of low achievers would be designated SLI and did so. They awarded a diagnosis on the strength of this statistical placement. A few longitudinal studies, however, followed the same schools over many years and measured language ability at intervals, using the same tools. They continued to ‘diagnose’ the bottom x% as SLI even though very many actual children moved in and out of the category! Many children moved well out of the x%, while many others fell quite a distance into it. This did not shake the researchers’ confidence in their diagnoses. It all reminded me strongly of another condition…
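Hugo's observation - that a percentile-defined 'diagnosis' churns as ordinary measurement noise moves children back and forth across the cutoff - is easy to demonstrate with a toy simulation (all the numbers here are illustrative assumptions of mine, not taken from any study he read):

```python
import random

random.seed(1)

# Toy model (illustrative numbers, not from any study): each child has a
# stable "true" language ability; every test sitting adds measurement
# noise. We then 'diagnose' the bottom 16% at each sitting, as a
# percentile definition of SLI does.
N = 1000
true_ability = [random.gauss(100, 15) for _ in range(N)]

def sit_test(noise_sd=7):
    """One test sitting: true ability plus independent measurement noise."""
    return [a + random.gauss(0, noise_sd) for a in true_ability]

def bottom_16_percent(scores):
    """Indices of the children placed in the bottom 16% on this sitting."""
    cutoff = sorted(scores)[int(0.16 * len(scores))]
    return {i for i, s in enumerate(scores) if s <= cutoff}

wave1 = bottom_16_percent(sit_test())
wave2 = bottom_16_percent(sit_test())

print(f"'Diagnosed' at wave 1: {len(wave1)}")
print(f"No longer 'diagnosed' at wave 2: {len(wave1 - wave2)}")
print(f"Newly 'diagnosed' at wave 2: {len(wave2 - wave1)}")
```

Even with a fairly reliable test, a sizeable share of the wave-1 'cases' are no longer 'cases' at wave 2, and vice versa - exactly the instability Hugo describes, even though no child's underlying ability has changed at all.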

The section you sent is very typical of today’s writing. Having abandoned the old IQ/achievement diagnostic standard (well, in theory; older research which used it is still disgracefully held to be solid), many people now fall back on the bottom so-and-so-much per cent as their criterion, floppy and scientifically disreputable though they would probably, if pushed, themselves admit this to be. The genetics-of-dyslexia wallahs in Colorado do this, for example. Olson himself even writes that this makes statistical life so much more rewarding, as using the bell curve of normal distribution permits the fanciest statistics to be deployed – and then she does just that. It’s completely shameless!

A further means of ‘diagnosis’ of dyslexia and selection of sample ‘dyslexics’ is simply to throw in the sponge, deploy the ‘bell curve’ of reading ability and define those in, say, the lowest 10% as ‘dyslexic’ (e.g. Olson 2006, Paracchini et al 2007). As you will know, the normal distribution curve, or bell curve, is the curve which can be plotted for any attribute which is normally distributed across a population (height is the usual example). The curve looks like a bell, hence the name. Reading ability is normally distributed. If you plot reading ability across the population you get the familiar bell-shaped curve. It is easy to select, say, the bottom 10% of such a population from their results on reading tests and consequent place on the curve. As Paracchini et al write, ‘RD [reading disability] represents the lower tail of a normal distribution of reading ability found in the general population’ (ibid. 2007 p. 59). Kate Nation (2006 p. 2) reaches the same over-extended classification when she writes about ‘… individuals who are at the low end of distribution – individuals who are reading disabled’. Olson further claims that ‘the positive consequence of the bell curve in reading research is that it allows us to apply powerful statistical methods in our genetic analysis of dyslexia and individual differences that depend on normal distributions …’ (Olson 2006 p. 3).

It may be useful, in certain limited circumstances and in certain rather broad but limited ways, to identify and analyse those in the lowest 10% of reading ability. However, it is not legitimate to claim that simply because they all find themselves in this bottom 10% they must all share any particular characteristic, let alone all suffer from the same syndrome, without further evidence that this is so. We have no evidence as to why these poor readers are in this group. We can guess, though, that there will be many and very various reasons for their poor reading. All we can properly say from contemplation of the bell curve is that they all seem to be poor readers. It is improper to claim more than this on this evidence – especially to claim that membership of the poor readers group per se indicates possession of a neurological deficit – indicates that all these people suffer from dyslexia. We cannot say this with any certainty whatsoever – the reasons for inclusion in this group will be numerous and various. These poor readers do not constitute a group which is reliably homogeneous. For this reason sophisticated statistical and/or genetic analyses and conclusions in respect of ‘dyslexia’ are not appropriate, however tempting the wonderful mathematical potentials of the bell curve and the statistical marvels of normal distribution.

A frivolous example to illustrate this general point: Suppose we set up a driving test whereby a thousand randomly selected people are asked to drive a car across rural Wales between two points 50 miles apart. We measure their performance. (Time taken, number of bumps recorded, frequency of road rage incidents etc.) The results will probably approximate to a bell curve of normal distribution of whatever we have decided to define as ‘driving ability’. Would this statistical fact mean we can consider that the worst 10% of drivers all share the same characteristics, though? Are they all poor drivers for the same reason? Of course not. Some may have been drunk, or high, others may have been partially sighted, others may have been teenage males charged with testosterone, others again may have been elderly and very cautious, some may have driven for years while some may only just have passed their test, some will only just have got off the plane from Australia, some will have been rendered hopelessly nervous by the knowledge that their driving was being tested, for some the route will be familiar while for others it will be completely novel. And on and on. There will be a plethora of reasons for their poor performance, and regarding these drivers as a homogeneous group with a single ‘syndrome’ (dysautomobilia?) will not be valid. Nor will it be particularly useful. It will not reliably reveal much of interest, either to science or to the department of transport. Our findings will not enable us to apply sophisticated analyses to make generally useful policy decisions, in fact, nor to reach any particularly valid conclusions about the drivers themselves.

I hope this answers the main point: for example the “essentially arbitrary but at least objective and measurable criterion” mentioned is a diagnostic chimera. It’s not a diagnostic criterion at all. It may (or may not, in individual cases over time & see my remarks about SLI above) be highly measurable and objective, but that ‘fact’ does not in and of itself give it any further meaning whatsoever. It is not ‘diagnostic’ of anything beyond the potential existence of a problem, or of an apparently loosely quantifiable difference. It does not, is actually perfectly unable to, tell us anything interesting about either causation or remediation. In particular, of course, it does not tell us anything neurological at all. It is possible that there is something neurological behind the finding and common to the group, but it is also possible that there absolutely isn’t.

She writes that “it is common to define conditions such as dyslexia or SLI in terms of cutoff points”. This is to take two steps where only one is justified. Take liver disease, for example. Patients will present as ‘unwell’. The symptoms will be vague and a little various. Some of the patients – but only some – will have liver disease. The unwellness they present with is not, itself, usually particularly diagnostic. Although perhaps we diagnose the liver disease (loosely, mainly) in terms of cutoff points – a blood parameter or two, for example – we do know that liver disease exists in the real world, and we have carefully, and objectively, measured the degree of clinical disease against blood parameters in many, many proven cases. We do not ‘define’ liver disease as those blood parameters. We define liver disease as a particular pathological process and degree affecting that organ which we have repeatedly and concretely demonstrated. Our diagnostic tests selectively indicate cases from the population of ‘unwell’ people who make up the tail end of today’s bell curve of wellness, we can say, on the basis of profoundly grounded understanding of pathology and function.

We have no such background with ‘dyslexia’ – we have no objectively demonstrated syndrome, no satisfactorily demonstrated pathology (at least any pathology accepted outside the dyslexia field); we have not genuinely calibrated any objectively discovered data against its alleged signs. We have no consensus on said signs. It’s a specious taking of properly grounded science from one context into another way too uncertain for it to be validly applied. We are, in effect, diagnosing every case presenting as ‘unwell’ as liver disease because we cannot tell whether they are or they aren’t. Our would-be diagnostic tests do not reliably distinguish those which are from those which aren’t because we are so profoundly unable to describe the pathology underlying our apparent syndrome. We do not know enough about pathology or even function to devise a better test, we do not, in fact, agree on the pathology or even that there is any. As we have no consistent, known pathology, so we have no diagnostic tests. Membership of the group with weak literacy is not a diagnostic test. ‘Unwell’ is a description, not a diagnosis.

I think her final, rather despairing catch-all remark (“test scores are just one component of diagnosis: a key factor is whether or not the individual is having difficulty in coping at home, work or school”) is either an innocent giveaway indicating that the writer subconsciously knows perfectly well that the statistics are being grossly over-interpreted and misused, or a shabby attempt to imply to the reader that a plethora of very perceptive and highly diagnostic data are being collected and examined off-stage, in the wings, when you and I know there are not.

I think Hugo's 'driving test' illustration makes the point very well ('Suppose we set up a driving test whereby a thousand randomly selected people are asked to drive a car across rural Wales between two points 50 miles apart...'). As he says, there could be several different reasons for poor performance.

Throughout the time I was teaching 6th-formers and had special responsibility for those with literacy problems, I found it impossible to distinguish between 'dyslexics' and 'garden variety' poor readers and spellers (Stanovich's terms). Although I strongly suspected that virtually all the problems I saw were largely the result of poor early teaching, I could never totally rule out the possibility that some students had a particular innate 'glitch' (Sally Shaywitz's word) that had made it unusually difficult for them to learn to read and spell adequately - but I never saw this 'glitch' identified in a way that I found convincing.

In working voluntarily with hundreds of junior-school children (mainly but not entirely Year 3) since I retired 10 years ago, I have continued to feel that poor initial teaching is the main problem, though where children are seriously struggling, low general ability also sometimes seems a factor. The school has some indication of this because it has participated (though not every year) in the Durham Performance Indicators in Primary Schools scheme, which gives standardised scores not only for reading and maths but also for general ability. For example, we have a pair of twins at present who obviously share the same background in terms of infant-school and home environment, but don't share exactly the same genes, as they are not identical: one is above average in general ability and average in reading, while the other is a very poor reader but is also way below average in general ability, which could well explain why his reading is so much weaker than his twin's. It seems to me that intelligence possibly helps to explain why some children learn to read and spell very well in spite of poor teaching whereas others don't - but does that take us back to the discrepancy model?

I am sure it’s not only teaching or intelligence that are involved. For some reason, some children find it much more difficult to learn to read than others. This assertion is based on my experience of teaching children who have not learned to read well.

All the children I have taught have missed out on really good synthetic phonics teaching at school. All of them have been taught originally by a mixture of phonics, memorising whole words and guessing strategies. Some have made rapid progress from the first lesson with me onwards; synthetic phonics combined with stopping other ‘strategies’ was all they needed. Most have struggled, but made good progress within a few weeks.

However, two boys in particular have needed much, much longer than the others. Both have understood the alphabetic code and begun to read accurately quickly enough, but have taken an extraordinary length of time to acquire fluency. Both have repeatedly come back to words they had previously read accurately and begun working them out slowly all over again, using the phonics I taught them. With both, the number of words they can read automatically has increased very gradually, with ongoing synthetic phonics teaching and lots of activities to improve fluency (as suggested by RRF supporters). Both can now read accurately but very slowly. Both come from supportive professional homes and have siblings with no reading problems.

One of these boys was born with ‘global learning difficulties’ and is now at a ‘special’ secondary school (which he loves). He has not been diagnosed as dyslexic and most of the others in his school cannot read nearly as well as he can, if at all. The other is articulate and his parents have been told he is ‘very bright’, which is the impression I have too; he has been 'diagnosed' as dyslexic. My guess is that there is something neurological involved, giving them both exceptionally poor short term memories, at least for written words.

Having said that, I think describing anyone as dyslexic is unhelpful, unless it is used simply to mean ‘has more difficulty learning to read than most people’. The quote from Hugo explains why I think that.

Elizabeth wrote: However, two boys in particular have needed much, much longer than the others. Both have understood the alphabetic code and begun to read accurately quickly enough, but have taken an extraordinary length of time to acquire fluency. Both have repeatedly come back to words they had previously read accurately and begun working them out slowly all over again, using the phonics I taught them.

Yes, I've seen this sort of thing too, even in children who have apparently had a good phonics start at infant school. On another thread (I can't remember which) I've mentioned a pair of identical twin boys who attended a Jolly Phonics infant school. When they arrived in Year 3 at 'my' junior school, neither was at all fluent, but both were pretty competent at sounding out and blending, and they resorted to this automatically when they encountered unfamiliar words. Their fluency improved gradually during Y3, but was still way below what I would have expected. They are now in Y5 and I haven't heard them read for a while, but when I last heard them (in the summer of Y4, I think), I found that although both had improved further in fluency, one had improved a lot more than the other.

I also remember another case from a long time ago of a girl who had been very well taught from the start but took ages to become fluent. As it happened, her class teacher was also her aunt and worked a lot with her outside school - no doubt this helped, but it still didn't enable her to become fluent at the same rate as the rest of the class.

I have no formal assessment data, but both children come from supportive articulate professional families. The boy with global learning difficulties came to me aged ten with the sort of language I would expect of a younger child from a good home, i.e. good vocabulary, but immature and unable to understand inference. His older sister got a place at a top university. The other boy's oral language was excellent when he began with me, aged six and a half.

I think it is very interesting that Elizabeth & Jenny both describe their pupils who had the greatest difficulty in learning to read as having problems with 'fluency'. I have two pupils at the moment who, from the way Elizabeth & Jenny describe their pupils, have very similar problems.

I do suspect that a more common cause of reading problems is not the 'phonological deficits' beloved of the dyslexia industry but problems with Rapid Automatic Naming (RAN). This is mentioned in Prof. Wolf's book, Proust and the Squid.

I think that there is no doubt that dyslexic children have a problem with fluency. Last year I did some work with a fourteen year old dyslexic boy whose reading accuracy on a standardized reading test was a year above his chronological age but whose reading speed on a piece of text which had a reading age of thirteen, was 85 words per minute. Reading at this rate makes comprehension extremely difficult and is one of the reasons that dyslexic children need more time. A dyslexic child will complete most tasks successfully if given that little bit of extra time. You can give the poor slow learner all the time in the world but it probably won’t make any difference to the outcome.

Here’s what Sally Shaywitz had to say about the time factor in a Children Of the Code interview with David Boulton.

Slow Readers Need More Time:
Dr. Sally Shaywitz: I don't know if you want to get into it or not, because it's probably a little different than what your primary focus is on, but a very important thing is children grow up to be adolescents and young adults, and whatever they want to aspire to be, a teacher, a doctor, a lawyer, a writer, they often have to take a series of tests. One of the things we know is that because children who struggle to read don't develop that left word forming area that's responsible for being able to read more fluently, they read very slowly and require extra time.
That becomes a really important thing because so many children who've worked so hard all their lives, they come to the point where they need to take an SAT or an LSAT (Law School Admission Test) or a GRE, and they require extra time if they're going to be able to show what they know. There's been a very strong - it's not a movement, I don't know the right word - but these children are more and more being denied the extra time that they require.

Maizie and Jim work with secondary-school children, Elizabeth works (I think) only with children who are beyond Reception age, and my own voluntary work has been entirely with Key Stage 2 children until a year ago, when I started working with Reception children whom I have now followed into Year 1.

What I really want to know is this: if all children are given really good teaching in the first three years of school, will there always still be a few like the identical twin boys I mentioned who were still non-fluent in their fourth year despite having good habits of sounding out and blending? Or is it possible to teach in such a way at Key Stage 1 that all children are reasonably fluent by the beginning of Year 3? I'm hoping I'll develop more of a feel for this as I work with more Reception and Y1 children, but this will be in only one school and I'd like to know about wider evidence. Is there anyone out there who has evidence specifically on the development of fluency when whole cohorts of children are taught well throughout KS1?

Most of the children I work with are between six and nine years old. I have one five year old now and have worked with ten to twelve year olds.

I would be interested in the development of fluency when whole cohorts of children are taught well throughout KS1 too. The problem is identifying schools where teaching is good in Reception (Foundation 2) and KS1. Children joining the school after Reception would have to be excluded from the study.

chew8 wrote: Is there anyone out there who has evidence specifically on the development of fluency when whole cohorts of children are taught well throughout KS1?

Isn't this related to what Diane McGuinness says about the 'dyslexics' studied by Wimmer - that the children were taught good phonics and the 'dyslexics' were the slower (less fluent) readers? I appreciate that these were not English-speaking children.

Yes, Maizie, the Wimmer study is probably relevant (I'll try to re-read it), though I wonder whether slowness and lack of fluency are exactly the same thing. The lack of fluency in the twin boys I mentioned showed itself mainly in the fact that they were still having to do a lot of overt sounding out in Year 3 - less in Y4, but still more than I'd have expected. I also work with Year 3 children who read slowly but manage most words without overt sounding out.

I think that fluency probably does eventually come if children are well taught as beginners (the twins' fluency had certainly improved last time I heard them read), but it's obviously better if it comes sooner rather than later as children probably won't enjoy reading until they are reasonably fluent and therefore won't get Stanovich's 'Matthew Effect' type of practice. Can all children be getting this kind of practice by Y3 if they have been well taught before that or are there always a few exceptions? I know of one school where there are no exceptions, but it's a private school.