Wednesday, July 18, 2007

So are autistics really going to take over the known world? We know there's been a staggering surge in irresponsible autism-related journalism. It's even hit the BMJ.

There's overwhelming evidence that 1 in 58 is not a genuine autism prevalence figure, but the product of shoddy and dishonest reporting (see Ben Goldacre here, looks like he's going to be in the BMJ on Friday, and the Times here). Nothing like irresponsible reporting to waste heaps of time and effort that could otherwise be spent in applying accurate information to help autistic people.

But what about that surge in autism? Is there one? Doubting it is not a popular position in our era of autism advocacy--but when in doubt, consult the peer-reviewed data.

Here are 7 recently reported autism prevalence figures for children of various ages in the US and UK. All figures are for all autistic spectrum diagnoses combined. I've rounded them off to the nearest 5--autism prevalence figures don't come with pin-point precision. Here goes:

1 in 175
1 in 170
1 in 160
1 in 150
[1 in 150]
[1 in 150]
1 in 85
1 in 58

Wow, that looks like a big autism surge, even without that last figure. And those low/no standards for autistics so successfully pushed by autism advocates would demand that the 1 in 58 be tacked on at the end (there it is, in red, from shame). Now it looks like autism is surging even more.

I've also put two of the figures in square brackets. That's because they're not quite like the other non-red figures. I'll get back to this.

Maybe we should look at when these figures were published in peer-reviewed journals, just in case it's informative.

1 in 175 (2000)*
1 in 160 (2001)**
1 in 150 (2001)
1 in 170 (2005)**
1 in 85 (2006)*
[1 in 150 (2007)]***
[1 in 150 (2007)]***
1 in 58 (not published)

That looks slightly less persuasive, but we could probably still argue that autism is surging, all the more so if we adopt autism advocacy standards for autistics and include the red-faced 1 in 58.

Or we could subject these studies to a bit of scrutiny. With two (non-red) exceptions, these studies meet two criteria: (1) they used DSM-IV or ICD-10 criteria for autism and the other autistic spectrum diagnoses; and (2) at least some of the counted children were directly assessed by the researchers using one or the other or both of the current standardized, quantified gold-standard autism diagnostic instruments.

The two exceptions are in square brackets. They are US prevalence studies that don't meet my second criterion: they don't involve direct assessment of autistic children, instead drawing on less dependable information from educational and/or medical records. But these two are very popular studies. I've included them (in brackets) because their 1 in 150 has, since these studies were widely publicized early this year, often been reported as the prevalence of autism.

I've also paired up most of the studies. The ones with one asterisk (*) belong with each other. This pair of studies was done in the exact same geographic area. The ones with two asterisks (**) also belong with each other, and also were done in the same geographic area. And the ones with three asterisks (***) belong with each other too, and have some overlap in geographic area.

Regardless of geographic area, and keeping in mind the two criteria for studies I provided above, the pairs are paired in two different ways: studies using the same methodology with different birth year cohorts; and studies using different methodology with the same birth year cohort.

If autism is indeed surging, studies using the same methodology with different birth year cohorts should show autism prevalence increasing over time. And studies of the same birth year cohort using the same diagnostic criteria (DSM-IV or ICD-10) should show the same autism prevalence. If not, then the extent to which differences in methodology contribute to reported differences in prevalence would have to be contemplated--and this might inconvenience the "surging autism" contingent.

So if there really is an autism surge, we should find that the (*) pair represents two studies with the same methodology but different birth year cohorts, where the higher figure (1 in 85) is found in a later birth year cohort than the much lower figure (1 in 175). And we should find that the other two pairings, where the figures are the same (the *** pair, with 1 in 150) or nearly the same (the ** pair, with 1 in 160 and 1 in 170; this is not a significant difference), are studies of the same birth year cohorts done with whatever methodology, provided the same diagnostic criteria are used.

Well, it doesn't quite work out that way. In fact, it's the opposite. That (*) pair represents the same birth year cohort, different methodology. And those other pairs (*** and **) represent different birth year cohorts, same methodology.
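The contrast between what the surge hypothesis predicts and what the pairs actually show can be sketched in a few lines of Python. The cohort and methodology labels are my shorthand encoding of the descriptions above, not data from the studies themselves:

```python
# Each pair of studies, labeled by whether the two studies share a birth
# year cohort and whether they share a methodology. Shorthand encoding
# for illustration; see the text for the actual studies.
observed = {
    "*":   {"figures": ("1 in 175", "1 in 85"),  "same_cohort": True,  "same_methodology": False},
    "**":  {"figures": ("1 in 160", "1 in 170"), "same_cohort": False, "same_methodology": True},
    "***": {"figures": ("1 in 150", "1 in 150"), "same_cohort": False, "same_methodology": True},
}

# The "surging autism" hypothesis predicts the opposite pattern: very
# different figures should come from different cohorts studied with the
# same methods, and matching figures should come from the same cohort.
for marker, pair in observed.items():
    print(marker, pair["figures"],
          "same cohort:", pair["same_cohort"],
          "same methodology:", pair["same_methodology"])
```

Note that the pair with the doubled figure (*) is the one where the cohort stayed fixed and only the methodology changed.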

That's keeping in mind that apart from the square-bracketed (***) pair, and apart from the beet red 1 in 58, all the other studies meet both my criteria--they use the same current diagnostic criteria, and they involve at least some direct assessment of children with one or the other or both of the current gold-standard diagnostic instruments.

How about ordering all the studies according to the years in which the children being studied were actually born? If that autism surge is autism reality, then we should see that surge, uh, surging right along as birth year cohorts become more recent. It's about time I named the studies, and I've kept the asterisks, just in case anyone's keeping track. Here goes again:

That didn't work out too well either. Now we don't have a surge at all, just a bunch of findings--spanning a decade of birth years--that are very close to each other, and (leaving out the crimson 1 in 58) one figure that looks like an outlier. But we can't attribute the 1 in 85 (it is actually 116.1/10,000) in Baird et al. (2006), a figure often rounded off to 1 in 100, to a surge in autism, because much lower prevalence figures have been found in several later birth year cohorts. Never mind that a much lower prevalence figure was found in the same cohort in Baird et al. (2000).
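The arithmetic behind figures like this is simple enough to sketch: a rate per 10,000 is converted to a "1 in N" figure, rounded off to the nearest 5 as described earlier in this post. This is my own illustrative helper, not anything from the studies:

```python
# Convert a prevalence rate per 10,000 into a "1 in N" figure,
# rounded to the nearest 5 (the rounding used in this post).
def one_in_n(rate_per_10000, nearest=5):
    n = 10000 / rate_per_10000
    return nearest * round(n / nearest)

# Baird et al. (2006) report 116.1/10,000, i.e. roughly 1 in 85.
print(one_in_n(116.1))  # -> 85
```

The same helper shows why "1 in 100" is a generous rounding of 116.1/10,000, which is closer to 1 in 86.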

Indeed, what the two Baird et al. studies demonstrate is the dramatic effect methodology can have on reported autism prevalence within the same cohort--even when the same diagnostic criteria are used, even when there is an overlap in the standardized diagnostic instruments used (both Baird studies used the ADI-R), and even when the studies are conducted by an overlapping group of researchers. The two studies differ primarily in that the later study also used the ADOS, two of the diagnosing clinicians changed, and the method of case finding was altered. That was enough to double the reported prevalence within the same cohort.

In contrast, the two Chakrabarti and Fombonne studies show that applying the same methodology to different birth year cohorts results in the same autism prevalence. No surge in sight. The two bracketed studies, which are the now-famous CDC prevalence studies, show the same thing but with weaker methodology over a shorter timespan.

Autism advocates are free to seek that recent surge in autism--that catastrophic epidemic--in anecdotes, in education numbers or the CDDS, in sensationalist headlines and so on. This is all in keeping with the rotten standards of science and ethics they've imposed on autistics, and with their own steadfast resistance against verifiable information. But on the off-chance anyone's interested in the published, peer-reviewed data, I thought I'd go fetch some. If anyone finds any factual errors in the information I've presented, I'd greatly appreciate knowing. Accurate information is always good for autistics.

(Edit: Ben Goldacre's now had his say in the BMJ. You can find his column here.)

Our piece of the encyclopedia was limited to ~6,000 words, which wasn't nearly enough, particularly given that we had to write for a general readership (which may or may not have any knowledge of autism).

Researching and writing this review article was both an enormous challenge and a fantastic opportunity. Whatever its limitations (I never have any difficulty spotting limitations in my work or work I'm involved in), I hope our short overview of a neglected area of research will encourage a more systematic and rigorous study of learning in autism, of how and why autistics learn well and learn poorly.

Wednesday, July 04, 2007

John Staddon, PhD (James B Duke Professor of Psychological and Brain Sciences and Professor of Biology and Neurobiology at Duke University), has a lot of published work in the area of the experimental analysis of behaviour (none of which I'm familiar with). As with every other Verbatim, providing a quote from Dr Staddon does not mean that I generally agree with his views--though in the case of this particular quote, it seems we both made the same error.

This shortest Verbatim in the short history of Verbatim is from a 2004 commentary Dr Staddon wrote in response to a review of one of his books:

I thought behavior analysis was science, not religion, but maybe I was wrong.

Reference:

Staddon, J.E.R. (2004). The old behaviorism: A response to William Baum's review of The New Behaviorism. Journal of the Experimental Analysis of Behavior, 82, 79-83.