My understanding is that the experiment was part of a broader study on the effectiveness of atomoxetine as a treatment for ADHD. The study appears to have been funded by Eli Lilly and the second author is reportedly an employee and minor shareholder in Eli Lilly.

Children were defined as having dyslexia if (a) there was at least a 22-point discrepancy (approx. 43 percentile ranks) between IQ and word-reading ability (defined by the WJIII Word Attack test, the Word Identification test, or the WJIII Basic Reading Cluster, an average of Word Attack and Word Identification) or (b) the child had a standard score of 89 or lower on at least one of the WJIII tests.

The first thing that springs to mind is that these criteria are not particularly stringent. A standard score of 89 represents the 23rd percentile. It is still within a standard deviation of the mean. Further, the discrepancy criterion presumably means that a child with an IQ score of 122 and a reading score of 100 (exactly average) met criteria for the dyslexia group.
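To make those numbers concrete, standard scores can be converted to percentile ranks under the usual assumption that scores are normally distributed with a mean of 100 and a standard deviation of 15. A quick sketch in plain Python (purely illustrative; the function name is mine):

```python
import math

def percentile_rank(standard_score, mean=100.0, sd=15.0):
    """Percentile rank of a standard score, assuming a normal
    distribution with the given mean and standard deviation."""
    z = (standard_score - mean) / sd
    # Normal CDF via the error function, scaled to 0-100
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(percentile_rank(89)))                          # 23 (23rd percentile)
print(round(percentile_rank(122) - percentile_rank(100)))  # 43 percentile ranks
```

This reproduces both figures quoted above: a standard score of 89 sits at the 23rd percentile, and a 22-point gap (122 vs 100) spans roughly 43 percentile ranks.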

Using these criteria, 209 children were allocated at random to an atomoxetine or placebo group. Numbers in each group were: dyslexia only (n = 29 atomoxetine, n = 29 placebo), ADHD + dyslexia (n = 64 atomoxetine, n = 60 placebo) and ADHD only (all of whom received atomoxetine).

The drug trial lasted for 16 weeks. Changes in reading skills were measured by:

WJIII Word Attack and Word Identification tests

WJIII Spelling

WJIII Reading Comprehension and Reading Vocabulary

Comprehensive Test of Phonological Processing (CTOPP)

Gray Oral Reading Tests-4 (GORT-4)

Test of Word Reading Efficiency (TOWRE)

It is unclear why, but on many variables the Placebo group had higher pre-treatment reading/spelling skills than the Atomoxetine group. For example, within the Dyslexia only group the Placebo sub-group had an average standard score on the Word Attack subtest of 88.05 while the Atomoxetine group had an average score of 84.11. It was 83.86 vs 80.47 for Word Identification, and 83.32 vs 77.32 for Spelling. This may seem a minor point but consider the effect of regression to the mean http://www.understandingminds.com.au/blog/2012/02/. The more extreme a score is at pre-test, the more likely it is to “bounce” back towards average at post-testing. Thus, the Atomoxetine group was more likely to “benefit” from regression effects than the Placebo group.
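Regression to the mean is easy to demonstrate with a small simulation. The sketch below (illustrative only; the reliability figures are made up) selects children on a low pre-test score and retests them with no true change in ability. The selected group's average rises anyway, and the more extreme the selection, the bigger the spurious "gain":

```python
import random

random.seed(1)

def retest_gain(cutoff, n=20000, mean=100.0, sd_true=12.0, sd_noise=9.0):
    """Mean pre-to-post 'gain' for children selected for a low pre-test score,
    when observed score = stable true ability + random measurement noise."""
    pre, post = [], []
    for _ in range(n):
        true_ability = random.gauss(mean, sd_true)
        pre_score = true_ability + random.gauss(0.0, sd_noise)
        if pre_score < cutoff:  # selected because the pre-test looked poor
            pre.append(pre_score)
            post.append(true_ability + random.gauss(0.0, sd_noise))
    return sum(post) / len(post) - sum(pre) / len(pre)

# No ability changed between tests, yet the selected group "improves" on
# retest, and a more extreme cutoff produces a larger spurious gain.
print(retest_gain(cutoff=88))
print(retest_gain(cutoff=80))
```

The same logic applies to the groups above: the Atomoxetine sub-groups started with more extreme (lower) scores, so some pre-to-post gain would be expected even with no treatment effect at all.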

A summary of the interesting bits of the results:

Children with dyslexia who were treated with atomoxetine made greater gains than children with dyslexia given the placebo treatment on WJIII Word Attack, WJIII Basic Reading Cluster and WJIII Reading Vocabulary.

Children with dyslexia + ADHD who were treated with atomoxetine made greater gains than children with dyslexia + ADHD given the placebo treatment on the CTOPP Elision test (a peripheral/distal reading sub-skill).

So what?

At first glance it seems odd that children with dyslexia would respond better to a medication designed for ADHD than children who actually had ADHD. However, there are reasonable explanations. First, medication isn’t a panacea. It doesn’t cure ADHD. It improves symptoms. It is possible that the ADHD + dyslexia group remained more impaired than the dyslexia group even when given medication. Second, children with dyslexia often have sub-threshold symptoms of ADHD that may in fact respond better to medication than the more severe symptoms seen in children actually diagnosed with ADHD.

It is possible that at least some of the effects were due to greater regression to the mean in the Atomoxetine vs Placebo group (see above).

It is possible that the medication simply improved test behaviour rather than reading ability per se.

The data are arguably the beginning of a research journey. They provide some preliminary support for the idea that ADHD medication can improve academic skills even in children without ADHD. However, I am sceptical about this. I can see how medication might improve test-taking behaviour in some children. I can also see how it might “smooth out” the inconsistency seen in children with attention problems and with dyslexia. However, medication doesn’t teach. It doesn’t matter how good your attention is; if you don’t know it you don’t know it.

The data points represent a weekly nonword reading test. The test items were constructed using the grapheme-phoneme conversion rules taught in the intervention program. If the student learns ~4-5 new GPCs weekly they should be able to read ~5 extra nonwords each week. The “flat lines” represent data from 2 x baseline periods and 2 x treatment periods in which the participant was receiving reading intervention only. One can see that not much progress was being made. It wasn’t that the boy wasn’t learning new things. It was more that his “recall” was inconsistent and his test-taking behaviour was poor.

Upward growth in test scores was seen almost immediately upon beginning to take a stimulant medication. Later data, not shown in the graph, showed that removing the reading intervention so the only treatment was medication resulted in a return to a flat line. That is, the medication didn’t teach skills.

We have seen this pattern in several cases now, although the results of other cases have not been as dramatic as this first case. (I should also point out that the skeptic in me thinks that these data are too perfect and I need to see them replicated before I truly “believe” them.)

Our very preliminary conclusions are that attention is necessary for new learning to take place and that the medication helps set the conditions for learning to happen (the batter into which the teaching is stirred if you will). However, medication will probably not make you a good reader sans the teaching.

At this point there is no justification for trialling medication in students who just have dyslexia. However, we hope to see a larger trial of the response of students with ADHD + dyslexia to reading intervention versus medication + reading intervention.

For help with dyslexia, ADHD, autism spectrum disorders and other developmental and learning disorders in the Gold Coast and Tweed regions contact the Understanding Minds Clinic.

Keywords: attention-deficit hyperactivity disorder; response allocation; reinforcement; change

Background

Altered sensitivity to positive reinforcement has been hypothesized to contribute to the symptoms of attention-deficit hyperactivity disorder (ADHD). In this study, we evaluated the ability of children with and without ADHD to adapt their behavior to changing reinforcer availability.

Method

One hundred sixty-seven children, 97 of whom were diagnosed with ADHD, completed a signal-detection task in which correct discriminations between two stimuli were associated with different frequencies of reinforcement. The response alternative associated with the higher rate of reinforcement switched twice during the task without warning. For a subset of participants, this was followed by trials for which no reinforcement was delivered, irrespective of performance.

Results

Children in both groups developed an initial bias toward the more frequently reinforced response alternative. When the response alternative associated with the higher rate of reinforcement switched, the children’s response allocation (bias) followed suit, but this effect was significantly smaller for children with ADHD. When reinforcement was discontinued, only children in the control group modified their response pattern.

Conclusions

Children with ADHD adjust their behavioral responses to changing reinforcer availability less than typically developing children, when reinforcement is intermittent and the association between an action and its consequences is uncertain. This may explain the difficulty children with ADHD have adapting their behavior to new situations, with different reinforcement contingencies, in daily life.

Psychometric tests are behavioural tests. We use measures of behaviour (which we can see directly) to infer something about a latent variable (something that can’t be seen or measured directly). Take tests of reading ability as an example. Reading occurs in the brain and is therefore a latent variable. We can’t see or measure it directly. The consequence is that different tests will provide different results depending on a number of factors, including the skills/items that the test samples and the normative population.

The Abstract

Identifying reading comprehension difficulties is challenging. There are many comprehension tests to choose from, and a child’s diagnosis can be influenced by various factors such as a test’s format and content and the choice of diagnostic criteria. We investigate these issues with reference to the Neale Analysis of Reading Ability (NARA) and the York Assessment of Reading for Comprehension (YARC).

Methods

Ninety-five children were assessed on both tests. Test characteristics were compared using Principal Components and Regression analyses as well as an analysis of passage content.

Results

NARA comprehension scores were more dependent on decoding skills than YARC scores, but children answered more comprehension questions on the NARA and passages spanned a wider range of difficulty. Consequently, 15–34% of children received different diagnoses across tests, depending on diagnostic criteria.

Conclusion

Knowledge of the strengths and weaknesses of comprehension tests is essential when attempting to diagnose reading comprehension difficulties.


Like us on Facebook for updates on dyslexia related matters, information on other developmental disorders like autism spectrum disorders, Asperger’s and ADHD, and general mental health info.

Journal of Child Psychology and Psychiatry

Background

The clinical and neurophysiological effects of neurofeedback (NF) as treatment for children with ADHD are still unclear. This randomized controlled trial (RCT) examined electroencephalogram (EEG) power spectra before and after NF compared to methylphenidate (MPH) treatment and physical activity (PA) – as semi-active control group – during resting and active (effortful) task conditions to determine whether NF can induce sustained alterations in brain function.

Methods

Using a multicentre three-way parallel group RCT design, 112 children with a DSM-IV diagnosis of ADHD, aged between 7 and 13 years, were initially included. NF training consisted of 30 sessions of theta/beta training at Cz over a 10-week period. PA training was a semi-active control group, matched in frequency and duration. Methylphenidate was titrated using a double-blind, placebo-controlled procedure over 6 weeks, followed by a stable dose for 4 weeks. EEG power spectra measures during eyes open (EO), eyes closed (EC) and task (effortful) conditions were available for 81 children at pre- and postintervention (n = 29 NF, n = 25 MPH, n = 27 PA). Clinical trials registration: Train Your Brain? Exercise and Neurofeedback Intervention for ADHD, https://clinicaltrials.gov/show/NCT01363544, Ref. No. NCT01363544.

Results

Both NF and MPH resulted in comparable reductions in theta power from pre- to postintervention during the EO condition compared to PA (ηp² = .08 and .12). For NF, greater reductions in theta were related to greater reductions in ADHD symptoms. During the task condition, only MPH showed reductions in theta and alpha power compared to PA (ηp² = .10 and .12).

Conclusions

This study provides evidence for specific neurophysiological effects after theta/beta NF and MPH treatment in children with ADHD. However, for NF these effects did not generalize to an active task condition, potentially explaining reduced behavioural effects of NF in the classroom.



Journal of Child Psychology and Psychiatry

Background

The etiology of Autism Spectrum Disorder (ASD) has been recently debated due to emerging findings on the importance of shared environmental influences. However, two recent twin studies do not support this and instead re-affirm strong genetic effects on the liability to ASD, a finding consistent with previous reports. This study conducts a systematic review and meta-analysis of all twin studies of ASD published to date and explores the etiology along the continuum of a quantitative measure of ASD.

Methods

A structured search of PubMed Central, Science Direct, Google Scholar and Web of Knowledge was conducted online to identify all twin studies on ASD published to date. Thirteen primary twin studies were identified, seven of which met the systematic recruitment criterion and were included in the meta-analysis; corrections for selection and ascertainment strategies, and the applied prevalence rates, were assessed for these studies. In addition, a quantile DF extremes analysis was carried out on Childhood Autism Spectrum Test scores measured in a population sample of 6,413 twin pairs including affected twins.

Results

The meta-analysis correlations for monozygotic twins (MZ) were almost perfect at .98 (95% Confidence Interval, .96–.99). The dizygotic (DZ) correlation, however, was .53 (95% CI .44–.60) when the ASD prevalence rate was set at 5% (in line with the Broad Phenotype of ASD) and increased to .67 (95% CI .61–.72) when applying a prevalence rate of 1%. The meta-analytic heritability estimates were substantial: 64–91%. Shared environmental effects became significant as the prevalence rate decreased from 5% to 1%: 7–35%. The DF analyses show that, for the most part, there is no departure from linearity in heritability.