NAPLAN and learning difficulties

June 1st, 2012

May was a busy time in Australian schools with Grades 3, 5, 7 and 9 involved in the national literacy and numeracy tests (NAPLAN). The stress I see in parents and learning support colleagues during NAPLAN time often causes me to reflect on the purpose of the test(s) and how useful they are for students who have learning difficulties.

The Australian Curriculum, Assessment and Reporting Authority (ACARA) claims that the purpose of NAPLAN is to “measure the literacy and numeracy skills and knowledge that provide the critical foundation for other learning”. It also claims that the introduction of NAPLAN has led to “consistency, comparability and transferability of information on students’ literacy and numeracy skills”. (Don Watson would have a field day with these weasel words.)

NAPLAN is useful because it identifies students who are struggling with broad academic skills. Having an objective measurement is important because research has shown that teachers are not particularly accurate at identifying struggling students. For example, Madelaine and Wheldall (2007) randomly selected twelve students from 33 classes and asked their teachers to rank the students based on perceptions of reading performance. They also assessed the students on a passage reading test. Only 50% of teachers identified the same poorest reader as the objective test, and only 15% of teachers identified the same three lowest-performing readers as the test. We can certainly argue about whether NAPLAN in its current form is the most effective and/or cost-effective method of gathering data on student achievement; however, it seems that we cannot rely on teacher judgment alone.

On the downside, NAPLAN is a test, not an assessment. As all good clinicians and educators know, there is a difference, or should be, between testing and assessment (see here and here). Assessment is a process that starts with the history and clearly defines the presenting problem or set of problems. The clinician develops an hypothesis or set of hypotheses on the basis of the history, then gathers data (e.g., observations, interviews, tests, and base rates) designed to shed light on those hypotheses. It is worth noting that a good clinician looks equally for data that confirm and disconfirm the initial hypotheses. Good assessment should lead directly to treatment and/or appropriate teaching for the presenting problem(s), and should provide pre-treatment data that allow progress to be monitored. Testing, on the other hand, simply tells us how well or poorly a student performs on a particular test. For example, a student with a low score on a reading comprehension test can be said to have poor reading comprehension. The problem with tests is that they don’t tell us why a student performed poorly and, when they measure a complex process like reading comprehension, writing, or mathematical reasoning, they don’t tell us which component of that complex process is weak.

That is precisely the problem with NAPLAN. The NAPLAN tasks are complex and provide little information useful for designing interventions for students with learning difficulties and for monitoring response to intervention. An example from NAPLAN illustrates this point.

A mathematics question asked: $4 is shared equally among 5 girls. How much does each girl get? An incorrect response tells us only that the student can’t do the task. So what? The child’s teacher probably knew that already. What would be useful to know is whether the student failed the item because (1) they couldn’t read the question, (2) they didn’t know what ‘shared’ or ‘equally’ meant, (3) they didn’t recognise that the item required a division operation, (4) they didn’t know to convert $4 to 400c to make the division easier, (5) they didn’t know the division fact 40 ÷ 5, or (6) they knew all of the above but have attention problems and got ‘lost’ during the multi-step division process.
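To make the point concrete, the chain of steps a successful student must execute can be spelled out. Here is a brief sketch (my illustration, not part of NAPLAN or the item itself) of that chain; a failure at any one step produces the same wrong answer on the test:

```python
# Decomposing the NAPLAN item "$4 is shared equally among 5 girls"
# into the component steps a student must chain together.

dollars = 4
girls = 5

# Recognise the operation: equal sharing means division.
# Convert $4 to cents so the division comes out to a whole number.
cents = dollars * 100        # 400

# Apply the division fact: 400 / 5 rests on knowing 40 / 5 = 8.
share = cents // girls       # 80

print(f"Each girl gets {share}c")  # Each girl gets 80c
```

The test records only whether "80c" appeared on the answer sheet; which of these steps broke down is invisible to it.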

Similarly, if a student performs poorly on the writing component of NAPLAN, no information useful for treatment is obtained. The test doesn’t tell us whether the child (a) has a form of dyspraxia and struggles with handwriting, (b) has an impoverished spelling lexicon, (c) has poor knowledge of sound-to-letter conversion rules and therefore struggles to spell unfamiliar words, (d) has poor knowledge of written grammatical conventions, (e) has poor knowledge of written story grammar, (f) has oral language weaknesses in semantics and/or grammar, (g) has poor oral narrative skills, (h) has attention problems and therefore can’t keep his or her you-know-what together while doing a complex task, or (i) has autism and therefore doesn’t give a toss about the writing topic. The list could go on.

Unfortunately, NAPLAN provides none of these specific data. It simply tells us how badly the child performs relative to some arbitrary benchmark. So where does this leave us? Or, more to the point, where does it leave students who have learning difficulties?

All of which leads me to think that NAPLAN is probably not all that useful for students who have learning difficulties, or for the parents, clinicians and teachers who work with them. It also leads me to yearn even more for a Response-to-Intervention approach in which schools recognise learning problems early in the child’s school career, assess to define the problem(s), and provide evidence-based interventions that target the problem(s).