It is no secret that it is extremely difficult to determine when a student can be classified as having a specific learning disability (SLD). When we discuss the reasons why, we tend to draw on the usual suspects: the measurements we use to assess student performance and aptitude are flawed; disability exists on a continuum; environmental factors are difficult to separate from other causes; teacher judgment weighs too heavily in decision making; and SLD “looks different” across students—some have disabilities in reading, some in math, some in writing, and some in combinations of these areas.

There are many other factors that make accurate identification of SLD challenging, but the point here is that it is no easy task to reliably identify a student as one with SLD. Approaches to solving this problem have varied and have met with different levels of success. Early on, we used the discrepancy between student aptitude (IQ) and their achievement levels. The aptitude–achievement discrepancy approach was thought to capture what some argue is the salient characteristic of SLD—unexpected underachievement. Formulas for calculating the discrepancy were fairly efficient, even elegantly simple, and on the surface it seemed as if calculating the discrepancy gave practitioners an approach that could be consistently applied in a classification model. If a student presented with a discrepancy of a certain size, then that student was in fact determined to have an SLD.

While discrepancy offered, in theory, a reliable approach to SLD identification, its downfall was that it lacked validity—in several senses of the word. In the most important sense, it lacked treatment validity: ample evidence shows that students with and without a discrepancy respond similarly to interventions designed to support their learning needs. Next, it lacked face validity: practitioners often recognized when a child required more support, but they couldn’t provide special education services because the student wasn’t demonstrating that magic number, which was arbitrarily different depending on the state in which the student resided. Finally, it suffered from poor consequential validity: although the discrepancy approach was developed to offer a reliable means of identifying students with SLD, what it ultimately created was a “wait to fail” model that seemed to lock schools into waiting until a student’s learning gap had grown too great before they could intervene.

As we now know from decades of intervention research, this wait-to-fail model was a very strange way to approach disability. It had the unintended consequence of preventing schools from intervening until it was often already too late, almost as if the child had been managing up until some critical point at which they fell off a cliff into the dangerous ground of low performance. We now know that this system compounded the difficulties that children with SLD faced. First, it required that a student’s achievement drop to a point where many learning difficulties become intractable, and then that the student receive support in a system with limited resources to provide adequate instruction. Second, it created another group of students with environmentally created learning difficulties: the lack of an early identification and intervention system meant that children at risk for poor outcomes were classified as learning disabled because special education was typically the only mechanism through which a struggling learner could receive services. These instructional casualties tax the system in multiple ways. First, the lack of early intervention means that a child’s learning challenges worsen before help is provided. Second, the increased identification of students requiring intervention through special education prevents the system from providing instruction of sufficient intensity—there are simply too many students who require services and not enough resources to provide them effectively.

Perhaps it is a moot point now, but all of this raises the question “Why did this happen?”

With the advantage of hindsight, it is clear that part of the reason is that schools typically ran two education systems: a general education system and a special education system. Students who were unable to make adequate growth in general education were referred to special education. The falling-off-the-cliff model supported this approach of running parallel systems. General education kept you moving upward, and the role of special education was to catch you at the bottom of the cliff, at which point the expectation was that special education would either continue to serve you at the bottom (low expectations) or somehow race you up the hill to catch your general education peers.

Through Response to Intervention (RTI), some of these issues have been resolved. As an integrated model of multilevel prevention and assessment, well-implemented RTI models can identify early on both students who are at risk for poor learning outcomes and those who are suspected of having SLD, and then provide the appropriate level of intervention to support their learning needs. Many students who are at risk for poor outcomes are well served by an RTI model; with the additional instruction, a majority of them can achieve grade-level performance standards.

RTI models, however, don’t work for all students. In RTI parlance, students who respond neither to general education classroom instruction nor to the intervention are considered nonresponders. When confronted with a nonresponder, the school needs to figure out what to do next, and a reasonable question to ask is “Why hasn’t the child responded?” After a child has been served through an RTI system, a school should have quite a bit of information about him or her. For example, it is clear that the child is struggling to learn in environments that are generally effective for the child’s peers, and that for some reason he or she hasn’t responded to interventions that worked for others who initially presented with similar concerns. Through frequent progress monitoring and the ability to instruct in smaller groups, the intervention provider has also likely gained insight into the child’s particular learning needs and probably has developed a theory about why the child is still struggling.

All of this information is very helpful and does much to remedy the problems we faced when SLD was identified by the discrepancy approach alone. However, RTI-only models have limitations in identifying students with SLD too. First, an RTI-only approach to identification assumes fidelity of implementation; although many schools and teachers are dedicated to strong implementation, this assumption does not hold in many school settings. Second, not enough is known about expected response rates to clearly determine the point at which a student’s trajectory is consistent with SLD. Third, RTI provides a lot of data showing that the child is experiencing low achievement, but it does nothing to explain why—RTI models do not address the other salient characteristic within the SLD definition, namely, that the child has a disorder in a basic psychological process that results in an imperfect ability to learn.

In Idaho, we set out to address the problems outlined in this blog post through a careful revision of our SLD identification policy and practice. In our next post, I’ll explain more about Idaho’s approach to SLD identification and how it is being implemented statewide.
