The purpose of this article is to discuss the identification of students who are not progressing adequately in Tier 2 of a Response-to-Intervention (RTI) model and to help the reader make informed decisions about the nature of researched methods and measures for Tier 3 identification. To that end, the article is divided into the following sections: Overview; Prevalence of Students Not Progressing Adequately in Tier 2; Methods of Identifying Students Not Progressing Adequately in Tier 2; Research Evidence by Method; Identification of Nonresponders in the Field Studies; and Conclusions and Recommendations.

Overview

In an RTI framework, progress of students who are receiving Tier 2 interventions is monitored frequently (e.g., weekly or monthly) and compared to classroom averages. These progress data are used to inform instructional practice as well as make decisions about student movement between tiers of intervention. Based on these data, there are three possible outcomes for students receiving Tier 2 interventions: movement back into Tier 1, continuation of Tier 2 interventions, or movement into Tier 3 for more intensive interventions. This latter outcome includes the subset of students most at risk for academic failure and in the most need of specialized, intense supports. Because decisions about movement between tiers are so important, it is crucial to have a valid and reliable system to measure response to Tier 2 interventions.

While there is general consensus among researchers for measuring response to Tier 1 instruction (e.g., 8–10 weeks of progress monitoring; below cut score on curriculum-based measurement [CBM]), there is much less consensus on how to measure response to Tier 2 instruction and when to begin Tier 3. Because a number of researchers associate Tier 3 interventions with special education services (Boardman & Vaughn, 2007; D. Fuchs, Compton, Fuchs, & Bryant, 2008; D. Fuchs & Deshler, 2007; D. Fuchs, Fuchs, & Compton, 2004; Vaughn, Linan-Thompson, & Hickman, 2003), identification of students not progressing adequately in Tier 2 is critical for their academic success.

Prevalence of Students Not Progressing Adequately in Tier 2

D. Fuchs and Deshler (2007) estimate, based on the assumption of a normal distribution, that the number of students who do not show improvement in response to increasingly intensive Tier 2 interventions and are moved into Tier 3 should fall between 2% and 7% of the general population. However, there is no clear methodological definition of how or when a student is to be identified as a nonresponder to intervention, what intervention is to be used, who is to deliver the intervention, or how nonresponsiveness is to be measured. This lack of clarity creates the potential for inconsistencies in identification of students not progressing adequately in Tier 2 and for highly variable prevalence rates at the school, district, state, and national levels (D. Fuchs et al., 2008).

Methods of Identifying Students Not Progressing Adequately in Tier 2

At least six methods are currently being promoted for identification of nonresponders to Tier 2. D. Fuchs and Deshler (2007) defined five methods: (a) dual discrepancy, (b) median split, (c) final normalization, (d) final benchmark, and (e) slope discrepancy. Vaughn et al. (2003) described another method of identifying nonresponders to Tier 2 intervention: (f) exit groups. A description of each method is provided in Table 1.

Table 1. Methods of Identifying Nonresponders to Tier 2 Intervention

Dual discrepancy: Slope of improvement during treatment and performance level at the end of treatment. Slope and performance levels below a given point (e.g., 1 SD) in comparison with classroom peers.

Median split (Vellutino et al., 1996): Slope of improvement never meets or exceeds the rank-ordered median of the intervention group.

Final normalization (Torgesen et al., 2001): Standard scores on a mastery test at the end of a tutoring intervention. A nonresponder would have to score below a given percentile rank (e.g., 25th percentile).

Final benchmark (Good et al., 2001): Criterion-referenced benchmark at the end of the intervention. A nonresponder would have to score below a given benchmark (e.g., <40 on DIBELS ORF).

Slope discrepancy (D. Fuchs et al., 2004): Slope of academic performance compared to a normative cut-point referenced by the classroom, school, district, or nation.

Exit groups (Vaughn et al., 2003): After 30 weeks of supplemental instruction, failing three times (once every 10 weeks) to meet criteria on the TPRI and TORF measures.

Research Evidence by Method

We reviewed the empirical literature on the six methods of identifying nonresponders to Tier 2 instruction and found 11 studies. Several of the studies (e.g., D. Fuchs et al., 2008, 2004) included more than one of the methods. The results are presented here by method.

Dual Discrepancy Method

Researchers used a dual discrepancy method to identify nonresponders to intervention in six of the studies. Speece and Case (2001) and Case, Speece, and Molloy (2003) conducted studies with at-risk groups comprising students in the bottom 25% of their classrooms (Ns = 144 and 53, respectively). The researchers provided the at-risk groups interventions for two 8-week periods. There was no information reported on frequency per week or duration per session. In both studies, the general education classroom teacher implemented the intervention. The interventions, designed by the researchers and teachers, included phonics instruction and partner-reading activities. Ongoing progress monitoring was measured using a CBM of oral reading fluency (ORF). A CBM evaluates a student's rate of progress on a given skill. An ORF probe consists of a student reading from three individual passages for 1 minute each, with the number of words read correctly recorded.

In Speece and Case (2001), students were identified as nonresponders based on at least 10 ORF probes administered across the year. If their slope of progress across the year and level of performance (mean of the last two probes) at the end of the year were more than 1 SD below the slope and level of their classmates, they were designated as nonresponders. This method yielded 47 students, or a 6.7% prevalence rate. Likewise, Case et al. (2003) used the same identification criteria but judged nonresponsiveness several times during the school year. This allowed them to create three groups: never dually discrepant, infrequently dually discrepant, and frequently dually discrepant (FDD). The FDD group yielded 7 nonresponders, or a 2.8% prevalence rate.

McMaster, Fuchs, Fuchs, and Compton (2005); D. Fuchs et al. (2004; 2nd grade); and D. Fuchs et al. (2008) conducted studies with intervention groups (Ns = 176, 48, and 252, respectively) comprising the lowest performing students in each classroom based on a CBM probe of rapid letter naming (RLN). An RLN probe consists of students quickly naming upper- and lowercase letters in black print. In each of the three studies, interventions consisted of Peer-Assisted Learning Strategies (PALS). PALS is a structured peer tutoring program that emphasizes phonological awareness, decoding, and fluency (D. Fuchs et al., 2001). Teachers paired higher performing readers with lower performing readers. The activities were conducted in pairs. In each study, PALS was used for three 35-minute sessions per week for 7 weeks (McMaster et al., 2005), 10 weeks (D. Fuchs et al., 2008), and 10–12 weeks (D. Fuchs et al., 2004; 2nd grade). Students not making progress in the first 2 weeks of PALS were given one-to-one or small-group tutoring in each of the studies. Graduate assistants served as the tutors. Progress was monitored weekly by two word-level CBM measures, a nonsense word fluency (NWF) probe and word identification fluency (WIF) probe.

In each of the three studies, levels (e.g., mean of correct words per minute on the last two probes) and slopes (e.g., growth in correct words per minute across successive probes) were calculated for each at-risk and average student. In McMaster et al. (2005), nonresponders were identified as 0.50 SD below the average readers in level and slope, yielding 66 students, or a 13.3% prevalence rate. In D. Fuchs et al. (2008), nonresponders were identified as 1 SD below average readers in level and slope, yielding an 8.6% prevalence rate. In D. Fuchs et al. (2004; 2nd grade), nonresponders demonstrated growth below 1.5 words per week (slope) and level below a 75-word benchmark. This yielded a prevalence rate of 2.2%.
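As a minimal sketch (not any study's actual code), the dual discrepancy computations just described, an ordinary least-squares slope across weekly probes, an end level taken as the mean of the last two probes, and a peer-referenced SD cut, might look like the following; the function names and default threshold are illustrative assumptions.

```python
# Illustrative sketch of a dual discrepancy check; function names and the
# default 1-SD cut are assumptions, not any study's published code.
from statistics import mean, stdev


def slope(scores):
    """Ordinary least-squares slope across equally spaced weekly probes."""
    n = len(scores)
    mean_week = (n - 1) / 2
    mean_score = mean(scores)
    num = sum((w - mean_week) * (s - mean_score) for w, s in enumerate(scores))
    den = sum((w - mean_week) ** 2 for w in range(n))
    return num / den


def level(scores):
    """End-of-intervention level: mean of the last two probes."""
    return mean(scores[-2:])


def dually_discrepant(student, classmates, sd_cut=1.0):
    """True when slope AND level fall sd_cut SDs below the class averages."""
    peer_slopes = [slope(s) for s in classmates]
    peer_levels = [level(s) for s in classmates]
    slope_cut = mean(peer_slopes) - sd_cut * stdev(peer_slopes)
    level_cut = mean(peer_levels) - sd_cut * stdev(peer_levels)
    return slope(student) < slope_cut and level(student) < level_cut
```

Note how the choice of sd_cut drives prevalence: a 0.50 SD cut, as in McMaster et al. (2005), flags more students than the 1 SD cut used in D. Fuchs et al. (2008).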

Burns and Senesac (2005) conducted a study with students scoring at or below the 25th percentile on a district-administered test of reading (N = 151). Two interventions, the Help One Student to Succeed program (HOSTS; Blunt & Gordon, 1998) and Title 1 support, were utilized. HOSTS is a structured comprehensive literacy program designed to supplement classroom reading instruction delivered by trained tutors (Bryant, Edwards, & LeFlies, 1995). Students received four 30-minute tutoring sessions per week for 15 weeks. Title 1 intervention varied: students received either weekly individual reading instruction from a Title 1 consultant or small-group instruction. Half the sample received HOSTS; the other half received Title 1 support. Progress was monitored and measured using the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002). DIBELS is an assessment system using frequent measurement to assess progress in early literacy skill development. It includes measures of NWF and ORF. DIBELS ORF was measured twice during the 15-week period.

Median Split Method

Researchers used a median split method to identify nonresponders to intervention in four of the studies. Vellutino et al. (1996) conducted a study with students (N = 186) demonstrating low levels of reading ability as rated by their teachers (no specific criteria were provided). The intervention was daily one-to-one tutoring (30 minutes per session) for a minimum of 15 weeks to a maximum of 25 weeks (typically 70–80 sessions). Tutors were non-school personnel certified in reading, elementary education, or both. Student progress was measured on the Woodcock Reading Mastery Test–Revised (WRMT-R; Woodcock, 1987) several times over a 2-year period.

To identify nonresponders, Vellutino et al. (1996) charted the slope of improvement for each student over each administration of the WRMT-R. They then rank ordered the slopes of each child and determined the median. Nonresponders’ slope never met or exceeded the median. This yielded a total of 19 students, or a 1.4% prevalence rate.

D. Fuchs et al. (2004; 1st grade and 2nd grade) and D. Fuchs et al. (2008) also used the median split method to identify nonresponders. D. Fuchs et al. (2004; 1st grade) identified their sample (N = 54) from 20 1st-grade classrooms. The participants were the lowest performing 2–3 students per classroom as measured by ORF probes. These 54 students were assigned to one-to-one tutoring or PALS in the classroom. The tutoring sessions occurred for 10–12 weeks, three times per week, for 30–35 minutes per session. Ongoing progress was measured using a WIF probe. Participants, interventions, and measures of D. Fuchs et al. (2004; 2nd grade) and D. Fuchs et al. (2008) were described earlier.

In each of the three studies, slope of improvement was charted for each student over each WIF probe. Each slope was rank ordered and the median was determined. Students whose slope never met or exceeded the median were defined as nonresponders. This yielded prevalence rates of 3.5% (D. Fuchs et al., 2004; 1st grade), 3.5% (D. Fuchs et al., 2004; 2nd grade), and 9.8% (D. Fuchs et al., 2008).
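A minimal sketch of the median split computation described above might look like the following; the function name is an assumption, and using a single slope per student is a simplification of the repeated probes in these studies.

```python
# Illustrative sketch of the median split method: rank order each student's
# slope of improvement and flag those falling below the group median.
from statistics import median


def median_split_nonresponders(slopes_by_student):
    """Return ids of students whose slope falls below the group median.

    slopes_by_student: dict mapping a student id to that student's slope
    of improvement across repeated probes (e.g., WIF words per week).
    """
    cut = median(slopes_by_student.values())
    return sorted(sid for sid, s in slopes_by_student.items() if s < cut)
```

A design consequence worth noting: because the cut is the group's own median, this method always flags roughly half of the intervention group relative to its peers, so prevalence depends heavily on how the intervention group was formed.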

Final Normalization Method

Researchers used a final normalization method to identify nonresponders to intervention in four of the studies. Torgesen, Alexander, Wagner, Rashotte, Voeller, and Conway (2001) conducted a study of students identified as having reading difficulties based on test scores (WRMT-R) and teacher ratings (N = 60). All students in the intervention group received 67.5 hours of one-to-one reading instruction (specific intervention not reported) in two 50-minute sessions per day for 8 weeks. Tutors were six special education teachers with at least 1 year of experience each. Reports from the tutors served as progress monitoring.

Final Benchmark Method

Researchers used a final benchmark method to identify nonresponders to intervention in five of the studies. Good, Simmons, and Kame'enui (2001) conducted a study of students chosen based on teacher reports of reading difficulty (N = 378). They received an Accelerating Children's Competence in Early Literacy–Schoolwide intervention funded by the U.S. Department of Education and designed to improve the reading of students in Grades K–3 (Simmons, Kame'enui, & Good, 1998). Progress monitoring was measured using DIBELS and ORF.

Al Otaiba and Fuchs (2006) conducted a study of students chosen based on teacher recommendation (N = 104). These 104 students received a PALS intervention for 16 weeks, three times per week, for 20 minutes per session. Ongoing progress monitoring was measured using ORF probes.
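As a minimal sketch, the two end-of-intervention criteria, final normalization (norm referenced) and final benchmark (criterion referenced), reduce to simple threshold checks; the function names are assumptions, and the cut values echo the examples in Table 1 rather than any specific norm table.

```python
# Illustrative threshold checks; the 25th-percentile and 40-word cuts are
# the examples given in Table 1, not published norms.
def below_percentile(score, norm_scores, pct=25):
    """Final normalization: score below the pct-th percentile of norm scores."""
    cut = sorted(norm_scores)[max(0, int(len(norm_scores) * pct / 100) - 1)]
    return score < cut


def below_benchmark(orf_score, cut=40):
    """Final benchmark: end-of-intervention ORF below a fixed criterion."""
    return orf_score < cut
```

The contrast matters in practice: the normalization cut moves with the norming sample, while the benchmark cut is fixed regardless of how peers perform.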

Exit Group Method

Vaughn et al. (2003) identified nonresponders using an exit group method. Students who met criteria on the TPRI and TORF at any of the three assessment points (administered once every 10 weeks) were "exited" out of intervention. Students who did not meet criteria after 30 weeks of supplemental instruction were designated nonresponders. This yielded a prevalence rate of 2.4%.
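The exit schedule described above, a check every 10 weeks with exit at the first passed check, can be sketched as follows; the function name and return convention are illustrative assumptions.

```python
# Illustrative sketch of the exit group method: criteria are checked once
# every 10 weeks; a student exits at the first passed check and is flagged
# a nonresponder only after failing all three checks.
def exit_group_status(met_criteria):
    """met_criteria: booleans for the checks at weeks 10, 20, and 30.

    Returns ("exited", week) at the first passed check, otherwise
    ("nonresponder", None) after all checks fail.
    """
    for i, met in enumerate(met_criteria, start=1):
        if met:
            return ("exited", 10 * i)
    return ("nonresponder", None)
```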

Identification of Nonresponders in the Field Studies

In our research review of RTI field studies, all but two studies provided information on identifying nonresponders to Tier 2 instruction. Of the studies providing prevalence data, between 2.4% and 18% of students were identified as not progressing adequately in Tier 2. Table 2 provides information (e.g., method, prevalence) on identifying nonresponders in each of the 11 studies found in our research review.

Conclusions and Recommendations

Clearly, depending on which method is used, there is potential for variation in the number of students identified as nonresponders. Our review of the empirical literature and of field studies found prevalence rates of nonresponders ranging from 1.4% to 18%, depending on method.

Perhaps best illustrating this variation, D. Fuchs et al.'s (2008) longitudinal study of a single sample of first graders found considerable differences in the percentage of nonresponders identified depending on the method used (dual discrepancy = 8.6%, median split = 9.8%, final normalization = 4.2%, final benchmark = 8.7%, and slope discrepancy = 7.6%). In addition, D. Fuchs et al. made the point that prevalence alone may be too narrow a basis for choosing a method. Sensitivity (i.e., the rate of true positives) and specificity (i.e., the rate of true negatives) are also extremely important when selecting a measure and method to identify nonresponders to Tier 2 interventions.

While choosing measures and methods of identifying nonresponders to Tier 2 instruction may seem daunting, D. Fuchs et al. (2008) did provide some recommendations for this important decision-making process. They reported three measures and methods with acceptable prevalence, sensitivity, and specificity: (a) final normalization using the Test of Word Reading Efficiency (Torgesen, Wagner, & Rashotte, 1999) Sight Word Efficiency subtest, (b) slope discrepancy using CBM WIF, and (c) dual discrepancy using CBM Passage Reading Fluency for level and CBM WIF for slope. This is, at least, a place to start for making decisions about adequate progress within the second tier of an RTI program for the students most in need of intensive help.

References

Al Otaiba, S., & Fuchs, D. (2006). Who are the young children for whom best practices in reading are effective? An experimental and longitudinal study. Journal of Learning Disabilities, 39, 414–431.

VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256.