Districts and schools across the country have invested heavily in screening assessments to identify students who will need additional support to be successful in light of increasing standards (e.g. CCSS). These screening assessments often measure “general outcomes” (e.g. comprehension) and are very reliable predictors of performance on state summative assessments. Unfortunately, they offer little guidance to teachers and administrators about what to do if a student is predicted not to be successful. These same screening assessments are often used to monitor student progress across the year. Many of these assessments have been normed, are equated for difficulty, and even have adaptive versions, which allows their use 3-5 times per year. This ensures that a “reliable estimate of student growth” can be obtained. Thus, schools and teachers can make reliable and valid decisions about student progress, as well as about which instruction and intervention supports seem to be effective for students. In order for a multi-tier system of supports (MTSS) to be effective and for Response to Intervention (RTI) implementation to be successful, schools must be able to determine not just “if” instruction and intervention supports are working, but which students they are working for, how well they are working, and under what conditions they are working (e.g. with a specific amount of intensity).

In my work with schools, I often see these highly reliable and valid “general outcome” assessments become unreliable and invalid when they are administered too frequently to accurately measure student growth. While many assessment vendors will state that their assessments “can” be used every week to measure student progress, the question schools should be asking is whether they “should” be. If a teacher obtains invalid results from an assessment, they may discontinue supports that are actually helping a student or continue practices that have no impact on student success. Schools must take care never to assess more frequently than the interval required to obtain a reliable estimate of growth. Doing so not only leads to poor decision-making but takes time away from valuable instruction. On the other hand, classroom-based formative assessments that are very closely tied to the curriculum or skill area where the instruction or intervention is focused can be administered weekly or even daily to measure both proficiency and progress. These types of frequent assessments can support problem-solving decisions regarding student response and provide insight into changes to the intervention that may increase the response rate. Additionally, when “common” formative assessments are used, schools can make comparisons across classrooms and interventions to determine the effectiveness of the different types of support provided as part of an MTSS.

I often advise that there are three questions that should be asked every time a decision needs to be made about whether to administer an assessment:

Is this assessment valid and reliable enough for decision-making purposes?

Will it tell me something I didn’t already know about this student?

Will I adjust or improve my instruction based on the results?

If you can’t answer “yes” to all three questions, the assessment is probably worth neither the instructional time lost nor the risk of inappropriate decisions that might follow from the results.

One basic principle required for determining student response to instruction and intervention is setting growth targets or goals for improvement. Simply measuring growth without a goal or target will not tell us whether students are catching up or remaining just as far behind as they were. While an argument can be made that one goal of MTSS/RTI is to ensure that students do not fall further behind academically, given the sheer investment schools and districts make in interventions and other support systems (e.g. human resources), it seems that a better standard of success is a closing of the “gap”. The graphs below both exemplify growth. The first graph shows growth that is not closing the gap, while the second graph shows growth that is closing the gap.

Figure 1: Growth that is not closing the gap.

Figure 2: Growth that is closing the gap.

One method for setting targets for student growth is to use a benchmark that has been obtained for actual growth on the particular assessment being used. For example, if all students in a school take an assessment 3 times per year, it is very easy to determine the median growth from assessment 1 to assessment 2. If we use this as standard growth, then we can set a target at or above this level for students who are receiving extra instruction or intervention supports. This can be especially helpful for students who are multiple years behind, for whom a target of being back to grade level in one year is probably not realistic.
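As a rough illustration, the median-growth method above can be sketched in a few lines of code. The scores, the fall/winter labels, and the 1.5x multiplier for intervention students are all assumptions for the example, not values from any particular assessment:

```python
# Hypothetical sketch of the median-growth benchmark described above.
# All scores below are made up for illustration; a real implementation
# would pull them from the school's actual assessment records.
from statistics import median

# Assessment 1 (fall) and assessment 2 (winter) scores for the same students
fall = [110, 95, 130, 88, 102, 117, 99, 125]
winter = [121, 104, 138, 93, 115, 128, 107, 133]

# Growth from assessment 1 to assessment 2 for each student
growth = [w - f for f, w in zip(fall, winter)]

# Median growth serves as the "standard growth" benchmark
standard_growth = median(growth)

# Students receiving intervention get a target at or above the benchmark;
# the 1.5x multiplier here is only an assumed example of an accelerated goal
intervention_target = 1.5 * standard_growth

print(standard_growth)      # prints 8.5
print(intervention_target)  # prints 12.75
```

A student growing at the intervention target would be closing the gap, since they gain more per assessment window than the typical (median) student.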

Another method for setting growth targets for students is to simply choose a target that seems reasonable based on previous experience with the assessment and students with similar needs. This is helpful when there are no available data regarding the amount of growth students typically make on the assessment. The great thing about targets is that they can always be adjusted up or down based upon actual student response. In my next blog, I will share some critical elements necessary for success in any district-wide implementation of MTSS/RTI.

The National Center for Learning Disabilities, Inc., is a not-for-profit, tax-exempt organization under Section 501(c)(3) of the Internal Revenue Code. All contributions are tax-deductible to the extent permitted by law.