Fluency Norms Chart (2017 Update)

View the results of the updated 2017 study on oral reading fluency (ORF) by Jan Hasbrouck and Gerald Tindal, with compiled ORF norms for grades 1-6. You'll also find an analysis of how the 2017 norms differ from the 2006 norms.

In 2017, Hasbrouck and Tindal published an update of the oral reading fluency (ORF) norms, compiled from three widely used, commercially available ORF assessments (DIBELS, DIBELS Next, and easyCBM) and representing a far larger number of scores than the previous norms were based on.

The table below shows the mean oral reading fluency of students in grades 1 through 6, as determined by Hasbrouck and Tindal's 2017 data.

Oral reading fluency (ORF)

Of the various CBM measures available in reading, ORF is likely the most widely used. ORF involves having students read aloud from an unpracticed passage for one minute. An examiner notes any errors made (words read or pronounced incorrectly, omitted, read out of order, or words pronounced for the student by the examiner after a 3-second pause) and then calculates the number of words read correctly per minute (WCPM).
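The scoring rule above is simple arithmetic; a minimal sketch (the function and variable names here are illustrative, not part of any published assessment):

```python
# Minimal sketch of WCPM scoring for a one-minute unpracticed reading.

def wcpm(words_attempted: int, errors: int) -> int:
    """Words correct per minute.

    `errors` counts words read or pronounced incorrectly, omitted,
    read out of order, or supplied by the examiner after a 3-second pause.
    """
    return words_attempted - errors

# Example: a student attempts 112 words in one minute and makes 7 errors.
print(wcpm(112, 7))  # 105
```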

The WCPM score is backed by three decades of validation research indicating it is a robust indicator of overall reading development throughout the primary grades.

Interpreting ORF scores

ORF is used for two primary purposes: screening and progress monitoring. When ORF is used to screen students, the driving questions are, first, “How does this student’s performance compare to his or her peers?” and then, “Is this student at risk of reading failure?”

To answer these questions, the decision-makers rely on ORF norms that identify performance benchmarks at the beginning (fall), middle (winter), and end (spring) of the year. An individual student’s WCPM score can be compared to these benchmarks and determined to be either significantly above benchmark, above benchmark, at the expected benchmark, below benchmark, or significantly below benchmark.
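The five-band comparison described above can be sketched in code. Note that the cut points below are placeholders for illustration only; in practice the benchmark and bands come from the published norms table for the student's grade and season:

```python
# Hypothetical sketch of the five benchmark bands. The `margin` cut
# points are placeholders, not published norms.

def benchmark_band(score: int, benchmark: int, margin: int = 10) -> str:
    """Classify a WCPM score relative to a seasonal benchmark."""
    if score >= benchmark + 2 * margin:
        return "significantly above benchmark"
    if score >= benchmark + margin:
        return "above benchmark"
    if score > benchmark - margin:
        return "at the expected benchmark"
    if score > benchmark - 2 * margin:
        return "below benchmark"
    return "significantly below benchmark"

# Example: a winter score of 83 WCPM against an assumed benchmark of 90.
print(benchmark_band(83, benchmark=90))  # at the expected benchmark
```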

Those students below or significantly below benchmark are at possible risk of reading difficulties. They are good candidates for further diagnostic assessments to help teachers determine their skill strengths and weaknesses, and to plan appropriately targeted instruction and intervention (Hasbrouck, 2010. Educators as Physicians: Using RTI Data for Effective Decision-Making. Austin, TX: Gibson Hasbrouck & Associates).

When using ORF for progress monitoring, the questions to be answered are: “Is this student making expected progress?” and “Is the instruction or intervention being provided improving this student’s skills?”

When ORF assessments are used to answer these questions, they must be administered frequently (weekly, bimonthly, etc.), the results placed on a graph for ease of analysis, and a goal determined. The student’s goal can be based on established performance benchmarks or on information about expected rates of progress. Over a period of weeks, the student’s graph can show significant or moderate progress, expected progress, or progress that is below or significantly below expected levels.

Based on these outcomes, teachers can decide whether to (a) make small or major changes to the student’s instruction, (b) continue with the current instructional plan, or (c) change the student’s goal (Hosp, Hosp, & Howell, 2007. The ABCs of CBM: A Practical Guide to Curriculum-based Measurement. NY: Guilford Press).
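The graphing step described above is often implemented as a straight "aimline" from a baseline score to the goal, against which each weekly score is judged. A minimal sketch, with illustrative numbers (baseline, goal, and weekly scores are all made up):

```python
# Sketch of a progress-monitoring aimline: expected WCPM at each
# weekly check, drawn as a straight line from baseline to goal.

def aimline(baseline: float, goal: float, weeks: int) -> list[float]:
    """Expected WCPM at week 0 through `weeks`."""
    step = (goal - baseline) / weeks
    return [round(baseline + step * w, 1) for w in range(weeks + 1)]

expected = aimline(baseline=60, goal=90, weeks=10)  # 3 WCPM per week
actual = [60, 62, 67, 68, 73, 74]                   # weekly scores so far

# Simple decision check: is the most recent score below the aimline?
behind = actual[-1] < expected[len(actual) - 1]
print(behind)  # True
```

In practice, the teacher would look at the trend of several data points, not a single score, before changing the instructional plan.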

Using the data

You can use the information in this table to draw conclusions and make decisions about the oral reading fluency of your students.

Students scoring 10 or more words below the 50th percentile using the average score of two unpracticed readings from grade-level materials need a fluency-building program.
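This screening rule can be sketched directly; the 50th-percentile norm value below is a placeholder, and in practice it would be read off the published table for the student's grade and season:

```python
# Sketch of the screening rule: average two unpracticed grade-level
# readings and flag students 10 or more WCPM below the 50th-percentile
# norm. The norm value used in the example is a placeholder.

def needs_fluency_program(read1: int, read2: int, norm_50th: int) -> bool:
    """True if the two-reading average is 10 or more WCPM below the norm."""
    average = (read1 + read2) / 2
    return norm_50th - average >= 10

# Example with an assumed 50th-percentile norm of 120 WCPM:
print(needs_fluency_program(104, 108, norm_50th=120))  # True
```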

In addition, teachers can use the table to set the long-term fluency goals for their struggling readers.

Reprints

You are welcome to print copies for non-commercial use, or a limited number for educational purposes, as long as credit is given to Reading Rockets and the author(s). For commercial use, please contact the author or publisher listed.

Comments

A couple of things might be happening. For example, the student might not like taking the test and so takes it less seriously than he should; he might not be understanding the reading material as well as the grade progresses, since it gets harder; he might simply be having an off day; or something more serious might be at hand, such as a neurological condition.

As was stated earlier in the post, one must take into consideration that the level of text complexity increases throughout the year. The child does not forget how to read, but as the text becomes increasingly difficult, the child's fluency must also increase in order to read the same number of words correct per minute. The fluctuation could indicate that the child is having difficulty keeping up with the increased complexity of the text and needs more practice at that level.

The WCPM decreased when the student went from 6th to 7th grade because the reading material gets harder with each promotion. Students face text a little harder than in the previous grade, which explains the apparent inconsistency in test scores. It's normal, of course: harder text takes more brain power to decode, and that takes more time.

What if the student is reading at about the WCPM average, but at an independent level that is not grade level? For example, my fourth grader reads 105 WCPM in winter, but he is reading at a third-grade benchmark.

I've been doing reading assessments with the QRI-4 by Leslie and Caldwell, so I'm wondering if you count ALL miscues for the WCPM score (total accuracy), or just the number of meaning-change miscues (total acceptability), to get the oral reading fluency score.

This is a very useful matrix, especially with the information about improvement, rather than just being guided by percentiles. I recognize that silent reading cannot be measured similarly, given that miscues cannot be captured by the examiner, but is there a guideline for wpm read independent of comprehension results?

I am interested in using your chart for my learners. However, there is little information on how it is used. For instance, how do you account for the fact that a learner's percentile score remains unchanged while the learner's number of correct words read in a minute improves over the terms?

The percentile is a normative score that reflects the student's relative standing compared to his same-grade peers, while words correct per minute is a raw or absolute score without reference to other students. As all students in a grade grow in wcpm, the only way a student could improve his percentile score (relative standing in the group) is to outpace his peers in wcpm growth. Use wcpm to measure a student's growth over time (i.e., progress monitoring), and use the percentile score to gauge how well he's doing compared to same-grade peers (i.e., decide if he needs to be referred for intervention).

It is because the child is aging, and so are his peers. As everyone else improves, the 50th percentile has to increase, too. You might ask, "Well, why is the end-of-year WCPM score higher than the fall WCPM score for the next grade?" Remember, the text difficulty is increasing each year.

So to compare a child's fluency skills to others', you could compare him to same-age peers reading the same text, OR older peers reading a higher-level text, OR younger peers reading a lower-level text, OR all of these. In the past, this was a way for me to find a student's independent, instructional, and frustration levels of reading. As a special education teacher, I used to write statements of strengths or deficits such as, "As a sixth grader, John's fluency is commensurate with a fourth-grade student at the 50th percentile reading a fourth-grade-level text." I would include the numbers for where a typical sixth grader would be at that time of year and include his percentile, too.

One of my favorite informal reading assessments is a Burns and Roe IRI.

In this case, the percentile is not an individual score per se, but rather a performance level. For example, students who are reading at the 50th percentile in the winter of first grade read 23 words correct per minute (WCPM). So, if you want your students to read at the 50th percentile, you will want to follow that row across to check that their WCPM score improves at the suggested rate throughout the year and that they meet the fall, winter, and spring WCPM benchmarks that are given.