Knee-Jerk Reforms on Remediation

By Hunter R. Boylan and Alexandros Goudas

The author and philosopher Thomas Merton once said that “the self-fulfilling prophecy perpetuates a reign of error.” The self-fulfilling prophecy that remedial education has failed is now leading us into just such a reign of error. The news media, policymakers and various higher education agencies are using flawed interpretations of data about remediation to make unsupported assertions and repeat them frequently, thus leading to erroneous policy decisions.

It began with a 2007 report by Martorell and McFarlin. Supported by the RAND Corporation, these researchers used a regression-discontinuity design to study a large sample of Texas college students whose scores placed them just above and just below the placement cutoff for remedial courses in mathematics and reading. They found that those who just missed the cut score and placed into remedial courses fared no better in college-level course performance, graduation rates, transfer rates and earnings than those who just made the cut score. This finding was used to support the authors’ conclusion that remediation was of questionable value.

There is a major flaw in this conclusion. The authors assume that participation in remedial courses should result in participants performing better than students who did not take them. The purpose of remedial courses, however, is to level the academic playing field for underprepared students, not to enable them to outperform prepared students. Given that, the fact that there is little difference in performance between the two groups would indicate that the purpose of remedial courses had been accomplished.

A similar study using a regression-discontinuity design was conducted by Calcagno and Long (2008), with similar results. Students in this study who scored just below the cut score and participated in remediation did no better in the corresponding college-level course than those who scored slightly higher and bypassed remediation. Calcagno and Long were more tentative about the meaning of their findings than Martorell and McFarlin. They pointed out that “[t]he results of this study suggest that remediation has both benefits and drawbacks as a strategy to address the needs of underprepared students.” They also acknowledge that “the research design we used only allows the identification of the effect of remediation on a subset of students who scored just above and just below the cutoff score. Estimates should not be extrapolated to students with academic skills so weak that they scored significantly below the cutoff point.”

Neither study explored the performance of students with lower assessment test scores in later college-level classes. And neither study is generalizable to the entire range of remedial courses and students. Yet these are the major studies used to justify the claim that all remediation has failed. Although none of the authors of these studies makes this claim, their work is used to justify it.

Meanwhile, there are other studies of remediation leading to different conclusions. In a 2006 study using the National Education Longitudinal Study of 1988 database, Attewell, Lavin, Domina and Levey found that “two-year college students who successfully passed remedial courses were more likely to graduate than equivalent students who never took remediation.” They also found that students who took remedial reading or writing were more likely to graduate from community colleges than students who did not take these courses. Bahr later studied the effect of remediation on a large sample of students at 107 California community colleges. He found that students who successfully completed math remediation were just as successful in college-level math courses as those who did not require remediation. He concluded that “these two groups of students experience comparable outcomes, which indicates that remedial math programs are highly effective at resolving skill deficiencies.”

Boatman and Long, in a 2010 study, reviewed a sample of 12,200 students enrolled at public institutions in Tennessee. Although they found many negative effects for remediation, they also found some positive effects depending upon the subject area and the degree of student underpreparedness. Among their conclusions was that postsecondary decision makers should “not treat remediation as a singular policy but instead consider it as an intervention that might vary in its impact according to student needs.”

If we look at all the major studies of remediation, we find conflicting and inconclusive results. Given this evidence, it is difficult to understand how any credible scholar familiar with it can decisively conclude that remediation has failed. Nevertheless, there are those who misinterpret or ignore the available evidence to make this claim and are then widely quoted by others. It does not take long for the press and policymakers to echo these quotes. In fact, there is little data to justify this assertion, and the studies on which it is based do not support it.

A prime example of the perpetuation of the reign of error can be found in a recent report from Complete College America. Entitled “Remediation: Higher Education’s Bridge to Nowhere,” the report contends at the outset that the “current remediation system is broken” and that “remediation doesn’t work.” As evidence for this assertion, the report claims “research shows that students who skip their remedial assignments do just as well in gateway courses as those who took remediation first.” This erroneously suggests that all remediation has failed. Because the authors do not bother to cite a reference for this claim, we can only assume that they are using the Martorell and McFarlin, Boatman and Long, and Calcagno and Long studies to justify it. As previously noted, this is not what any of these publications actually says.

The “Bridge to Nowhere” report goes on to project that, according to its data, only 9.5 percent of those who take remedial courses will graduate from a community college within three years, while 13.9 percent of those who do not take remedial courses will graduate within three years. The authors cite these figures as further evidence of the failure of remediation. We do not disagree with these figures, but we do disagree with the interpretation of them. The authors of “Bridge to Nowhere” appear to be arguing that it is participation in remediation that accounts for this difference in graduation rates. This argument rests on the assumption that correlation implies causality, a well-known fallacy among researchers. Furthermore, as Bettinger and Long point out in a 2005 study, “a simple comparison of students in and out of remediation is likely to suggest negative effects due to important academic differences in the underlying populations.” Students placing into remediation are disproportionately characterized by known risk factors such as being minority, low income, first generation and underprepared. For such students, it is likely that these factors account for low graduation rates more than participation in remediation does.

We do not argue with the data provided in any of these reports, nor do we question the methodology used in obtaining the data. We also concur with many of the recommendations in “Bridge to Nowhere.” We would further agree that remediation as currently delivered in U.S. community colleges is very much in need of reform. However, we disagree that all remediation has failed everywhere for all students as many policymakers and news reporters seem to believe. There is simply no credible scientific evidence to support this belief. Unfortunately, that does not stop organizations like Complete College America and others from asserting that remediation has failed, thus creating a self-fulfilling prophecy.

A more recent example took place in the Connecticut Legislature this year. Based on the misinterpretations of research and the fallacious arguments discussed here, the Legislature required that remediation in all the state’s colleges be limited to one semester and replaced by embedded support services in gateway courses. This also happens to be one of the major recommendations of Complete College America. Although we agree that this might be a good solution for some students, particularly those at the top of the remediation score distribution, it is not a good solution for all students. In fact, there is no single solution for all underprepared college students. There are many tools, validated by varying amounts of research, available to address the needs of underprepared college students through improved remedial courses and a variety of separate or embedded support services.

Neither colleges and universities nor policymakers, then, should conclude that all remediation has failed and engage in knee-jerk attempts to eliminate it. We need to reform remediation and guide our reform efforts with accurate data and sound research. We need to explore various alternatives, including some of those proposed by Complete College America and others. Nonetheless, we disagree that eliminating all remedial courses is a wise course of action. As Bahr points out, remediation “is so fundamental to the activities of the community college that significant alterations to remedial programs would change drastically the educational geography of these institutions. This should give pause to those who advocate the elimination of remediation … as the implication of such changes would be profound indeed.” We believe that arguing for such profound change as the elimination of remediation for all students on the basis of so little evidence is not only ill-advised but will also undermine the goal of improving college completion.

Bio

Hunter R. Boylan is the director of the National Center for Developmental Education and a professor of higher education at Appalachian State University. Alexandros Goudas is an English instructor at Delta Community College.