National Evaluation Helps Set the Standard for Teach for America

This article is part of a civilrights.org series that examines innovative education reform programs.

With so many different, sometimes conflicting, views on how to close the achievement gap for minority students, research and evaluation are playing an increasingly important role in identifying strategies that work.

Recently, a group of funders active in the educational field commissioned an independent research study to assess the effectiveness of one of the country's fastest growing teacher recruitment and preparation programs, Teach for America (TFA).

Although some education advocates had previously criticized TFA's expedited training process, the evaluation study found that TFA teachers had a positive impact on their students' math skills and no impact, positive or negative, on their reading skills.

In addition to providing useful lessons for TFA, the study also underscored the broader value of evidence-based educational research in guiding good policy and practice.

"To demonstrate a program's effectiveness, you need to use evidence that not only convinces parents and other stakeholders in the school community," says Steven Glazerman, Mathematica Senior Researcher and a co-author of the study. "You also need evidence that really matters to policy makers, funders, and the skeptics."

Founded in 1989, TFA has a mission of addressing the educational inequities facing children in the nation's low-income and minority communities. The program, which currently places 1,656 teachers in 22 different rural and urban areas, seeks to expand the available pool of talented teachers by recruiting promising college seniors with strong academic records and leadership capabilities - but without traditional teacher training or education-related degrees.

Once accepted into the program, TFA recruits participate in a focused five-week summer course to prepare them to enter the classroom the following fall. Although that training is intensive and includes four weeks of student teaching, critics argued that without more hands-on practice and supervision, the relative inexperience of TFA recruits could have a negative impact on their students.

To address this question, the Mathematica study compared the academic outcomes of a large group of students taught by TFA teachers with the outcomes of students taught by non-TFA teachers in the same schools and from the same grades.

The research team began the process by dividing the teachers into two distinct categories. The first, the "TFA group," was composed of new TFA corps members and former TFA teachers still teaching in the school. The second, the "control group," included non-TFA teachers with a wide range of experience and training.

The study then used a two-stage evaluation process: a pilot study in Baltimore in the first year, and a full-scale evaluation of five additional regions - Chicago, Los Angeles, Houston, New Orleans, and the Mississippi Delta - the following year. The research was based on a comparison of findings from 17 schools, 100 classrooms, and almost 2,000 students in grades one through five.

To ensure comparable classes of children, the research team randomly assigned students entering the same grade at the same school to either the TFA or control group classrooms. The team assessed all students' basic skill levels in reading and math by administering a standardized test in both the fall and spring.

Based on a comparison of these scores, Mathematica found that TFA teachers had a positive impact on their students' math achievement relative to their control group counterparts - an impact that was roughly the equivalent of one additional month of math instruction.
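The design described above - random assignment of students within a grade, fall and spring testing, and a comparison of average score gains between the two groups - can be sketched in a few lines of Python. The function names and score values here are purely illustrative, not drawn from the study's actual data:

```python
import random
import statistics

def assign_students(student_ids, seed=0):
    """Randomly split students entering the same grade at the same
    school into two comparable classrooms."""
    rng = random.Random(seed)
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    # First half -> TFA classroom, second half -> control classroom
    return shuffled[:half], shuffled[half:]

def mean_gain(fall_scores, spring_scores):
    """Average fall-to-spring gain on a standardized test."""
    return statistics.mean(s - f for f, s in zip(fall_scores, spring_scores))

# Illustrative numbers only.
tfa_gain = mean_gain([50, 55, 60], [58, 62, 66])      # average gain: 7 points
control_gain = mean_gain([51, 54, 61], [57, 59, 65])  # average gain: 5 points
impact = tfa_gain - control_gain                      # estimated impact: 2 points
```

Because assignment is random, any systematic difference in average gains between the two classrooms can be attributed to the teachers rather than to pre-existing differences among the students.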

In terms of reading outcomes, TFA teachers had no impact on reading achievement - that is, their students improved the same amount, on average, as students of their non-TFA colleagues.

Additionally, the study found little evidence that TFA teachers had any impact on students' non-academic outcomes, such as attendance or disciplinary incidents.

The implications of the Mathematica study are significant in several respects.

First, the independently commissioned study was helpful in defusing some of the criticism from TFA's opponents.

"A lot of critics used to say that TFA harmed students," says Glazerman. "I don't hear them saying that as much any more."

The study also helped TFA to identify new ways to strengthen its program, especially in terms of literacy-building.

Finally, for a wider audience of education stakeholders, the study reinforced the value of a rigorous, randomized evaluation as one of many important tools to improve student outcomes.

"We are still waiting for more information on its long-term impact," explains Glazerman, "but the best evidence is that this kind of study not only sets the standard for TFA, but for the larger education policy field."