Headlines (Campus Updates)

David Johnson, a professor of economics with Laurier’s School of Business and Economics, has found that “good” elementary schools are not necessarily those whose students score well on standardized, province-wide tests, but those that consistently outperform other schools in neighbourhoods with similar socio-economic characteristics.

That is the most important finding of Johnson’s new study of the test scores of Ontario elementary school students, conducted for the C.D. Howe Institute, an economic policy research institution.

Johnson finds that, although there is a strong relationship between the social and economic characteristics of the community in which a school is located and the school’s achievement levels on standardized tests, those factors explain less than half of the variation in school performance.

Some schools whose students come from homes with relatively less affluent and less well educated parents have high achievement results, while the results of some schools whose students come from homes with affluent, well-educated parents are lower than one might expect for schools in such advantaged neighbourhoods.

“I became interested in elementary school assessment data both after my own son sat through the Grade 3 assessment and after a number of years spent sitting on the Westmount School Council,” explains Johnson. “My first thought, and everyone else’s first thought, was that assessment scores would simply be higher in schools with higher-income and better-educated parents, and that the publication of assessment scores would not, in itself, be very informative about a school’s quality. I decided to do research into using these scores to actually identify better schools. This turned into an ongoing long-term research program. My book is the first major outcome from that program.”

Johnson acknowledges the controversy surrounding the release of annual test scores of Ontario’s Grade 3 and Grade 6 elementary school students and the compiling of “report cards” on the effectiveness of individual schools, which often find that schools in disadvantaged neighbourhoods do less well.

He argues that a fairer and more useful way of comparing schools is to look at test scores over a number of years, control for school community socio-economic characteristics, and focus on other factors – such as the managerial talents of principals, the quality of teaching, and the resources available to the school – that can make a difference in the classroom.

Johnson’s study breaks new ground by developing a way to compare schools with the same predicted test results but widely varying actual results. Having identified schools that perform better than expected, he reports on visits to a number of such “good” schools in southern Ontario in which he interviewed principals, teachers, and parents to gain insight into what those schools are doing that makes them successful. Among the factors he highlights is team approaches among teachers in the primary and junior divisions (kindergarten to Grade 3 and Grades 4 to 6, respectively) to preparing their students.

“I am able to identify schools that really are better than other schools that draw students from comparable neighbourhoods,” said Johnson. “After identifying schools that outperform similar schools, it was impossible for me to resist the desire to visit them to find out what made them better. Although non-economists may find it hard to believe, economic research can be both fun and policy relevant.”

Johnson argues that the benefits of standardized testing outweigh the relatively small cost per elementary school student. He also makes several recommendations for improving the testing process.

• First, the tests should be renamed the “Primary Assessment” and the “Junior Assessment,” to make it clear that the results reflect the effects of several years of teaching by groups of teachers at a school, rather than what has happened in Grades 3 and 6 alone.

• Second, he urges a number of important improvements in the presentation and analysis of the assessment data – for example, always including students who are exempted from writing the tests in the overall results, averaging results over a number of years to see if a school is improving over time, looking for ways to de-emphasize year-to-year fluctuations in results and, of course, looking hard for ways to compare results among schools whose students come from similar social and economic backgrounds.

• Third, the assessments should take place as late as possible in the school year, as all participants, for good reasons, want.

The study is called Signposts of Success: Interpreting Ontario’s Elementary School Test Scores. Those wishing to examine Johnson’s methodology and school-by-school results can access his detailed database and individual school data on March 29 on the C.D. Howe Institute’s web site at www.cdhowe.org.