Chicago Public Schools Project


A growing body of evidence shows that cognitive assessments like IQ and achievement tests fail to capture non-cognitive (socio-emotional) skills that are important predictors of life outcomes and can be improved through education. Despite this evidence, most governments rely exclusively on achievement tests to evaluate schools, school districts, and even teachers. It is widely thought that non-cognitive skills cannot be reliably measured in practice, and there remains no uniform agreement about the most appropriate way to measure them.

The Spencer Foundation project on measuring non-cognitive skills compares the predictive validity of several different methods for eliciting quantitative measurements of students' socio-emotional skills, using a sample of approximately 650 ninth-grade students currently enrolled in the Chicago Public Schools system. We will collect responses to a survey that includes several competing personality inventories in order to compare and contrast what is captured by these various measurement approaches and to estimate the extent to which different measures predict later outcomes.

In this study we consider as outcomes the student's grade progression, conduct, achievement test scores, and accumulation of class credits. We will also examine the relationship between survey-based measures and measures that can be crafted from administrative data, such as absences, prior grades, and previous disciplinary infractions. Administrative data may provide a low-cost way for schools to measure non-cognitive skills, and these measures may have higher predictive validity than the survey-based methods that many schools are proposing.

Frequently Asked Questions

Currently, there are many possible measures of non-cognitive skills coming from different disciplines (psychology, economics, and education), making it difficult for school districts and policy-makers to decide which measure to use to assess students, teachers, and schools. Both grit and growth mindset inventories have been proposed for use in student or teacher evaluations in California, Massachusetts, and Alabama. This study will quantify the relationship between a subset of measures of non-cognitive skills from different disciplines and determine which measures are most predictive of later outcomes.

2. To what extent do available school administrative records capture non-cognitive skills?

Recent evidence suggests that schools could use existing academic indicators like student grades, credits earned, absences, and disciplinary infractions as measures of non-cognitive skills (Kautz and Zanoni, 2015; Borghans et al., 2011; Duckworth et al., 2012; Jackson, 2012). This approach is attractive because schools already collect these data. There are, however, open questions as to whether these data capture the full range of non-cognitive skills, and whether they capture the skills that predict life-relevant outcomes in both the short run and the long run. This study will examine the extent to which academic indicators capture non-cognitive skills and the extent to which the traditional measures of non-cognitive skills predict selected outcomes above and beyond academic indicators.
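The "above and beyond" comparison described here is an incremental predictive validity check: fit an outcome model on academic indicators alone, then add the survey-based measure and see how much the fit improves. The sketch below illustrates the idea on simulated data; the variable names (gpa, absences, grit_score, credits) are illustrative assumptions, not the study's actual measures or analysis plan.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated administrative indicators and one survey-based skill measure.
gpa = rng.normal(3.0, 0.5, n)
absences = rng.poisson(5, n).astype(float)
grit_score = 0.5 * gpa + rng.normal(0, 1, n)  # partly correlated with GPA

# Simulated outcome (credits earned), driven by all three predictors.
credits = 10 + 2 * gpa - 0.3 * absences + grit_score + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Model 1: academic indicators only. Model 2: indicators + survey measure.
r2_admin = r_squared(np.column_stack([gpa, absences]), credits)
r2_full = r_squared(np.column_stack([gpa, absences, grit_score]), credits)

# The gap between the two R^2 values is the survey measure's predictive
# contribution beyond what the administrative data already capture.
print(f"admin only: {r2_admin:.3f}, with survey measure: {r2_full:.3f}")
```

In the simulation the survey measure carries variance independent of GPA and absences, so the full model fits better; in the actual study, a negligible gap would suggest the administrative indicators already absorb what the survey measures.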

3. Is there evidence that self-reported measures depend on a student's reference (peer) group or other contexts and incentives?

Contexts and incentives can influence psychological measures, which in turn can bias comparisons across groups of students. In schools, a major concern is "reference bias," which arises when respondents rate themselves relative to their immediate peers (part of their context) rather than the whole population. For example, questionnaires might ask students to rate themselves on the extent to which they are "hard-working," which might mean different things to students with different peer groups. Reference bias is of particular concern when comparing skills across very different types of schools, and it can also reduce the predictive power of the skill measures.