CRESST Speaks at NCSA

June 25, 2018

CRESST researchers present at this year’s CCSSO National Conference on Student Assessment (NCSA) on a variety of topics, ranging from innovative career-readiness assessment items, to formative assessment tasks in mathematics, to braille versions of online assessments. Check out the presentations below.

This symposium focuses on a collaboration between the California Department of Education and UCLA/CRESST aimed at supporting California’s K-12 assessment initiatives in the context of improving career-readiness inferences, while producing information, tools, and methodologies to inform assessment work. The symposium consists of three papers: (1) Digital Assessment of Problem Solving; (2) Examining Career Readiness Features in Existing Assessments; and (3) California’s Enhanced Assessment Grant: Increasing Students’ Postsecondary Opportunities for Success. The first paper focuses on the development of an innovative assessment item centered on a common career skill, problem solving. The second paper presents findings from an examination of career-readiness features in existing English language arts and math assessments. The third paper discusses the importance of the Enhanced Assessment Grant in the context of policy and preparing students for career opportunities. Implications for policy and for the future of test design, analysis, and reporting will be discussed.

With the adoption of college- and career-ready standards, educators and administrators have been tasked with providing mathematics curricula that go beyond procedural skills to emphasize higher-order reasoning abilities and conceptual understanding in students. One widely advocated approach to accomplishing this involves increased use of formative assessment. Yet despite this push, our research has found that mathematics teachers struggle to gather and analyze student work, provide feedback to students in relation to success criteria, and relate student solutions and errors to exemplars and success criteria. This session will focus on what we have learned during a multi-year NSF-funded study, describe the design features of the formative assessment performance tasks we developed, and reveal the results of our intervention study. Finally, we will discuss applications of our system for the development of new formative assessment tasks by teachers or districts.

Making changes to, or sometimes replacing, test items is often necessary to address student accessibility needs and to ensure fair testing conditions (e.g., an online test delivered on paper or in braille). With such changes, however, parameters from one assessment format can be of questionable accuracy vis-à-vis the alternative format. Furthermore, the number of students taking an alternative form (e.g., braille) is sometimes too small to conduct a traditional item-based linking study. It is nonetheless critical that performance on alternative assessment forms support the same inferences about student performance. This session describes a data-driven, judgment-based approach developed to link cut scores on alternative versions of an assessment. Discussion includes application of the approach to establishing cut scores on a braille version of an online assessment, as well as how the methodology can be applied to linking other types of alternative assessment formats. Time for audience discussion will be included.

English language proficiency assessments are a major source of information for states as they support their English language learners in making academic progress. While there is interest in having these instruments provide formative feedback that informs classroom instruction, current scoring methods do not provide detailed enough scores to be of much use. Diagnostic classification models (DCMs) may have potential in this regard, as they are a scoring approach designed to provide finer-grained information about students’ mastery of individual abilities. Yet the measurement conditions and interpretation of DCMs differ from those of more common approaches, and questions remain as to whether applying them in the state assessment context would be feasible or worthwhile. The discussants, including a state ELL assessment representative and a measurement researcher, will discuss their work investigating potential uses of DCMs in state assessments and the questions that remain open.