Educators Meet to Review and Write Smarter Balanced Test Items

Over the course of a week in Chicago, mathematics and ELA test items were reviewed for content alignment and for bias and sensitivity, and new items were developed for field testing

Classroom teachers from eight states convened July 30–August 3 to provide a “boots on the ground” perspective on student skill levels. Core to the Smarter Balanced mission, the annual processes of test item review and test item writing leverage the expertise of teachers (they know their students best!) in three distinct ways:

to review existing questions for alignment with grade-level content

to review existing questions for bias and sensitivity and remove references, phrases, or situations that could prevent a student from accurately showing what they know and can do

to thoughtfully craft questions for the summative test that gauge knowledge and skills at the appropriate level of difficulty for each grade

Subtlety of Sensitivity and Bias

Sensitive topics and unintentional bias can be subtle, hiding in plain sight. Over the course of the meeting, educators developed a keen sense for references or phrasing that could act as emotional triggers or alienate students unfamiliar with the subject. When a test item is flagged, the team discusses it to determine whether the wording can be changed to ameliorate the bias or whether the question needs to be discarded.

A critical piece of reviewing test items focuses on removing references that could be insensitive to test-takers with disabilities. Often a minor wording change can make a question far more relatable. For example, changing “view the passage below” to “analyze the passage below” makes the question equally accessible to students with visual impairments, so they can focus on what the question is really asking of them.

Terry Gibbs-Burke, Director of Mathematics, gave another example of a test item that would require sensitivity review: “If a question on the test references hurricanes, we would want to remove this question so as not to cause distress to students who may have lived through a recent natural disaster.”

An example of an unintentionally biased test question is one that frames a scenario around circumstances that would be familiar to urban youth but would not make sense to rural youth (or vice versa). Removing these kinds of experience-based references makes students less likely to stumble on aspects of a question that are unrelated to the skill being measured.

Ellen Irish, an Oregon educator, explains the steps that a test question (item) goes through before it can be added to the Smarter Balanced summative test.

How to Build Perfect Test Items

Whether teachers were new to the Item Review & Item Writing meeting or had attended in years past, they all shared one understanding: writing strong test questions is a complex, science-based process with a healthy dash of creativity. Many attendees were there for the professional development as much as to contribute to the work of the Consortium.

The training allowed them to hone their knowledge of stems, solutions, and the psychology of distractors, so that when they returned to their classrooms, they could write test questions that give their students the best opportunity to succeed.

The educators who attended the meeting also understood their broader impact: they were helping shape questions used to assess millions of students across the country. Contributing to this process helps ensure the Smarter Balanced test is high-quality, accessible, and provides the most accurate measure of student learning. Many thanks to the educators who attended the 2018 Item Review & Item Writing meeting!

Visit the Educator-Created page to learn more about how teachers are involved in building the Smarter Balanced suite of instructional tools.