Item Response Theory (IRT) models student ability using question-level performance rather than aggregate test-level performance. Instead of assuming all questions contribute equally to our understanding of a student's abilities, IRT offers a more nuanced view of the information each question provides about a student. It is founded on the premise that the probability of a correct response to a test question is a mathematical function of parameters such as a person's latent traits or abilities and item characteristics (such as difficulty, "guessability," and specificity to topic).
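To make this concrete, here is a minimal sketch of one common formulation, the three-parameter logistic (3PL) model, which ties together the quantities mentioned above: student ability, item difficulty, item discrimination, and a guessing floor. The function name and parameter names are illustrative, not taken from any particular library.

```python
import math

def irt_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL IRT model.

    theta -- student ability (the latent trait)
    a     -- item discrimination (how sharply the item separates abilities)
    b     -- item difficulty (ability level at the curve's inflection point)
    c     -- pseudo-guessing parameter (the "guessability" floor)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# With no guessing (c = 0), a student whose ability matches the item's
# difficulty answers correctly with probability 0.5.
print(irt_3pl(theta=0.0, a=1.0, b=0.0, c=0.0))  # → 0.5
```

Setting c > 0 raises the lower asymptote, capturing the fact that on a multiple-choice item even a very low-ability student can answer correctly by guessing.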

In this video, data scientist Kevin Wilson and software engineer Alejandro Companioni discuss the intricacies of this theory.