OpenComps: Validity and Causal Inference

With my comprehensive exams beginning in 12 days, my studying has hit the homestretch. Thankfully, my advisor has inspired some confidence by telling me that my understanding of the math education literature is solid and I won't need any more studying in that area. That's good for my study schedule, and something I take as a huge compliment. So now I can focus for a while on preparing myself for the exam question Derek Briggs is likely to throw my way. Typically, one of the three people on a comps committee is tasked with asking a question related to either the quantitative or qualitative research methodology we learn in the first year of our doctoral program. Derek is a top-notch quantitative researcher, and I enjoyed taking two classes from him last year: Measurement in Survey Research and Advanced Topics in Measurement. Where this gets slightly tricky is that Derek didn't actually teach either of my first-year quantitative methods courses, so there's a chance I could be surprised by something he normally teaches in those classes that I didn't see. It's a risk I was willing to take after working with Derek more recently and more closely in the two measurement courses last year.

It certainly won't be a surprise if Derek asks a question that focuses on issues of validity and causal inference. He mentioned it to me personally and put it in a study guide, so studying it now will be time well spent. I feel like I've had a tendency to read the validity literature a bit too quickly or superficially, so this is a good opportunity to revisit some of the papers I've looked at over the past couple of years. Here's the list I've put together for myself:

AERA/APA/NCME. (1999). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association. [Just the first chapter, "Validity."]

Zumbo, B. D. (2009). Validity as contextualized and pragmatic explanation, and its implications for validation practice. In R. W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (pp. 65–82). Information Age Publishing.

Thankfully, I've recently read some of these papers for my Advances in Assessment course, so the amount of reading ahead of me is appreciably less than it might look. In my typical fashion, I'll study these in chronological order, in the hope of getting a sense of how the field has evolved its thinking and practice regarding these ideas over the past several decades.

Although I have little other graduate school experience to compare it to, I feel like this reading list is representative of what sets a PhD apart, particularly one earned at an R1 university. It's not necessarily glamorous, and its relevance to day-to-day teaching and learning in classrooms might not be immediately obvious. But without attending to issues like validity and causal inference, we have a much more difficult time being sure about what we know and how we're using that knowledge. Issues of validity should be at the heart of any assessment or measurement, and when they're attended to properly we greatly improve our ability to advance educational theory and practice.