An Assessment Technique Using Research Articles

In entry-level courses it’s often a struggle to get students to see that the content has larger significance and intriguing aspects. In most science textbooks, for example, only well-established facts are presented, and they are supported by equally well-known research studies. Textbooks don’t usually identify areas of inquiry where the questions have yet to be answered or where the findings so far are controversial. And yet this is often the content most likely to interest students. But can you expect beginning students to read original sources, like research studies? Could you expect them to answer test questions about those articles?

A biology professor reports on his experience using research articles, and asking test questions about them, in an undergraduate course for students majoring in the life sciences. Students were assigned a research article relevant to the content being covered in class, and it was posted on an accessible website. Sometimes the article was discussed during the lectures, and sometimes it was the topic of a tutorial session (these were large classes that included tutorial sections). Either way, the students had access to the articles before and during the assessment activity.

The students were then given test questions on these articles. The questions were set at three levels of Bloom’s taxonomy. Questions at level one were straightforward, testing students’ scientific literacy and conceptual understanding. Questions at level two focused on students’ ability to link prior knowledge or textbook content to material in the research article; the goal was to see whether students could correlate different components and understand scientific reasoning. At level three, the questions asked students to link the research content to daily life, integrating it with their current knowledge and applying it in a creative way. Three different research articles were assigned, and the test on each counted for 10 percent of the course grade, 30 percent in total.

An elaborate system for evaluating student responses revealed that students had read, understood, and were able to write about the research articles. A majority of the students were even able to correctly answer the level-three questions. And students responded favorably to this approach: they felt it positively affected their motivation in the course and showed them interesting and relevant aspects of the content. “The results showed that the approach strongly motivated students to step out of their comfort zone (textbook) and to develop high-order cognitive skills, including correlation, application, and synthesis.” (p. 289)

A good deal of the success of this approach can be attributed to the criteria used to select the research articles. The author notes that finding suitable articles was a “major challenge” in developing this approach (p. 284), but the criteria used help explain why the assessment strategy worked. Articles had to meet four criteria. First, the article had to be relevant: it had to link with the content being covered in the course. Second, it had to be interesting: it had to address some topic that would capture students’ curiosity, perhaps a question as yet unanswered or a controversial issue. Third, the article needed to be comprehensible: students had to be able to understand it, or at least most of it. It could not be overly complicated, as these students had little (if any) previous experience reading research material. And finally, it had to be heart-stirring—the author’s way of saying it had to be an impressive piece of work, something that would inspire students and set high expectations for their future work as scientists.

I love this idea, but I feel a bit overwhelmed by the work it would take to prepare the exams.
This also relates to assigning research projects/papers where the students find their own sources — I should be using Bloom’s taxonomy to require students to demonstrate multiple levels of competence.