Authors

Document Type

Article

Journal/Book Title/Conference

Journal of Engineering Education

Volume

99

Issue

4

Publisher

American Society for Engineering Education

Publication Date

2010

First Page

397

Last Page

408

Abstract

BACKGROUND
Peer review is a beneficial pedagogical tool. Despite the abundance of data instructors often have about their students, most peer review matching is by simple random assignment. In fall 2008, a study was conducted to investigate the impact of an informed algorithmic assignment method, called Un-weighted Overall Need (UON), in a course involving Model-Eliciting Activities (MEAs). The algorithm showed no statistically significant impact on the MEA Final Response scores. A study was then conducted to examine the assumptions underlying the algorithm.

PURPOSE (HYPOTHESIS)
This research addressed the question: To what extent do the assumptions used in making informed peer review matches (using the Un-weighted Overall Need algorithm) for the peer review of solutions to Model-Eliciting Activities decay?

DESIGN/METHOD
An expert rater evaluated 147 teams' responses to a particular implementation of MEAs in a first-year engineering course at a large Midwestern research university. The evaluation was then used to analyze the UON algorithm's assumptions when compared to a randomly assigned control group.

RESULTS
Weak correlation was found for the UON algorithm's five assumptions: (1) students complete assigned work; (2) teaching assistants can grade MEAs accurately; (3) accurate feedback in peer review is perceived by the reviewed team as being more helpful than inaccurate feedback; (4) teaching assistant scores on the first draft of an MEA can be used to accurately predict where teams will need assistance on their second draft; and (5) the error a peer reviewer has in evaluating a sample MEA solution is an accurate indicator of the error they will have while subsequently evaluating a real team's MEA solution.
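
Note: the sketch below is a minimal illustration of what a need-based reviewer assignment could look like, inferred only from the assumptions listed above (need estimated from TA first-draft scores, reviewer accuracy estimated from error on a sample solution). It is not the published UON algorithm; all function names, scores, and weighting choices are illustrative assumptions.

# Hypothetical sketch of need-based peer review matching, NOT the
# published UON algorithm. Assumes a team's "need" can be estimated
# from TA scores on its first draft, and a reviewer's accuracy from
# their error on a sample (calibration) solution.

def match_reviewers(ta_first_draft_scores, reviewer_calibration_errors):
    """Pair the highest-need teams with the most accurate reviewers.

    ta_first_draft_scores: {team: score}, lower score = more need.
    reviewer_calibration_errors: {reviewer: error}, lower = more accurate.
    Returns a list of (reviewer, team) pairs.
    """
    # Highest-need teams first (lowest TA score first).
    teams_by_need = sorted(ta_first_draft_scores,
                           key=ta_first_draft_scores.get)
    # Most accurate reviewers first (smallest calibration error first).
    reviewers_by_accuracy = sorted(reviewer_calibration_errors,
                                   key=reviewer_calibration_errors.get)
    return list(zip(reviewers_by_accuracy, teams_by_need))


if __name__ == "__main__":
    scores = {"Team A": 62, "Team B": 88, "Team C": 45}  # TA draft scores (hypothetical)
    errors = {"R1": 0.30, "R2": 0.05, "R3": 0.18}        # calibration errors (hypothetical)
    for reviewer, team in match_reviewers(scores, errors):
        print(f"{reviewer} -> {team}")

Under this toy scheme, the most accurate reviewer (R2) is assigned to the lowest-scoring, highest-need team (Team C); a random-assignment control would shuffle these pairings instead.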