In a controlled experiment using Comrade, a computer-supported peer review system, student reviewers gave feedback to student authors on their written analyses of a problem scenario. Reviewers in each condition received a different type of rating prompt: either domain-related writing-composition prompts or problem/issue-specific prompts. We found that reviewers were sensitive to the type of rating prompt they saw, and that their ratings of authors' work were less discriminating with respect to writing composition than with respect to problem-specific issues. In other words, when students rated each other's work on domain-relevant writing criteria, their ratings intercorrelated to a much greater extent, suggesting that such ratings are largely redundant with one another.