The results of the plagiarism analysis showed that every single student in the class had cheated. All failed the assignment, and initially the school planned to note the offense on each student’s official transcript. Goodbye, good schools! One might think the school would question whether there were issues with either the software’s analysis or how the results were interpreted. After all, 100% of a group of “A” students with no history of trouble had been flagged. The school did not question the results. The parents did, however, and after some digging they uncovered findings that are quite troubling.

First, the software by default looks for any phrase of three words or more that matches between two submitted papers. Each “offense” of a “copied” three-word phrase is tagged. Get tagged too many times, and you’re identified as a cheater. Let’s think about this criterion applied blindly, without further thought. Assume students are writing about Tolstoy’s War and Peace. Two students start a sentence with “Tolstoy said that…” or “The meaning of…” or “The book refers to…”. They are now guilty of plagiarism. The software assumes nobody could have such phrases in common without copying from one another. Such tags are useful as a starting point. But, to be applied correctly, someone needs to review the papers and validate whether any of the phrases really appear to be copied or are just innocent matches like the examples above. Nobody did that.
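To see why blind three-word matching produces false positives, here is a minimal sketch of that kind of check. The example sentences are hypothetical, and real plagiarism detectors are more elaborate, but the core idea of comparing overlapping three-word sequences is the same:

```python
import re

def three_word_phrases(text):
    """Return the set of all three-word sequences (trigrams) in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

# Two hypothetical student papers that were written independently.
paper_a = "Tolstoy said that the meaning of history lies in ordinary lives."
paper_b = "Tolstoy said that no single hero drives the events of history."

# Under the default rule, any shared trigram counts as one "offense".
matches = three_word_phrases(paper_a) & three_word_phrases(paper_b)
print(matches)  # {('tolstoy', 'said', 'that')}
```

Both papers get tagged here solely for opening with “Tolstoy said that”, an entirely ordinary phrase for an essay on Tolstoy. Accumulate enough of these incidental matches and a tool applying this rule mechanically will label honest students as cheaters, which is why a human review step matters.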