As the Chronicle of Higher Education reported last Friday, these researchers have developed software that computes a trust value for Wikipedia entries by analyzing their content history. Passages written or edited by Wikipedians whose past contributions have largely survived unedited are considered generally trustworthy. On the other hand, content from individuals whose entries are frequently edited by others, or deleted entirely, is considered more suspect. Trustworthy passages appear as they do in regular Wikipedia, but passages by authors with a history of questionable content are highlighted orange, with darker orange indicating greater untrustworthiness. For an example of how this works, see the sample articles on UCSC’s web site.
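The researchers' actual algorithm is surely more sophisticated, but the core idea can be sketched in a few lines of Python. In this illustration, an author's reputation is assumed to be the fraction of their past words that survive later revisions, and that score is mapped to a display shade; the names, thresholds, and scoring rule here are all hypothetical, not the UCSC implementation.

```python
# A minimal sketch of reputation-based trust coloring, loosely inspired
# by the UCSC approach. The scoring rule and thresholds are illustrative
# assumptions, not the researchers' actual algorithm.

from dataclasses import dataclass


@dataclass
class Author:
    name: str
    words_written: int = 0    # total words this author has contributed
    words_surviving: int = 0  # words still present in later revisions

    @property
    def reputation(self) -> float:
        """Fraction of the author's text that survived later edits (0..1)."""
        if self.words_written == 0:
            return 0.5  # assumption: unknown authors get a neutral score
        return self.words_surviving / self.words_written


def trust_shade(reputation: float) -> str:
    """Map a reputation score to a display shade: plain text for
    trusted passages, progressively darker orange for less trusted ones."""
    if reputation >= 0.9:
        return "plain"        # rendered like ordinary Wikipedia text
    if reputation >= 0.6:
        return "light orange"
    if reputation >= 0.3:
        return "orange"
    return "dark orange"      # a "sea of orange": look elsewhere


if __name__ == "__main__":
    veteran = Author("steady_editor", words_written=10_000, words_surviving=9_500)
    newcomer = Author("often_reverted", words_written=2_000, words_surviving=400)
    for a in (veteran, newcomer):
        print(f"{a.name}: reputation={a.reputation:.2f} -> {trust_shade(a.reputation)}")
```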

Obviously, the software does not actually fact-check articles. And there is a potential problem with more arcane topics, where a history of no edits might reflect a lack of experts in the field rather than article reliability. Nevertheless, for more general topics, by indicating how frequently an author’s work is overwritten, the software can potentially flag content written by someone with a history of providing inaccurate information, or of Wikipedia vandalism, and suggest caution. Users faced with a sea of orange, the researchers say, should look elsewhere. For cleaner articles, the old adage applies: trust, but verify.