Fact-checker agreement—and disagreement

In recent months we’ve pored over a pair of scholarly works on the topic of fact-checking. One, Checking the Fact-checkers in 2008: Predicting Political Ad Scrutiny and Assessing Consistency by Michele A. Amazeen, we reviewed earlier this year. The second, a doctoral dissertation by former PolitiFact writer Lucas Graves, contains much material we’ll address in one way or another over the coming months. In this article we’ll tie together threads from both works in an examination of fact-checker agreements and disagreements.

What does it mean when fact-checkers agree? Amazeen’s paper, which we’ll call Checking as we did in our review, sought to use fact-checker agreement to support the perception of fact-checker reliability. Amazeen worked from the premise that agreement between fact-checkers represents a form of triangulation that in turn lends support to the presumption of accuracy.

In our review of Checking we charged that Amazeen misapplies the triangulation principle by focusing on a set of three mainstream fact-checkers. By excluding fact checks from other sources, Checking narrowed the distance between the differing points of view. Triangulation works best with a greater distance between points of view. Agreement between the left-leaning fact-checkers at Media Matters for America and the right-leaning fact-checkers at Newsbusters means more than agreement between FactCheck.org and PolitiFact, for example, at least in terms of triangulation.

Checking’s approach to triangulation suffers from another weakness: What does it mean when Amazeen’s trio of fact-checkers all make the same error in their evaluation of a claim? Here at Zebra Fact Check we’ve noted examples where Checking’s elite trio of FactCheck.org, the Washington Post Fact Checker and PolitiFact all made essentially the same error on a fact check.

Fact-checker agreement, then, means little without a reasonable assurance of accuracy. Fact-checker agreement on inaccurate fact checks may mean the fact checkers share a point of view that helps lead to the error.

So what about fact-checker disagreement?

Fact-checker disagreement, as with fact-checker agreement, tells us almost nothing by itself. However, if we know which fact checker is right when fact checkers disagree, the information may help us decide which fact-checkers do the best job checking facts.

How do we know which fact-checker is wrong and which is right when fact-checkers disagree? Zebra Fact Check fact-checks other fact-checkers, but why should anyone accept its judgments? Don’t the mainstream fact-checkers have advantages over operations like Zebra Fact Check, such as multiple editors?

We concede that the mainstream fact-checkers hold some advantages. But at the same time, the contest over accurate fact-checking always boils down to who has the best argument. If a fact-checker with disadvantages produces stronger work than the advantaged fact-checkers, it follows that the disadvantaged fact-checker is probably right.

This contest over perceived reliability came to mind as we read Lucas Graves’ dissertation, Deciding What’s True: Fact-Checking Journalism and the New Ecology of News (hereafter “Deciding”).

Deciding dissects the origin of fact-checking at length, arguing that the mainstream fact-checking trend largely represents journalists marking off their territory against a bloggers’ insurgency.

Their academic credentials and establishment status — and the perceived quality of their work — distinguish them from what fact-checkers see as a stew of online vitriol and partisanship.

The mainstream fact-checkers say, in effect, “We, not you, shall decide what is true and what is not.”

Part of keeping control of their claimed territory involves staving off challenges to their authority. Note how Graves describes the way the mainstream fact-checkers strategize their responses to criticism:

Fact-checkers anticipate criticism and develop reflexes for trying to defuse it. “We’re going to make the best calls we can, in a pretty gutsy form of journalism,” Bill Adair told NPR. “And when we do, I think it’s natural that the people on one side or the other of this very partisan world we live in are going to be unhappy.” One strategy is responding only minimally or in carefully chosen venues, and always asserting their balance, often by showing the criticism they receive from the other side of the spectrum.

In our experience, the Washington Post Fact Checker stands head and shoulders above the rest of the “elite three” in responding substantively to criticism.

To exemplify a weaker strategy of response, we’ll use PolitiFact’s response to our recent fact check of the “Mostly False” grade affixed to Rep. Trey Gowdy’s complaint that the State Department had not responded to a document request. We found PolitiFact blatantly neglected the context of Gowdy’s remarks, and we pointed out the omitted context to the writer and editor of the piece.

PolitiFact chose a minimal response and a venue: none, and nowhere.

An isolated case? That’s not our experience. In November 2014 PolitiFact opined that it was forced to give Rudy Giuliani a “False” rating since it could find no information supporting his claim that murder conviction rates are about equal for whites and blacks.

We couldn’t find any statistical evidence to support Giuliani’s claim, and experts said they weren’t aware of any, either. We found some related data, but that data only serves to highlight some of the racial disproportion in the justice system.