Tuesday, 7 March 2017

Google's Algorithms Are Already Outperforming Pathologists

One of the more difficult things a doctor can do is diagnose cancer. That's not just because of the life-changing effects such a finding can have, but because distinguishing an abnormal but benign bunch of cells from one that's potentially deadly is surprisingly subjective. Trained physicians can and often do disagree. In one study of breast biopsies, for example, diagnostic agreement was as low as 48 percent; individual physicians agreed with the consensus view a little more than 75 percent of the time. Both of those numbers are frighteningly low, and researchers are looking to computers to improve them.

Why do human doctors so often disagree? The problem isn't that they don't know what they're looking for—they generally have a set of cues, steps they go through to produce a diagnosis.

But they can disagree about what they're seeing and how it fits that set of cues. Not only can they disagree with each other; a classic 1968 study found that, when given a copy of a stomach-ulcer case they'd already diagnosed, physicians disagreed with themselves, rendering different diagnoses. Nearly five decades ago, researchers were drawing attention to what that study called a "generally terrifying" level of disagreement.

Researchers back then found that a simple algorithm was more consistent. That's not surprising: An algorithm is just a set of rules to be followed. Human beings, in all our subjectivity, tend to apply these rules inconsistently.
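The point can be made concrete with a toy rule set. The sketch below is purely illustrative—the cues, thresholds, and feature names are hypothetical, not taken from any real diagnostic protocol—but it shows why an algorithm is consistent by construction: the same input always produces the same output.

```python
# A hypothetical checklist-style classifier. The cues and thresholds are
# invented for illustration; the point is that a fixed rule set, unlike a
# human reader, can never disagree with itself on the same case.

def classify(features):
    """Apply a fixed set of cues and return a diagnosis label."""
    score = 0
    if features["nuclear_atypia"] > 2:    # hypothetical cue and threshold
        score += 1
    if features["mitotic_count"] > 10:    # hypothetical cue and threshold
        score += 1
    if features["irregular_borders"]:     # hypothetical cue
        score += 1
    return "suspicious" if score >= 2 else "likely benign"

sample = {"nuclear_atypia": 3, "mitotic_count": 12, "irregular_borders": False}

# Re-reading the same case a hundred times yields a hundred identical answers:
assert all(classify(sample) == classify(sample) for _ in range(100))
```

Human variability enters precisely where this sketch has none: in judging whether a given slide really shows "nuclear atypia > 2" in the first place.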