Tuesday, 24 August 2010

Who renders judgment?

There was this attitude that experts should be disqualified [from participating] by the very fact that they had published on the subject—that because they had published, they were therefore biased.

The worry about epistemic egalitarianism is that true expertise is being missed or even suppressed. Leiter points to cases in which Wikipedians have overruled experts, and it is easy to equate this kind of egalitarianism with "teach the controversy," science-bashing movements, and other perversions.

Sanger gives a bit of context for where this attitude comes from:

There's a whole worldview that's shared by many programmers—although not all of them, of course—and by many young intellectuals that I characterize as "epistemic egalitarianism." They're greatly offended by the idea that anyone might be regarded as more reliable on a given topic than everyone else. They feel that for everything to be as fair as possible and equal as possible, the only thing that ought to matter is the content [of a claim] itself, not its source.

Having been a programmer, I can attest to the prevalence of this attitude, and add that it is usually coupled with a naive sense of entitlement to render judgment -- and that both the attitude and the habit are hard to overcome. A contractarian might view the recognition of expertise as a voluntary ceding of autonomy to an authority. Programmers usually have radical views about intellectual property, so it should be no surprise that they have radical views about authority generally. (I suppose that philosophers, like programmers, tend toward epistemic egalitarianism, modulo a general discomfort with Heidegger.)

But nowhere in either the attitude of epistemic egalitarianism or the habit of rendering judgment is any support for the kind of anti-expertise Leiter is worried about. We can dismiss arguments that rely on claims to authority without dismissing arguments made by authorities.

A. Scientific expertise is fairly narrow, and it can easily be misapplied in public policy domains. B. Few scientific experts are trained in neighboring fields well enough to judge the interactions between their own expertise and that of other experts.

When scientific experts get it wrong in matters of policy, they do not tend to bear the costs of their errors.

Note that none of these points (2-4) means we should not seek expert advice or base policy on scientific knowledge. (The fifth one may incline us to be very cautious about scientific experts.) But points 2-4 do encourage transparency of the sort that epistemic egalitarians insist on, in order to let (skeptical) non-experts weigh in on and scrutinize expert authority in decision-making processes.

Scientists aren't (and shouldn't be) disinterested in the social or political implications of their studies. My own interests -- in the way things work, in money, in electronics -- led me to become an engineer. Changing interests -- in how we come to know things, in dialog, in scientific practice -- led me to switch tracks and become a philosopher. To pretend otherwise would be foolish. But the fact that I have held these and other interests does nothing to diminish (or inflate) whatever small contribution I have made (or will make) to those respective fields.

A key challenge for social epistemology (of science) is in determining what sort of group (of scientists) is entitled to make knowledge claims. Some typical criteria are publication, peer review, consensus, and diversity. The first three recognize some basic institutions of science, while "diversity" defends against accusations of bias: the more diverse the group of individuals who assent to some claim, the more likely it is that any possible objection will have been considered. I gather that this is supposed to follow from the diversity of interests that inform each member's decision to join the group in question.

This diversity effect has always left me a bit uneasy. While I concede that a person's stance likely blinds them to alternatives, it need not do so. Biases can be overcome. One way to help overcome a particular bias in a group is to add people without said bias to the group. Another way is simply to point out the bias. That won't always work, but I wouldn't want to rule out the possibility.