Crooked Timber

Ever since I’ve had any involvement with the Philosophical Review, I’ve been struck by how many misapprehensions people around the field have had about it. So I thought it was worthwhile clearing some of those misapprehensions up. (This post has been prompted by some recent conversations with people who I was surprised to find sharing the misapprehensions.) I’m obviously no longer at Cornell, but I am rather fond of the Review, and I think in many ways it is much better run than is commonly thought.

I don’t think anything I’ll be saying here is secret, indeed much of it can be found on the Review‘s own webpages. But just to be safe, I’m going to gloss over some details about how things ran when I was at Cornell. In part that’s also because I don’t know how much has changed in the last 30 months, but in part it’s because I’m not an editor there any more, so I shouldn’t be revealing trade secrets!

The Review‘s refereeing is considerably blinder, in my experience, than that of most journals. Obviously external referees do not know the identity of the authors. But nor do the editors who read the paper at first and make many of the decisions. I know many other journals have blind editing, but I know many do not. (Philosophy Compass doesn’t, for instance, though our submissions are largely invited, so it is a little less crucial.)

Now keeping the authors’ identities from the editors did lead to some complications. It meant we occasionally had awkward conversations with the journal manager about who could be a good referee for the paper. (The manager would know if we suggested sending an article to its own author, but wouldn’t know if we suggested sending it to the author’s colleague, student, or dissertation advisor.) It also meant that the editors couldn’t do some of the basic administrative jobs that accrue when a submission arrives, since they couldn’t see the incoming email. These jobs all had to be done by the manager, who of course had a lot of other jobs to do, especially when an edition was being put together. It would have sped things up if the editors could have taken over some of these jobs when the manager was flooded with other tasks. But I think, and I suspect most people in the profession think, that the priority should be preserving blind review, and that’s what the Review thought too.

The other misconception about the Review is that all the refereeing is done by Cornell faculty. In practice, the Review seemed to send out more articles than most journals. Every article is read by the editors, usually by both editors. If an article needs to be refereed, and pretty much every article that gets published is refereed by a non-editor, then the editors select referees the way every other journal does. That means that articles are sent to colleagues a fair bit of the time, i.e., to other Cornell faculty, but more frequently they are sent outside.

The point is not that Cornell faculty never referee papers. Every journal makes heavy use of the faculty at its home institution. It’s rather that the Review is not particularly striking in its use of ‘in-house’ referees. I did more reviewing for Noûs when I was at Brown than I did for the Review when I was at Cornell before becoming an editor. The vast majority of the articles that appeared in the Review while I was there were recommended for publication by non-Cornell philosophers, as well of course as being recommended by the editors. So the Review really isn’t an ‘in-house’ journal in the way that many people think it is. (And, for all I know, that it really was 50 years ago.)

This isn’t to say that the Review is at all perfect. It has quirks like every other journal. The response time for articles fluctuates somewhat, and sometimes rises well above what’s acceptable. Certainly I would have liked to do a better job of speedily returning papers when I was an editor. (I gather things now are better in this respect than they have been though.) But it is to say that the Review is rather different, and I think rather better, than many people think it is.

I’ve mentioned this before, but I wanted to again commend Philosophers’ Imprint for having their own LaTeX stylesheet, which rather conveniently is included these days in standard LaTeX distributions. (It’s really amazing what the standard distribution includes these days. I never knew I might want to format something in ANU exam format, but now that I have the chance…)

This has led me to wonder whether there’s an opening here for cutting some journal costs. What follows are basically musings out loud about ways in which you might run the typesetting end of an open-source journal.

These are very atheoretical thoughts about where the disagreement debate stands.

Local vs global evaluation of agents

At the lunch referred to in the earlier post, we were talking about what kinds of people are drawn to the equal weight view of disagreement, as opposed to views that give peer disagreement less weight. One thought was that it was people who are more confident in their own opinions who dislike the equal weight view.

On reflection, I don’t think that’s right. What really motivates me is that I prefer to use very localised judgments about the reliability of a person. I know in my own case that I have any number of intellectual blindspots, some of them extremely narrowly drawn. (I’m pretty good at evaluating baseball players, for instance, unless they happen to play for the Red Sox or Yankees.) When I see someone making an odd judgment on p, I don’t think that they’re in any sense ‘intellectually inferior’; I just think they have odd views about p. And whether they have odd views about p is exactly the kind of question I would answer by looking at their views on p, not at any independent views they may have.

Who is being more dogmatic?

Relatedly, I’ve heard a few people describe the equal weight view as a more conciliatory view, and alternative views as less conciliatory. I think this is a mistake twice over.

For one thing, think about the case where you think E is not strong enough evidence for p, because there is a just realistic enough alternative explanation for E, but your (apparent) peer is simply dismissive towards these alternative explanations. He (and it’s easiest to imagine this is a ‘he’) says that only a crazy sceptic would worry about these alternatives. The equal weight view now says that you should firmly believe p, and agree that worries about the alternative, although coherent, are inappropriate. That doesn’t seem particularly conciliatory to me. (Nor does it seem rational, which might be why we never see much discussion of the equal weight view’s use in dismissing seemingly legitimate doubts.)

For another, think about things from the perspective of the irrational agent. For example, consider a case where a rational agent’s credence in p is 0.8, and an irrational agent’s credence is 0.2, and antecedently they regarded each other as peers. I say that both of them should move to a credence of around 0.8 – or maybe a touch less depending on how strong a defeater the irrational agent’s judgment is. The equal weight view says that the rational agent’s credence should move down to 0.5. That is, if I’m the irrational agent, I can accuse the other person of a rational error unless they come half-way to my view. That’s despite the fact that my view is objectively crazy. A view that says that when you’re wrong, you should concede ground to the other person seems more conciliatory than a view that says that you should demand that everyone meet you halfway, even people with a more accurate take on the situation.
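The arithmetic in this case is simple enough to put in a few lines. Here is a Python sketch of the two positions just described; the function names and the 0.05 ‘defeater discount’ are my own illustrative inventions, not anything from the disagreement literature.

```python
def equal_weight(c_rational, c_irrational):
    """The equal weight view: each party splits the difference,
    so both end up at the average of the two credences."""
    return (c_rational + c_irrational) / 2

def concede_to_the_rational(c_rational, c_irrational, defeat=0.0):
    """The alternative sketched above: both parties should end up at
    (roughly) the rational credence, minus a small discount for the
    defeating force of the peer's dissenting judgment."""
    return c_rational - defeat

print(equal_weight(0.8, 0.2))               # prints 0.5
print(concede_to_the_rational(0.8, 0.2, 0.05))  # prints 0.75
```

On the first policy the irrational agent is rewarded for being wrong: the rational agent must come all the way down to 0.5 to meet them.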

Me on political philosophy vs me on epistemology

In an old Analysis paper on land disputes and political philosophy, I was rather hostile to a view on land disputes that purported to resolve any conflict in a way that was fair to both parties. Partially my hostility was because I didn’t think the resolution was particularly fair. But in part it was because the appropriateness of the resolution really relied on this being a genuine conflict in the first place. It seemed to me then, as it seems to me now, that identifying situations where two parties have an equal claim to something (in that case land, in this case perhaps truth or rationality) is much harder than figuring out what to do in such a case.

Somewhat paradoxically, I have a weak preference for us not having too nice a mechanism for solving disputes where parties have a genuinely equal claim. That’s because if we had such a mechanism, we’d be over-inclined to use it. And that would mean we’d end up treating as equals, or more exactly as equal claimants, parties who really weren’t equal in this respect. I think in practice, the way to resolve most disputes is to figure out who is right, and award the prize to them.

Independence

One of the motivations behind some versions of the equal weight view is that we should only use evidence that is ‘independent’ of the dispute in question to decide whether someone is a peer or not. (Nick Beckstead correctly notes this in the comments on the earlier post.) I think this is all a mistake. And as evidence for that, I present the case of Richard Lindzen.

Lindzen is an atmospheric physicist at MIT, and was involved in writing the 2001 IPCC assessment on climate change. That doesn’t make him the world’s foremost expert on climatology, but it does suggest he’d know more about it than me. Surprisingly, he turns out to be a climate change denier. (I’m not sure whether ‘denier’ or ‘delusionist’ is the correct current term; I have trouble keeping up.) I think that’s crazy, and I think the objective evidence, plus the overwhelming scientific consensus, supports this view.

Now what should an equal weight theorist say about the case? They can’t say that I can use the craziness of Lindzen’s views on climate as reasons to say he’s not a peer (or indeed a superior), because that would be giving up their view.

They could try saying that I could appeal to the views of other experts, but I think that misses the point. After all, the other experts are just more evidence, and Lindzen has that evidence just as much as I do. And he dismisses it. (I think he thinks it’s a giant conspiracy, but I’m not sure.) So even if I’m going to believe in global warming because of my reliance on other experts, I have to say that I’m going to trust my judgment of the testimonial evidence over someone else’s judgment of that very same evidence, even though I thought antecedently he would know better than me what to do here.

We could try saying that his dismissal of all the experts proves he is irrational. After all, he’s not an equal weight theorist! (That won’t carry much weight with me, but it might with the equal weight theorists.) But this is just to concede the point about independence. After all, we are judging his ability to make judgments about p not on independent grounds, but on grounds of how well he does on p. That seems like a violation of independence.

The debate, at this point, seems to resemble the complaint I made in Disagreeing about Disagreement. The equal weight theorist needs to treat the status of their theory of disagreement very differently to other epistemological theories. If Lindzen refuses to infer to the best explanation in this case, say, then we can’t dismiss his views unless we can criticise him on independent grounds. But if he refuses to take his peer’s judgments as strong evidence, we don’t need independent grounds to criticise that. This double standard seems objectionable.

Disagreement about evidence vs disagreement about conclusions

I’ve been trying to think about which cases I’m actually disposed to change my views in the face of peer disagreement. I think they are largely cases when there is a legitimate dispute about just what the evidence is. So think of some cases, common in sports, where we are trying to judge a simple factual question on the basis of visual evidence. The most common of these, in my experience, are so-called ‘bang-bang’ plays at first base. In that case we have to decide whether a baseball or a base runner got to a point earlier. And even with the situation right in front of us, this can be surprisingly hard.

Here are two salient facts about that case.

First, it is very hard to say, in a theoretically satisfying way, just what the evidence is in the case. I’m not a phenomenalist about evidence, so I don’t really want to say that the evidence is just how the case seems to us. But if it is possible to just see that, let’s say, the ball arrived first, then that the ball arrived first is in some sense part of my evidence. Perhaps I don’t know it is part of my evidence, and perhaps I don’t even believe it is true, but it is plausibly evidence for me.

Second, in a case like this, deferral to peers seems like a very natural thing to do. If there are six people watching TV, and my opinion about what happened differs from the other five’s, then I’ll usually conclude I was wrong. Let’s assume, at least for the argument, that this is rational.

Here’s a hypothesis. It’s rational to defer to peers when it is unclear what your evidence is. It is less rational to defer to peers when it is unclear what the right response to the evidence is, at least when the peers have the wrong response. To extend the sports example above: I shouldn’t be so willing to defer to peers in disagreements about who will win the game, when we all have the same evidence about that.

The strongest cases for the equal weight view in the peer disagreement literature are, I think, cases where the evidence is not entirely clear. (At least on an externalist view of evidence.) Perhaps those are the cases where the equal weight view is correct.

Being a philosopher, and in particular an epistemologist, in Scotland is a lot of fun, so I highly recommend this job. Of course, I’ve mostly experienced life here with a government that was more sympathetic to both Scotland and universities than the new government promises to be. But there are so many good people here, both at faculty and student level, that I think whoever gets this job will be very happy with it.

I got into a bit of an argument with Stew Cohen about views on disagreement at lunchtime Friday. (Sorry dear colleagues, and co-residents of St Andrews, for disturbing your lunch unduly – I should have been less, er, disagreeable when arguing.) One of the upshots was I have a slightly clearer idea about what I dislike about “Equal Weight” approaches to disagreement.

Start with the abstract case. A person, call her Z, believes p on the basis of some parts of her total evidence E. Another person, call him RL, believes ¬p on the basis of some parts of his total evidence, which is also E. Z had antecedently believed, i.e., before finding out this odd view of RL’s, that RL was an epistemic peer on this question. What should Z do?

I say that it depends on what E is, and what p is, on their relation to each other and her relation to them.[1] And so, I think, says everyone.

Let’s consider a particular case of this. E contains the fact that thousands of other people, each of them just as qualified as RL, believe p on the basis of (other parts of) E. Then it would be perverse to let RL’s judgment make more than a trivial difference to Z’s position. If you said that Z had to “split the difference” between her current view and RL’s, that wouldn’t be an equal weight view, it would be a view that the last person you hear from trumps everyone else. And note that’s true even if Z has no independent reason to believe that RL is mistaken, incoherent or otherwise impaired.
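The dynamics here are easy to make concrete. Here is a small Python sketch (my own toy construction, not a model from the literature) contrasting a policy of splitting the difference with each new peer against a policy that genuinely weights all opinions equally:

```python
def split_each_time(start, opinions):
    """Average your current credence with each new peer's opinion,
    one at a time, in the order you hear from them."""
    c = start
    for o in opinions:
        c = (c + o) / 2
    return c

def pool_equally(start, opinions):
    """Give your own opinion and every peer's opinion the same weight,
    regardless of the order in which they arrived."""
    return (start + sum(opinions)) / (1 + len(opinions))

# Z starts at 0.9; a thousand peers agree (0.9 each), then RL says 0.1.
peers = [0.9] * 1000 + [0.1]
print(split_each_time(0.9, peers))  # prints 0.5: RL alone dragged Z halfway
print(pool_equally(0.9, peers))     # ≈ 0.899: RL is one voice among a thousand
```

The first policy is the one that makes the last person heard from trump everyone else; only the second deserves the name ‘equal weight’.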

So now we’ve established an existential – for some such E and p, it is reasonable to treat RL’s opinion as more or less irrelevant to the correct judgment. I think that kind of situation is not just possible (as we’ve now proven) but reasonably common. In particular, I think there are such situations even when E does not contain any evidence about third party experts. I’m not going to offer here an argument for that position.

But I am interested in what could be an argument that this kind of possibility is rare, though it is possible. Note that any argument, such as the early arguments for the equal weight view, which purports to show that such a situation is impossible is clearly mistaken. We know the situation is possible. The point of the equal weight view must, I think, be that such situations are (a) rare, and (b) only arise in special circumstances. What could those circumstances be?

It might be possible to argue that judgments of others are always trumps, so no E that lacks testimonial evidence could continue to support p in the face of peer testimony that ¬p, when the peer in question also has evidence E. But we would need a reason to believe that. It seems to me that, stated this baldly, the position attributes a kind of magical power to evidence about the judgment of others, and we shouldn’t believe in any such magic.

Perhaps the position could be that I misdescribed the case, by including the testimony of others as evidence. Perhaps, as Moran, Hinchliff and others have argued, testimony provides non-evidential warrant. So we should say that Z’s warrant includes evidence E and testimony T. When she hears RL’s position, that changes T, but doesn’t change E. I think this ends up with the result that the equal weight theorists want, namely that when RL is the first person she hears, his opinions should make a big difference to her credences, but when he’s the 1000th, they should only make a small difference. But if it turns out the equal weight view rests on the non-evidential view of testimony, that would be an interesting and surprising discovery. Since I prefer something like the Maitra-Nolan view of testimony, which is an evidential view, that would at least explain why I don’t like the equal weight view of disagreement.

Perhaps the argument could be that “second-order evidence” always trumps “first-order evidence”. In this context, second-order evidence is evidence about the force of first-order evidence. Now assume that when RL is the only peer she speaks to, his opinion is very strong second-order evidence about the force of E, but when she speaks to many peers, his opinions are very weak second-order evidence. Then we get the result that she should pay him a lot of attention iff she hasn’t spoken to many other peers.

My gut feeling is that this reverses the right order of things; that first-order evidence really trumps second-order considerations. But set that aside. It’s possible that E contains a lot of non-testimonial second-order evidence for Z’s position. Perhaps E contains a carefully worked out epistemological theory that entails that E supports p, and Z believes that theory for the right reasons. Presumably RL does not believe that theory, but that will just show (by Z’s lights) that he turns out not to be a very good epistemologist. So even if second-order evidence is always a trump, Z can still (more or less) ignore a peer, as long as she has other second-order evidence.

So that’s where I think things stand. For some kinds of evidence E, the crude equal weight view is obviously wrong. That is, there are some pairs 〈E, p〉 such that E supports p, and this support is immune to defeat by peer disagreement. Those are cases where E already contains a lot of peer judgments supporting p. I doubt, for good old-fashioned Quinean reasons, that strength of evidence invariably tracks kind of evidence. So if some such pairs 〈E, p〉 exist, there will be other such pairs 〈E′, p′〉 where E′ does not contain judgments of other peers, but the support E′ provides for p′ is also immune to defeat by peer disagreement.

I think it’s hard to say just what the contemporary manifestations of the equal weight view are committed to. But I think they must be committed to the rarity of immunity to defeat by peer disagreement. If not, I don’t quite know what the view is. (It can’t be just that defeat by peer disagreement is possible can it? Does anyone disagree with that?) And given these considerations, I don’t know why you’d think this kind of immunity is rare.

1 I’m more and more coming to the opinion that a lot of questions that are discussed in contemporary epistemology are like this one: “A train leaves New York for Chicago travelling west at 75mph. What’s the probability it will snow along the way?” I suspect the Sleeping Beauty puzzle might be like this too. It isn’t obvious that the question has an answer, any more than this question about snow has an answer.

I’m headed off to St Andrews tomorrow for my annual stint as a professorial fellow at Arché. I’m looking forward to it a lot. I end up having most of my ideas for the year while I’m over there, and spend the intervening 10 months writing stuff up.

This trip will be busier than most. I’m scheduled to do three talks already in St Andrews – one on judgments and evidence, one on easy knowledge, and one more that will probably be on paradoxes but might be on non-demonstrative arguments in philosophy. And I’ll be attending at least three conferences, one on methodology this weekend, a conference on evidence later in the month, and a big conference on logical consequence.

But that’s only a normal level of busyness in St Andrews. What’s going to be a little different this time is how much I’ll be travelling. I’m going to Stirling (perhaps twice), Leeds, Konstanz and Oxford for talks, and spending some time holidaying in London (for the weekend after the Oxford trip) and the Highlands.

I don’t know whether that means blogging will be light or heavy. On the one hand, I won’t have as much time as I have in New York. On the other, there will be more ideas flying around than normal. I’m looking forward to it a lot!

I imagine many readers of this blog will have already seen Paul Bloom’s NYT magazine article on the moral life of babies. There’s a lot of interesting stuff in there, but I wanted to focus on something about false beliefs that surprised me. Here’s what Bloom says.

The new studies found that babies have an actual understanding of mental life: they have some grasp of how people think and why they act as they do. The studies showed that, though babies expect inanimate objects to move as the result of push-pull interactions, they expect people to move rationally in accordance with their beliefs and desires: babies show surprise when someone takes a roundabout path to something he wants. They expect someone who reaches for an object to reach for the same object later, even if its location has changed. And well before their 2nd birthdays, babies are sharp enough to know that other people can have false beliefs. The psychologists Kristine Onishi and Renée Baillargeon have found that 15-month-olds expect that if a person sees an object in one box, and then the object is moved to another box when the person isn’t looking, the person will later reach into the box where he first saw the object, not the box where it actually is. That is, toddlers have a mental model not merely of the world but of the world as understood by someone else.

I think the Onishi and Baillargeon paper he is referring to is Do 15-Month-Old Infants Understand False Beliefs?, which isn’t behind a paywall. It’s a very interesting study. It certainly seems like a decent challenge to the (Rutgers-inspired) view that children don’t understand that beliefs can be false until they are nearly 4.

I’m not an expert on any of this stuff, so I’m probably missing a lot, but here’s a crude summary of what the experiments seem to show. The more difficult you make the task you set an infant in a false belief task, the later they make the correct ‘predictions’. That’s not too surprising, but what is surprising is how much difference this can make. If you just see what a baby expects by tracking where it looks, then 15-month-olds have expectations that allow for the falsity of beliefs. If you try to get the child to explain the action of an agent with false beliefs, or complete some other kind of demanding verbal task, they don’t allow for false beliefs until they are well into their 4th year.

This seems to suggest, to me at least, a kind of System 1/System 2 story. The automatic system that controls things like eye movements, surprise reactions and the like, allows for false beliefs from a very early age. But when babies get the capacity to reflectively reason (which I assume is much later than 15 months) their reflective thought seems not to allow for false beliefs. They don’t incorporate false beliefs into their explicit reasoning for years and years after their automatic processes are sensitive to them.

As I said, this is all a guess based on non-expert reading of a few studies. The main thing I wanted to highlight here is that people should be reading the new studies, especially people (like me) who didn’t realise that the experimental data on infants’ theory of mind look a lot more complicated now than they did in the 1980s.