The Excessive Wondering of Shieva Kleinschmidt

November 06, 2006

Still Not Impossible

Jonathan Ichikawa wrote some helpful comments responding to my Not So Impossible post below, and I wanted to respond in a new post, since this'll give me a chance to also (hopefully!) clarify what I was up to in the first place. My attempted clarificatory bit first:

Recall, the Bayesian’s claim was

(B) It is impossible to go from not being in a position to know E ⊃ H to being in a position to know it just by receiving evidence E.

As far as I can tell, I don’t actually need to take much of a stand on what properties ‘E ⊃ H’ has. The counterexample schema I’m proposing is simply this:

Take some instance of ‘E ⊃ H’, I, and some sortal, s, such that I falls under sortal s, but the following statement does not:

(S) Nothing falling under sortal s is true.

Let a be some agent who is strongly justified in believing (S), and in virtue of this, justifiably believes that I is not true; further, if it weren’t for this justified belief in (S), a would justifiably believe I. Further, a’s background beliefs are such that, if a were to acquire evidence E, then a would cease to believe that (S).

That’s it.

One might worry about whether an agent really can be justified in believing (S) for a sortal with the relevant properties, but I think I can set up a case where we have a dim-witted yet very rational agent, who believes it on the basis of testimony. Or, we could have a much smarter agent who believes it on the basis of philosophical argument plus some non-standard intuitions (perhaps in a community too isolated for the agent to be exposed to conflicting intuitions). All of these seem _possible_. Similar things can be said in response to worries about whether one can justifiably believe that I is not true, in virtue of belief in (S).

In my post, I gave an example of how you might fill in some bits of the counterexample. I used the claim about material conditionals ‘cause I thought it was easy, though as I’ve indicated, lots of other sortals would have worked at least as well.

So, on to Jonathan’s worry:

Jonathan thinks that the agent in my case is in a position to know that I, prior to acquiring evidence E. He says, roughly: a could reason, “Suppose that E. Given my background beliefs, it follows that H. Therefore, given my background beliefs, E ⊃ H.” That is, valid inference of q from p entails p ⊃ q. Since a is in a position to know all of the lines of the argument above, a is in a position to know the relevant conditional.
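(For what it's worth, the conditional-proof step Jonathan appeals to is uncontroversial in ordinary logic. Here's a minimal Lean sketch of it, where I model the background beliefs as an extra premise B; the names B, E, H and `step` are just my labels for this restatement, not anything from Jonathan's comment:)

```lean
-- If H is derivable from E given background beliefs B,
-- then the conditional E → H is derivable given B alone.
example (B E H : Prop) (hB : B) (step : B → E → H) : E → H :=
  fun hE => step hB hE
```

(So my disagreement isn't with the logic of conditional proof itself; it's with whether a must be in a position to know each line that feeds into it, as I say below.)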

But I don’t yet see why I should endorse the claim that a must be in a position to know all of the lines of the above argument. Why, for instance, must a think that H follows from E? Perhaps, for instance, that premise also falls under sortal s. Or perhaps a has independent motivation for rejecting it (and this motivation would be undercut if a rejected (S), but not in virtue of the role s is playing), or relevantly similar motivation for rejecting the validity of the argument. And all I need is that it is possible for a rational agent to be in a state like this.

(Further, it’s perhaps worth pointing out that a might still know the relevant facts about how he/she will process the relevant evidence. Here are some beliefs we might claim a has: “Suppose that I believe that E. Given my background beliefs, I will thereby come to believe that H.” Though whether we’ll want to characterise a’s beliefs in this way will depend on how we respond to the worries above, and also on which sortal we take s to be (since none of the propositions mentioned as the content of the meta-beliefs should fall under s). But am I right in thinking that, on its own, taking a to have the relevant meta-beliefs won’t get me into trouble?)

Also, it doesn’t seem strange to me for someone in a’s position to deny that valid inference of q from p entails p ⊃ q, and to be wary of conditional proof in general. But holding those views might require some accompanying, strange views about how to understand the meaning of ‘⊃’.

Jonathan also suggests this line of reasoning: a might think, “Suppose E and ~H. Given my background beliefs, H follows from E. And E certainly follows from E and ~H. So given the assumption (and my background beliefs), H and ~H. But that’s impossible. So ~(E and ~H). But that’s just equivalent to E ⊃ H.”
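(Again, the reductio itself is formally fine. Here's the same sort of Lean sketch, with my own labels as before; note that the final move from ~(E and ~H) to E ⊃ H is the classical one:)

```lean
-- From E ∧ ¬H, background beliefs B deliver H, contradicting ¬H;
-- so ¬(E ∧ ¬H) follows.
example (B E H : Prop) (hB : B) (step : B → E → H) : ¬(E ∧ ¬H) :=
  fun ⟨hE, hnH⟩ => hnH (step hB hE)

-- The equivalence Jonathan invokes at the last step needs classical logic:
example (E H : Prop) (h : ¬(E ∧ ¬H)) : E → H :=
  fun hE => Classical.byContradiction fun hnH => h ⟨hE, hnH⟩
```

(As with conditional proof, my reply targets a's position with respect to the premises, not the validity of the inference.)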

As far as I can tell, the responses I gave to the last bit of reasoning can apply here as well. And it might be worth noting: prior to acquiring evidence E, a denies it’s true that E ⊃ H, but this needn’t require endorsing (E and ~H).

Am I missing something, though? This isn't my area at all. Also, it's epistemically possible to me that I've made some really basic error. Any help you can give me would be much appreciated!

Comments

so let me see if i got this straight: you're suggesting that a could reject "If God said S, then God said something"--but could still count among the ranks of the rational? pardon my prejudice, but i don't see any overwhelming pressure on the bayesian to admit a to the rationality club, regardless of what additional wacky intuitions a has. am i missing the point?

Sure, a completely rational agent could think that indicative conditionals, like "If God said S, then God said something", lack truth-value. There are some philosophers I can think of who think that. And not that I'm tempted to accept the view, but what's irrational about it?

But if the case bothers you, you can instantiate E and H to some other propositions such that H doesn't follow logically from E (and yet, it's reasonable to believe that H if one believes that E, given that one doesn't believe (S)).

Am I missing your point?

(I hope you're doing fantastically, by the way! And keep me in the loop about the gunk theorems . . .)