Thursday, September 27, 2018

Philosophical skepticism is usually regarded as primarily a thesis about knowledge -- the thesis that we don't know some of the things that people ordinarily take themselves to know (such as that they are awake rather than dreaming or that the future will resemble the past). I prefer to think about skepticism without considering the question of "knowledge" at all.

Let me explain.

I know some things about which I don't have perfect confidence. I know, for example, that my car is parked in Lot 1. Of course it is! I just parked it there ninety minutes ago, in the same part of the parking lot where I've parked for over a year. I have no reason to think anything unusual is going on. Now of course it might have been stolen or towed, for some inexplicable reason, in the past ninety minutes, or I might be having some strange failure of memory. I wouldn't lay 100,000:1 odds on it -- my retirement funds gone if I'm wrong, $10 more in my pocket if I'm right. My confidence or credence isn't 1.00000. Of course there's a small chance it's not where I think it is. Acknowledging all of this, I think it's still reasonable for me to say that I know where my car is parked.
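The arithmetic behind that wager can be made explicit (a quick sketch; the function name is mine, not the post's): a bet at long odds is rational only when your credence exceeds the break-even point set by the stakes.

```python
def break_even_credence(loss, win):
    """Minimum credence p at which a bet has nonnegative expected value.

    Accepting the bet is rational only if p * win >= (1 - p) * loss,
    i.e. p >= loss / (loss + win).
    """
    return loss / (loss + win)

# Laying 100,000:1 odds -- e.g., risking $1,000,000 to win $10 --
# requires a credence above about 0.99999:
p = break_even_credence(1_000_000, 10)
print(p)  # roughly 0.99999
```

So declining the bet is consistent with a very high credence, just not one quite that close to 1.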

Now we could argue about this; and philosophers will. If I'm not completely certain that my car is in Lot 1, if I can entertain some reasonable doubts about it, if I'm not willing to just entirely take it for granted, then maybe it's best to say I don't really know my car is there. There is admittedly something odd about saying, "Yes, I know my car is there, but of course it might have recently been towed." Explicitly allowing that possibility stands in tension, somehow, with simultaneously asserting the knowledge.

In a not-entirely-dissimilar way, I know that I am not currently dreaming. I am almost entirely certain that I am not dreaming, and I believe I have excellent grounds for that high level of confidence. And yet I think it's reasonable to allow myself a smidgen of doubt on the question. Maybe dreams can be (though I don't think so) this detailed and realistic; and if so, maybe this is one such super-realistic dream.

Now let's imagine two sorts of debates that we could have about these questions:

Debate 1: same credences but disagreement about knowledge. Philosopher A and Philosopher B both have 99.9% credence that their car is in Lot 1 and 99.99% credence that they are awake. Their degrees of confidence in these propositions are identical. But they disagree about whether it is correct to say, in light of their reasonable smidgens of doubt, that they know. [ETA 10:11 a.m.: Assume these philosophers also regard their own degrees of credence as reasonable. HT Dan Kervick.]

Debate 2: different credences but agreement about knowledge. Philosopher C and Philosopher D differ in their credences: Philosopher C thinks it is 100% certain (alternatively, 99.99999% certain) that she is awake, and Philosopher D has only a 95% credence; but both agree that they know that they are awake. Alternatively, Philosopher E is 99.99% confident that her car is in Lot 1 and Philosopher F is 99% confident; but they agree that, given their small amounts of reasonable doubt, they don't strictly speaking know.

I suggest that in the most useful and interesting sense of "skeptical", Philosophers A and B are similarly skeptical or unskeptical, despite the fact that they would say something different about knowledge. They have the same degrees of confidence and doubt; they would make (if rational) the same wagers; their disagreement seems to be mostly about a word or the proper application of a concept.

Conversely, Philosophers C and E are much less skeptical than Philosophers D and F, despite their agreement about the presence or absence of knowledge. They would behave and wager differently (for instance, Philosopher D might attempt a test to see whether he is dreaming). They will argue, too, about the types of evidence available or the quality of that evidence.

The extent of one's philosophical skepticism has more to do with how much doubt one thinks is reasonable than with whether, given a fixed credence or degree of doubt, one thinks it's right to say that one genuinely knows.

How much doubt is reasonable about whether you're awake? In considering this issue, there's no need to use the word "knowledge" at all! Should you just have 100% credence, taking it as an absolute certainty foundational to your cognition? Should you allow a tiny sliver of doubt, but only a tiny sliver? Or should you be in some state of serious indecision, giving the alternatives approximately equal weight? Similarly for the possibility that you're a brain in a vat, or that the sun will rise tomorrow. Philosophers in the first group are radically anti-skeptical (Moore, Wittgenstein, Descartes by the end of the Meditations); philosophers in the second group are radically skeptical (Sextus, Zhuangzi in Inner Chapter 2, Hume by the end of Book 1 of the Treatise); philosophers in the middle group admit a smidgen of skeptical doubt. Within that middle group, one might think the amount of reasonable doubt is trivially small (e.g., 0.00000000001%), or one might think that the amount of reasonable doubt is small but not trivially small (e.g., 0.001%). Debate about which of these four attitudes is the most reasonable (for various possible forms of skeptical doubt) is closer to the heart of the issue of skepticism than are debates about the application of the word "knowledge" among those who agree about the appropriate degree of credence.

[Note: In saying this, I do not mean to commit to the view that we can or should always have precise numerical credences in the propositions we consider.]

7 comments:

There is no ontological or epistemological distinction between a skeptic and one who believes with certainty in an intellectual construct, because skepticism itself is just that, an intellectual construct. Both are badges of honor worn with pride, for honor itself is nothing more than a gift that one gives to oneself. Outside of any exclusive circle of definition and agreement, we really don't know anything, and I'm not certain that is something to be proud of. I suppose one could call our paradigm of reductio ad absurdum credence; what do you think?

Actually Eric, I'm a sixty-five-year-old incorrigible noumenalist, as well as a micro-panpsychist, for two simple reasons. First, those two paradigms are the most parsimonious explanation of things; and second, due to my childish, pre-prejudicial nature, the one that pre-dates common sense, I believe that it's best to stick with the simplest explanation of things unless there is compelling evidence to suggest otherwise. Having said that, my models also correspond to the Parmenidean reality/appearance distinction by stating that consciousness in all of its forms is the appearance and not the reality, and idealists just hate it when I say that. My models also do not accommodate the paradigms of magic or solipsism. No, I'm no skeptic; I'm willing to take a risk by setting the bar higher, but to each their own...

I've been thinking along these lines recently as a way to deflate Gettier cases. Say every morning you roll a 100-sided die and choose to believe that whatever number comes up is the temperature outside and then dress accordingly. This is a very bad way to assess the weather, and even if you roll an 83 and it does happen to be 83 degrees outside, you wouldn't call your true belief justified.

Alternatively, maybe you check a weather app on your phone, and in your experience this app has been very reliable: displaying temperatures below freezing when there's snow out, in the 90s when you see neighbors frying eggs on cars, etc. So you're pretty justified in believing its numbers. But what if there's a bug in the program such that every once in a while (a 1% chance per day, say), it says the temperature outside is 99 degrees? If it does happen to be 99 out, I think you've got a Gettier case -- a justified true belief that we don't want to call knowledge.

But if you squint, I think the weather app and the d100 only differ by degree (oops, pun) and not kind (justified vs. not). In both cases you have an algorithm for reporting the temperature, but the reliability of one algorithm is much better than the other. The weather app gets its information from some weather service, which uses observations and modeling to spit out numbers. Its being accurate means the distribution of numbers will cluster around the true temperature, but there's still some probabilistic spread. The 100-sided die also has a distribution, but it's flat across a wide range and consequently very inaccurate. But it's still better than a million-sided die, which we can say confidently because we're familiar with terrestrial temperatures.

So as you suggest, instead of talking in terms of knowledge and justified true belief, we can talk in terms of credences. Your credence in the d100 is very low, but your credence in the weather app is very high. Even if you know about the bug in the weather app, you're still going to trust its results or bet on it. There will sometimes be funny results (wrong temperatures, or right temperatures for the wrong reasons), but you expect that for any credence other than 1. In other classic Gettier cases, we can have credences based on the reliability of the method for gathering evidence; eyes usually do a very good job of telling us what's on the side of the road, even if sometimes there are fake barns.
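The difference-of-degree point can be made concrete with a quick simulation (a hedged sketch: the function names, bug rate, noise level, and "close enough" tolerance are illustrative choices, not anything stated in the comment):

```python
import random

# Two "algorithms" for reporting the temperature: a flat d100 roll that
# ignores the actual weather, and a mostly reliable app with a rare bug
# that always reports 99. (All parameters are made up for illustration.)

def d100_report(rng, true_temp):
    # A 100-sided die: a flat distribution, blind to the actual weather.
    return rng.randint(1, 100)

def weather_app_report(rng, true_temp, bug_rate=0.01, noise=2):
    # A sensor-based estimate with small noise, except the occasional bug.
    if rng.random() < bug_rate:
        return 99
    return true_temp + rng.randint(-noise, noise)

def hit_rate(report, true_temp, trials=100_000, tolerance=3, seed=0):
    # Fraction of reports within `tolerance` degrees of the truth --
    # a rough stand-in for the credence the method deserves.
    rng = random.Random(seed)
    hits = sum(abs(report(rng, true_temp) - true_temp) <= tolerance
               for _ in range(trials))
    return hits / trials

print(hit_rate(d100_report, 83))         # roughly 0.07 (7 values out of 100)
print(hit_rate(weather_app_report, 83))  # roughly 0.99 (the bug misses by 16)
```

Both methods are probability distributions over reported temperatures; what separates "justified" from "unjustified" here is just how much of each distribution's mass lands near the truth.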