Cartesian skepticism and the Sentinel Islander thought experiment

Cartesian skepticism has been a hot topic lately at TSZ. I’ve been defending a version of it that I’ve summarized as follows:

Any knowledge claim based on the veridicality of our senses is illegitimate, because we can’t know that our senses are veridical.

This means that even things that seem obvious — that there is a computer monitor in front of me as I write this, for instance — aren’t certain. Besides not being certain, we can’t even claim to know them, and that remains true even when we use a standard of knowledge that allows for some uncertainty. (There’s more — a lot more — on this in earlier threads.)

Here’s an analogy that shows how serious the problem of circularity is for your position.

Suppose that a few decades from now you possess a really high-fi pair of virtual reality goggles, plus some sensitive motion sensors. You kidnap a North Sentinel Islander who knows nothing about virtual reality or computers, and you tell him that the goggles and sensors are magical devices that can grant him access to an actual land, LaLa Land, which is far away.

The islander learns to navigate LaLa Land successfully, even carrying out tasks within it. If you ask him questions about LaLa Land, he answers them “correctly”. He even claims to know things about LaLa Land, which he takes to be real. We know better, because we understand that the goggles do not deliver veridical sensory information. They are fostering an illusion. LaLa Land doesn’t exist in the real world.

The islander could argue, KN-style:

1. I assume that the goggles deliver veridical information about LaLa Land.

2. On the basis of that assumption, I am able to navigate LaLa Land successfully and satisfy my goals.

Is he right? Obviously not. We can see that he is being fooled, and we can diagnose the problem with his argument: it’s blatantly circular.

How is your argument any better than his?

If we assume that our perceptions are veridical, we are making an equivalent mistake to the Sentinel Islander when he assumes that the VR goggles deliver veridical information about LaLa Land.

On the other hand, if we don’t know that our perceptions are veridical, and we can’t even judge the likelihood that they are veridical, then we are in no position to claim knowledge — justified true belief — regarding the external world.

We’re not obligated to lie and say that the Sentinel Islander possesses knowledge about LaLa Land simply because he tried really hard, doing everything that it’s reasonable to expect someone to do in his situation.

KN:

I think we should say that her beliefs about LaLa Land are indeed justified, but not true.

They aren’t justified, because they rest on the unjustified assumption that the VR headset is delivering veridical information about a real place.

keiths:

Please stop trying to loosen the definition of knowledge (or justification, or truth). The failure to reach a desired, predetermined conclusion is not an excuse for relaxing definitions and lessening rigor.

KN:

Nor is the need to reach a desired, predetermined conclusion an excuse for insisting that philosophy is as linear and precise as writing a computer program.

Cartesian skepticism wasn’t my “desired, predetermined conclusion.” I was quite happy believing that my perceptions were generally accurate, and I would have happily continued believing it if the argument against it weren’t so compelling.

I changed my mind on the basis of reasoning and evidence. That’s how it should be.

You are changing your reasoning when it leads you to an undesired conclusion. That’s how it shouldn’t be.

I’m still interested in your responses to the questions I raised in this comment:

KN,

In my view, the truth of a claim is independent of whether a particular “epistemic position” exists or is occupied (unless, of course, the claim is about epistemic positions and their occupancy).

That’s what I was getting at in this exchange:

KN:

The problem with the Sentinel Islander scenario is that there isn’t anyone who actually occupies, relative to us, the epistemic position that we would have relative to the Islander.

keiths:

Sure there is. In the Cartesian demon scenario, it’s the demon. In the brain-in-vat scenario, it’s the designers of the vat apparatus.

Either way, it doesn’t matter. An error is still an error even if no one is aware of it.

Suppose that everyone on earth dies in a viral epidemic, except for the Sentinelese. An islander stumbles upon a goggle/sensor set and learns to operate it. He comes to believe that LaLa Land is real, and he claims to know things about it.

No one on earth knows that he is wrong. Does that make him right? Of course not.

Would you agree that the islander’s knowledge claims are false even if no one is aware of that?

Suppose you are in an analogous position. A race of aliens envatted you some time ago, but they’ve since gone extinct. At this point no one, including you, is aware of the envatting.

I would say that it is true that you are envatted. Would you disagree?

keiths: Consider two people, Almuerzo and Borodin. Almuerzo walked through town this morning and saw a ‘For Sale’ sign in the yard of the stately Victorian at the corner of Jackson and Elm. Borodin, who has been in solitary confinement for years, remembers that Victorian and often asks about it. The guards never answer, even when bribes are offered, and Borodin has no other source of information about the outside world.

Almuerzo justifiably believes that the house is for sale.

If the house is indeed for sale, since Almuerzo believes it and you say he is justified in doing so, then, since on your view, knowledge is JTB, you ought to say Almuerzo knows it. But you don’t.

His belief is justified* but not justified. He knows* that the house is for sale, but he doesn’t know it.

I omit those details for obvious reasons: 1) they don’t need to be repeated every single time we talk about this stuff, because the implicit asterisks are always there when claims are being made about the external world; and 2) they aren’t relevant to the point I’m making here, which is that contra KN, justification depends on more than doing one’s best, as demonstrated by the Almuerzo/Borodin scenario.

This justification* biz is new. At least to me. Don’t you want to say that Almuerzo has any justification [without the asterisk] at all for believing the house is for sale? Has he ONLY justification* and no evidence whatever? Why is that?

The asterisk means the same thing here as when it is appended to “know”. It signifies a dependence on the assumption that our perceptions are generally veridical.

If perceptions aren’t generally veridical, then Almuerzo doesn’t actually know that the “For Sale” sign was there this morning — or the house itself, for that matter.

Don’t you want to say that Almuerzo has any justification [without the asterisk] at all for believing the house is for sale?

Be careful with the phrase “any justification.” When we’re talking about knowledge — justified true belief — we mean sufficient justification. Almuerzo doesn’t have sufficient justification for claiming to know (without the asterisk) that the house is for sale, because he doesn’t know that his perceptions are veridical.

You talk about perception being ‘generally veridical’ and have mentioned things like demons, dreaming and BIVs. But let’s suppose (for the sake of argument only) that perception IS generally veridical. What about pranksters? Dretske talks about mules cleverly disguised as zebras, and there are many famous examples of papier-mache farm facades. Won’t they defeat claims to knowledge even given general veridicality? And thus defeat claims to knowledge* (or justification*)? When is justification ‘sufficient’ on your view?

You talk about perception being ‘generally veridical’ and have mentioned things like demons, dreaming and BIVs. But let’s suppose (for the sake of argument only) that perception IS generally veridical. What about pranksters? Dretske talks about mules cleverly disguised as zebras, and there are many famous examples of papier-mache farm facades. Won’t they defeat claims to knowledge even given general veridicality? And thus defeat claims to knowledge* (or justification*)?

They defeat certainty, but not knowledge*.

The crucial difference is that if you know that perception is generally veridical, you can build up a model of the external world, and that model can be used to estimate the likelihood that you are being fooled in a given instance.

If you don’t know that perception is veridical, then all bets are off. You know nothing about the external world and are therefore unable even to estimate the likelihood that you are being fooled.

For example, I’ve seen people arguing against brain-in-vat scenarios on the basis of technological infeasibility. They estimate the computational demands that would be placed on the vat and argue that no conceivable technology could meet those demands.

The problem with those arguments is that they inadvertently assume the veridicality of perception. To know whether a technology is feasible, you need to know certain things about the physics by which the technology operates. We know a lot about physics, but it’s the physics of our (potentially virtual) world, not necessarily the physics of the real world.

Limitations imposed by the virtual world’s physics may not apply to the real world’s physics, so technologies that are infeasible in the virtual world may be quite realizable in the real world.

To evaluate feasibility based on the physics of the (potentially virtual) world is to inadvertently assume the veridicality of perception.

keiths: To know whether a technology is feasible, you need to know certain things about the physics by which the technology operates. We know a lot about physics*, but it’s the physics* of our world*, not necessarily the physics of the world.

What is the likelihood that the last time you thought you saw a zebra it was a cleverly disguised mule?

Mung: keiths: To know whether a technology is feasible, you need to know certain things about the physics by which the technology operates. We know a lot about physics*, but it’s the physics* of our world*, not necessarily the physics of the world.

Fixed that for ya!

It’s the physics* of the only world* that matters*.

And if “we need only to know* certain things about the physics by which the technology operates” (and we operate) in our world, why isn’t that just knowledge [no asterisk]? It’s what everybody means by the term, after all. It seems to me that, in spite of your denials, what you are calling “know” is certain knowledge and what you are calling “know*” is fallible knowledge. You cannot actually calculate a single “likelihood”–you just think you must have them if you rule out heavyweight (i.e., philosophically SKEPTICAL) defeaters. But you really have no basis for the belief that your non-philosophical defeaters leave you with “likelihoods.” It may well be that every “zebra” you’ve ever seen has been a cleverly disguised mule and every red silo you’ve ever seen has been a fake. You actually have no idea at all.

A real skeptic is willing to take the position that they don’t know anything. You try to pussy out by saying you know* things. But precisely the same arguments you think are successful against knowledge are also successful against knowledge*. And, obviously, if you really think you know* things in spite of not having the slightest idea of “likelihoods” that you’re correct, then there’s no reason that you can’t know things.

What is the likelihood that the last time you thought you saw a zebra it was a cleverly disguised mule?

Very low (assuming the veridicality of my perceptions, as stipulated).

And if “we need only to know* certain things about the physics by which the technology operates” (and we operate) in our world, why isn’t that just knowledge [no asterisk]?

You’re getting things jumbled up here. The technological feasibility of a Cartesian scenario depends on the physics of the world in which it is implemented, not of the world it implements. Make sure you understand this — it’s important.

It seems to me that, in spite of your denials, what you are calling “know” is certain knowledge and what you are calling “know*” is fallible knowledge.

No. For the nth time, I think that knowledge is possible, but not absolute certainty; and I think that knowledge* is not knowledge, fallible or otherwise, because it is not justified. Knowledge* depends on an unjustified assumption — that our perceptions are veridical — and therefore is not itself justified.

You cannot actually calculate a single “likelihood”…

As I keep explaining to you, likelihood estimates need not be numerical.

–you just think you must have them if you rule out heavyweight (i.e., philosophically SKEPTICAL) defeaters. But you really have no basis for the belief that your non-philosophical defeaters leave you with “likelihoods.” It may well be that every “zebra” you’ve ever seen has been a cleverly disguised mule and every red silo you’ve ever seen has been a fake. You actually have no idea at all.

Not so. I’m able to rule them out — assuming the general veridicality of my perceptions, as you stipulated — in the same way I’m able to rule out other conspiracy theories.

A real skeptic is willing to take the position that they don’t know anything. You try to pussy out by saying you know* things.

I’m concerned with whether my position is correct, not with whether some angry old insurance regulator thinks I’m a “pussy” rather than a “real skeptic”.

But precisely the same arguments you think are successful against knowledge are also successful against knowledge*.

Obviously not. To evaluate knowledge* claims, you take the veridicality of perception as a given. Having done so, the chief argument against knowledge — that perception might not be veridical — is closed off and unavailable.

And, obviously, if you really think you know* things in spite of not having the slightest idea of “likelihoods” that you’re correct…

In the case of knowledge*, the likelihoods become conditional: “the likelihood of X given that my perceptions are generally veridical.”

…then there’s no reason that you can’t know things.

Sure there is — the fact that I don’t know that my perceptions are veridical.

That you deem some arguments for perceptual error ‘chief’ and others not ‘chief’ is not relevant to whether you or anyone knows things. Similarly, your assertion that all the zebras you’ve ever seen weren’t really cleverly disguised mules, because you apparently have this feeling that you can rule out the possibility (along with all other ‘conspiracy theories’), isn’t worth much, because you can’t tell us what makes something a conspiracy rather than a skeptical concern. You concede your talk of likelihoods is just gas, because you have no way of calculating a single one of them, so you don’t really know* whether any are greater or less than .5. No one was ever so firm both that one needs likelihoods and that one can’t estimate them, and still managed to ignore the implications of that.

You simply have two sorts of defeaters–those you think are cool and so you are a pussy about, and those you think are uncool–so you are a big tough anti-conspiracy guy about. This dichotomy results in you believing that you don’t know–but you do know* your own name.

You have no actual likelihoods for either group–just blind fear for the one group and blind faith for the other. You can’t define ‘generally veridical’ except to say things like, ‘You know, where I’m likely correct’; or relevant defeaters except to refer to them as ‘the ones nobody can know to be false.’

It’s a huge batch of hand-waving because it’s a silly, ad hoc position. Even we old insurance regulators can recognize* huge piles of bullshit when we see them. Imagine if you had to deal with an actual smart person! (Hint–I’d just go home instead if I were you. Safer.)

So if knowledge* is unjustified true belief, what is unjustified false belief?

Is keiths claiming that the difference between knowledge* [unjustified true belief] and unjustified false belief is truth? I find that rather hilarious, given his recent comments to fifth in the truth, reason, logic thread.

That you make some arguments for perceptual errors ‘chief’ and others not ‘chief’ is not relevant to whether you or anyone knows things.

You’re getting things jumbled up again. Here’s what I wrote:

To evaluate knowledge* claims, you take the veridicality of perception as a given. Having done so, the chief argument against knowledge — that perception might not be veridical — is closed off and unavailable.

…your assertion that all the zebras you’ve ever seen weren’t really cleverly disguised mules, because you apparently have this feeling that you can rule out the possibility (along with all other ‘conspiracy theories’), isn’t worth much, because you can’t tell us what makes something a conspiracy rather than a skeptical concern.

Jesus, walto. The idea that there is a vast, coordinated scheme to present disguised mules as zebras is a conspiracy theory, and it can be dismantled the same way that other conspiracy theories can be dismantled. Do I really need to spell it out for you? If I ask you whether you’ve seen zebras, will you seriously answer “I don’t know, because they might all have been disguised mules”?

You concede your talk of likelihoods is just gas, because you have no way of calculating a single one of them, so you don’t really know* whether any are greater or less than .5.

Your stalled car is straddling the railroad tracks, and a high-speed train is bearing down on you. If you cannot calculate the numerical likelihood of death, are you helpless to act? Or would you do what a rational person would do, which is to get out of the car and run?

You have a mental block about numerical probabilities, walto.

No one ever was so firm both that one needs likelihood and that one can’t estimate them and still managed to ignore the implications of that.

We can estimate them. What you’re failing to grasp is that the estimates need not be numerical.

You simply have two sorts of defeaters–those you think are cool and so you are a pussy about, and those you think are uncool–so you are a big tough anti-conspiracy guy about. This dichotomy results in you believing that you don’t know–but you do know* your own name.

You’re overlooking the crucial difference between Cartesian scenarios — such as the brain-in-vat scenario — and run-of-the-mill conspiracy theories, like the idea that there is a vast, coordinated scheme to present disguised mules as zebras.

If you assume the general veridicality of perception, as stipulated, then you can reject the zebra conspiracy theory based on information you’ve gathered via perception. The conspiracists don’t control all of your sensory information; just the parts they can influence via their mule disguises. You can leverage the information that they don’t control to determine that the zebras aren’t real, by administering DNA tests, for example.

Compare that to a brain-in-vat scenario in which all of your sensory information is under the control of the vat designers. If they want you to see a zebra in front of you, they send the appropriate visual information into your brain. What can you possibly do to determine whether the zebra is real? A DNA test certainly won’t help you, because the vat designers will arrange for the result to come back as positive for zebra. No matter what you do, they can arrange to maintain the illusion.

The difference between a true Cartesian scenario and your zebra conspiracy is night and day.

keiths: If I ask you whether you’ve seen zebras, will you seriously answer “I don’t know, because they might all have been disguised mules”?

But if you ask anyone whether they’ve seen cows, will they seriously answer “I don’t know because I might be dreaming or a brain in a vat.” Please. This response just highlights how self-contradictory and ridiculous your position is. You want your cake but are afraid to eat it.

keiths: Your stalled car is straddling the railroad tracks, and a high-speed train is bearing down on you. If you cannot calculate the numerical likelihood of death, are you helpless to act? Or would you do what a rational person would do, which is to get out of the car and run?

To assume the general veridicality of our perceptions is to make the same mistake as the Sentinel Islander, when he assumes that the VR headset is delivering veridical information about a distant place.

It’s a mistake for the islander, and it leads him to make bogus knowledge claims about LaLa Land. I’m still waiting for an explanation, from you or KN, of why it isn’t a mistake for us to make the analogous assumption about the veridicality of our perceptions.

Your stalled car is straddling the railroad tracks, and a high-speed train is bearing down on you. If you cannot calculate the numerical likelihood of death, are you helpless to act? Or would you do what a rational person would do, which is to get out of the car and run?

walto:

Exactly. I believe in the reality of cars rushing at me. You don’t.

Good grief. If you think a Cartesian skeptic wouldn’t get out of the way of a speeding train, then you don’t understand Cartesian skepticism at all. You’re making the same mistake as KN: thinking that Cartesian skepticism asserts the non-veridicality of perception. It doesn’t. It simply asserts that we cannot know that our perceptions are veridical.

Also, what happened to the numerical calculations that you considered so essential a short while ago? Not so essential when a train is bearing down on you, are they?

I see you’ve understood nothing. Go back and reread Elon Musk. You’re the one wedded to “likelihoods” — not me. I’ve said from the start that it’s an absurd quest. You apparently agree–when you’re afraid.

keiths: When the car is stalled on the railroad tracks, you get out and run, walto. You’ve assessed the likelihood of harm, should you choose to remain in the car, and found it too high for your liking.

You estimate likelihoods all the time, just like the rest of us.

And just like the rest of us, you don’t limit yourself to numerical estimates.

God, you are confused and self-contradictory. Of course I estimate likelihoods all the time. I simply don’t need to do so to know things. Neither do you. You’re terrified of the bus for good reason. You simply forget this when you engage in what I guess you take to be ‘doing philosophy.’ Then you think you need them and don’t have them when looking at a moving bus. It’s a simple contradiction.

Sure we do, and I’ve demonstrated this again and again via a dialogue:

That exchange is nonsensical because the knowledge claim clashes with the likelihood assessment. It remains nonsensical if you change the last line to this:

Yes that’s still utterly confused, as I pointed out each of the last about 15 times you’ve posted it.

Rather than continue this pointless exercise, let me suggest that you read a couple of good things on this issue. First, from Robert Nozick’s book Philosophical Explanations, I think the section on Skepticism (pp. 168–248) is very good (although Kripke has pounded it). And/or, if you are committed to the closure of knowledge under known entailment (I recently finished an article according to which that principle can’t be true, BWTHDIK?), then I recommend Keith DeRose’s 1995 paper, “Solving the Skeptical Problem”, which is also good.

keiths thinks that because KN has ceased to respond to his nonsense that KN agrees with him. How he knows this remains a mystery. Apparently it has something to do with likelihoods that can be non-numeric.

No one but you can possibly know what you are looking for and no one but you can possibly know whether you think you have received a response.

Given that you don’t know that you are not a brain in a vat, and given that you do not know that you are not a Sentinel Islander being deceived by a VR headset, why should any rational person take anything you say seriously?