Kornblith Against Reliabilism

In the April edition of Analysis, Hilary Kornblith proposes a reliabilist solution to the bootstrapping problem. I’m going to argue that Kornblith’s proposal, far from solving the bootstrapping problem, in fact makes the problem much harder for the reliabilist to solve. Indeed, Kornblith’s considerations give us a way to develop a quick reductio of a certain kind of reliabilism.

Let’s start with a crude statement of the problem. The bootstrapper, call them S, looks at a device D1 that happens to be reliable, though at this stage S doesn’t know this. We assume that S is a reliable reader of devices. S then draws the following conclusions.

(1a) D1 says that p1 at t1 (since at t1 D1 appears to say p1).
(2a) p1 is true at t1 (since D1 says that p1 at t1).
(3a) D1 is accurate at t1 (by deductive inference from (1a) and (2a)).

Note that S need not, and in the story does not, know that the grounds for (1a) and (2a) are good grounds. S does, we’ll suppose, know that (1a) and (2a) entail (3a). If reliabilism is true, then it seems S knows, or at least justifiably believes, that (1a), (2a) and (3a) are true, since each belief was produced by a reliable process. S then repeats the process a few more times, each time going through a version of the following triptych of inferences.

(1x) D1 says that px at tx (since at tx D1 appears to say px).
(2x) px is true at tx (since D1 says that px at tx).
(3x) D1 is accurate at tx (by deductive inference from (1x) and (2x)).

After running through the triptych twenty-six times, at t1 through t26, S concludes:

(4) D1 has been accurate the last 26 times I’ve used it (deductive inference from (3a)-(3z)).
(5) D1 is generally accurate (induction from (4)).

It isn’t clear whether Kornblith thinks the problem for reliabilism is that it lets S infer (4) or that it lets S infer (5). As we’ll see, the response he offers is a reason for S to not infer either (4) or (5).

In any case, it isn’t clear that the inference from (4) to (5) is in any sense reliable. It’s true that S has a lot of information that D1 has worked well. On the other hand, if D1 were dysfunctional, S would not have that information. In fact, if D1 had been unreliable at, say, t8, then S would not have any evidence about how accurate D1 was at t8, since the relevant step, (2h) as it turns out, would fail. Since S’s information harvesting technique can only produce evidence when D1 works, and not when it doesn’t work, it isn’t clear that the fact that the evidence base includes only cases where D1 works is evidence that D1 is generally reliable. So I’ll focus here on the worry that S shouldn’t be able to infer (4) by using D1 itself. (The points in this paragraph draw on some considerations raised by Jonathan Weisberg (forthcoming).)

Kornblith’s point is that the process S uses to get to (4) is, overall, an unreliable process. For imagine that S then goes through the same reasoning with D2, which is in fact unreliable.

(6a) D2 says that p1 at t1 (since at t1 D2 appears to say p1).
(7a) p1 is true at t1 (since D2 says that p1 at t1).
(8a) D2 is accurate at t1 (by deductive inference from (6a) and (7a)).
…
(6z) D2 says that p26 at t26 (since at t26 D2 appears to say p26).
(7z) p26 is true at t26 (since D2 says that p26 at t26).
(8z) D2 is accurate at t26 (by deductive inference from (6z) and (7z)).
(9) D2 has been accurate the last 26 times I’ve used it (deductive inference from (8a)-(8z)).

Kornblith says that S uses the same process to get to (9) as to get to (4). And that process is clearly an unreliable process, since it doesn’t distinguish between (4) and (9). So S isn’t justified in believing (4), since the process that produced (4) is unreliable.
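To put Kornblith’s diagnosis in concrete terms, here is a toy simulation; everything in it (the function names, the reliability figures, the number of trials) is my own illustrative choice, not anything from the paper. Because the bootstrapping procedure issues the same “accurate every time” verdict whatever device it is run on, the truth of that verdict, and so the overall reliability of the procedure, depends entirely on the device it happens to be pointed at.

```python
import random

random.seed(0)  # deterministic illustration

def bootstrap_verdict_true(device_reliability, n_readings=26):
    """Run the bootstrapping procedure once. The bootstrapper trusts every
    reading and concludes 'the device was accurate this time' at each step;
    the final verdict 'accurate all 26 times' is true iff every reading
    really was correct."""
    return all(random.random() < device_reliability
               for _ in range(n_readings))

def process_reliability(device_reliability, trials=10_000):
    """Fraction of runs on which the final bootstrapped verdict is true."""
    hits = sum(bootstrap_verdict_true(device_reliability)
               for _ in range(trials))
    return hits / trials

print(process_reliability(1.0))  # 1.0 for a perfectly reliable D1
print(process_reliability(0.5))  # ~0.0 for an unreliable D2: same procedure, false verdict
```

Run on D1 the procedure’s conclusions all come out true; run on D2 they almost all come out false. Since the procedure itself cannot tell the cases apart, this is one way of cashing out the claim that it “doesn’t distinguish between (4) and (9)”.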

I don’t think this is properly responsive to the problem. The worry was that S was using a reliable process to derive (4), and hence S’s belief that (4) was justified, contrary to a strong intuition that it is not. Kornblith’s response is that S is using an unreliable process to derive (4), and so S’s belief is unjustified. But what the reliabilist needs is that S is not using a reliable process, not merely that S is using an unreliable process. The former claim does not follow from the latter, if S is using more than one process. And, worryingly for the reliabilist, that seems to be exactly what’s going on here.

If I’m preparing beans and rice to eat, there is a process I run through to prepare the meal. That process has some subprocesses; there is at least one for the beans and one for the rice. Let’s assume that I make the beans first, then the rice. Then when I finish preparing the rice, I finish two processes at once, the rice-preparing process and the meal-preparing process. There’s nothing particularly special about this case; any particular action can be the conclusion of any number of processes. For similar reasons, any particular belief can be the termination of any number of cognitive processes. Some of these may be quite short processes, some of them longer processes.

That seems to be what’s going on in S’s case. When S reaches step (4), two processes terminate. One is the very long, and very unreliable, process of figuring out whether D1 is reliable by comparing D1’s outputs with what we know about the world via D1. Another is the very short, and very reliable, process of drawing logical conclusions from what S has come to know through (2z).

Let’s assume, with Kornblith, that the long process is really a process. And let’s also assume the following two conditionals, (R1) and (R2), which are characteristic of a certain kind of reliabilism.

(R1) Any belief is justified if it is the outcome of a reliable process.
(R2) Any belief is unjustified if it is the outcome of an unreliable process.

I claim these assumptions lead to a contradiction. For S’s belief in (4) is the outcome of two processes, the last two steps of which overlap, one of which is reliable, and the other of which is unreliable. So it is both the outcome of a reliable process, and the outcome of an unreliable process. That is no contradiction, any more than it is a contradiction to say that the belief is the outcome of both a long process and a short process. What is a contradiction is to say that the belief is both justified and unjustified. But that’s just what follows from our earlier conclusion, plus the conditionals connecting reliability to justification. So I conclude that (R1) and (R2) can’t both be true.
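The clash between (R1) and (R2) can be made vivid with a toy model; the threshold and the two reliability figures below are invented purely for illustration.

```python
# Toy model of the (R1)/(R2) clash. The threshold and the two
# reliability figures are invented purely for illustration.
THRESHOLD = 0.9  # hypothetical cut-off for counting a process 'reliable'

# S's belief in (4) is the terminus of (at least) two processes:
processes_ending_in_4 = {
    "long bootstrapping process, (1a) through (4)": 0.05,  # unreliable
    "short deductive step ending in (4)": 0.99,            # reliable
}

# (R1): a belief is justified if it is the outcome of SOME reliable process.
justified = any(r >= THRESHOLD for r in processes_ending_in_4.values())

# (R2): a belief is unjustified if it is the outcome of SOME unreliable process.
unjustified = any(r < THRESHOLD for r in processes_ending_in_4.values())

print(justified, unjustified)  # True True -- the belief is both, which is the contradiction
```

As long as the belief-to-process mapping is one-many, both existential conditions can be satisfied at once, which is all the reductio needs.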

To be sure, that doesn’t mean that reliabilism is doomed in all its forms, since there are plenty of reliabilist theories that don’t endorse both (R1) and (R2). Alvin Goldman (forthcoming), for instance, says that justification requires reliability and evidential support. That implies that (R2) is true, but (R1) is false. Of course, any theory that rejects (R1) has no need for a fancy solution to the bootstrapping problem, since they are not required to say that (2a) was a legitimate step.

So I conclude that Kornblith’s attempt to save reliabilism from the bootstrapping problem in fact leads strong versions of reliabilism, those that accept both (R1) and (R2), into contradiction. Weaker versions of reliabilism, including versions which qualify (R1), avoid the contradiction, but also have more direct means of avoiding the bootstrapping problem in the first place.

15 Replies to “Kornblith Against Reliabilism”

I’m wondering whether yours is a genuine objection to Kornblith’s attempt to solve the bootstrapping problem. Your objection seems to rely on the assumption that there is no satisfactory solution to the generality problem, for, if there is a satisfactory solution to the generality problem, your objection does not seem to go through. However, as far as I can see, the generality problem is a much more fundamental problem than bootstrapping and, if there is no satisfactory solution to the former, then there is no point in trying to solve the latter. In other words, any attempt to solve the latter presupposes that the generality problem can be/has been dealt with.
Now, your argument might be that what Kornblith really needs is not just any solution to the generality problem but a solution that makes the process that leads to forming (4) unreliable. If so, I agree, but I suspect that on the most promising attempts to solve the generality problem the process would come out unreliable.

I agree this is related to the generality problem. But I wasn’t assuming there is no solution available. Rather, I was noting an extra constraint on a solution, a constraint that I don’t really see any hope at all of being met.

The constraint is that every process with a common terminus has to match up in terms of reliability. On the natural understanding of what a process is, it seems that the bootstrapping example shows that this constraint cannot be satisfied.

One way to meet the constraint would be to propose a solution to the generality problem that made it the case, somehow, that each belief was the product of a unique process. That would satisfy the constraint trivially. But I doubt such a solution is possible. After all, it seems the (unreliable) process that goes from (1a) to (4) is a process, and so is the (reliable) process that goes from (2z) to (4).

Without such a solution, I don’t think we should be using phrases like “the process that leads to forming (4)”. I don’t think that is uniquely denoting.

The constraint is that every process with a common terminus has to match up in terms of reliability.

But isn’t this exactly the generality problem? Identifying in a principled manner for each belief a unique epistemologically relevant belief-forming process (or at least a set of equally reliable processes)?

(Btw, when I was talking of “the process that leads to forming [the belief that] (4)”, I was not assuming that there is only one such process but that, under the hypothesis that there is a satisfactory solution to the generality problem, there is only one epistemologically relevant process—i.e. only one process whose (un)reliability would make belief epistemologically (un)justified according to the reliabilist).

No, I don’t think that’s the generality problem. Or, at least, it isn’t the only generality problem.

I thought the generality problem was that reliability is a property of classes of processes, not of individual processes. And any one process is a member of infinitely many classes, some of them arbitrarily reliable, some of them arbitrarily unreliable.

A solution to that problem involves identifying each process with a particular class that it is representative of. That way we can assign a reliability measure to each process.

The puzzle I’m raising will remain even when that’s solved. (Or at least could in principle remain.) I’m interested in the mapping from beliefs to processes used to form the belief, and whether that is one-one. The generality problem, I thought, is interested in the mapping from processes to (relevant) classes of processes, and whether that is one-one. So at least in principle they are different problems.

Thanks for this, Brian. This certainly gives me something to think about. Let me try out an idea.
You suggest that, as I describe the case, there’s one long process which produces the belief that D1 is generally accurate (or even that it’s been accurate these last 26 times), and since this long process is, as I’ve argued, unreliable, by (R2) we get that the resulting belief is unjustified. But then we can break the process up into smaller segments, and, since each of these is reliable, we get the result, by repeated applications of (R1), that the very same belief is justified. Contradiction.
Now this assumes, I think, that the conjunction of two reliable processes is automatically reliable. But this isn’t right. If one has some sort of threshold view about how much reliability is required for justified belief—who knows? say, 90%—then two processes each of which were individually (90%) reliable might, when used serially, result in a process which is only 81% reliable, and thus below the threshold. One can’t just rate segments of a process as sufficiently reliable for justification or not, and then assume that if each of the individual segments is adequate for justification, then the process which is composed of the segments conjoined will be adequate for justification as well. One must look at the entire process.
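In numbers, using the 90% figures floated above (and assuming, purely for illustration, that the steps fail independently):

```python
# The arithmetic of chaining processes, with the 90% threshold floated
# above. Assumes, for illustration, that the steps fail independently.
THRESHOLD = 0.90

def serial_reliability(*steps):
    """Reliability of processes run one after another, where any single
    failure spoils the final belief."""
    result = 1.0
    for r in steps:
        result *= r
    return result

chain = serial_reliability(0.90, 0.90)
print(round(chain, 2))     # 0.81
print(chain >= THRESHOLD)  # False: each step clears the bar, the chain doesn't
```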
Now you might think that this problem is obviated because one of the segments involves nothing more than deductive inference, and since that is perfectly reliable, this problem can’t arise for the case you describe. But I don’t think that’s right. Here I want to side with Gil Harman. There is no such thing as deductive inference. On coming to believe that p entails q, two agents who antecedently believed p may justifiably come to believe quite different things: one may come to believe q, while the other comes to give up p. What one does with knowledge of the entailment, and so what inference one draws, is deeply dependent on background beliefs. So even when an agent does, in the situation described, come to believe that q, one must see this as not merely an application of some deductive principle, but rather as something far more holistic. And any such inference will not be perfectly reliable. So the problem of the previous paragraph returns.
The result of all this is that I’m not convinced that the long process can legitimately be broken up in the way you suggest. And, without that, you can’t derive a contradiction from reliabilism.
Is that right?

There are two separate points here I think, both of which I should address.

I actually agree that it isn’t automatic that a sequence of reliable processes will be reliable. In fact, I think that’s the problem here. The worry I had was just with the case where

(a) the last step is, on its own, reliable; but
(b) the sequence as a whole is not reliable.

Then the final belief is produced by a reliable process and an unreliable process. Problem.

Now I’m not sure we’re disagreeing with much up to this point, although perhaps I see this as more of a problem for reliabilism!

The Harman-style approach to deduction seems like a good response to these worries. If we deny that there is any such thing as a process of deductive inference, then it might be that the “path” from (2z) to (4) is unreliable.

I’m a bit worried though about how the details of that would work out. What Harman argues is that the cognitive process isn’t just moving from (1a)-(2z) to (3) and then (4). It’s also choosing to make that move rather than rejecting one of the earlier steps. But given that the earlier steps were generated by reliable processes, it isn’t obvious to me that the choice S makes is in any way unreliable.

Here’s another way of putting the same point. I originally said S was following the process: Draw deductive inferences from what you know. And I said that’s a reliable process. Now perhaps the process instead is: Notice what is deductively implied by what you know, and then, given the choice between believing those implications and rejecting what you know, believe the implication. That looks like a reliable process too, but it also grounds a move from (2z) to (4), given what’s come before.

I now think that the Harman point may have been irrelevant to this issue.

Suppose a belief is produced by a long and complex process. You want to allow that we can describe the source of the belief in two different ways: as the outcome of a long process, or as the outcome of just a short process, a process which is just the terminal segment of the longer one. And when we do this, it may well be that the short process is reliable and the long process is unreliable (or vice versa), and this results in a problem for the reliabilist (or at least a certain sort of reliabilist): we get the result that the belief in question is both justified (because reliably produced) and unjustified (because unreliably produced). This problem—if it is a problem for reliabilism—is, of course, far more general than the bootstrapping problem.

For me, this just provides us with a reason for thinking that, when we evaluate the process which produced a belief as reliable or unreliable, we need to consider the entire process and not just some terminal segment of it. And this shouldn’t be surprising.

Suppose I come to believe that p in a completely unreliable way, and suppose, too, that I come to believe that q in a completely unreliable way. I now notice that I believe p and I believe q, so I come to believe their conjunction. Is my belief that p&q justified according to the reliabilist? As you’ve presented this, we can think of this in two different ways. On one way, I’ve come to believe the conjunction as a result of a long process, a process which has two unreliable parts to it (the part that prompted the belief that p and the part that prompted the belief that q), and this total process is itself unreliable. On the other way of looking at it, I’ve come to believe the conjunction as a result of conjoining things I already believe, and we can stipulate that this is a reliable process. So now I’m both unjustified and justified in believing the conjunction.

But surely this second way of viewing things is not the way that any reliabilist should view the matter. My belief in the conjunction must be viewed as the outcome of one long process, one which, in this case, is unreliable. We can’t just break off the final piece of the process, look only at the part that involves conjoining beliefs one has, and now evaluate the belief in the conjunction on the basis of the reliability of this terminal segment. This would be no more reasonable than evaluating the reliability of newspapers, say, by looking at how reliably they get the story which is sent to the press operator onto the paper. I suspect newspapers are all pretty reliable at that! But that doesn’t make me think that there is any sense in which newspapers are all pretty reliable.
And the same, I believe, should hold for evaluating processes of belief acquisition by breaking off terminal segments in the way that you do.

So, in the end, I think it’s no surprise that you can get conflicting verdicts on justification out of reliabilism if you allow processes to be individuated in these different ways. But I think that there is good independent reason for thinking that one of the ways in which you individuate processes would not answer to the motivating idea behind reliabilism. This is not meant, of course, to provide a solution to the generality problem.

I guess I see the generality problem as a more general (!) problem than you do.

As far as I can see, if one has a satisfactory solution to the generality problem, then, for each of S’s beliefs, there is going to be one and only one relevant belief-forming process that led to its formation and, according to the reliabilist, the reliability of that process is (roughly) what determines whether S’s belief is justified.

Now, in many cases, it is true that, as you suggest, one can carve up the process into subprocesses each of which has a different reliability (just like in many cases you can describe the process as belonging to different classes of processes), but, as Hilary suggests in his comment #5, the reliability of the overall process is likely to be the same as the product of the reliabilities of each of its subprocesses and, as he suggests in #7, the overall process (not the last subprocess) is likely to be the relevant process according to the reliabilist (btw this is what I meant by my cryptic final remark in #1).

Of course, this is no solution to the generality problem but seems to put a constraint on what a solution would look like and, as far as I can see, it seems to be a very reasonable constraint.

I agree entirely that we can’t just look at the final stage. Or, perhaps more accurately, we can’t just look at the intrinsic properties of the final stage.

But the final stage in the argument I’m considering is a little different to the newspaper case, or getting p&q from arbitrarily formed p and q. It’s a case where we get p from a reliable source, and get q from a reliable source, and infer p ↔ q from those. Logical deductions from premises formed by reliable methods look pretty reliable to me! It’s certainly different from the arbitrary conjunction steps.

And this point is also relevant to Gabriele’s point. I don’t think the following claim is true.

the reliability of the overall process is likely to be the same as the product of the reliabilities of each of its subprocesses.

Given the natural solution to the generality problem, that isn’t right. In general, it won’t be right for any of the “transmission failure” cases. To simplify the example in the paper a little, consider (1), (2) and (3) here.

(1) The wall looks red to me. (By introspection)
(2) The wall is red. (By observation)
(3) So my colour vision is working (with respect to the colour of this wall).

I think Hilary is right that this is an unreliable way to discover (3). But, surprisingly, it is a reliable way to get (1), and it is a reliable way to get (2), and getting from (1) and (2) to (3) is (or at least could be) an instance of a reliable deductive mechanism.

The point of these cases is that sometimes the reliability of the conclusion will be evaluated over a much broader modal range than the premises. In those cases, deduction can take us from something reliable to something unreliable.
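To make the modal-range point concrete, here is a small simulation; the numbers and names are all invented for the example. The (1)-(3) procedure terminates in the belief “my colour vision is working” whether or not it is, so its reliability over a range of cases is just the proportion of cases in that range where vision does work.

```python
import random

random.seed(1)  # deterministic illustration

def bootstrap_conclusion_true(vision_works):
    """One run of the (1)-(3) procedure.
    (1) 'the wall looks red to me'    -- introspection, right either way;
    (2) 'the wall is red'             -- endorsed on the strength of (1);
    (3) 'my colour vision is working' -- deduced from (1) and (2).
    The procedure always ends in belief (3); that belief is true
    exactly when vision in fact works."""
    return vision_works

def reliability_over_range(p_vision_works, trials=100_000):
    """Reliability of the procedure across a range of cases in which
    vision works with the given frequency."""
    hits = sum(bootstrap_conclusion_true(random.random() < p_vision_works)
               for _ in range(trials))
    return hits / trials

# Over the narrow range relevant to (1) and (2) in ordinary use -- cases
# where vision works -- the procedure looks perfect:
print(reliability_over_range(1.0))  # 1.0
# Over the broader range relevant to (3), which includes vision-failure
# cases, it is only as reliable as working vision is common:
print(reliability_over_range(0.6))  # roughly 0.6
```

The same steps thus get different reliability scores depending on which range of cases they are evaluated over, which is the sense in which deduction can take us from something reliable to something unreliable.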

I still think the Harman point gives the reliabilist a way of rejecting (3) here, and that this is the step where things get ugly — the trouble doesn’t begin only at (4) or (5) above.

To take your transmission failure example from the last post: the reliabilist doesn’t have to deny knowledge of the fact that (1) and (2) entail (3), he just has to say that only an unreliable subject would reach a belief in (3) by drawing the inference. But here he could plausibly argue that the effort to infer (3) from (1) and (2) would naturally result in a loss of knowledge of (2) rather than a gain in knowledge that (3). Arguably, a reliable subject who is actively contemplating the question of the accuracy of his colour vision will experience a shift in the way he makes judgments about the colours of objects before him. Now that he is wondering whether his colour vision is accurate, and has become self-conscious about the real possibility of error on that score, he will — insofar as he is reliable — form only the graded judgment that it is very likely that that wall is red. (In situations in which he was not self-conscious in this manner a reliable subject could instead just make the unqualified judgment that the wall was red.)

I hope it doesn’t sound ad hoc to suggest that subjects re-base their judgments when the methods they employ to form those judgments are under scrutiny. I think there is some psychological evidence to suggest that we do this. This would explain why it sounds so natural for me to say that I know my team won last night, and that I know that the newspaper reported the win, but also, when you press me on whether there might have been a misprint on this occasion, to respond by saying that now I’m not completely certain about the win (rather than responding that a misprint is ruled out by my knowledge of the game’s outcome and the report).

The point of these cases is that sometimes the reliability of the conclusion will be evaluated over a much broader modal range than the premises. In those cases, deduction can take us from something reliable to something unreliable.

This is surely an interesting phenomenon which calls for some kind of explanation. But why is it still an objection to what Hilary argues in his Analysis paper? His answer to the bootstrapping problem simply seems to be that bootstrapping as a method is unreliable (at least in the actual world). Surely it is somewhat strange that you can construct an unreliable method by combining a number of severally highly reliable methods/processes. But nevertheless, Hilary’s original point still stands – or am I missing something here?

That sounds interesting. We might, to put it in other words, want to have an interest-relative solution to the generality problem. If someone is simply using their colour vision (or some other measuring device) then the relevant range of uses is the range of cases where that measuring device would normally be used. But if someone is testing their colour vision (or another measuring device) the range is somewhat broader; it perhaps includes a class of cases where the device does fail.

I’ll have to think more about whether that (or something like it) could get out of the puzzles. It doesn’t sound psychologically ad hoc; there is something plausible about that kind of process. But I’m worried about whether it will leave us with anything that looks like traditional reliabilism. In particular, it seems like it will leave us without a reliabilist response to scepticism, since when we’re trying to figure out whether our perception is one of those measuring devices that work, we have to include a very broad range of cases, and perception may not be reliable over all those cases.

I think that if I’m right, then Hilary leads the reliabilist to a reductio. What Hilary showed, I think, was a conditional: If reliabilism is true, then some of the beliefs of the bootstrapper are unjustified. But I think he didn’t undermine the original conditional: If reliabilism is true, then those beliefs of the bootstrapper are justified.

It’s possible that both these conditionals are true. If reliabilism is false, they will both be true. And that’s what I suspect is the case.

just one more question for clarification: If bootstrapping is indeed a method (or process) and if a bootstrapper is someone who follows this method and if bootstrapping is in fact an unreliable method, then how can any of the bootstrapper’s beliefs that are based on bootstrapping nevertheless be justified (barring epistemic overdetermination)? I still have the feeling I’m missing something here…

I know this thread is more or less closed, but since we’re about to discuss Kornblith’s paper and your reply to it in a research seminar here in Cologne, I nevertheless feel like adding something to the above discussion (btw, why is your paper now filed under “works no longer in progress”?).

Let’s say we grant Kornblith that for evaluating a belief B’s reliability, only one token process is actually relevant, namely the most complete or inclusive token process that generated B (in whatever sense of complete – maybe in the sense of the whole psychological history of the belief, maybe in the sense of some contextually more limited history). Then, it seems, when we look at the argument (1) to (3) in your comment #9, the relevant complete token for each of the steps (1), (2) and (3) will be a different one. And, assuming a solution to the generality problem, there will be only one process type in each case that is relevant for evaluating reliability. But then, since the relevant token and type of process will be different for (1), (2) and (3), it shouldn’t actually come as a surprise that they can differ in reliability in just the way Hilary envisages.

Now, one might state your point not as an objection, but merely as an explanatory puzzle instead. For, one might nevertheless wonder how the relevant process for (3) can be unreliable, given that the relevant processes for (1) and (2) are reliable, and given that the transition from (1) and (2) to (3) is perfectly reliable. However, once one has realized that there is no algorithm to obtain the reliability of a complete process from the subprocesses it is composed of, even that should not be so puzzling anymore. The only remaining question is: Why exactly can’t the reliability of a complete process always be obtained from its component processes, even when the component processes are extremely reliable? One answer that I can think of might be: It may be right to think of the complete process token as being composed out of the relevant subprocess tokens, but it could simply be confused to think of the relation between the complete process type and the subprocess types in terms of composition as well. Types of processes, one might argue, are simply not composed out of types of subprocesses in any meaningful way. Analogously, types of people (say the type being smart) are just not composed out of types of the various subgroups of people (e.g. the type being smart is not composed out of the types being blonde and smart and being non-blonde and smart), even though the group of smart people is of course composed out of the group of blonde and smart people and the group of non-blonde and smart people.