In 2011 there was an apparent observation of neutrinos traveling faster than light. Wikipedia says of this, “Even before the mistake was discovered, the result was considered anomalous because speeds higher than that of light in a vacuum are generally thought to violate special relativity, a cornerstone of the modern understanding of physics for over a century.” In other words, most scientists did not take the result very seriously, even before any specific explanation was found. As I stated here, it is possible to push unreasonably far in this direction, in such a way that one will be reluctant to ever modify one’s current theories. But there is also something reasonable about this attitude.

One part of the problem of anomaly is this. If a well-established scientific theory seems to predict something contrary to what we observe, we tend to stick to the theory, with barely a change in credence, while being dubious of the auxiliary hypotheses. What, if anything, justifies this procedure?

Here’s my setup. We have a well-established scientific theory T and (conjoined) auxiliary hypotheses A, and T together with A uncontroversially entails the denial of some piece of observational evidence E which we uncontroversially have (“the anomaly”). The auxiliary hypotheses will typically include claims about the experimental setup, the calibration of equipment, the lack of further causal influences, mathematical claims about the derivation of not-E from T and the above, and maybe some final catch-all thesis like the material conditional that if T and all the other auxiliary hypotheses obtain, then E does not obtain.

For simplicity I will suppose that A and T are independent, though of course that simplifying assumption is rarely true.

…

Here’s a quick and intuitive thought. There is a region of probability space where the conjunction of T and A is false. That area is divided into three sub-regions:

T is true and A is false

T is false and A is true

both are false.

The initial probabilities of the three regions are, respectively, 0.0999, 0.0009 and 0.0001. We know we are in one of these three regions, and that’s all we now know. Most likely we are in the first one, and the probability that we are in that one given that we are in one of the three is around 0.99. So our credence in T has gone down from three nines (0.999) to two nines (0.99), but it’s still high, so we get to hold on to T.

Still, this answer isn’t optimistic. A move from 0.999 to 0.99 is actually an enormous decrease in confidence.

“This answer isn’t optimistic,” because in the case of the neutrinos, this analysis would imply that scientists should have instantly become ten times more willing to consider the possibility that the theory of special relativity is false. This is surely not what happened.
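The quoted figures can be checked with a few lines of arithmetic. The script below is my reconstruction: the priors P(T) = 0.999 and P(A) = 0.9 are not stated explicitly in the excerpt, but they are the values that reproduce the numbers Pruss reports, given the assumption that T and A are independent:

```python
# Intuitive "three regions" calculation from the quoted passage.
# Assumed priors (reconstructed from the quoted figures): P(T)=0.999, P(A)=0.9,
# with T and A independent.
p_T = 0.999
p_A = 0.9

# The three sub-regions where the conjunction (T and A) is false:
r1 = p_T * (1 - p_A)        # T true, A false
r2 = (1 - p_T) * p_A        # T false, A true
r3 = (1 - p_T) * (1 - p_A)  # both false

total = r1 + r2 + r3
print(r1, r2, r3)   # ~0.0999, 0.0009, 0.0001
print(r1 / total)   # ~0.99: credence in T after conditioning on the anomaly
```

Conditioning on the anomaly, on this rough approach, moves the credence in T from 0.999 to roughly 0.99.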

Pruss therefore presents an alternative calculation:

But there is a much more optimistic thought. Note that the above wasn’t a real Bayesian calculation, just a rough informal intuition. The tip-off is that I said nothing about the conditional probabilities of E on the relevant hypotheses, i.e., the “likelihoods”.

Now, the setup ensures:

P(E|A ∧ T)=0.

What can we say about the other relevant likelihoods? Well, if some auxiliary hypothesis is false, then E is up for grabs. So, conservatively:

P(E|∼A ∧ T)=0.5

P(E|∼A ∧ ∼T)=0.5

But here is something that I think is really, really interesting. I think that in typical cases where T is a well-established scientific theory and A ∧ T entails the negation of E, the probability P(E|A ∧ ∼T) is still low.

The reason is that all the evidence that we have gathered for T even better confirms the hypothesis that T holds to a high degree of approximation in most cases. Thus, even if T is false, the typical predictions of T, assuming they have conservative error bounds, are likely to still be true. Newtonian physics is false, but even conditionally on its being false we take individual predictions of Newtonian physics to have a high probability. Thus, conservatively:

P(E|A ∧ ∼T)=0.1

Very well, let’s put all our assumptions together, including the ones about A and T being independent and the values of P(A) and P(T). Here’s what we get:

P(E|T)=P(E|A ∧ T)P(A|T)+P(E|∼A ∧ T)P(∼A|T)=0.05

P(E|∼T)=P(E|A ∧ ∼T)P(A|∼T)+P(E|∼A ∧ ∼T)P(∼A|∼T) = 0.14.

Plugging this into Bayes’ theorem, we get P(T|E)=0.997. So our credence has crept down, but only a little: from 0.999 to 0.997. This is much more optimistic (and conservative) than the big move from 0.999 to 0.99 that the intuitive calculation predicted.

So, if I am right, at least one of the reasons why anomalies don’t do much damage to scientific theories is that when the scientific theory T is well-confirmed, the anomaly is not only surprising on the theory, but it is surprising on the denial of the theory—because the background includes the data that makes T “well-confirmed” and would make E surprising even if we knew that T was false.
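Pruss’s full calculation can be reproduced the same way. Again the priors P(T) = 0.999 and P(A) = 0.9 are my reconstruction from the reported results; the likelihoods are the ones stipulated in the quoted passage:

```python
# Reproducing the likelihood-based calculation from the quoted passage.
# Assumed priors (reconstructed from the quoted results): P(T)=0.999, P(A)=0.9,
# with A and T independent, so that P(A|T) = P(A|~T) = P(A).
p_T, p_A = 0.999, 0.9

# Likelihoods stipulated in the quoted passage:
p_E_AT   = 0.0   # P(E | A and T): the setup entails not-E
p_E_nAT  = 0.5   # P(E | ~A and T): E is "up for grabs"
p_E_nAnT = 0.5   # P(E | ~A and ~T)
p_E_AnT  = 0.1   # P(E | A and ~T): T's predictions mostly hold even if T is false

# Total probability of E given T, and given ~T:
p_E_T  = p_E_AT * p_A + p_E_nAT * (1 - p_A)    # = 0.05
p_E_nT = p_E_AnT * p_A + p_E_nAnT * (1 - p_A)  # = 0.14

# Bayes' theorem:
p_T_E = p_E_T * p_T / (p_E_T * p_T + p_E_nT * (1 - p_T))
print(p_E_T, p_E_nT, p_T_E)  # ~0.05, 0.14, 0.997
```

The posterior credence in T comes out at roughly 0.997, matching the quoted result: the anomaly barely moves a well-confirmed theory.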

To make the point without the mathematics (which in any case is only used to illustrate the point, since Pruss is choosing the specific values himself), if you have a theory which would make the anomaly probable, that theory would be strongly supported by the anomaly. But we already know that theories like that are false, because otherwise the anomaly would not be an anomaly. It would be normal and common. Thus all of the actually plausible theories still make the anomaly an improbable observation, and therefore these theories are only weakly supported by the observation of the anomaly. The result is that the new observation makes at most a minor difference to your previous opinion.

We can apply this analysis to the discussion of miracles. David Hume, in his discussion of miracles, seems to desire a conclusive proof against them which is unobtainable, and in this respect he is mistaken. But near the end of his discussion, he brings up the specific topic of religion and says that his argument applies to it in a special way:

Upon the whole, then, it appears, that no testimony for any kind of miracle has ever amounted to a probability, much less to a proof; and that, even supposing it amounted to a proof, it would be opposed by another proof; derived from the very nature of the fact, which it would endeavour to establish. It is experience only, which gives authority to human testimony; and it is the same experience, which assures us of the laws of nature. When, therefore, these two kinds of experience are contrary, we have nothing to do but subtract the one from the other, and embrace an opinion, either on one side or the other, with that assurance which arises from the remainder. But according to the principle here explained, this subtraction, with regard to all popular religions, amounts to an entire annihilation; and therefore we may establish it as a maxim, that no human testimony can have such force as to prove a miracle, and make it a just foundation for any such system of religion.

The idea seems to be something like this: contrary systems of religion put forth miracles in their support, so the supporting evidence for one religion is more or less balanced by the supporting evidence for the other. Likewise, the evidence is weakened even in itself by people’s propensity to lies and delusion in such matters (some of this discussion was quoted in the earlier post on Hume and miracles). But in addition to the fairly balanced evidence we have experience basically supporting the general idea that the miracles do not happen. This is not outweighed by anything in particular, and so it is the only thing that remains after the other evidence balances itself out of the equation. Hume goes on:

I beg the limitations here made may be remarked, when I say, that a miracle can never be proved, so as to be the foundation of a system of religion. For I own, that otherwise, there may possibly be miracles, or violations of the usual course of nature, of such a kind as to admit of proof from human testimony; though, perhaps, it will be impossible to find any such in all the records of history. Thus, suppose, all authors, in all languages, agree, that, from the first of January, 1600, there was a total darkness over the whole earth for eight days: suppose that the tradition of this extraordinary event is still strong and lively among the people: that all travellers, who return from foreign countries, bring us accounts of the same tradition, without the least variation or contradiction: it is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived. The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies, that any phenomenon, which seems to have a tendency towards that catastrophe, comes within the reach of human testimony, if that testimony be very extensive and uniform.

But suppose, that all the historians who treat of England, should agree, that, on the first of January, 1600, Queen Elizabeth died; that both before and after her death she was seen by her physicians and the whole court, as is usual with persons of her rank; that her successor was acknowledged and proclaimed by the parliament; and that, after being interred a month, she again appeared, resumed the throne, and governed England for three years: I must confess that I should be surprised at the concurrence of so many odd circumstances, but should not have the least inclination to believe so miraculous an event. I should not doubt of her pretended death, and of those other public circumstances that followed it: I should only assert it to have been pretended, and that it neither was, nor possibly could be real. You would in vain object to me the difficulty, and almost impossibility of deceiving the world in an affair of such consequence; the wisdom and solid judgment of that renowned queen; with the little or no advantage which she could reap from so poor an artifice: all this might astonish me; but I would still reply, that the knavery and folly of men are such common phenomena, that I should rather believe the most extraordinary events to arise from their concurrence, than admit of so signal a violation of the laws of nature.

But should this miracle be ascribed to any new system of religion; men, in all ages, have been so much imposed on by ridiculous stories of that kind, that this very circumstance would be a full proof of a cheat, and sufficient, with all men of sense, not only to make them reject the fact, but even reject it without farther examination. Though the Being to whom the miracle is ascribed, be, in this case, Almighty, it does not, upon that account, become a whit more probable; since it is impossible for us to know the attributes or actions of such a Being, otherwise than from the experience which we have of his productions, in the usual course of nature. This still reduces us to past observation, and obliges us to compare the instances of the violation of truth in the testimony of men, with those of the violation of the laws of nature by miracles, in order to judge which of them is most likely and probable. As the violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact; this must diminish very much the authority of the former testimony, and make us form a general resolution, never to lend any attention to it, with whatever specious pretence it may be covered.

Notice how “unfair” this seems to religion, so to speak. What is the difference between the eight days of darkness, which Hume would accept, under those conditions, and the resurrection of the queen of England, which he would not? Hume’s reaction to the two situations is more consistent than first appears. Hume would accept the historical accounts about England in the same way that he would accept the accounts about the eight days of darkness. The difference is in how he would explain the accounts. He says of the darkness, “It is evident, that our present philosophers, instead of doubting the fact, ought to receive it as certain, and ought to search for the causes whence it might be derived.” Likewise, he would accept the historical accounts as certain insofar as they say that a burial ceremony took place, the queen was absent from public life, and so on. But he would not accept that the queen was dead and came back to life. Why? The “search for the causes” seems to explain this. It is plausible to Hume that causes of eight days of darkness might be found, but not plausible to him that causes of a resurrection might be found. He hints at this in the words, “The decay, corruption, and dissolution of nature, is an event rendered probable by so many analogies,” while in contrast a resurrection would be “so signal a violation of the laws of nature.”

It is clear that Hume excludes certain miracles, such as resurrection, from the possibility of being established by the evidence of testimony. But he makes the additional point that even if he did not exclude them, he would not find it reasonable to establish a “system of religion” on such testimony, given that “violations of truth are more common in the testimony concerning religious miracles, than in that concerning any other matter of fact.”

It is hard to argue with the claim that “violations of truth” are especially common in testimony about miracles. But does any of this justify Hume’s negative attitude to miracles as establishing “systems of religion,” or is this all just prejudice? There might well be a good deal of prejudice involved here in his opinions. Nonetheless, Alexander Pruss’s discussion of anomaly allows one to formalize Hume’s idea here as actual insight as well.

One way to look at truth in religion is to look at it as a way of life or as membership in a community. And in this way, asking whether miracles can establish a system of religion is just asking whether a person can be moved to a way of life or to join a community through such things. And clearly this is possible, and often happens. But another way to consider truth in religion is to look at a doctrinal system as a set of claims about how the world is. Looked at in this way, we should look at a doctrinal system as presenting a proposed larger context of our place in the world, one that we would be unaware of without the religion. This implies that one should have a prior probability (namely prior to consideration of arguments in its favor) strongly against the system considered as such, for reasons very much like the reasons we should have a prior probability strongly against Ron Conte’s predictions.

We can thus apply Alexander Pruss’s framework. Let us take Mormonism as the “system of religion” in question. Then taken as a set of claims about the world, our initial probability would be that it is very unlikely that the world is set up this way. Then let us take a purported miracle establishing this system: Joseph Smith finds his golden plates. In principle, if this cashed out in a certain way, it could actually establish his system. But it doesn’t cash out that way. We know very little about the plates, the circumstances of their discovery (if there was one), and their actual content. Instead, what we are left with is an anomaly: something unusual happened, and it might be able to be described as “finding golden plates,” but that’s pretty much all we know.

Then we have the theory, T, which has a high prior probability: Mormonism is almost certainly false. We have the observation, E: Joseph Smith discovered his golden plates (in one sense or another). And we have the auxiliary hypotheses, A, which imply that he could not have discovered the plates if Mormonism is false. The Bayesian updates in Pruss’s scheme imply that our conclusion is this: Mormonism is almost certainly false, and there is almost certainly an error in the auxiliary hypotheses that imply he could not have discovered them if it were false.

Thus Hume’s attitude is roughly justified: he should not change his opinion about religious systems in any significant way based on testimony about miracles.

To make you feel better, this does not prove that your religion is false. It just nearly proves that. In particular, this does not take into account an update based on the fact that “many people accept this set of claims.” This is a different fact, and it is not an anomaly. If you update on this fact and end up with a non-trivial probability that your set of claims is true, testimony about miracles might well strengthen this into conviction.

I will respond to one particular objection, however. Some will take this argument to be stubborn and wicked, because it seems to imply that people shouldn’t be “convinced even if someone rises from the dead.” And this does in fact follow, more or less. An anomalous occurrence in most cases will have a perfectly ordinary explanation in terms of things that are already a part of our ordinary understanding of the world, without having to add some larger context. For example, suppose you heard your fan (as a piece of furniture, not as a person) talking to you. You might suppose that you were hallucinating. But suppose it turns out that you are definitely not hallucinating. Should you conclude that there is some special source from outside the normal world that is communicating with you? No: the fan scenario can happen, and it turns out to have a perfectly everyday explanation. We might agree with Hume that it would be much more implausible for a resurrection to have an everyday explanation. Nonetheless, even if we conclude that there is some larger context, and that the miracle has no such everyday explanation, there is no good reason for it to be such and such a specific system of doctrine. Consider again Ron Conte’s predictions for the future. The things that happen between now and 2040, and even the things that happen in the 2400s, are most likely going to be perfectly ordinary (although the things in the 2400s might differ from current events in fairly radical ways). But even if they are not, and even if apocalyptic, miraculous occurrences are common in those days, this does not raise the probability of Conte’s specific predictions above any trivial level. In the same way, the anomalous occurrences involved in the accounts of miracles will not lend any significant probability to a religious system.

The objection here is that this seems unfair to God, so to speak. What if God wanted to reveal something to the world? What could he do, besides work miracles? I won’t propose a specific answer to this, because I am not God. But I will illustrate the situation with a little story to show that there is nothing unfair to God about it.

Suppose human beings created an artificial intelligence and raised it in a simulated environment. Wanting things to work themselves out “naturally,” so to speak, because it would be less work, and because it would probably be necessary to the learning process, they institute “natural laws” in the simulated world which are followed in an exceptionless way. Once the AI is “grown up”, so to speak, they decide to start communicating with it. In the AI’s world, this will surely show up as some kind of miracle: something will happen that was utterly unpredictable to it, and which is completely inconsistent with the natural laws as it knew them.

Will the AI be forced by the reasoning of this post to ignore the communication? Well, that depends on what exactly occurs and how. At the end of his post, Pruss discusses situations where anomalous occurrences should change your mind:

Note that this argument works less well if the anomalous case is significantly different from the cases that went into the confirmation of T. In such a case, there might be much less reason to think E won’t occur if T is false. And that means that anomalies are more powerful as evidence against a theory the more distant they are from the situations we explored before when we were confirming T. This, I think, matches our intuitions: We would put almost no weight in someone finding an anomaly in the course of an undergraduate physics lab—not just because an undergraduate student is likely doing it (it could be the professor testing the equipment, though), but because this is ground well-gone over, where we expect the theory’s predictions to hold even if the theory is false. But if new observations of the center of our galaxy don’t fit our theory, that is much more compelling—in a regime so different from many of our previous observations, we might well expect that things would be different if our theory were false.

And this helps with the second half of the problem of anomaly: How do we keep from holding on to T too long in the light of contrary evidence, how do we allow anomalies to have a rightful place in undermining theories? The answer is: To undermine a theory effectively, we need anomalies that occur in situations significantly different from those that have already been explored.

If the AI finds itself in an entirely new situation, where, for example, rather than hearing an obscure voice from a fan, it is consistently able to talk to the newly discovered occupant of the world on a regular basis, it will have no trouble realizing that its situation has changed, and no difficulty concluding that it is receiving communication from its author. This does, sort of, give one particular method that could be used to communicate a revelation. But there might well be many others.

Our objector will continue. This is still not fair. Now you are saying that God could give a revelation but that if he did, the world would be very different from the actual world. But what if he wanted to give a revelation in the actual world, without it being any different from the way it is? How could he convince you in that case?

Let me respond with an analogy. What if the sky were actually red like the sky of Mars, but looked blue, just as it does now? What would convince you that it was red? The fact that there is no way to convince you that it is red in our actual situation means you are unfairly prejudiced against the redness of the sky.

In other words, indeed, I am unwilling to be convinced that the sky is red except in situations where it is actually red, and those situations are quite different from our actual situation. And indeed, I am unwilling to be convinced of a revelation except in situations where there is actually a revelation, and those are quite different from our actual situation.

Subjectively, I feel like I’m only capable of a fairly small discrete set of “degrees of belief.” I think I can distinguish between, say, things I am 90% confident of and things I am only 60% confident of, but I don’t think I can distinguish between being 60% confident in something and 65% confident in it. Those both just fall under some big mental category called “a bit more likely to be true than false.” (I’m sure psychologists have studied this, and I don’t know anything about their findings. This is just what seems likely to me based on introspection.)

I’ve talked before about whether Bayesian updating makes sense as an ideal for how reasoning should work. Suppose for now that it is a good ideal. The “perfect” Bayesian reasoner would have a whole continuum of degrees of belief. They would typically respond to new evidence by changing some of their degrees of beliefs, although for “weak” or “unconvincing” evidence, the change might be very small. But since they have a whole continuum of degrees, they can make arbitrarily small changes.

Often when the Bayesian ideal is distilled down to principles that mere humans can follow, one of the principles seems to be “when you learn something new, modify your degrees of belief.” This sounds nice, and accords with common sense ideas about being open-minded and changing your mind when it is warranted.

However, this principle can easily be read as implying: “if you learn something new, don’t not modify your degrees of belief.” Leaving your degrees of belief the same as they were before is what irrational, closed-minded, closed-eyed people do. (One sometimes hears Bayesians responding to each other’s arguments by saying things like “I have updated in the direction of [your position],” as though they feel that this demonstrates that they are thinking in a responsible manner. Wouldn’t want to be caught not updating when you learn something new!)

The problem here is not that hard to see. If you only have, say, 10 different possible degrees of belief, then your smallest possible updates are (on average) going to be jumps of 10% at once. If you agree to always update in response to new information, no matter how weak it is, then seeing ten pieces of very weak evidence in favor of P will ramp your confidence in P up to the maximum.

In each case, the perfect Bayesian might update by only a very small amount, say 0.01%. Clearly, if you have the choice between changing by 0% and changing by 10%, the former is closer to the “perfect” choice of 0.01%. But if you have trained yourself to feel like changing by 0% (i.e. not updating) is irrational and bad, you will keep making 10% jumps until you and the perfect Bayesian are very far apart.

This means that Bayesians – in the sense of “people who follow the norm I’m talking about” – will tend to over-respond to weak but frequently presented evidence. This will make them tend to be overconfident of ideas that are favored within the communities they belong to, since they’ll be frequently exposed to arguments for those ideas, although those arguments will be of varying quality.
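This dynamic is easy to simulate. The sketch below is my own illustration, not Nostalgebraist’s: an ideal Bayesian receives ten pieces of very weak favorable evidence (a likelihood ratio of 1.001 each, an arbitrary stand-in value), while an agent confined to ten discrete credence levels, who refuses to leave his credence unchanged, jumps a full level each time:

```python
# Illustrative simulation (my construction): an ideal Bayesian vs. an agent
# limited to ten discrete credence levels who always "updates" on new evidence.

levels = [i / 10 + 0.05 for i in range(10)]  # 0.05, 0.15, ..., 0.95

def ideal_update(p, likelihood_ratio):
    """Exact Bayesian update in odds form."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

def coarse_update(level_index):
    """Agent who refuses to leave his credence unchanged: jump one level up."""
    return min(level_index + 1, len(levels) - 1)

p_ideal = 0.5
idx = 4  # start at the coarse level nearest 0.5 (here, 0.45)

for _ in range(10):  # ten pieces of very weak favorable evidence
    p_ideal = ideal_update(p_ideal, likelihood_ratio=1.001)
    idx = coarse_update(idx)

print(p_ideal)      # ~0.5025: the weak evidence barely moves the ideal agent
print(levels[idx])  # 0.95: the coarse agent has ramped up to near-certainty
```

The ideal agent barely moves; the coarse agent ends at his highest level. This is exactly the divergence described in the quoted passage.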

“Overconfident of ideas that are favored within the communities they belong to” is basically a description of everyone, not simply of people who accept the norm Nostalgebraist is talking about, so even if this happens, it is not much of an objection in comparison to the situation of people in general.

Nonetheless, Nostalgebraist misunderstands the idea of Bayesian updating as applied in real life. Bayes’ theorem is a theorem of probability theory that describes how a numerical probability is updated upon receiving new evidence, and probability theory in general is a formalization of degrees of belief. Since it is a formalization, it is not expected to be a literal description of real life. People do not typically have an exact numerical probability that they assign to a belief. Nonetheless, there is a reasonable way to respond to evidence, and this basically corresponds to Bayes’ theorem, even though it is not a literal numerical calculation.

Nostalgebraist’s objection is that there are only a limited number of ways that it is possible to feel about a proposition. He is likely right that for an untrained person this number is less than ten. Just as people can acquire perfect pitch by training, however, it is likely that someone could learn to distinguish many more than ten degrees of certainty. But this is not an adequate response to his argument, because even if someone were calibrated to a precision of 1%, Nostalgebraist’s objection would still be valid. A person assigning a numerical probability could not change it by even 1% every time he heard a new argument, or it would be easy for an opponent to move him to absolute certainty of nearly anything.

The real answer is that he is looking in the wrong place for a person’s degree of belief. A belief is not how one happens to feel about a statement. A belief is a voluntary act or habit, and adjusting one’s degree of belief would mean adjusting that habit. The feeling he is talking about, on the other hand, is not in general something voluntary, which means that it is literally impossible to follow the norm he is discussing consistently, applied in the way that he suggests. One cannot simply choose to feel more certain about something. It is true that voluntary actions may be able to affect that feeling, in the same way that voluntary actions can affect anger or fear. But we do not directly choose to be angry or afraid, and we do not directly choose to feel certain or uncertain.

What we can affect, however, is the way we think, speak, and act, and we can change our habits by choosing particular acts of thinking, speaking, and acting. And this is where our subjective degree of belief is found, namely in our pattern of behavior. This pattern can vary in an unlimited number of ways and degrees, and thus his objection cannot be applied to updating in this way. Updating on evidence, then, would be adjusting our pattern of behavior, and not updating would be failing to adjust that pattern. That would begin with the simple recognition that something is new evidence: saying that “I have updated in the direction of your position” would simply mean acknowledging the fact that one has been presented with new evidence, with the implicit commitment to allowing that evidence to affect one’s behavior in the future, as for example by not simply forgetting about that new argument, by having more respect for people who hold that position, and so on in any number of ways.

Of course, it may be that in practice people cannot even do this consistently, or at least not without sometimes adjusting excessively. But this is the same with every human virtue: consistently hitting the precise mean of virtue is impossible. That does not mean that we should adopt the norm of ignoring virtue, which is Nostalgebraist’s implicit suggestion.

According to Bayes’ theorem, if something has a probability of 100%, that must remain unchanged no matter what evidence is observed, as long as that evidence has a nonzero probability of being observed. If the probability of the evidence being observed is 0%, then Bayes’ formula results in a division by zero. A probability of 0% should mean that it is impossible for this evidence to come up, so if it does come up, that indicates that one was simply wrong to claim that there was no chance of it, and that a different probability should have been assigned.

The fact that logical consistency requires a probability of 100% to remain permanently fixed, no matter what happens, implies that it is generally a bad idea to claim such certainty, even in cases where you have absolute objective certainty such as mathematical demonstration. Thus in the previously cited anecdote about prime numbers, if SquallMage claimed to be absolutely certain that 51 was a prime number, he should never admit that it is not, not even after dividing it by 3 and getting 17. Instead, he should claim that there is a mistake in the derivation showing that it is not prime. Since this is absurd, it follows that in fact he should never have assigned a 100% probability to the claim that the number was prime. And since there was subjectively probably not much difference between 41 and 51 for him at the time, with respect to the claim, neither should he have claimed a 100% probability that 41 was prime.
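The point about 100% priors can be verified mechanically: a prior of exactly 1 is a fixed point of Bayes’ theorem, whatever the likelihoods. The numbers below are arbitrary illustrations:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a hypothesis H given evidence E."""
    num = p_e_given_h * prior
    denom = num + p_e_given_not_h * (1 - prior)
    return num / denom  # undefined (division by zero) if P(E) = 0

# A prior of exactly 1 cannot move, even on strong contrary evidence:
print(posterior(1.0, 0.01, 0.99))     # 1.0

# A prior just short of 1 can still be dragged down by the same evidence:
print(posterior(0.9999, 0.01, 0.99))  # ~0.99
```

With the prior at exactly 1, the second term of the denominator vanishes, so the evidence, however contrary, cannot make any difference at all.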

As was argued in an earlier post, any belief, or at any rate almost any belief, is voluntary insofar as we choose to think, act, and speak as though it were true, and in theory it is always in our power to choose to behave in the opposite way. In practice of course we would not do this unless we had some motive to do it, just as the fact that it is in someone’s power to commit suicide does not make him do so before he has a motive for this.

Such a belief, since it involves affirmation or denial, has a basically binary character — thus I say either that the sun will rise tomorrow, or that it will not. This binary character can of course be somewhat modified by the explicit addition of various qualifiers, as when I say that “I will probably be alive five years from now”, or that “there is a 75% chance that Mike will come to visit next week” or the like. Nonetheless, even these statements have a binary character, if at one remove from the original statement. Thus I can say “Mike will come to visit,” or “Mike will not come to visit”, but also “there is a 75% chance etc” or “there is not a 75% chance etc”.

The interior apprehension of the mind, however, does not have the same binary character, but is more a matter of degree. Thus for example someone may argue that increasing the restrictions on gun ownership in the United States would be a good idea. “More gun control would be good,” he says. This is the affirmation of one side of a contradiction. Then suppose he is involved in a conversation on the matter, with another person arguing against his position. As the conversation goes on, he may continue to assert the same side of the contradiction, but he may grow somewhat doubtful inside. As he walks away from the conversation, he still believes that more gun control would be good. He still chooses to speak, think, and act in that way. But he is less convinced than he was at first. In this sense his interior apprehension has degrees in a way in which the belief considered as an affirmation or denial does not.

Thus, in our speech and behavior, beliefs are basically binary, but we possess various degrees of certainty about our beliefs. And such a degree is reasonably considered to be something like the probability, as far as we are concerned, that our belief is true.

Our foregoing method of reasoning will easily convince us, that there can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience. We can at least conceive a change in the course of nature; which sufficiently proves, that such a change is not absolutely impossible. To form a clear idea of any thing, is an undeniable argument for its possibility, and is alone a refutation of any pretended demonstration against it.

Probability, as it discovers not the relations of ideas, considered as such, but only those of objects, must in some respects be founded on the impressions of our memory and senses, and in some respects on our ideas. Were there no mixture of any impression in our probable reasonings, the conclusion would be entirely chimerical: And were there no mixture of ideas, the action of the mind, in observing the relation, would, properly speaking, be sensation, not reasoning. ‘Tis therefore necessary, that in all probable reasonings there be something present to the mind, either seen or remembered; and that from this we infer something connected with it, which is not seen nor remembered.

The only connection or relation of objects, which can lead us beyond the immediate impressions of our memory and senses, is that of cause and effect; and that because ’tis the only one, on which we can found a just inference from one object to another. The idea of cause and effect is derived from experience, which informs us, that such particular objects, in all past instances, have been constantly conjoined with each other: And as an object similar to one of these is supposed to be immediately present in its impression, we thence presume on the existence of one similar to its usual attendant. According to this account of things, which is, I think, in every point unquestionable, probability is founded on the presumption of a resemblance betwixt those objects, of which we have had experience, and those, of which we have had none; and therefore ’tis impossible this presumption can arise from probability. The same principle cannot be both the cause and effect of another; and this is, perhaps, the only proposition concerning that relation, which is either intuitively or demonstratively certain.

Should any one think to elude this argument; and without determining whether our reasoning on this subject be derived from demonstration or probability, pretend that all conclusions from causes and effects are built on solid reasoning: I can only desire, that this reasoning may be produced, in order to be exposed to our examination.

You cannot prove that the sun will rise tomorrow, Hume says; nor can you prove that it is probable. Either way, you cannot prove it without assuming that the future will necessarily be like the past, or that the future will probably be like the past, and since you have not yet experienced the future, you have no reason to believe these things.

Hume is mistaken, and this can be demonstrated mathematically with the theory of probability, unless Hume asserts that he is absolutely certain that the future will definitely not be like the past; that he is absolutely certain that the world is about to explode into static, or something of the kind.

Suppose we consider the statement S, “The sun will rise every day for at least the next 10,000 days,” assigning it a probability p of 1%. Then suppose we are given evidence E, namely that the sun rises tomorrow. Let us suppose the prior probability of E is 50% — we did not know if the future was going to be like the past, so in order not to be biased we assigned each possibility a 50% chance. It might rise or it might not. Now let’s suppose that it rises the next morning. We now have some new evidence for S. What is our updated probability? According to Bayes’ theorem, our new probability will be:

P(S|E) = P(E|S)P(S)/P(E) = p/P(E) = 2%, because P(E|S) = 1: given that the sun will rise every day for the next 10,000 days, it will certainly rise tomorrow. So our new probability is greater than the original p. It is easy enough to show that if the sun continues to rise for many more days, the probability of S will soon rise to 99% and higher. This is left as an exercise for the reader. Note that none of this process depends upon assuming that the future will be like the past, or that the future will probably be like the past. The only way out for Hume is to say that the probability of S is either 0 or infinitesimal; in order to reject this argument, he must assert that he is absolutely certain that the sun will not continue to rise for a long time, and in general that he is absolutely certain that the future will resemble the past in no way.
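The exercise can be worked out numerically. One assumption is needed beyond the text: I take the author's unbiased 50% to mean that, if S is false, each individual sunrise still has a 0.5 chance, so that P(E) = P(E|S)P(S) + P(E|¬S)P(¬S) = p + 0.5(1 − p) on each day. Under that assumption:

```python
# Iterated Bayesian updating on S = "the sun will rise every day
# for at least the next 10,000 days", starting from the author's 1% prior.
# Assumption (mine): if S is false, each sunrise has probability 0.5.
p = 0.01

for day in range(1, 15):
    # Bayes: p' = P(E|S)p / (P(E|S)p + P(E|not-S)(1-p)), with P(E|S) = 1
    p = p / (p + 0.5 * (1 - p))
    print(day, round(p, 4))
```

The first update reproduces the 2% figure from the text (more precisely, about 1.98%), and the probability of S passes 99% on the fourteenth consecutive sunrise — without ever assuming that the future will, or probably will, resemble the past.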