Friday, January 31, 2014

Consider a version of consequentialism on which the right thing to do is the one that has the best consequences. Now suppose you're captured by an eccentric evil dictator who always tells the truth. She informs you there are ten innocent prisoners and there is a game you can play.

If you refuse to play, the prisoners will all be released.

If you play, the number of hairs on your head will be quickly counted by a machine, and if that number is divisible by 50, all the prisoners will be tortured to death. If that number is not divisible by 50, they will be released and one of them will be given a tasty and nutritious muffin as well, which muffin will otherwise go to waste.

Now it is very probable that the number of hairs on your head is not divisible by 50. And if it's not divisible by 50, then by the above consequentialism, you should play the game—saving ten lives and providing one person with a muffin is a better consequence than merely saving ten lives. So if you subscribe to the above consequentialism, you will think that very likely playing is right and refusing to play is wrong. But still you clearly shouldn't play—the risk is too high (and you can just put that in expected utility terms: a 1/50 probability of ten people being tortured to death is much worse than a 49/50 probability of an extra muffin for somebody). So it seems that you should do what is very likely wrong.
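To see the expected-utility point concretely, here is a quick sketch with made-up utility numbers; only their rough scale matters (torture catastrophically bad, a muffin mildly good):

```python
# Expected-utility comparison for the hair-counting game.
# The utility figures are invented for illustration.
U_TORTURE = -1_000_000   # ten innocents tortured to death
U_MUFFIN = 1             # one extra muffin for a prisoner
P_DIVISIBLE = 1 / 50     # chance the hair count is divisible by 50

eu_play = P_DIVISIBLE * U_TORTURE + (1 - P_DIVISIBLE) * U_MUFFIN
eu_refuse = 0.0          # prisoners released either way; no muffin

print(f"EU(play)   = {eu_play:.2f}")    # about -19999.02
print(f"EU(refuse) = {eu_refuse:.2f}")  # 0.00
```

Even though playing is very probably the act with the better actual consequences, its expected utility is far worse than refusing.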

So the consequentialist had better not say that the right thing to do is the one that has the best consequences. She would do better to say that the right thing to do is the one that has the best expected consequences. But I think that is a significant concession to make. The claim that you should act so as to produce the best consequences has a very pleasing simplicity to it. In its simplicity, it is a lovely philosophical theory (even though it leads to morally abhorrent conclusions). But once we say that you should maximize expected utility, we lose that elegant simplicity. We wonder why maximize expected utility instead of doing something more risk averse.

But even putting risk to one side, we should wonder why expected utility matters so much morally speaking. The best story about why expected utility matters has to do with long-run consequences and the law of large numbers. But that story, first, tells us nothing about intrinsically one-shot situations. And, second, that justification of expected utility maximization is essentially a rule utilitarian style of argument—it is the policy, not the particular act, that is being evaluated. Thus, anyone impressed by this line of thought should rather be a rule than an act consequentialist. And rule consequentialism has really serious theoretical problems.

Thursday, January 30, 2014

Suppose that objectively, hell lasts forever. But while the first objective year of hell is experienced subjectively as a year long, the second objective year "goes by faster" as we say, and only takes half a year, and the third objective year "goes by even faster" and only takes a quarter of a year, and so on. Thus, while the damned will always exist and always be suffering, they will only experience two years' worth of suffering over that objectively eternal span.

Now the difficult question is whether this is an orthodox view of hell. When Jesus talks about the suffering being everlasting, is he talking of subjective or objective time? We certainly wouldn't find the analogous view of heaven satisfactory. But heaven and hell aren't exact parallels: in heaven one is with God, and the absence of God is not much of a parallel to God.

Now, without affirming the model, it can still be of some use in apologetics. For suppose a non-Christian objects that nobody deserves an everlasting hell. One answer is Anselm's: an infinite crime deserves infinite punishment and some crimes against an infinite being are infinite. But given the above model or the alternate model here, one can say that an everlasting hell could involve only a finite amount of suffering. So one can say: if someone is damned, then either she committed a crime that deserves infinite punishment or her total suffering is finite. Since both options are compatible with everlasting hell, in neither case does the objection to an everlasting hell go through. And one can give this disjunctive answer while strongly inclined to think that the Anselmian infinite crime model of hell is superior, as long as the alternate model is not a heresy (if it is, I will of course withdraw it).

Wednesday, January 29, 2014

We're thinking of replacing my wife's netbook with a Chromebook, but printing is an issue. Our laser printer is over a decade old and while it works fine on our network (with a network adapter) it certainly doesn't support Google Cloud Print. Google really should have added support for local network printers. Their standard solution is a proxy that runs on some computer on the network via Chrome. But the standard way of doing that has two problems: (1) Chrome presumably takes up a lot of memory (I haven't checked just how much) and I don't want it running in the background all the time, and (2) it runs as a user application, not as a service, so a user for whom this has been configured needs to be actually logged in on the computer. Google has a solution to (2), but I didn't manage to get it working.

Fortunately, I managed to adapt the Linux Python scripts from cloudprint to make a Windows solution, available here as "wincloudprint" (GPL3). Alas, installation is a bit of a bear due to license issues (I can't just include everything in one self-contained download): you need to install Python, pywin32, SumatraPDF and wincloudprint. (SumatraPDF is used for handling the actual printing.) All the instructions are at the link. I don't know that anybody other than myself will be interested in this, but I thought I'd share it.

Monday, January 27, 2014

1. If p partly grounds q and p is explanatorily irrelevant to r, then q does not strictly explain r.

2. Something's being ungettiered is explanatorily irrelevant to all philosophically important facts, except for knowledge facts.

3. That one's belief that p is ungettiered partly grounds that one knows p.

4. So, facts about what one knows do not strictly explain anything philosophically important.

The concept of strict explanation here is a kind of purification of explanation where irrelevant elements are removed. For instance, that George was late for work because George was mugged by a Polish-Canadian is not a case of strict explanation, because that the mugger was Polish-Canadian is irrelevant to explaining George's lateness.[note 1] But maybe it is a case of explanation. (Wes Salmon says that irrelevancy destroys explanation, but maybe sometimes it just renders it unstrict.)

So, in the order of explanation among philosophically important facts, knowledge facts come last, if at all.

1. Assuming you pass at least one of your classes this spring, we will hire you in May.

2. We will hire you in May.

To a literalist it sounds like 2 makes a stronger commitment than 1.

But suppose that you get the lowest passing grade in one of your classes, and Fs in all the others. Then if I said 2, I could say: "Well, of course, but I was assuming half-decent performance." But if I said 1, I can't say that!

What's going on? Normally when I say what I will do, there are some unstated conditions. But when I get into the business of stating conditions, I had better list all of them, or at least all the ones that are likely to be as relevant as the ones I list.

Sunday, January 26, 2014

The hot thing in epistemology these days is "knowledge first" epistemology. What I think about epistemology and knowledge is best summed up as: "knowledge last (if at all)". I don't know of anything philosophically interesting, besides other facts about what one knows, that is explained by facts about what one knows—as opposed to by facts about what one believes, what one is justified in, what is true, what one understands, etc. Facts about knowledge come last in the order of explanation, if at all. But I'm not an epistemologist—just a formal epistemologist—so nobody should care much about what I think about epistemology.

Thursday, January 23, 2014

In its classical formulation, Pascal's Wager contends that we have something like the following payoff matrix:

              | God exists | No God
Believe       |     +∞     |   −a
Don't believe |     −b     |    c

where a,b,c are finite. Alan Hajek, however, observes that it is incorrect to say that if you don't choose to believe, then the payoff is finite. For even if you don't now choose to believe, there is a non-zero chance that you will later come to believe, so the expected payoff whether you choose to believe or not is +∞.

Hajek's criticism has the following unhappy upshot. Suppose that there is a lottery ticket that costs a dollar and has a 9/10 chance of getting you an infinite payoff. That's a really good deal intuitively: you should rush out and buy the ticket. But the analogue to Hajek's criticism will say that since there is a non-zero chance that you will obtain the ticket without buying it—maybe a friend will give it to you as a gift—the expected payoff is +∞ whether you buy or don't buy. So there is no point to buying. So Hajek's criticism leads to something counterintuitive here, though that won't surprise Hajek. The point of this post is to develop a rigorous principled response to Hajek's criticism embodying the intuition that you should go for the higher probability of an infinite outcome over a lower probability of it.

A gamble is a random variable on a probability space. We will consider gambles that take their values in R*=R∪{−∞,+∞}, where R is the real numbers. Say that gambles X and Y are disjoint provided that at no point in the probability space are they both non-zero. We will consider an ordering ≤ on gambles, where X≤Y means that Y is at least as good a deal as X. Write X<Y if X≤Y but not Y≤X. Then we can say Y is a strictly better deal than X. Say that gambles X and Y are probabilistically equivalent provided that for any (Borel measurable) set of values A, P(X∈A)=P(Y∈A). Here are some very reasonable axioms:

1. ≤ is a partial preorder, i.e., transitive and reflexive.

2. If X and Y are real valued and have finite expected values, then X≤Y if and only if E(X)≤E(Y).

3. If X and Y are defined on the same probability space and X(ω)≤Y(ω) for every point ω, then X≤Y.

4. If X and Y are disjoint, and so are W and Z, and if X≤W and Y≤Z, then X+Y≤W+Z. If further X<W, then X+Y<W+Z.

5. If X and Y are probabilistically equivalent, then X≤Y and Y≤X.

For any random variable X, let X* be the random variable that has the same value as X where X is finite and has value zero where X is infinite (positively or negatively).

The point of the above axioms is to avoid having to take expected values where there are infinite payoffs in view.

Theorem. Assume Axioms 1-5. Suppose that X and Y are gambles with the following properties:

6. P(X=+∞)<P(Y=+∞)

7. P(X=−∞)≥P(Y=−∞)

8. X* and Y* have finite expected values

Then: X<Y.

It follows that in the lottery case, as long as the probability of getting a winning ticket without buying is smaller than the probability of getting a winning ticket when buying, you should buy. Likewise, if choosing to believe has a greater probability of the infinite payoff than not choosing to believe, and has no greater probability of a negative infinite payoff, and all the finite outcomes are bounded, you should choose to believe.
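Here is a minimal sketch of the comparison the Theorem licenses, applied to the lottery case. The representation (a gamble summarized by its probabilities of ±∞ and the expected value of its finite part X*) and the 0.001 gift probability are my own illustrative choices:

```python
from dataclasses import dataclass

@dataclass
class Gamble:
    """Summary of a gamble: probability of +infinity, probability of
    -infinity, and the expected value of its finite part X*."""
    p_plus_inf: float
    p_minus_inf: float
    finite_ev: float  # only required to be finite; plays no role below

def strictly_better(y: Gamble, x: Gamble) -> bool:
    """True when the Theorem's hypotheses guarantee X < Y:
    Y has strictly more chance of +inf and no more chance of -inf."""
    return (x.p_plus_inf < y.p_plus_inf
            and x.p_minus_inf >= y.p_minus_inf)

# Buying costs $1 and wins an infinite payoff with probability 9/10;
# not buying leaves a small chance (0.001, an assumed figure) of
# getting a winning ticket as a gift.
buy = Gamble(p_plus_inf=0.9, p_minus_inf=0.0, finite_ev=-0.1)
dont_buy = Gamble(p_plus_inf=0.001, p_minus_inf=0.0, finite_ev=0.0)

print(strictly_better(buy, dont_buy))  # True: you should buy
```

Note that the finite payoffs never enter the comparison: per the Theorem, the higher probability of the infinite outcome settles the matter.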

Proof of Theorem: Say that an event E is continuous provided that for any 0≤x≤P(E), there is an event F⊆E with P(F)=x. By Axiom 5, without loss of generality {X∈A} and {Y∈A} are continuous for any (Borel measurable) A. (Proof: If necessary, enrich the probability space that X is defined on to introduce a random variable U uniformly distributed on [0,1] and independent of X. The enrichment will not change any gamble orderings by Axiom 5. Then if 0≤x≤P(X∈A), just choose a∈[0,1] such that aP(X∈A)=x and let F={X∈A&U≤a}. Ditto for Y.)

Now, given an event A and a random variable X, let AX be the random variable equal to X on A and equal to zero outside of A. Let A={X=−∞} and B={Y=−∞}. Define the random variables X1 and Y1 on [0,1] with uniform distribution by X1(x)=−∞ if x≤P(A) and X1(x)=0 otherwise, and Y1(x)=−∞ if x≤P(B) and Y1(x)=0 otherwise. Since P(A)≥P(B) by (7), it follows that X1(x)≤Y1(x) everywhere and so X1≤Y1 by Axiom 3. But AX and BY are probabilistically equivalent to X1 and Y1 respectively, so by Axiom 5 we have AX≤BY. If we can show that AcX<BcY then the conclusion of our Theorem will follow from the second part of Axiom 4.

Let X2=AcX and Y2=BcY. Then P(X2=+∞)<P(Y2=+∞), X2* and Y2* have finite expected values, and X2 and Y2 never take the value −∞. We must show that X2<Y2. Let C={X2=+∞}. Since {Y2=+∞} is continuous, let D be a subset of {Y2=+∞} with P(D)=P(C). Then CX2 and DY2 are probabilistically equivalent, so CX2≤DY2 by Axiom 5. Let X3=CcX2 and Y3=DcY2. Observe that X3 is everywhere finite. Furthermore P(Y3=+∞)=P(Y2=+∞)−P(X2=+∞)>0.

Choose a finite N sufficiently large that NP(Y3=+∞)>E(X3)−E(Y3*) (the finiteness of the right hand side follows from our integrability assumptions). Let Y4 be a random variable that agrees with Y3 everywhere where Y3 is finite, but equals N where Y3 is infinite. Then E(Y4)=NP(Y3=+∞)+E(Y3*)>E(X3). Thus, Y4>X3 by Axiom 2. But Y3 is greater than or equal to Y4 everywhere, so Y3≥Y4 by Axiom 3. By Axiom 1 it follows that Y3>X3. But DY2≥CX2 and X2=CX2+X3 and Y2=DY2+Y3, so by Axiom 4 we have Y2>X2, which was what we wanted to prove.

Wednesday, January 22, 2014

You might think that someone who has no hair is definitely bald. Not so. For only someone who has a normally hirsute head can be bald, and it can be vague whether a particular hairless person has a head. For instance, we can imagine aliens that have a part that resembles our heads to some degree--maybe they have eyes on it but all their other sensory organs and mouth are on their hand--and it will be vague whether they have heads. So, if Bob is such an alien and has no hair anywhere, then it is definitely true that either Bob is maximally bald (if he has a head) or Bob is not bald at all (if he has no head).

This is inspired by a remark by Kenny Pearce that, in the case he was writing about, something was a paradigm case of F without being at all an intense case of F. The lesson is that it can be important to keep degrees of being F apart from whether one is definitely F or not. This remark may damage this argument.

Sometimes it is rational to choose a zero probability of an infinite good over a certainty of a finite good.

Suppose there are uncountably many benevolent people, each of whom is assigned a number in (0,1), the interval from zero to one non-inclusive. A random number Y is chosen in (0,1) with a continuous distribution (say, a uniform one, or a cut-off Gaussian). The people aren't informed of its value, but they know the setup of the story.

Person number x is now given this choice:

wager: if Y=x, then everyone gets $1; else, nothing happens.

don't wager: the person with number x/2 gets $1.

Then:

If everybody wagers, then everybody gets $1.

If nobody wagers, then all and only the people with numbers in (0,1/2) get $1.

So surely at least some, and probably all, should wager. But if you wager, you're choosing a zero probability of an infinite good (since the probability that your number matches Y is zero) over the certainty of a finite good. (The goods are to others, but since you're benevolent, that doesn't matter.)

Tuesday, January 21, 2014

If we live in an infinite universe, then when we look at total values and disvalues, total utilities, we will always run into infinities. There will be infinitely many persons, of whom infinitely many will provide instances of flourishing, after all. Now one might say: "So what? Our individual actions only affect a finite portion of that infinite sea of value and disvalue."

But this may be mistaken. For if there are infinitely many persons, presumably there are infinitely many persons who have a rational and morally upright generally benevolent desire. A generally benevolent desire is a distributive desire for each person to flourish. It is not just a desire that the proposition <Everyone flourishes> be true, but a desire in regard to each person, that that person flourish, though the desire may be put in general terms because of course we can't expect people to know who the existent persons are.

Now, if you have a rational and morally upright desire, then you are better off to the extent that this desire is satisfied (some people will think this is true even with "and morally upright" omitted). Thus, if you have a rational and morally upright general benevolence, then even if some men are islands, you are not. Whenever someone comes to be better off, you come to be better off, and whenever someone comes to be worse off, you come to be worse off. So if infinitely many people have a rational and morally upright general benevolence, whenever I directly do something good or bad to you, I thereby benefit or harm infinitely many people. And no matter how small the benefit or harm to each of these generally benevolent people, it surely adds up to infinity.

St. Anselm thought that our sins were infinitely bad as they were offenses against an infinite God. If we live in a multiverse, those of our sins that harm people also harm infinitely many people.

One might object that the generally benevolent person will only be infinitesimally benefitted or harmed by a finite harm to one person in the infinite sea of persons in the multiverse. That may be true of some very weakly benevolent people. But there will also be infinitely many generally benevolent people whose general benevolence will be sufficiently strong that the benefit or harm will be non-infinitesimal. After all, one can imagine a person who, if faced with a choice whether she should gain a dollar or a stranger she knows nothing about should gain a hundred dollars, would always prefer the latter option. Such a person counts benefits and harms to other people at at least 1/100th of what such benefits and harms to herself would count as. And so if I deprive anybody of a hundred dollars, each such generally benevolent person will, in effect, be harmed to a degree equal to a one-dollar deprivation. As long as there are infinitely many generally benevolent people with at least that 1:100 preference ratio, the argument will yield that a non-infinitesimal harm to anybody results in an infinite harm. And plausibly there would in fact be infinitely many people with a 1:1 preference ratio, or maybe even a 2:1 preference ratio (they would rather that others benefit than themselves).

So we cannot avoid dealing with infinite utilities if there are infinitely many persons. For each of our nontrivial actions will affect infinitely many persons, since infinitely many persons will have rational and morally upright desires that bear on the action.

Moreover, even denying the existence of an infinite multiverse, or of an infinite universe, won't get us off the hook. For even if we don't think such an infinitary hypothesis is true, we surely assign non-zero epistemic probability to it. The arguments against the hypothesis may be strong but are not so strong as to make us assign zero or infinitesimal probability to it. And a non-zero non-infinitesimal probability of an infinite good still has infinite expected utility.

Interestingly, too, as long as overall people flourish across an infinite multiverse, each such non-infinitesimally generally benevolent person will seem to be infinitely well off. Such are the blessings of benevolence in an overall good universe.

The above argument will be undercut if we think that one only benefits from the fulfillment of a desire when one is aware of that fulfillment. But that view is mistaken. An author who wrote a good book is well off for being liked even if she does not know that she is liked.

Monday, January 20, 2014

Imagine a sequence of blindfolded electrically charged people, at positions 1,2,3,... on a line in some coordinate system. Let f be a permutation of the natural numbers with the property that f(n) is even if and only if n is divisible by four—this permutation maps the numbers divisible by four onto the even numbers and the numbers not divisible by four onto the odd numbers. Suppose the person at position n has electric charge f(n). You're one of the people. And that's all the empirical data you have.

Question 1: What is the probability that your electric charge is even?

Obvious Answer: The people at positions on the line divisible by four have even electric charge. So the probability that you have even electric charge equals the probability that your position is divisible by four. But every position has exactly one person on it, and so the probability that your position is divisible by four is 1/4. Hence that's also the probability that your electric charge is even.

Inobvious Answer: Instead of mentally arranging people by position, arrange them by electric charge. There is exactly one person per natural number arranged by charge. And so the probability that your charge is even is 1/2.

The point here is that P(charge is even)=P(position is divisible by four). When we focus on the arrangement by position we are inclined to assign 1/4 to the right hand side, and hence we assign it to the left hand side, too. When we focus on the arrangement by charge we are inclined to assign 1/2 to the left hand side, and hence we assign it to the right hand side, too.
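Without taking sides, here is one concrete permutation with the required property, together with the two densities; the particular choice of f is mine, and any permutation meeting the condition would do:

```python
# A permutation f with f(n) even iff n is divisible by 4: multiples of
# 4 are mapped onto the evens in order, the rest onto the odds in order.
def f(n: int) -> int:
    if n % 4 == 0:
        return n // 2                # 4->2, 8->4, 12->6, ...
    rank = n - n // 4                # rank among non-multiples of 4
    return 2 * rank - 1              # 1->1, 2->3, 3->5, 5->7, ...

N = 100_000

# Arranged by position: fraction of the first N positions whose
# charge is even (positions divisible by 4).
by_position = sum(1 for n in range(1, N + 1) if f(n) % 2 == 0) / N

# Arranged by charge: each charge is held by exactly one person, so
# among the holders of charges 1..N, the even-charged fraction is:
by_charge = sum(1 for m in range(1, N + 1) if m % 2 == 0) / N

print(by_position)  # 0.25
print(by_charge)    # 0.5
```

The two natural counting methods really do stabilize at different answers, 1/4 and 1/2, for the very same question.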

Which is right?

Maybe we can say that one of the two orderings is more relevant for calculating probabilities. I suspect that anybody who takes this route will take the positional arrangement to be that one.

But is that really right? Imagine an angel that is assigning positions and charges to an infinite number of people subject to the rule that the person at position n gets charge f(n). The angel might start by first holding an infinite lottery where each person first gets given a position number, and then will use that position number to calculate the charge f(n) for the person. Or the angel might start by holding an infinite lottery where each person first gets given a charge number m, and then the position number is calculated as f−1(m). In the former case, our answer might seem to be 1/4, while in the latter it seems to be 1/2. We have no idea which the angel is going to do if the information listed is all we have, nor any idea whether it is going to be an angel, or a natural process, or whatever. Maybe then we should average the two probabilities, and get (1/2+1/4)/2=3/8 as our probability? But that doesn't seem right, either.

I suspect the right answer is that in this scenario there just is no answer to the question. And if that is right, then where there is a simultaneous infinity of cases in a reference class—as in some multiverse scenarios—there are no probabilities.

But what if instead of spatial arrangement we have temporal arrangement? Then I have an intuition that the temporal arrangement takes priority over the charge arrangement for the calculation of probabilities (and would even take priority over the positional arrangement, I guess). I don't know if I should keep or abandon this intuition. It might offer an important disanalogy between space and time.

Thursday, January 16, 2014

One of the fundamental concepts of bundle theory is a coinstantiation relation between properties. Interestingly, it may be possible to reduce coinstantiation to instantiation and entailment. Specifically, a bundle theorist may say that the Ps (some plurality of properties) are coinstantiated if and only if there is a property Q such that (a) Q entails each of the Ps and (b) Q is instantiated.
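As a toy illustration of the reduction, model a property as the set of points at which it is instantiated, entailment as set inclusion, and instantiation as nonemptiness; the conjunction (intersection) of the Ps then serves as the witnessing Q. The property names and sets below are invented:

```python
# Toy model of the proposed reduction of coinstantiation.
def entails(q: set, p: set) -> bool:
    """Q entails P iff Q is instantiated only where P is."""
    return q <= p

def instantiated(q: set) -> bool:
    return bool(q)

def coinstantiated(ps: list) -> bool:
    """The Ps are coinstantiated iff some Q entails each P and is
    instantiated. The intersection of the Ps is always such a Q
    when one exists, so testing it suffices."""
    q = set.intersection(*ps)
    return all(entails(q, p) for p in ps) and instantiated(q)

red = {1, 3}      # instantiated at points 1 and 3
round_ = {1, 2}   # instantiated at points 1 and 2
square = {2, 3}

print(coinstantiated([red, round_]))          # True  (witness: point 1)
print(coinstantiated([red, round_, square]))  # False (no common point)
```

The model is of course far cruder than a bundle theorist's ontology, but it shows the reduction is at least formally coherent.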

Wednesday, January 15, 2014

It is often said that if your credences are probabilistically inconsistent, e.g., because you assign probability 0.6 to p and 0.6 to its negation ~p, then you are subject to a Dutch Book, namely a bookie can present you a sequence of betting deals such that by your lights you will want to accept each one, but if you accept them all, then you are certain to lose money no matter how things turn out.

While this is often said, and there is indeed a theorem that roughly says the above, it's not exactly true when it's put as above.

Take the above case where you assign 0.6 to p and 0.6 to ~p. The standard way to construct a Dutch Book would be something like this. If you assign 0.6 to p, then you'd be happy to pay $5.50 for a ticket that wins $10 if p is true. And since you assign 0.6 to ~p, you'd be happy to pay $5.50 for a ticket that wins $10 if p is false. So if you're offered both bets, you'll be happy to accept, but then no matter whether p turns out to be true or false, you'll have paid out $11 but only win $10, a sure net loss of $1.

But the thought that you'll be happy to pay $5.50 for the ticket that wins $10 if p is true can be questioned. The justification for the thought goes like this: You will value the $10-if-p option at its expected value of (0.6)($10)=$6, calculated with your probability assignment. Hence, you will be happy to buy the $10-if-p option for any amount less than $6. And ditto for ~p.

However, this is not the only way to think about the case. The question whether to accept the first deal, namely to pay $5.50 for the chance to win $10 if p is true, can be thought of as the choice between the accept and reject moves in this game:

       |    p    |   ~p
accept |  $4.50  | −$5.50
reject |    $0   |    $0

Now the natural way to evaluate the value of the accept line is: (0.6)($4.50)+(0.6)(−$5.50)=−$0.60, since you assign 0.6 to p and 0.6 to ~p. And of course the value of the reject line is (0.6)($0)+(0.6)($0)=$0. So the reject move is the best one. And of course the same goes for the evaluation of the second deal offered by the bookie. So if you evaluate the choices according to the above method, you will in fact reject both of the bookie's deals.

In fact, if you adopt the above way of calculating whether you should accept a deal or not, then in the case where there is just one proposition whose truth or falsity is at issue, and you assign equal positive probabilities to its truth and to its falsity, then you will come up with the very same decisions as the consistent decision theorist who assigns 0.5 to p and 0.5 to ~p. Since the consistent decision theorist is not subject to a Dutch Book, neither are you.

So what just happened? Well, what happened is that there are two ways of figuring out whether to pay $5.50 for the ticket that wins $10 if p is true. The standard way is to calculate the value of the ticket, using the obvious calculation (0.6)($10) = $6, and then compare that to the price $5.50 of the ticket. Basically, we are comparing two values: the value of the ticket and the value of a sure $5.50. We are, further, assuming that the value of a sure $5.50 is, well, $5.50. But the latter assumption can be questioned when the probabilities are inconsistent. For while you might say that the value of a sure $5.50 is just (1.0)($5.50)=$5.50, you might also break up that sure $5.50 according to the two options at issue, namely p and ~p, and calculate the value of that sure $5.50 as (0.6)($5.50)+(0.6)($5.50) = $6.60. (Of course, that value looks wrong, but we shouldn't expect things to look right with inconsistent probabilities!) And now we ask whether it's worth giving up that sure $5.50 for the ticket, and we will say that it's not, since the ticket's value is $6 while the sure $5.50 is worth $6.60. This calculation is equivalent to the one implicit in the game-based calculation above.
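A quick sketch of the two evaluation methods for the first deal, computed directly from its terms (pay $5.50 for $10-if-p, with 0.6 credence in each of p and ~p):

```python
# The inconsistent credences: 0.6 in p and 0.6 in not-p.
cred_p, cred_not_p = 0.6, 0.6
price, prize = 5.50, 10.00   # pay $5.50 for a ticket worth $10 if p

# Standard evaluation: value the ticket by itself and compare to its
# price, treating the sure $5.50 as worth exactly $5.50.
ticket_value = cred_p * prize
accept_standard = ticket_value > price          # 6.00 > 5.50: accept

# Game-style evaluation: value the accept and reject rows over the
# common partition {p, not-p}.
eu_accept = cred_p * (prize - price) + cred_not_p * (-price)
eu_reject = cred_p * 0 + cred_not_p * 0
accept_game = eu_accept > eu_reject             # -0.60 > 0.00: reject

print(accept_standard, accept_game)  # True False
```

The −$0.60 here is exactly the $6.00 − $6.60 gap from comparing the ticket's value against the partition-relative value of the sure $5.50.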

Here's a more formal way to look at it. When you're evaluating the value of a betting portfolio B that has only finitely many values, a natural thing to do is to break up the sample space into a partition E1,...,En with the property that B takes a constant value V(B,Ei) on each of the Ei. Then the value of B is naturally given by the formula:

V(B,E1)P(E1)+...+V(B,En)P(En).

If the probabilities are consistent, then it doesn't matter which partition is chosen for the calculation, as long as the value of B is constant on each element of the partition. But when the probabilities are not consistent, then in general the value depends on the choice of partition. The standard calculation makes the following stipulation:

1. Let E1,...,En be the coarsest partition with the property that B takes a constant value on each Ei.

But that is not the only reasonable stipulation available. Here is another:

2. When comparing the values of bets B1,...,Bk, let E1,...,En be the coarsest partition with the property that each of the Bi takes a constant value on each of the Ej.

The second stipulation leads to results equivalent to those coming from thinking about things in terms of the table I gave earlier. This stipulation does mean that the comparative values of two bets will in general depend on what other bets they are being compared to, and hence we do not satisfy independence of irrelevant alternatives. But things like that shouldn't surprise us given that we're reasoning with inconsistent probabilities!

Moreover, the above is not a complete get-out-of-Dutch-Book card for inconsistent reasoners. There still will be probability assignments subject to Dutch Books. But it will not be the case that every inconsistent assignment is subject to a Dutch Book.

Further, there is an interesting practical question. We have good reason to think that real agents have inconsistent probabilities. When they make decisions on the basis of inconsistent probabilities, we can ask: What should they do, given that their probabilities are inconsistent? Should they decide using the standard method that partitions the sample space according to rule (1) or should they partition it via rule (2)? There is some reason to think that rule (2) is actually the better one for inconsistent reasoners—after all, it less often leads to Dutch Books!

Tuesday, January 14, 2014

1. If p explains q1 and p explains q2, then p explains the conjunction of q1 and q2.

Now suppose that p says that persons x1,x2,...,x100 were the buyers of tickets to a fair lottery, with each buying one ticket. Let qi be the proposition that person xi did not win. Suppose that in fact x100 won. Then p explains q1 with a perfectly fine 99/100 stochastic explanation. And by the same token p explains q2, and so on up to q99. So by (1), p explains the conjunction of q1,...,q99. But the probability of that conjunction being true given p is only 1/100. So we have a stochastic explanation despite a low probability.
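A quick check of the two probabilities by enumerating the hundred equally likely outcomes:

```python
from fractions import Fraction

# A fair 100-ticket lottery: outcome k means person x_k wins.
n = 100
outcomes = range(1, n + 1)

# P(q_1 | p): person x_1 does not win.
p_qi = Fraction(sum(1 for k in outcomes if k != 1), n)

# P(q_1 & ... & q_99 | p): none of x_1..x_99 wins, i.e. x_100 wins.
p_conj = Fraction(sum(1 for k in outcomes if k == 100), n)

print(p_qi)    # 99/100
print(p_conj)  # 1/100
```

Each conjunct individually has probability 99/100, while the conjunction that premise (1) says p explains has probability only 1/100.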

One can even rig cases where one has a stochastic explanation despite zero probability if (1) extends to infinite conjunctions.

Saturday, January 11, 2014

A simple is something that lacks proper parts. An extended simple is a simple that occupies a region of space that is more than a point.

1. If I am not simple, then I think with a proper part of me.

2. I do not think with a proper part of me.

3. So, I am a simple.

4. I am extended.

5. So, I am an extended simple.

6. So, there is an extended simple.

The thought behind (1) is that if I am not simple, then my brain and/or my soul are going to be parts of me in the true ontology, and surely if the true ontology contains them, then I think with them. The thought behind (2) is that if A is a proper part of B, and I think with A, then A is a better candidate than B for being me. And (4) follows from the fact that I am 182 cm tall.

While I am inclined to accept (6), I find the argument for (2) weak. I would find it stronger if one could conclude from the fact that I think with A that A thinks, but I don't see that that follows.

Maybe a better argument:

7. No particle occupies just one point.

8. All particles occupy space.

9. Some particles are simple.

10. Something that occupies space but does not occupy just one point is extended.

11. So, there is an extended simple.

The thought behind (7) is that in real life no particle has a wavefunction that is concentrated at one point.

Thursday, January 9, 2014

Here are some curious forms of argument that I want to play with. First:

1. Doctrine D is so absurd that no one could believe D while fully realizing its absurdity, except by a miracle.

2. Someone believes D while fully realizing its absurdity.

3. So, a miracle has occurred.

Given the human capacity for believing the unbelievable, it is going to be hard to support (1) for any interesting D (except maybe: p and not p).

Let's try this:

4. Doctrine D is so absurd that no one could reasonably believe D while fully realizing its absurdity, except by a miracle.

5. Someone reasonably believes D while fully realizing its absurdity.

6. So, a miracle has occurred.

In arguments of this sort, the difficulty has shifted to (5). But we might try the following. Start by observing that a person doesn't become unreasonable simply by having a trivial belief that isn't reasonable. But to center one's life on a belief that isn't reasonable might be enough to render one unreasonable:

7. If at least one of the beliefs central to x's life is not reasonable, then x is an unreasonable person.

8. x is not unreasonable.

9. One of the beliefs central to x's life is D.

10. x fully realizes the absurdity of D while believing D.

11. So, someone reasonably believes D while fully realizing its absurdity.

The conclusions of the above arguments were that a miracle has occurred. Can we conclude that D is true? Well, we would have to look at our best explanation of the miracle. If it involves God, then we have reason to think D is true. Here's an argument that avoids the detour through miracles.

12. Doctrine D is so absurd that no reasonable person would hold D as a belief central to her life while fully realizing D's absurdity unless she knew D to be true.

13. Some reasonable person held D as a belief central to her life while fully realizing D's absurdity.

14. So, somebody knew D to be true.

15. So, D is true.

I think the big difficulty with arguments of this form in the cases most familiar to me, namely with D a doctrine from the Christian tradition, is that people who are paradigm examples of rationality, like Thomas Aquinas, do not take the doctrine to be really absurd.

Sunday, January 5, 2014

Suppose Belgium is being attacked by a vicious enemy who is particularly targeting civilians—especially children—in a terroristic campaign. Currently, Belgium has an all-volunteer army. However, experts with excellent predictive track records estimate that conscripting 200,000 men for about a year will save almost that many lives, mainly those of children. These 200,000 draftees would be subject to the rigors and hardships of a tough year-long military campaign. Surprisingly, however, due to recent improvements in personal armor, the campaign is expected to result in fewer than 100 deaths of draftees.

There is nothing morally objectionable about the government having such a draft, and those called up for it would have a moral duty to serve. Note that the death-rate projected for the campaign is about 50 times lower than the US military death-rate in WWII. Such a low death-rate makes this draft close to obviously right.
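For what it's worth, the "about 50 times lower" comparison can be sketched as follows. The WWII figures below are rough public estimates, not from the post:

```python
# Approximate WWII figures (rough public estimates, for illustration only):
# about 405,000 US military deaths out of roughly 16.1 million who served.
wwii_deaths, wwii_served = 405_000, 16_100_000
wwii_rate = wwii_deaths / wwii_served  # roughly 2.5%

# The hypothetical Belgian campaign: fewer than 100 deaths among 200,000 draftees.
draft_rate = 100 / 200_000  # 0.05%

print(round(wwii_rate / draft_rate))  # a ratio of roughly 50
```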

Now compare this to Judith Jarvis Thomson's arguments—like the famous violinist one—for abortion. In the Belgian draft case, in pregnancy, and in Thomson's cases, people lose significant aspects of their ordinary freedoms and capabilities, and do so for the sake of saving the lives of others. I think the draft case shows that Thomson's cases underestimate the degree to which we can be legitimately morally required to make significant sacrifices to save the lives of others.

Notice, too, three differences between the draft and violinist cases. First, while the particular people saved by the draft in my story are primarily children, and hence not the draftees themselves, the practice of instituting a draft in such dire wartime circumstances is one from which all can potentially benefit. If I am to be drafted to save the life of some child and I am considering running off, I should reflect that it's just a matter of my luck that the invasion happened now rather than when I was a child, when others would have been drafted to protect me. I need to do my bit or else I will be a freerider. It is not so in the violinist case. I am not a famous violinist. People are not at all likely to kidnap anybody to provide life support for me! The pregnancy case here is like the draft case, but even more so. For we are all not merely potential beneficiaries of the practice, but actual beneficiaries, since we all came into existence through pregnancy.

Second, in the draft case there is a not insignificant chance that the child whose life one's draft service will save is one's own child, while the violinist is a stranger. But in the case of pregnancy the child is almost always one's own (the exception being cases of surrogate pregnancy). Again, the pregnancy case is like the draft case, only further in that direction.

The third difference may seem to cut in the opposite direction, however. Both the pregnancy and the violinist cases involve direct use of one's internal organs. But in the draft, one's womb and one's kidneys are not being drafted—it is the use of external organs, like hands and legs, that is required of one. While there may be a small difference along these lines, I think it is not particularly significant. The soldier fights not just with hands and legs, but also with the brain, and is required to do so. To have one's kidneys used, as in the violinist case, is no more invasive of one's person than to have to obey orders, to have to focus one's brain and mind on the tasks that one's commander requires. The loss of autonomy on a military campaign is, if anything, greater than in the pregnancy and violinist cases.

Note, too, that the draft argument gives support for two claims. First, it supports a moral claim: It is one's duty, on pain of freeriding, to do one's bit for saving lives. Second, it supports a policy claim: It can be both permissible and reasonable for the state to require this of one.

Saturday, January 4, 2014

The traditional Christian view that God is unchanging has been accused of being a fruit of Greek ideals of perfection (and what's wrong with that?). Here I want to motivate this view by thinking about our mental life.

Our conscious states are divided between times in much the way that the two centers of consciousness of a split-brain patient are divided from each other. My present state of consciousness includes only shadowy reminders of what I was aware of five minutes ago and vague premonitions of what I am about to be aware of. My temporality makes me like a patient split into untold numbers of centers of consciousness associated with different times (perhaps in a continuous way, with overlap between nearby centers, since many of our mental states themselves persist over short stretches of time). We are deeply internally disunited—our "transcendental unity of apperception" is quite limited. Such deep internal division and disunion is surely not what the perfect being would experience (at least not in his proper nature—an Incarnation might make for such an experience, and the above reflection should make us grateful that he took up this deeply divided existence for our sake). This is not a matter of some "Greek ideal" of perfection. It is simply the intuition that mental division within oneself is an imperfection.

The above argument presupposes eternalism. But presentism introduces an even greater limitation into our mental life, by making our future and past conscious states not be ours at all.

So we have good reason to think of God's mental life as all-encompassing, of God living an infinitely rich mental life all at once, as Boethius said. But God is a mind and surely all of his mental states are conscious. This gives us good reason to think God is unchanging.

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.