Thursday, October 31, 2013

Here's a decision theoretic picture of how to make the decision between A and B. First, gain as much knowledge K as is reasonably possible about the laws and present conditions in the universe. The more information, the better our decision is likely to be (cf. Good's Theorem). Then calculate the conditional expected utility of the future given A and K, and do the same for B. Then do the action whose conditional expected utility is higher.

Let U(A,K) and U(B,K) be the two conditional expected utilities. (Note: I mean this to be neutral between causal and epistemic decision theories, but if I have to commit to one, it'll be causal.) We want to make our decision on U(A,K) and U(B,K) for the most inclusive K we can.
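The decision rule is simple enough to sketch in code. The outcome probabilities and utilities below are invented purely for illustration; U_A and U_B play the role of U(A,K) and U(B,K):

```python
# A toy sketch of the decision rule: compute the conditional expected utility of
# each action given background knowledge K, then pick the action with the higher
# value. All numbers here are hypothetical.
def expected_utility(outcomes):
    """outcomes: list of (probability of outcome given the action and K, utility) pairs."""
    return sum(p * u for p, u in outcomes)

U_A = expected_utility([(0.7, 10), (0.3, -5)])  # hypothetical outcomes of A given K
U_B = expected_utility([(0.5, 8), (0.5, 0)])    # hypothetical outcomes of B given K
choice = "A" if U_A > U_B else "B"
print(U_A, U_B, choice)
```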

Now imagine that we could ask an angel for any piece of information I about the present and the laws (e.g., by asking "How many hairs do I have on my head?"), and then form a new set of information K2 including I on which to calculate U(A,K2) and U(B,K2). Then we should ask for as much information as we can. But now here is a problem: if determinism holds, then once we get enough information, Kn will entail which of A and B happens. Let's say it entails A. Then U(B,Kn) is undefined. This informs one that one will do A, but makes decision-making impossible.

So how much cost-free information should we get from the angel? If we ask for so much that it entails what we're going to do, we won't be able to decide. If our choice is indeterministic, we have a simple principled answer: Ask for everything about the laws and the present. But if our choice is determined, we must stop short of full information. But where?

Perhaps we ask for full information about the laws and about everything outside our minds. But the contents of our minds are often highly relevant to our decisions. For instance, if we leave the contents of our minds out of our decision-making, we won't have information on what we like and what we dislike. And in some decisions, such as when deciding whether to see a psychologist, information about our character is crucial.

Here's another interesting question. Our angel knows all about the present and the laws. It seems that he's got all the information we want to have about how we should act. So we just ask: Given all you know, does A or does B maximize utility? And he can't answer this question. For given all that he knows, only one of the two conditional utility values makes sense.

Of course, a similar problem comes up in asking an omniscient being in a case where our choices are indeterministic. We might think that we can make a better decision if that being tells us about the future. ("What will AAPL close at tomorrow?") But there is a bright line that can be drawn. We cannot use in our decision any information that depends on things that depend on our decision, since then we have a vicious loop in the order of explanation. So an omniscient being metaphysically cannot give us information that essentially depends on our decisions. (In particular, if we're deciding whether to buy AAPL stock, he can't tell us what it will close at tomorrow, unless he has a commitment to make it close at that price no matter what we do, since without such a commitment, what it will close at tomorrow depends—in a complex and stochastic and perhaps chaotic way—on whether we buy the stock today.)

Let me end with this curious question:

If you have a character that determines you never to ask for help, isn't that a reason to get professional help?

I think this is an interesting question both for compatibilists and for incompatibilists.

Wednesday, October 30, 2013

Lewis and Sider have argued that if restricted compositionality is true—some but not all pluralities of two or more objects compose a whole—then there will be cases where it's vague how many objects there are. For instance, imagine two universes, A and B, each with the same finite set of n particles with the same intrinsic properties. But in A, the particles are neatly arranged into galaxies, trees, tables, buildings, etc. And in B there is just a blooming buzzing confusion. If restricted compositionality holds, then, assuming there are no immaterial objects, universe B has exactly n or n+1 objects—it's just too messy to have any cases of composition, except perhaps for the universe as a whole (that's why it might be n+1 rather than n). But A is much like our universe, and so we would expect lots of cases of composition, and hence the number of objects will be a lot more than n+1, say n+m for some large m. However, we can now imagine a continuous sequence of universes ranging from A to B, differing continuously in how the particles are arranged. As we move along that continuous sequence, the number of objects will have to change from n+m down to at most n+1. But it is incredible that the object count should sharply change due to a very tiny shift in particle positions. Instead, the object count will at times be vague. But how many objects there are is a matter of which sentences using universal quantification, conjunction, negation and identity are true. And quantification, conjunction, negation and identity are not vague. So we have vagueness where we cannot have vagueness.

There may be some technical problems with the argument as I formulated it, given the assumption of no immaterial objects. Maybe we can't do without immaterial entities like God or numbers. One could reformulate the argument to restrict the counting to material entities, but "material" might actually be a vague term. Perhaps the best thing to do is to assume that these universes have no immaterial contingent entities, and then just count contingent entities. Contingency shouldn't be a vague matter, after all. The Aristotelian may balk at this. For it may well be that a necessary condition for a bunch of material entities to compose a whole is that they have a form, and forms are immaterial but contingent. Maybe, though, "form" is not vague, and so we can just count the contingent non-forms.

But talking of forms suggests a more serious difficulty. If there are Aristotelian forms, then how many material objects there are may well not supervene on how material objects are spatiotemporally arranged and what intrinsic properties they have. For objects to come to compose a whole, there must come into existence a form. There is nothing absurd about there being sharp laws of nature specifying under which precise conditions a form comes into existence. There is no need for the laws of nature to be continuous (and the possibility of fundamental discreteness is empirically still open). Or perhaps God decides on a case-by-case basis whether to create a form. Then there is no vagueness as to how many material objects there are: the number of material objects equals the number of forms of material objects that in fact inform some matter (the souls of those awaiting the resurrection are forms of material objects but temporarily fail to inform any matter). Of course in transitional cases we won't have much confidence whether some objects compose a whole, but that's just because we are unable to see forms except through their normal effects.

Tuesday, October 29, 2013

The most surprising and important event of the second half of the 20th century was the nonoccurrence of a global nuclear war. In an earlier post, I suggested that the probability of such a war was fairly high given naturalism and fairly low given theism, and hence the nonoccurrence of such a war is evidence for theism. I want to add one more thing to that line of thought: the nonoccurrence of a global nuclear war was probably the most prayed-for event of the second half of the 20th century. All the prayers for world peace were first and foremost prayers for peace between East and West, prayers that there be no global nuclear war. And there is peace between East and West, and there was no global nuclear war.

Monday, October 28, 2013

In this picture from 30 Rock's "Brooklyn Without Limits" episode, Kenneth has only two legs and yet Kenneth has four legs. This sounds like a contradiction, but of course it is not--we can see that it's not.

How shall we resolve the apparent contradiction? Perhaps: Kenneth has two human legs and four table legs. But that suggests that Kenneth has six legs, and that doesn't seem right to say. Maybe we can say that his human legs are also table legs, so he has only four legs: two of them doing double-duty for table legs and human legs, and two of them doing double-duty for table legs and human arms. Maybe. But even the statement "Kenneth has four legs" seems wrong or at least misleading without qualification.

Much better to qualify with a qua: Kenneth qua human has only two legs. Kenneth qua table has four legs.

This should remind us of one of the standard solutions to apparently contradictory talk of Christ incarnate. Christ is eternal. Christ is conceived in time. Christ has boundless knowledge. Christ's knowledge is bounded. And so on. The solution is to say things like: Christ qua God is eternal and has boundless knowledge. Christ qua human is conceived in time and has bounded knowledge--and has two legs.

The naturalness of qua talk in the case of Kenneth should make us less suspicious of the incarnational case.

Thursday, October 24, 2013

David Lewis thinks that even if determinism holds, we have the ability to act in ways other than what is entailed by the pre-human state of the universe, P, and the laws, L. Were we to have so acted, a "small miracle" would have happened. The actual world's laws of nature would no longer have held. Perhaps in that world there wouldn't be enough laws for determinism or there would have been a new law with exception clauses. Here is a cheap shot:

On this theory, billions and billions of people daily had the ability to act such that were they to act so, the laws of nature would have been insufficient for determinism or would have had exception clauses. Why did none of them ever exercise that ability?

Is this just a cheap shot? Maybe. The best answer I see to it is:

Look: this is all hypothetical. We don't actually live in a deterministic world. There will be deterministic worlds with neat exceptionless laws, but there will be many more worlds nearby to them with many such exceptions, so we should not expect ours to be, of all of them, the world without exceptions. But for the purposes of considering the compatibility of free will with determinism, we posited this unlikely world.

This is a decent answer. But it does suggest that we would never have reason to think we live in a softly deterministic world with very uniform laws. For near any such world there will be lots of worlds with less uniform laws, worlds that people had the power to actualize simply by acting differently.

Let P be a complete description of the ancient past and let L be the laws. Lewis agrees that our freedom includes the ability to act in a way that falsifies the conjunction P&L but denies that it includes the ability to act in such a way that were we to act so, P would be false.

But here is a plausible thesis. Fundamental particles are essentially tied to laws of nature. There would be no electrons or photons if the laws of electrodynamics were different. This is clear on the Aristotelian picture on which laws are grounded in the powers of objects, but is also plausible without that picture.

Given this plausible thesis, P entails L. And hence P&L is logically equivalent to P. Thus if we can act in a way that falsifies the conjunction P&L, we can act in a way that falsifies P. Lewis denies the thesis, but it is still plausible.

Wednesday, October 23, 2013

Some compatibilists—e.g., Vihvelin and Fara—think that something that merely blocks the possibility of your trying to do A but doesn't block your disposition to do A when trying to do A does not take away your present power to do A. Two examples of such blocks are (a) Frankfurt cases where you'd be counterfactually prevented from doing A and (b) being determined not to do A.

But there is an interesting family of cases where you can only do something when you try hard enough. For instance, you can run distance D in time T when you try really hard, and you can only try that hard when you know a bear is chasing you. In a case like that, even though you are disposed to do A when you try hard enough, anything that blocks you from the possibility of trying hard enough also blocks you from being able to do A. Thus the absence of a bear, or even just ignorance of the presence of the bear, blocks you from being able to run D in T.

So where trying hard to do A is needed for you to do A, anything that blocks your possibility of trying hard blocks your ability to do A.

Now, anything that blocks you from the possibility of trying also blocks you from trying hard. So in cases where trying hard to do A is needed to do A, determinism and Frankfurt cases block you from being able to do A.

So in cases where success requires trying hard, blocks to trying remove the ability to succeed. But why should this only be true where success requires trying hard? So in cases where success requires trying, blocks to trying remove the ability to succeed, too.

Tuesday, October 22, 2013

1. (Premise) I am the only entity that has all the conscious states I presently have.

2. (Premise) I am breathing.

3. (Premise) My soul isn't breathing.

4. (Premise) My brain isn't breathing.

5. I am neither my soul nor my brain. (2-4)

6. Neither my soul nor my brain has all the conscious states I presently have. (1,5)

7. (Premise) If my soul or my brain is conscious, it has all the conscious states I presently have.

8. So, neither my soul nor my brain is conscious. (6,7)

If my soul or my brain grounds my consciousness, it does not ground my consciousness by being conscious. It grounds my consciousness by having non-conscious states that ground my consciousness. These non-conscious states will then be more fundamental than my conscious states.

In particular, substance dualists should agree with naturalists that conscious states are non-fundamental. Only non-substance dualists, like hylomorphic dualists and property dualists, have a hope of saying that conscious states are fundamental. And of course a similar argument can be run for other mental states besides the conscious ones.

In practice, some substance dualists will say that I am my soul. If so, then I don't breathe (at most I cause breathing), I don't weigh anything, and so on.

Monday, October 21, 2013

Williamson gave a lovely argument that infinitesimals can't capture the probability of an infinite sequence of heads in fair independent trials: Let Hn be the event that we have heads in each of the trials n,n+1,n+2,.... Then, P(H1)=(1/2)P(H2). But P(H1)=P(H2) since they're both just the probability of getting an infinite sequence of heads. Thus, P(H1)=(1/2)P(H1) and so P(H1) is zero, not infinitesimal.
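The two relations driving the argument can be checked numerically for finite truncations of the game (a sketch; the function name is mine):

```python
# For a truncated game of N fair tosses, let h(n, N) be the probability of heads
# on every toss from n through N. The halving relation P(H1) = (1/2) P(H2) holds
# at every truncation, and h(1, N) -> 0 as N grows -- matching the conclusion
# that the untruncated probability satisfies P(H1) = (1/2) P(H1) and so is 0.
def h(n, N):
    return 0.5 ** (N - n + 1)

for N in (10, 20, 40):
    assert h(1, N) == 0.5 * h(2, N)  # exact: powers of two are exact in floats
print(h(1, 40))  # already astronomically small
```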

It turns out that a somewhat similar result holds for Popper functions as well. For technical reasons, I need a bidirectionally infinite sequence of coin tosses, one for each integer (positive, zero or negative). Our probability space Ω of infinite sequences will then be the set of all functions from the integers Z to {H,T}. Let G be the group of transformations of Ω generated by the reflections of Z (compositions of two reflections yield the translations). In other words, G is generated by the transformations Ra, where a is an integer or a half-integer, and (Ras)(n)=s(2a−n) for any sequence s in Ω.

Let F be any G-invariant field of subsets of Ω that contains all the Hn. A very plausible symmetry condition on the Popper function on Ω representing the doubly infinite sequence of tosses then is:

For any g in G and any A and B in F, P(A|B)=P(gA|gB).

In other words, if we flip the sequences around, we don't change the probabilities. E.g., the probability of getting heads on tosses 2,3,4,5,... conditionally on B is the same as the probability of getting heads on tosses 1,0,-1,-2,... conditionally on R1.5B. This symmetry condition is related to Williamson's symmetry assumption that P(H2)=P(H1).

A second obvious condition is that the probability of getting heads on tosses 1,2,3,... given that one has heads on 2,3,... is equal to the probability of getting heads on toss 1, i.e., is 1/2: P(H1|H2)=1/2.

Jim killed ten people to save ten people and a squirrel.

Sam killed ten people to save ten people and receive a yummy and healthy cookie that would have otherwise gone to waste.

Frederica killed ten people to save ten people and to have some sadistic fun.

If utilitarianism is true, then in an appropriate setting where all other things are equal and no option produces greater utility, the actions of Jim, Sam and Frederica are not only permissible but are duties in their circumstances. But clearly these actions are all wrong.

I find these counterexamples against utilitarianism particularly compelling. But I also think they tell us something about deontological theories. I think a deontological theory, in order not to paralyze us, will have to include some version of the Principle of Double Effect. But consider these cases (I am not sure I can come up with a good parallel to the Frederica case):

John saved ten people and a squirrel by a method that had the death of ten other people as a side-effect.

Sally saved ten people and received a yummy and healthy cookie that would have otherwise gone to waste by a method that had the death of ten other people as a side-effect.

These seem wrong. Not quite as wrong as Jim's, Sam's and Frederica's actions, but still wrong. These actions trivialize the non-fungible loss of human life. The Principle of Double Effect typically has a proportionality constraint: the bad effects must not be out of proportion to the good. It is widely accepted among Double Effect theorists that this constraint should not be read in a utilitarian way, and the above cases show this. Ten people dying is out of proportion to saving ten people and a squirrel. (What about a hundred to save a hundred and one? Tough question!)

Sunday, October 20, 2013

Two days ago, I posted a post on this topic based on an inequality that I had claimed to have proved but that was just plain false. Oops! The surprising numerical results in that post were also wrong.

But let's re-do it. Suppose that to count as being confident that p, you need P(p)≥α. But if you're confident that p, you still need not be confident that you will permanently remain confident, even assuming that you are sure you will remain rational. For if you are confident but barely so, there may well be a good chance that future data will push you below the confidence level.

Say that you are secure from future rational defeat when you are rationally confident that future data won't rationally push you below the confidence level. Assume you know for sure that:

1. Your probabilities are coherent.

2. They will remain coherent.

3. You won't forget data.

4. If N is the number of pieces of evidence you will receive in your (earthly) life, then E[N]<∞.

5. When you receive a piece of evidence in your life, you also know for sure that you're receiving a piece of evidence in your life.

Assumptions (4) and (5) are merely technical. Basically, they say that your expectation for your length of (earthly) life is finite and that you're conscious of being in your earthly life when you are.

Under these assumptions, if the confidence level is α, i.e., if P(p)≥α is what it is to be confident in p, then it is enough for security in p that P(p)≥1−(1−α)². And this inequality is sharp: for any P(p)<1−(1−α)², one can imagine an experiment that has probability greater than 1−α of pushing your credence below α.

Security from future rational defeat (in this life) normally requires quite a bit more than mere confidence. If your confidence level is 0.95, then the security level might need to be as high as 1−(0.05)²=0.9975, but no higher.

Of course, you might know for sure that no more information will come in. Then the security level will equal the confidence level. But normally you don't know that. So let's let the absolute security level be that level of credence which suffices for security regardless of what you know about the kinds of future data you might receive. That level just is 1−(1−α)² for a confidence level α. The graph shows the absolute security level for each given confidence level.

Suppose confidence α is the credence you need for knowledge. Then absolute security ensures that no matter what expectations you have about future information, you have enough credence to know that your knowledge won't be defeated.

Absolute security is something akin to moral certainty.

More generally, the probability that, given a current credence of β, your credence will ever dip below α is less than (1−β)/(1−α). The above inequality for the security level is derived from this one, and this one follows from Doob's optional stopping theorem and the fact that your conditional expectations for p form a martingale when we condition on an increasing sequence of σ-fields (the increasingness just encodes the fact that you don't forget); we can then stop that martingale once we reach the end of information or hit either α or 1.
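The two formulas can be computed directly, and one can check that the security level is exactly the credence at which the dip bound falls to 1−α (a sketch; the function names are mine):

```python
def security_level(alpha):
    """Credence sufficient for security at confidence level alpha: 1 - (1 - alpha)^2."""
    return 1 - (1 - alpha) ** 2

def dip_bound(beta, alpha):
    """Optional-stopping bound on the probability that a credence currently at beta
    ever dips below alpha: (1 - beta) / (1 - alpha)."""
    return (1 - beta) / (1 - alpha)

beta = security_level(0.95)
print(beta)                   # about 0.9975
print(dip_bound(beta, 0.95))  # about 0.05 = 1 - alpha, so security holds
```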

Friday, October 18, 2013

This is an extract from my One Body book. I defend arranged marriage as a morally acceptable option, though of course only in the case where both members of the couple freely consent (without that, there is no marriage). Yet it seems that one ought to have a romantic love for the person one is going to marry. I respond:

[L]ove is always a duty, and the love needs to be appropriate to the relationship. Thus, it is one’s duty to love the person whom one is to marry, and it is a duty to love the person in the way appropriate to the person whom one is to marry. Of course, if one does not know anything about this person, the love cannot be very specifically developed. But it can involve the three aspects of all love: one has a disposition to benefit this person (should one find out what the person needs), one appreciates the other at least as a person, a creature of God, a fellow human being and someone with whom one can engage in sexual activity, and one intends such a union with this person. (The sexual aspects of this union may be the easiest to intend for a young and sexually curious person!) All the while, one can remain open to the mystery, the surprise of the other person. And in this way, the arranged marriage is not so different from an unarranged “love match”. In a love match, too, one must remain open to the enfolding mystery of the other person, traditionally including a lack of sexual knowledge of the other person. In any case, marriage and sex themselves can change people in unpredictable ways, and some of the knowledge of the person prior to marriage is likely irrelevant. Every love must involve a willingness to adjust its form to changes in the beloved and in the relationship, and must remain open to new things.

It is not so much wrong to marry someone that one does not love, as it is wrong not to love the person one marries. Love is required of us always, under all circumstances. It is wrong not to love the person with whom one shakes hands, the person one sentences to two years in jail, the person one gives a free meal to, or the person one marries. Of course a different form of love is required in each case. However, what primarily distinguishes the different forms of love is the type of real union toward which the love is directed and the aspects under which the beloved is appreciated. If one marries, one ought to have a directedness toward sexual and personal union with the other person, and an appreciation of the other person insofar as this person can be united with. But for this one needs only to know the other person as a fellow human being of the opposite sex with whom one can unite sexually.

Wednesday, October 16, 2013

You find a ticking bomb which will go off in five seconds. There is a pad on it, and if you enter the right five-digit number in the time remaining you will defuse the bomb. You frantically enter "12345", "54321" and "91101", and none of these work. The bomb goes off. As it happens, had you entered 88479, a number that didn't occur to you, the bomb would have been defused. You surely could have entered 88479. Are you responsible for failing to defuse the bomb?

Of course not. But why not? You were in some sense able to.

In cases like this, a natural suggestion is one made by Gerald Harrison: you were unable to do it "because ... doing so is contingent upon something highly improbable happening, namely ... entering the right combination."

But I don't think this has much to do with improbability as such. Suppose I'm a habitual criminal and I come across a sure opportunity to steal a million dollars with almost no chance of being caught, and sure enough I avail myself of the opportunity. The probability that I refrain from this was nonzero, but it was very small—perhaps no bigger than the probability of entering the right combination in the above case. Yet despite the low probability, I am responsible.

Or vary the bomb case. There is time to enter only one combination. And you know it's either 12345 or 54321. You try 12345, and fail. You are not responsible for the failure to defuse, even though your defusing the bomb was not "contingent upon something highly improbable happening".

So why aren't you responsible in the two bomb cases? It sounds right to me to say that in both bomb cases you did your best, while in the criminal case I failed to do my best. And neither the probabilities of success (low in the first bomb case and in the theft case, but moderate in the second bomb case) nor those of trying to do one's best (high in the bomb cases but low in the theft case) seem to settle any of the cases either way.

This suggests the following principle:

If you always tried to do the best you could as hard as you could, you are not culpable for the bad outcomes of any of your actions.

But if determinism rules out alternate possibility—as I think, pace Lewis and Hume and others—and if determinism is true, then we have always tried to do the best we could as hard as we could. For there never was anything else we could do, nor any other way of trying we could engage in.

Tuesday, October 15, 2013

I wonder if this very neat argument can't be used to provide another Grim Reaper style argument against an infinite past? The argument nicely fits with the intuition that the Grim Reaper paradox induces in me, namely that no event can have an infinite number of events in its causal history.

Consider the principle that if an action benefits some and harms none, then it's permissible. Now imagine a lottery run by uniformly choosing a random number between 0 and 1, with each number equally likely. There are infinitely many tickets, each bearing a different number between 0 and 1. Each ticket has been sold to a different person (there are lots of people in this story!). At night, I steal all the tickets. I then rearrange them in the following way. I get all the tickets numbered between 0 and 0.990, and my best friend gets all the tickets numbered between 0.990 and 0.999. I then redistribute the remaining tickets to all the people who bought tickets. So by morning, my friend and I have all the tickets numbered between 0 and 0.999, but everybody who had a ticket still has a ticket, and the ticket she has is just as good as the one she had before. I have made it pretty sure that I would win, but I haven't lowered anybody else's chances of winning.
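The bookkeeping behind the shuffle can be checked by treating ticket holdings as sets of numbers in [0,1] and computing their total length (a sketch; the names are mine):

```python
# Ticket holdings modeled as unions of disjoint intervals (lo, hi) in [0, 1];
# a uniform draw lands in a holding with probability equal to its total length.
def chance(intervals):
    return sum(hi - lo for lo, hi in intervals)

mine_after = chance([(0.0, 0.990)])      # I hold the tickets in [0, 0.990)
friend_after = chance([(0.990, 0.999)])  # my friend holds [0.990, 0.999)
# Each original buyer held, and after the shuffle still holds, a single number:
# a degenerate interval of length 0, so an individual chance of 0 before and after.
smith_ticket = chance([(0.458, 0.458)])
print(mine_after, friend_after, smith_ticket)
```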

Bracketing contingent considerations of public peace and of positive law, it seems that:

1. I have harmed none—no one's chance of winning has gone down.

2. I have benefited some, namely myself and my friend—our chances of winning have gone up.

3. I have done wrong.

Thus, an action that benefits some and harms none can still be wrong.

One might object. Suppose ticket number 0.458 wins. Previously it was assigned to Mr Smith. Now it's mine. Haven't I harmed Mr Smith? Maybe, but maybe not. Let me fill out the case by saying that there is no fact of the matter as to who would have won had I not shuffled the tickets. In the story, we live in a very chaotic universe, and any activity—be it stretching one's arms in the morning or shuffling tickets—affects the random choice of winning number. There is no fact about what that random choice would have been had things gone differently. Thus, just because ticket 0.458 wins and it was Mr Smith's before my night-time activity, one cannot say that I have harmed Mr Smith. (Molinists won't like this. But surely whether I have harmed anybody shouldn't depend on the truth values of Molinist conditionals.)

Michael Fara in a very interesting paper has offered an account of ability that makes PAP (the Principle of Alternate Possibilities) compatible with both determinism and Frankfurt examples. The clever move is to note that ability should not be tied to counterfactuals like

Were x to try in circumstances C, x would do A

since such counterfactuals can be "masked" in Frankfurt cases, but to dispositions. More precisely:

(DispAb) x is able to do B in circumstances C if and only if x has a disposition to do B when trying to do B in C.

The possession of a disposition to do B upon trying to do B in C is compatible with being actually determined to do A (where B is incompatible with A). And Frankfurt cases don't take away the disposition, but only mask the counterfactual. This is very neat.

Very neat, except that it runs into one serious difficulty. The definition of ability cannot be plugged into PAP. To plug it into PAP, we would need DispAb to define being able to do B. But DispAb doesn't define that. It defines being able to do B in circumstances C. And PAP has no mention of circumstances. In other words, PAP uses a two-place concept of ability—x is able to do A—while DispAb defines a three-place concept—x is able to do A in C.

Maybe this is much ado about nothing. While PAP doesn't mention circumstances, we should take it to say:

(PAP') If x freely does (variant: chooses) A, then x is able to do (variant: choose) otherwise than A in these circumstances.

But now the problem is evident. For what are these circumstances in a Frankfurt case? Suppose we say:

1. The circumstances of deciding between A and B while there is a neurosurgeon who upon observing that you are about to try to do otherwise than A will prevent you from doing otherwise than A.

But you are not disposed to do otherwise than A when trying to do otherwise than A in circumstances (1). The details depend on how we spell out (1). Either the neurosurgeon manages to redirect your action towards A before you try to do otherwise or right after you have begun to try. If after, then it is clear that you are not disposed to do otherwise than A in circumstances (1), since to have such a disposition you'd need to have a disposition to get past the neurosurgeon's control when you try to do otherwise, which you don't. If, on the other hand, the neurosurgeon is able to prevent the trying itself, then trying to do otherwise than A in (1) is impossible, and Fara says you don't have dispositions whose activation conditions are impossible. (He needs this to handle cases where some psychological compulsion removes your ability to do B by making it psychologically impossible to try to do B.)

Of course, one might use coarser-grained circumstances:

2. The circumstances of deciding between A and B.

But that's too coarse grained. Suppose, for instance, that you are deciding between staying put and running off, while unbeknownst to you, you are tied down. That's a case where obviously you have no ability to do otherwise than stay put. Nonetheless, you do have a disposition to do otherwise than A when trying to do otherwise than A in (2). For normally when you try to do otherwise than A in (2), you do succeed.

The above approach to figuring out what the relevant circumstances are is too ad hoc anyway. Obviously, one can specify the circumstances at varying levels of descriptive detail. We need a principled way to decide how much detail. The minimal level of detail is given by (2). We have already seen that that's too little. We presumably cannot include in the circumstances what the decision itself is—we get irrelevantly weird stuff if we ask what you are disposed to do in the odd circumstances of trying to do B when having decided to do A. So that would be too much. Where can a principled line be drawn? I think the three most natural non-arbitrary options are:

(3) The circumstances are all events that are causally prior to your decision.

(4) The circumstances are all events that are neither your decision nor causally posterior to your decision.

(5) The circumstances are the complete state of the universe temporally just prior to your decision.

But now notice that given causal determinism, none of these three ways of specifying the circumstances is going to allow you to maintain PAP in a causally deterministic world. For each of them in a deterministic universe is sufficient, at least in conjunction with the laws[note 1], to fix what you're going to decide.

So all the non-arbitrary ways I can think of for spelling out the circumstances to plug the dispositional account of ability into PAP are not ones a compatibilist who wants PAP to hold can embrace.

But interestingly, Fara's dispositional approach can help the incompatibilist! For those of us who think that causal (or at least explanatory) priority is the most important thing vis-à-vis freedom will, I think, be drawn to (3) as the right description of the circumstances. But the libertarian can say that in the neurosurgeon-type Frankfurt case, you do have the disposition to act otherwise than A when trying to do otherwise than A in circumstances C satisfying (3). For the neurosurgeon's activities are not causally prior to your decision. (And if they were, then the Frankfurt case would make your action be determined by causal factors prior to your action, and that would beg the question against the typical libertarian.)

So, to recap, a non ad hoc filling out of the details in the dispositional account of ability (a) is incompatible with saving PAP on compatibilism but (b) can help the libertarian with Frankfurt examples.

Thursday, October 10, 2013

Imagine a circular target and uncountably many people, with a one-to-one assignment of points on the target to people, every point and every person assigned. A monster will throw a dart with a perfectly defined tip, in such a way that its impact point is uniformly distributed over the target, with each point equally likely. The monster will then eat the person whose point is hit by the dart.

People come in two kinds: pointy-eared and round-eared. The pointy-ears are assigned the left half of the target and the round-ears the right half, with the dividing line itself divided as fairly as can be. But along comes a racist who changes all the assignments, moving the pointy-ears to a tiny circle in the middle of the target containing 1% of the target's area, and spreading the round-ears over the remaining 99% of the target, while still ensuring that each point is assigned to one person and each person to one point.

The racist harmed the round-ear group. For increasing the chance of harm to a group or individual is a form of harm to the group or individual. (Endangerment is not a victimless crime, even if the danger does not actually befall anyone.) But the racist harmed no individual. No round-ear had her probability of being eaten go up. After all, every point on the target had equal probability of being hit. John, a man with round ears, was first assigned, let's say, to a point right by the middle. Later he was reassigned to a point near the rim. That does nothing to affect the chances of John being eaten.
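The probability claims can be spot-checked numerically. Below is a minimal Monte Carlo sketch (all names illustrative): a uniform point on the unit disk lands in the left half about half the time and in a central circle containing 1% of the area about 1% of the time, so the reassignment moves the pointy-ear group's collective risk from about 1/2 to about 1/100, even though each individual point has the same (zero) chance of being hit throughout.

```python
import random

def uniform_disk(rng):
    # Rejection-sample a uniform point in the unit disk.
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

rng = random.Random(0)
N = 200_000
left = inner = 0
for _ in range(N):
    x, y = uniform_disk(rng)
    left += (x < 0)                    # original pointy-ear region: left half
    inner += (x * x + y * y <= 0.01)   # new region: central circle, 1% of area (radius 0.1)

print(left / N, inner / N)  # approximately 0.5 and 0.01
```

The simulation only tracks regions, not individuals, which is exactly the asymmetry the post exploits: the group's chance is the measure of its region, while each individual's chance is the measure of a single point.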

Thus, it is possible to harm a group without harming any individual from the group.

But perhaps this doesn't give group-rights proponents quite as much as might at first sight seem. Certainly our racist harmed the round-ear group. But she also benefited many other groups of people, each group of the same size as the round-ears. For we can subdivide the benefited pointy-ear group into infinitely many groups, each with the same number as the round-ears, and many of these groups will have been benefited. In fact, there need be no difference between how many groups of that cardinality were benefited and how many groups were harmed. So how can we blame our racist rearranger? She harmed infinitely many groups and she benefited infinitely many groups.

Well, you might say: But the round-ears are a non-arbitrary (maybe more natural in the David Lewis sense) grouping of people, while most of the benefited groups are completely arbitrary groupings.

That may be. If so, then the argument supports the idea that it makes sense to talk of group harm in the case of non-arbitrary groups.

If you have an infinite fair lottery, it's possible to boost the probability that some member of a group wins without boosting the probability of any particular member of that group winning.

For suppose you have a genuine lottery where the tickets are numbered 1,2,3,..., and then a winning ticket is picked fairly. Presumably, the probability that the organizers pick an even-numbered ticket is 1/2. And the probability that they pick a ticket whose number is divisible by four (4,8,12,...) is 1/4.

Now suppose that after all the infinitely many players have bought their tickets, a mad ticket swapper goes around in the middle of the night. She takes all the tickets and redistributes them so that all the people with ticket numbers divisible by four get even ticket numbers, and all the people with ticket numbers not divisible by four get odd ticket numbers, with every ticket number being had by somebody. We can make the swapping rule be this if we like:

1→1,2→3,3→5,4→2,5→7,6→9,7→11,8→4,9→13,....
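The displayed rule can be implemented directly. A sketch under the assumption (which reproduces the listed values) that 4k is sent to 2k while the k-th non-multiple of 4 is sent to the k-th odd number:

```python
def swap(n):
    """Mad-ticket-swapper rule: multiples of 4 get the even numbers,
    all other tickets get the odd numbers, bijectively."""
    if n % 4 == 0:
        return n // 2          # 4 -> 2, 8 -> 4, 12 -> 6, ...
    k = n - n // 4             # n is the k-th positive non-multiple of 4
    return 2 * k - 1           # 1 -> 1, 2 -> 3, 3 -> 5, 5 -> 7, ...

print([(n, swap(n)) for n in range(1, 10)])
# [(1, 1), (2, 3), (3, 5), (4, 2), (5, 7), (6, 9), (7, 11), (8, 4), (9, 13)]
```

The map is injective and sends the multiples of 4 onto exactly the even numbers, which is what doubles that group's winning chance from 1/4 to 1/2.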

So what did the ticket swapper do? She doubled the chance that someone from the set of people whose original ticket numbers were divisible by four would win. For the probability that someone from that set would win without the swapper doing her thing was 1/4, i.e., the probability that a number divisible by four would be picked. But after the swap, the probability that someone from that set would win is 1/2, i.e., the probability that an even number would be picked.

But the mad ticket swapper did not change anybody's chances at winning. Let's say you started out with ticket 8. You now have ticket 4. If the lottery was fair, the probability that 4 is picked is exactly the same as the probability that 8 is picked. So while the probability that someone from your group would win has gone up from 1/4 to 1/2, this makes no difference to your personal probability of winning—or to anyone else's!

Note that the above argument works no matter whether one says individual winning probabilities are zero or infinitesimal.

You might draw from the above the conclusion that there can't be a countably infinite fair lottery. While the conclusion would be correct, the inference might be a mistake. For you can do this with darts, too. See the next post.

Wednesday, October 9, 2013

Some theorists see the three persons of the Trinity as parts of the Trinity. Here is an untoward consequence of this. Parthood is transitive. Ten fingers are parts of Christ. Christ on the view is a part of the Trinity. Hence, the Trinity has ten fingers as parts. Which seems absurd.

Of course, one might worry about a similar problem for more orthodox views of the Trinity. Ten fingers are parts of Christ. But Christ is identical with God. So ten fingers are parts of God. However, the orthodox Trinitarian is used to answering problems of the form: "Christ is A (say, changeable or non-omniscient); Christ is identical with God; so God is A." She may accept the conclusion but block the absurdity by a qua move: "Christ qua human is changeable. So God qua human, i.e., a divine person who is human, is changeable." The parthood-Trinitarian had better not say that the Trinity has ten fingers qua human, because the Trinity is not human. Maybe she can say that the Trinity has ten fingers qua whole that has a part who is human. But by transitivity of parthood, that qua doesn't seem to do much work.

Another move for the more orthodox Trinitarian is to distinguish two different kinds of identity, identity of person and identity of essence, and say that some predicates only transfer across one of these two identities. I suppose the analogue for the parthood-Trinitarian might be to distinguish two kinds of parthood and say that they can't be combined transitivity-wise. That might be the best move for the parthood-Trinitarian.

Finally, the more orthodox Trinitarian can simply deny that there is such a relation as parthood—and in particular that ten fingers are part of Christ.

Tuesday, October 8, 2013

When I say that something is metaphysically impossible to do, I will mean: metaphysically impossible for creaturely causation. For this post I leave open the question of what God might be able to do through primary causation. The following seems quite plausible to me:

(1) If a particle is in a superposition |A>+|B>, then it is metaphysically impossible to determine the particle to collapse into the |A> state.

It is surely metaphysically possible to bring it about that the particle transitions from an |A>+|B> state to an |A> state. But not every transition from an |A>+|B> state to an |A> state is a collapse. A collapse seems to be a natural kind of transition under the influence of the wavefunction. One can presumably take a particle in a superposition, and then determine it to have a particular pure state. But that isn't collapse. That is our change of the particle's state. This seems very plausible to me.

The following seems to me to be just as plausible as (1):

(2) If an agent is deciding between A for reasons R and B for reasons S, then it is metaphysically impossible to determine the agent to choose A for R over B for S.

Of course, compatibilists can't say (2). But I find it surprising that in the Frankfurt literature incompatibilists typically grant the denial of (2), allowing that neural manipulators or blockers can induce particular choices. For I see very little reason for an incompatibilist to deny (2). Of course, it may well be possible to deterministically induce a transition from the state of the agent deciding between A and B to the state of the agent attempting to execute A. But such a transition would seem to me to be very unlikely to be a choice.

Simply doing A after deciding between A and B does not constitute having chosen A. Nor is it sufficient for having chosen A that one does A because of deciding between A and B. For one to have chosen, one's doing of A must be caused in the right way by one's process of decision between A for R and B for S. But it just seems very implausible that an externally determined transition, even if it somehow causally incorporated the process of decision, would be a case of causing in the right way.

Could there perhaps be overdetermination, so that one's transition from deciding between A and B to one's doing of A is both an exercise of freedom and externally determined? Quite possibly. But that wouldn't be a case where the choice is overdetermined. Rather, it would be a case where choice and external determination overdetermine the action A. The choice, however, is still un-determined.

But couldn't one make the agent choose A for R over B for S by strengthening the motive force of R or weakening that of S? I don't think so. For as long as each set of reasons has some motive force over and against the other set of reasons, it might yet win, just as a particle in a |A>+0.000001|B> state might yet collapse into the |B> state.
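The quantum comparison can be made concrete with a toy Born-rule calculation (illustrative only; real amplitudes, unnormalized state): however lopsided the amplitudes, the disfavored outcome retains a strictly positive probability.

```python
def born_prob_b(amp_a, amp_b):
    """Born-rule probability of collapse to |B> for the state amp_a|A> + amp_b|B>."""
    return amp_b ** 2 / (amp_a ** 2 + amp_b ** 2)

p = born_prob_b(1.0, 0.000001)
print(p)  # about 1e-12: tiny, but strictly positive, so |B> can still win
```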

The above doesn't settle one question. While it is not possible to determine that one choose A over B, maybe it is possible to determine that one not choose B, by preventing a choice into a decided-for-B state, while allowing a choice in favor of the decided-for-A state? I see little reason to allow such a possibility.

Distinguish between properly agential responsibility and effect responsibility. If I force you to do B, I am agentially responsible for forcing you, but only effect responsible for your doing B. If I deliberately take a drug that forces me to do B, I am agentially responsible for taking the drug, but only effect responsible for my doing B. One can put the distinction by distinguishing between responsibility for an action and responsibility for a state of affairs. If I take a drug that forces me to do B, then I am agentially responsible for taking the drug, but only effect responsible for the state of affairs of my doing B.

Simplify slightly by considering actions where there is only one alternative.

x is agentially morally responsible for an action A if and only if there is an alternative B and subjective reason sets R for A and S for B such that x is agentially morally responsible for doing A for R rather than B for S.

x is agentially morally responsible for doing A for R rather than B for S if and only if x did A out of a choice of A for R over B for S that x is agentially morally responsible for.

x is agentially morally responsible for choosing A for R over B for S if and only if (a) x chose A for R over B for S and (b) the moral considerations in R and S are not exactly balanced.

Thursday, October 3, 2013

Suppose that I had a small single-room apartment, and a set of Star Trek style holoemitters which can generate shaped and colored force fields that simulate objects for the purposes of both sight and touch. I could then with a simple verbal command transform my room from a bedroom, complete with bed, bedding and bedside table, to a kitchen with all the appurtenances, to a well-appointed bathroom, to a cozy library, to a living room with as much soft, cushy furniture as would fit. The furniture would be fully usable.

Now, suppose I went to Ikea and bought a lovely new set of sofa designs and uploaded them to my holoemitter. Shouldn't I say that at this very point, when I uploaded the sofa designs, I had a new sofa? If so, then sofas aren't essentially material objects. For my new sofa would, until projected, exist only as a configuring of the memory system of the holoemitter computer. Of course, after activating the sofa projection, the force fields in my room would be sofa-shaped. But they no more would be the sofa than the arrangements of pixels on the screen are identical with the electronic book that I purchased (even in the special case that the book is only one screen long). The fields would manifest the sofa and would at most be part of it.

But perhaps one can resist the above. Maybe I have no new sofa until I project it, and it is the shaped force fields that make up the sofa. But now notice that the sofa can be turned off and then turned back on. There is presumably no numerical identity in the force fields making up the sofa on successive activations. Yet it sure seems plausible to say that it's the same sofa. (We could even have the holoemitter store data on acquired characteristics like dents and scratches.) Suppose we deny that. Then we can still ask: Why is it the same sofa from millisecond to millisecond, when it hasn't been turned off? After all, during every millisecond it is entirely produced by the holoemitter. It has no "existential inertia" (this is not so clear in the Star Trek canon, but I stipulate thus). Yet it would be weird to say that I have had thousands, or maybe even infinitely many, sofas in a single second. But what if I hack the holoemitter to emit two copies of the sofa for a big party, thereby illegally circumventing my sofa license from Ikea? Which of the two sofas being projected will be the sofa I had the day before? There just is no answer to that question.

I think that the more we think about such science fictional scenarios, the more we destabilize our concept of an artifact. And this gives more plausibility to the thought that there really are no such things as artifacts. There is just stuff (force fields, particles, memory systems) arranged artifactually, stuff that serves as sofas, computers and chairs. And we talk as if there are artifacts. But within minutes we would talk in the same way of the holofurniture!

But one might also take this approach to give plausibility to a less radical solution, namely Rob Koons' theory that artifacts are particularized social practices. There will still be tough questions with counting, though. If I buy two sofa licenses from Ikea, the holoprojector computer might not keep two separate copies of the sofa design files. It might much more efficiently store a single copy of the sofa design files with a note that I am permitted two simultaneous manifestations. Is it more plausible that there is a single particularized social practice—the pair-of-sofas practice—or that there are two practices, one per sofa? And what if I initially bought a single sofa license from Ikea, and later bought a second license? Did I lose the first sofa, and gain a pair-of-sofas?

What kind of a structure do the utilities that egoists (on an individual level) and utilitarians (on a wider scale) want to maximize have? A standard approximation is that utilities are like real numbers. They have an order structure, so that we can compare utilities, an additive structure, so we can add utilities, and a multiplicative structure, so we can rescale them with probabilities. But that is insufficiently general. We want to allow for cases where any amount of value V2 swamps any amount of value V1. Thus, Socrates thought that any amount of virtue is better to have than any amount of pleasure. The structure of the real numbers won't allow that to happen.
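The swamping point can be illustrated with pairs compared lexicographically; the component names below are purely illustrative. The second assertion shows why real numbers fail: for any fixed positive weights, enough of the lesser value eventually outweighs the greater.

```python
# Represent a utility as (virtue, pleasure); Python compares tuples
# lexicographically, so any positive virtue difference swamps any
# pleasure difference.
a_little_virtue = (1, 0)
vast_pleasure = (0, 10 ** 100)
assert a_little_virtue > vast_pleasure   # swamping holds

# A real-number encoding with fixed positive weights cannot mimic this:
def weighted(u, w_virtue, w_pleasure):
    return w_virtue * u[0] + w_pleasure * u[1]

# Here the pleasure term dominates, reversing the lexicographic verdict.
assert weighted(vast_pleasure, 10 ** 6, 1e-6) > weighted(a_little_virtue, 10 ** 6, 1e-6)
```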

A natural generalization is to note that the multiplicative structure of the space of utilities was overkill. We don't need to be able to multiply utilities by utilities. That operation need not make sense. We simply need to be able to multiply utilities by probabilities. Since probabilities are real numbers, a structure that will allow us to do that is that of a partially ordered vector space. However, we should not impose more structure on the utilities than there really is. It makes sense to multiply a utility by a probability in order to represent the value of such-and-such a chance at the utility. And since we have an additive structure on the utilities, we can make sense of multiplying a utility by a number greater than 1. E.g., 2.5U=U+U+(0.5)U. But it is not clear that it always makes conceptual sense to negate utilities. While it makes sense to think of a certain degree of pain as the negative of a certain degree of pleasure, it is not clear that such a negation operation is available in general.

Getting rid of the spurious structure of multiplying utilities by a negative number, and removing the unnecessary multiplication by numbers greater than 1, we naturally get a structure as follows. Utilities are a partially ordered set with an operation + on them, and there is an action of the commutative multiplicative monoid [0,1] on the utilities, with the order, addition and action all compatible.

A further generalization is that [0,1] may not be the best way to represent probabilities in general. So generalize that to a commutative monoid (with multiplicative notation). We now have this. A utility space is a pair (P,U), where P is a commutative monoid, written multiplicatively, with an action on U, and U is a commutative semigroup with an additively written operation + and a partial order ≤, where the operations, action and order satisfy:

(xy)a=x(ya) for x,y∈P and a∈U

x(a+b)=xa+xb for x∈P and a,b∈U

If a≤b, then xa≤xb for a,b∈U and x∈P

If a≤b and c∈U, then a+c≤b+c.

I keep on going back and forth on whether U really should have an addition operation, though. I do not know if utilities can be sensibly added.

Tuesday, October 1, 2013

An event E has divine-intention probability p if and only if E is part of a system S of events such that there is a probability function P applicable to events in S and God intends S to be such as to be described well by P. For instance, suppose over the history of the world a million coins are tossed and of these 50.1% land heads and other statistical properties of the coin tosses are described well by the assumption that the coin tosses are independent events with each outcome of equal probability. Then if God intended that the coin tosses be such as to be thus described, each coin toss has a divine-intention probability 1/2 of being heads.

In the case of finite sequences, standard frequentism suffers from the problem that it gives intuitively incorrect answers. In the above scenario, it would say that the probability of heads is 0.501. But surely it is possible for a million coins to be tossed, each with probability 1/2 of heads, and yet to get 50.1% of the coins landing heads.
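The possibility claim is easy to quantify. For a fair coin, the chance of getting exactly 50.1% heads in a million tosses is small but strictly positive; a sketch, computing the binomial probability via log-gamma to avoid astronomically large intermediate numbers:

```python
import math

n, k = 10 ** 6, 501_000   # a million tosses, exactly 50.1% heads
# log of C(n, k) * (1/2)^n for X ~ Binomial(n, 1/2)
log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
         + n * math.log(0.5))
p = math.exp(log_p)
print(f"P(exactly 501,000 heads) = {p:.2e}")  # roughly 1e-4: improbable, not impossible
```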

Lewis's regularity theory of laws can get the same result as the theistic version, for in the case of probabilistic laws one trades accuracy against simplicity. But regularity theories of laws get the order of explanation mixed up: chances are supposed to be explanatory rather than descriptive (of course Humeans will say that this is a false dilemma, but they are wrong).

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.