Friday, January 30, 2015

I've been thinking about a curious issue in translation, which is not that uncommon. In most ordinary contexts, the Polish "ręka" and the English "hand" would be interchangeable in the sense that where a speaker of one language would use the one, the speaker of the other would use the other. Where the English-speaker talks of having something in his hand, the Polish-speaker talks of having it in his ręka, and so on. But the two terms are not synonymous. In non-medical Polish, "ręka" refers to the whole of the upper limb (though in medical Polish, it refers just to the hand), while the English "hand" refers only to the area from the wrist to the fingertips. The Polish term referring to the exact same part of the body as the English "hand" does is "dłoń", but the word is significantly less used than "ręka" (as per Google hits in .pl sites, say), and in many ordinary contexts using "dłoń" for "hand" would make for awkward translation. Conversely, to translate the Polish "ręka" as "arm", which would refer to the same part of the body (I am assuming that the arm includes the hand), would in most cases lead to awkwardness as well. It sounds funny to talk of picking up one's phone with one's arm, and so on.

Thus, it seems that these are cases where the natural translation from one language to the other does not in fact preserve the truth conditions. One can pick up one's phone with one's ręka without picking it up with one's hand (say, use the crook of the elbow), even though in the context of picking up a phone one would translate "ręka" as "hand", unless it was obvious from the context that the hand wasn't the part of the arm that was being used.

Maybe what is happening here is that when a sentence asserts a proposition p and implicates a stronger proposition q, we feel no qualms about using a translation that asserts q, or vice versa. To say in Polish that one picked up one's phone with one's ręka implicates the stronger proposition that one did this with one's hand, since if one had picked it up in the unusual way with the crook of the elbow, say, we would have expected the speaker to mention this. (This is a case where the usual Gricean presumption that one will use an equally brief but more precise term in place of a less precise one is defeated by the fact that the more precise and equally brief term "dłoń" is also less commonly used.) So one translates the implicature rather than the assertion.

I wonder, though. Maybe cases like this are evidence that the distinction between implicature and assertion is artificial. This would have the important consequence that the wrongness of implicating contrary to one's mind, or at least intentionally doing so, is the same sort of thing as lying. I don't want to embrace that consequence in general. I think false implicature is qualitatively less morally problematic than lying.

Thursday, January 29, 2015

We like being sure. No matter how high our confidence, we have a desire to be more sure, which taken to an extreme becomes a Cartesian desire for absolute certainty. It's tempting to dismiss the desire for greater and greater confidence, when one already has a very high confidence, as irrational.

But the desire is not irrational. Apart from certain moral considerations (e.g., respecting confidentiality), a rational person does not refuse costless information (pace Lara Buchak's account of faith). No matter how high my confidence, as long as it is less than 100%, I may be wrong, and by closing my ears to free data I close myself to being shown to have been wrong, i.e., I close myself to truth. I may think this is not a big deal. After all, if I am 99.9999% sure, then I will think it quite unlikely that I will ever be shown to have been wrong. After all, to be shown to be wrong, I have to actually be wrong ("shown wrong" is factive), and I think the probability that I am wrong is only 0.0001%. Moreover, even if I'm wrong, quite likely further evidence won't get me the vast distance from being 99.9999% sure to being unsure. So it seems like not a big deal to reject new data. Except that it is. First, I have lots of confident beliefs, and while it is unlikely for any particular one of my 99.9999%-sure beliefs to be wrong, the probability that some one of them is wrong is quite a bit higher. And, second, I am a member of a community, and for Kantian reasons I should avoid epistemic policies that make an exception of myself. And of course I want others to be open to evidence even when 99.9999% sure, if only because sometimes they are 99.9999% sure of the negation of what I am 99.9999% sure of!

So we want rational people to be open to more evidence. And this puts a constraint on how we value our levels of confidence. Let's say that I do value having at least 99.9999% confidence, but above that level I set no additional premium on my confidence. Then I will refuse costless information when I have reached 99.9999% confidence. I will even pay (perhaps a very small amount) not to hear it! For there are two possibilities. The new evidence might increase my confidence or might decrease it. If it increases it, I gain nothing, since I set no additional premium on higher confidence. If it decreases it, however, I am apt to lose (this may require tweaking of the case). And a rational agent will pay to avoid a situation where she is sure to gain nothing and has a possibility of losing.
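This reasoning can be made concrete with a small numerical sketch. The likelihoods 0.9 and 0.1 for the piece of evidence are made up for illustration; the point is that a value function capped at the current credence assigns free evidence negative expected value, while a strictly convex one always welcomes it.

```python
# An agent at credence r considers a free binary signal about p.
r = 0.999999          # current credence (the post's example)
a, b = 0.9, 0.1       # hypothetical P(signal | p) and P(signal | not-p)

p_sig = r * a + (1 - r) * b              # probability of the positive signal
r_up = r * a / p_sig                     # posterior on a positive signal
r_dn = r * (1 - a) / (1 - p_sig)         # posterior on a negative signal

# Bayesian credences form a martingale: expected posterior = prior.
exp_post = p_sig * r_up + (1 - p_sig) * r_dn

cap = lambda x: min(x, r)    # no premium on confidence above the current level
convex = lambda x: x * x     # a strictly convex value function

ev_capped = p_sig * cap(r_up) + (1 - p_sig) * cap(r_dn)
ev_convex = p_sig * convex(r_up) + (1 - p_sig) * convex(r_dn)

print(exp_post - r)           # ~0: the martingale property
print(ev_capped - cap(r))     # negative: the agent would pay to refuse the signal
print(ev_convex - convex(r))  # positive: the agent would pay to hear it
```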

So it's important that one's desire structure be such that it continue to set a premium on higher and higher levels of confidence. In fact, the desire structure should not only be such that one wouldn't pay to close one's ears to free data, but it should be such that one would always be willing to pay something (perhaps a very small amount) to get new relevant data.

Intuitively, this requires that we value a small increment in confidence more than we disvalue a small decrement. And indeed that's right.

So our desire for greater and greater confidence is indeed quite reasonable.

There is a lesson in the above for the reward structure in science. We should ensure that the rewards in science—say, publishing—do not exhibit thresholds, such as a special premium for a significance level of 0.05 or 0.01. Such thresholds in a reward structure inevitably reward irrational refusals of free information. (Interestingly, though, a threshold for absolute certainty would not reward irrational refusals of free information.)

I am, of course, assuming that we are dealing with rational agents, ones that always proceed by Bayesian update, but who are nonetheless asking themselves whether to gather more data or not. Of course, an irrational agent who sets a high value on confidence is apt to cheat and just boost her confidence by fiat.

Technical appendix: In fact to ensure that I am always willing to pay some small amount to get more information, I need to set a value V(r) on the credence r in such a way that V is a strictly convex function. (The sufficiency of this follows from the fact that the evolving credences of a Bayesian agent are a martingale, and a convex function of a martingale is a submartingale. The necessity follows from some easy cases.)

This line of thought now has a connection with the theory of scoring rules. A scoring rule measures our inaccuracy—it measures how far we are from truth. If a proposition is true and we assign credence r to it, then the scoring rule measures the distance between r and 1. Particularly desirable are strictly proper scoring rules. Now for any (single-proposition) scoring rule, we can measure the agent's own expectation as to what her score is. It turns out that the agent's expectation as to her score is a continuous, bounded, strictly concave function ψ(r) of her credence r and that every continuous, bounded, strictly concave function ψ defines a scoring rule such that ψ(r) is the agent's expectation of her score. (See this paper.) This means that if our convex value function V for levels of confidence is bounded and continuous—not unreasonable assumptions—then that value function V(r) is −ψ(r) where ψ(r) is the agent's expectation as to her score, given a credence of r, according to some strictly proper scoring rule.
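A minimal sketch of these facts with the Brier score, the standard example of a strictly proper scoring rule (the specific numbers are only for illustration):

```python
# Brier inaccuracy of a report q: (1-q)^2 if the proposition is true, q^2 if false.
def expected_brier(r, q):
    """Agent's expected inaccuracy of report q, given credence r."""
    return r * (1 - q) ** 2 + (1 - r) * q ** 2

# Strict propriety: the expectation is minimized by reporting one's credence.
r = 0.8
best = min(expected_brier(r, q / 1000) for q in range(1001))
assert abs(best - expected_brier(r, r)) < 1e-9

# Self-expected score psi(r) of an honest report works out to r(1-r): concave.
psi = lambda r: expected_brier(r, r)
# So V(r) = -psi(r) = r^2 - r is convex: midpoint value below average value.
V = lambda r: -psi(r)
assert V(0.5) < 0.5 * (V(0.25) + V(0.75))
```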

In other words, assuming continuity and boundedness, the consideration that agents should value confidence in such a way that they are always willing to gather more data means that they should value their confidence in exactly the way they would if their assignment of value to their confidence was based on self-scoring their accuracy (i.e., calculating their expected value for their score).

Interestingly, though, I am not quite sure that continuity and boundedness should be required of V. Maybe there is a special premium on certainty, so V is continuous within (0,1) (that's guaranteed by convexity) but has jumps—maybe even infinite ones—at the boundaries.

Wednesday, January 28, 2015

There are infinitely many people. A random process causes each one to independently develop a cancer, either of type A or of type B. The chance that a given individual develops a type A cancer is 9/10 and the chance that she develops a type B cancer is 1/10. It is not possible to diagnose whether an individual has type A or type B cancer. There are two drugs available, either of which—but not both, because they are toxic when combined—could be distributed by you en masse to all of the infinitely many people. There is no possibility of distributing different drugs to different people—the logistics only make it possible for you to distribute the same drug to everyone. Drug Alpha cures type A cancer but does not affect type B, and drug Beta cures type B cancer but does not affect type A.

What should you do? Clearly, you should distribute Alpha to everyone. After all, each individual is much more likely to have type A cancer.

But now suppose that an angel reveals to everyone the following interesting fact:

(F) Only finitely many people have type A cancer.

You're very surprised. You would have expected infinitely many to have type A cancer and infinitely many to have type B cancer. But even though F is a very unlikely outcome—indeed, classically it has zero probability—it is possible. So, what should you do now?

The obvious answer is that you should distribute Beta to everyone. After all, if you distribute Alpha, finitely many people will be cured, while if you distribute Beta, infinitely many will be. Clear choice!

But not so fast. Here is a plausible principle:

(I) If you're choosing between intrinsically morally permissible options X and Y and for every relevant individual x, option X is in x's best interest, then option X is the best option to choose.

But there is an argument that it is in every individual's interest that you distribute Alpha to her. Here's why. Let x be any individual. Before the angel's revelation of F, it was clearly in x's best interest that she get Alpha. But now we have all learned F. Does that affect what's in x's best interest? There is a very convincing argument that it does not. Consider this proposition:

(Fx) Among people other than x, only finitely many have type A cancer.

Clearly, learning Fx does not affect what is to be done in x's best interest, because the development of cancer in all the patients is independent, so learning about which cancers people other than x have tells us nothing about x's cancer. To dispute this is to buy into something akin to the Gambler's Fallacy. But now notice that Fx is logically equivalent to F. Necessarily, if only finitely many people other than x have type A cancer, then only finitely many people have type A cancer (one individual won't make the difference between the finite and the infinite!), and the converse is trivial. If learning Fx does not affect what is to be done in x's best interest, neither should learning the equivalent fact F. So, learning F does not affect what is in x's best interest, and so the initial judgment that drug Alpha is in x's best interest stands.

Thus:

Necessarily, if I is true, then in the infinitary case above, you should distribute Alpha.

But at the same time it really was quite obvious that you should save infinitely many rather than finitely many people, so you should distribute Beta. So it seems we should reject I.

Yet I seems so very obviously true! So, what to do?

There are some possibilities. Maybe one can deny I in cases of incomplete knowledge, such as this one. Perhaps I is true when you know for sure how the action will affect each individual, but only then. Yet I seems true without the restriction.

A very different suggestion is simply to reject the case. It is impossible to have a case like the one I described. Yet surely it is possible for the outcome of the random process to satisfy F. So where lies the impossibility? I think the impossibility lies in the fact that one would be acting on fact F. And the best explanation here is Causal Finitism: the doctrine that there cannot be infinitely many things among the causal antecedents of a single event. In the case as I described it, the angel's utterance is presumably caused by the infinitary distribution of the cancers.

Monday, January 26, 2015

Rule utilitarianism holds that one should act according to those rules, or those usable rules, that if adopted universally would produce the highest utility. Act utilitarianism holds that one should do that act which produces the highest utility. There is an obvious worry that rule utilitarianism collapses into act utilitarianism. After all, wouldn't utility be maximized if everyone adopted the rule of performing that act which produces the highest utility? If so, then the rule utilitarian will have one rule, that of maximizing the utility in each act, and the two theories will be the same.

A standard answer to the collapse worry is either to focus on the fact that some rules are not humanly usable or to distinguish between adopting and following a rule. The rule of maximizing utility is so difficult to follow (both for epistemic reasons and because it's onerous) that even if everyone adopted it, it still wouldn't be universally followed.

Interestingly, though, in cases with infinitely many agents the two theories can differ even if we assume the agents would follow whatever rule they adopted.

Here's such a case. You are one of countably infinitely many agents, numbered 1,2,3,..., and one special subject, Jane. (Jane may or may not be among the infinitely many agents—it doesn't matter.) Each of the infinitely many agents has the opportunity to independently decide whether to costlessly press a button. What happens to Jane depends on who, if anyone, pressed the button:

If a finite number n of people press the button, then Jane gets n+1 units of utility.

If an infinite number of people press the button, then Jane gets a little bit of utility from each button press: specifically, she gets 2^−k/10 units of utility from person number k, if that person presses the button.

So, if infinitely many people press the button, Jane gets at most (1/2+1/4+1/8+...)/10=1/10 units of utility. If finitely many people press the button, Jane gets at least 1 unit of utility (if that finite number is zero), and possibly quite a lot more. So she's much better off if finitely many people press.
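A quick numerical check of these payoffs, truncating the infinite sum at an arbitrary cutoff since the tail is negligible:

```python
# If everyone presses, Jane gets sum over k of 2^-k / 10 = 1/10 units.
everyone = sum(2 ** -k / 10 for k in range(1, 200))  # tail beyond 200 is negligible
print(everyone)   # ~0.1

def finite_rule(cutoff):
    """Jane's utility if exactly the people numbered below `cutoff` press."""
    n = cutoff - 1        # persons 1, 2, ..., cutoff-1 press
    return n + 1          # the stated payoff for finitely many presses

print(finite_rule(1))     # nobody presses: 1 unit
print(finite_rule(10))    # 10 units; larger cutoffs do ever better
```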

Now suppose all of the agents are act utilitarians. Then each reasons:

My decision is independent of all the other decisions. If infinitely many other people press the button, then my pressing the button contributes 2^−k/10 units of utility to Jane (where k is my number) and costs nothing, so I should press. If only finitely many other people press the button, then my pressing the button contributes a full unit of utility to Jane and costs nothing, so I should press. In any case, I should press.

And so if everyone follows the rule of doing that individual act that maximizes utility, Jane ends up with one tenth of a unit of utility, an unsatisfactory result.

So from the point of view of act utilitarianism, in this scenario there is a clear answer as to what each person should do, and it's a rather unfortunate answer—it leads to a poor result for Jane.

Now assume rule utilitarianism, and let's suppose that we are dealing with perfect agents who can adopt any rule, no matter how complex, and who would follow any rule, no matter how difficult it is. Despite these stipulations, rule utilitarianism does not recommend that everyone maximize utility in this scenario. For if everyone maximizes utility, only a tenth of a unit is produced, and there are much better rules than that. For instance, the rule that one should press the button if and only if one's number is less than ten will produce ten units of utility if universally adopted and followed (nine people press, and Jane gets n+1 = 10 units). And the rule that one should press the button if and only if one's number is less than 10^100 will produce even more utility.

In fact, it's easy to see that in our idealized case, rule utilitarianism fails to yield a verdict as to what we should do, as there is no optimal rule. We want to ensure that only finitely many people press the button, but as long as we keep to that, the more the better. So far from collapsing into the act utilitarian verdict, rule utilitarianism fails to yield a verdict.

A reasonable modification of rule utilitarianism, however, may allow for satisficing in cases where there is no optimal rule. Such a version of rule utilitarianism will presumably tell us that it's permissible to adopt the rule of pressing the button if and only if one's number is less than 10^100. This version of rule utilitarianism also does not collapse into act utilitarianism, since the act utilitarian verdict, namely that one should unconditionally press the button, fails to satisfice, as it yields only 1/10 units of utility.

What about less idealized versions of rule utilitarianism, ones with more realistic assumptions about agents? Interestingly, those versions may collapse into act utilitarianism. Here's why. Given realistic assumptions about agents, we can expect that no matter what rule is given, there is some small independent chance that any given agent will press the button even if the rule says not to, just because the agent has made a mistake or is feeling malicious or has forgotten the rule. No matter how small that chance is, the result is that in any realistic version of the scenario we can expect that infinitely many people will press the button. And given that infinitely many other people will press the button, if only by mistake, the act utilitarian advice to press the button oneself is exactly right.

So, interestingly, in our infinitary case the more realistic versions of rule utilitarianism end up giving the same advice as act utilitarianism, while an idealized version ends up failing to yield a verdict, unless supplemented with a permission to satisfice.

But in any case, no version of rule utilitarianism generally collapses into act utilitarianism if such infinitary cases are possible. For there are standard finitary cases where realistic versions of rule utilitarianism fail to collapse, and now we see that there are infinitary ones where idealized versions fail to collapse. And so no version generally collapses, if cases like this are possible.

Of course, the big question here is whether such cases are possible. My Causal Finitism (the view that nothing can have infinitely many items in its causal history) says they're not, and I think oddities such as the above give further evidence for Causal Finitism.

Friday, January 23, 2015

I have a special interest in programming education for children, mainly because I have children and because so much of the fun I had as a child was from programming. Not too long ago, I came across the fact that on the Raspberry Pi, you can control things in Minecraft with simple python code; in particular, you can procedurally generate objects. This is very attractive because my kids of course are really into Minecraft. But we don't have a Pi, and while they're cheap, they need an HDMI display device and we don't have one. It turns out that there are plugins (e.g., Raspberry Juice for Bukkit) for Minecraft servers that implement (most of) the Pi's protocol, but it seems overkill to run a private server just to do this.

So this January I made a mod for Minecraft 1.8 and Forge that implements most of the Raspberry Pi protocol and works with most of the python scripts that work with Minecraft Pi Edition. For instance, here's a spiral and a glass torus with water inside.

The scripts communicate with Minecraft via ASCII messages sent over port 4711. The python API is described here. The subset I implement is the Raspberry Juice one. There is a lot of information on python programming of Minecraft here (and the author of that site has a book which I've ordered for my kids but it hasn't come yet).
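To give a taste of the kind of script involved, here is a sketch that computes the block coordinates of a vertical spiral; the mcpi calls at the end are commented out since they need a running world, and the block ID and spiral parameters are illustrative choices, not anything from the mod itself.

```python
import math

def spiral_blocks(turns=3, height=20, radius=5):
    """Return (x, y, z) integer block positions along a vertical helix."""
    coords = []
    steps = turns * 36                     # 10-degree increments per turn
    for i in range(steps):
        angle = 2 * math.pi * i / 36
        x = round(radius * math.cos(angle))
        z = round(radius * math.sin(angle))
        y = i * height // steps            # rise steadily over the full height
        coords.append((x, y, z))
    return coords

blocks = spiral_blocks()
print(len(blocks))   # number of block positions generated

# With a world listening on port 4711, one would place the blocks roughly so:
# from mcpi.minecraft import Minecraft
# mc = Minecraft.create()
# px, py, pz = mc.player.getTilePos()
# for x, y, z in blocks:
#     mc.setBlock(px + x, py + y, pz + z, 20)   # 20 = glass
```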

I am hoping that this will be great for both programming education and teaching 3D geometry.

It turns out that someone beat me to this by a couple of weeks, and has a mod for 1.7.10 that does the same thing. (In fact some of the ideas in the current version of my mod are based on the ideas from that mod.)

Wednesday, January 21, 2015

Suppose I assign a credence r>1/2 to some proposition p, and I am a perfectly rational agent (and know for sure that I am). I will diametrically change my mind about p provided that at some future time my credence in p will be as far below 1/2 as it is now above it. I.e., I will diametrically change my mind provided that my credence will get at least as low as 1−r.

Of course, I might diametrically change my mind, either because I am wrong and will get evidence to show it, or because I will get misleading evidence. But it would be worrisome if rational agents frequently diametrically changed their minds when they were already fairly confident. Fortunately, it doesn't actually happen all that often. Suppose my credence is some number r close to 1. Then my credence that I am wrong is 1−r. It turns out that this is quite close to the upper bound on my probability that I will diametrically change my mind. That upper bound is (1−r)/r (easy to prove using the fact that a perfect Bayesian's credences form a martingale), which for r close to 1 is not much more than 1−r.

So if my credence is close to 1, my concern that I will diametrically change my mind is about the same as my concern that I am wrong. And that's how it should be, presumably.

But perhaps I am more worried that one day I will come to suspend judgment about something I am now pretty confident of, i.e., maybe one day my probability will drop below 1/2. The same martingale considerations show that my probability of this danger is no more than 2(1−r).

Thus, if my current credence is 0.95, my probability that I will one day drop to 1/2 is at most 0.10, while my probability that I will one day drop to the diametrical opposite of 0.05 is at most 0.05/0.95 ≈ 0.053. If I am a perfect Bayesian agent, I can be pretty confident that I won't have a major change of mind with respect to p if I am pretty confident with respect to p.
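A Monte Carlo sketch of these bounds, with illustrative parameters of my own choosing: a prior credence of 0.95 and signals that match the truth 70% of the time.

```python
import random

random.seed(0)
r, acc = 0.95, 0.7          # prior credence; each signal matches the truth w.p. 0.7
trials, steps = 20000, 120

hit_half = hit_flip = 0
finals = []
for _ in range(trials):
    truth = random.random() < r        # sample the world from the agent's prior
    cred = r
    below_half = below_flip = False
    for _ in range(steps):
        sig = (random.random() < acc) == truth   # does the signal say "true"?
        like_t = acc if sig else 1 - acc
        like_f = 1 - acc if sig else acc
        cred = cred * like_t / (cred * like_t + (1 - cred) * like_f)
        below_half = below_half or cred < 0.5
        below_flip = below_flip or cred <= 1 - r
    hit_half += below_half
    hit_flip += below_flip
    finals.append(cred)

print(sum(finals) / trials)                    # ~0.95: credences are a martingale
print(hit_half / trials, "<=", 2 * (1 - r))    # ever suspending: bound 0.10
print(hit_flip / trials, "<=", (1 - r) / r)    # diametric change: bound ~0.053
```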

So if we are confident we have the truth, we should not be afraid that we will lose it as long as we remain rational.

Tuesday, January 20, 2015

Consider an increasing sequence of thresholds for credences, a_0, a_1, a_2, ..., with a_0 = 0.5 and 1 − a_n = (1 − a_{n−1})^2, so: 0.5, 0.75, 0.9375, 0.99609375, and so on. There is an interesting property that the items in this sequence have: If a perfect Bayesian rational agent assigns credence a_n to a proposition p, then she has credence at least a_{n−1} that her credence in p will never fall below a_{n−1}. Thus, the agent who assigns credence a_1 = 0.75 to p thinks it's at least as likely as not (a_0 = 0.5) that her credence will always stay at the level of being at least as likely as not. Moreover, it is easy to show that no lower values for a_n have this property.

So each of these thresholds for belief has the property that it gives the previous degree of confidence that the belief will not dip below the previous degree of confidence. This is a pretty natural property, and it seems like it would provide a good way of dividing up our confidence in our beliefs. Credence from a_1 on indicates, maybe, that "likely" the proposition is true. From a_2 on, maybe we can be "fairly confident". From a_3 on, maybe we can be "quite confident". The level a_4 gives us "pretty sure", while a_5 maybe makes us "sure", in the ordinary non-philosophical sense of "sure".

How we label the thresholds is a matter of words. But taking these thresholds as natural division points may be a good way to organize beliefs.

Note that each of these thresholds requires approximately twice as much evidence as the preceding one, if we measure the amount of evidence according to the logarithm of the Bayes factor.
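Assuming the thresholds are generated by a_0 = 1/2 and 1 − a_n = (1 − a_{n−1})^2, which fits a_1 = 0.75 and the evidence-doubling remark, a short computation checks both the sequence and the doubling:

```python
import math

# Generate the threshold sequence from the recurrence.
a = [0.5]
for _ in range(5):
    a.append(1 - (1 - a[-1]) ** 2)
print(a)   # 0.5, 0.75, 0.9375, 0.99609375, ...

# Evidence needed for each threshold, measured by log odds
# (the log Bayes factor needed to get there from even odds).
log_odds = [math.log(x / (1 - x)) for x in a[1:]]
ratios = [log_odds[i + 1] / log_odds[i] for i in range(len(log_odds) - 1)]
print(ratios)  # tends to 2: each threshold needs about twice the evidence
```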

Friday, January 16, 2015

Curley and Arrow are forensic scientist expert witnesses involved in very similar court cases. Each receives a sum of money from a lawyer for one side in the case. Arrow will spend the money doing repeated tests until the money (which of course also pays for his time) runs out, and then he will present to the court the full test data that he found. Curley, on the other hand, will do tests until such time as either the money runs out or he has reached an evidential level sufficient for the court to come to the decision that the lawyer who hired him wants. Of course, Curley isn't planning to commit perjury: he will truthfully report all the tests he actually did, and hopes that the court won't ask him why he stopped when he did. Curley reasons that his method of proceeding has two advantages over Arrow's:

if he stops the experiments early, his profits are higher and he has more time for waterskiing; and

he is more likely to confirm the hypothesis that his lawyer wants confirmed, and hence he is more likely to get repeat business from this lawyer.

Now here is a surprising fact which is a consequence of the martingale property of Bayesian investigations (or of a generalization of van Fraassen's Reflection Principle). When Curley and Arrow reflect on what their final credences will be with respect to what they are each testing, if they are perfect Bayesian agents, their expectation for their future credence equals their current credence. This thought may lead Curley to think himself quite innocent in his procedures. After all, on average, he expects to end up with the same credence as he would if he followed Arrow's more onerous procedures.

So why do we think Curley crooked? It's because we do not just care about the expected values of credences. We care about whether credences reach particular thresholds. In the case at hand, we care about whether the credence reaches the threshold that correlates with a particular court decision. And Curley's method does increase the probability that that credence level will be reached.

What happens is that Curley, while favoring a particular conclusion, sacrifices the possibility of reaching evidence that confirms that conclusion to a degree significantly higher than his desired threshold, for the sake of increasing the probability of reaching the threshold. For when he stops his experiments once the level of confirmation has reached the desired threshold, he is giving up on the possibility—useless to him or to the side that hired him—that the level of confirmation will go up even higher.

I think it helps that in real life we don't know what the thresholds are. Real-life experts don't know just how much evidence is needed, and so there is some incentive to try to get a higher level of confirmation, rather than to stop once one has reached a threshold. But of course in the above I stipulated there was a clear and known threshold.

Van Fraassen's reflection principle (RP) says that if a rational agent is certain she will assign credence r to p, then she should now assign r to p.

As I was writing on being sure yesterday, I was struck by the fact (and I wasn't the first person to be struck by it) that for Bayesian agents, the RP is a special case of the fact that the sequence of continually updated credences forms a martingale whose filtration is defined by the information one is updating on the basis of.

Indeed, martingale considerations give us the following generalization of RP:

(ERP) For any future time t, assuming I am certain that I will remain rational, my current credence in p should equal the expected value of my credence at t.

(Van Fraassen himself formulates ERP in addition to RP.) In RP, it is assumed that I am certain that my credence at t will be r, and of course then the expected value of that future credence is r. But ERP generalizes this to cases where I don't know exactly what my future credence will be.

But we can get an even further generalization of RP. I understand that ERP and RP apply when there is a specific future time at which one knows what one's credence will be. But suppose instead we have some method of determining a variable future time T. The one restriction on that determination is that it can only depend on the data available to us up to and including that time. For instance, we might not know exactly when we will perform some set of experiments in the next couple of years, and we might let T be a time at which those experiments have been performed. The generalization of ERP then is:

(GERP) For any variable future time T in a future human life bounded by the normal bounds on human life and such that whether T has been reached is guaranteed to be dependent only on data gathered up to time T, my current credence in p should equal the expected value of my credence at T.

This follows from Doob's optional sampling theorem (given that human life has a normal upper bound of about 200 years) and the martingale property of Bayesian epistemic lives.

Now GERP seems like a quite innocent generalization of ERP when we are merely thinking about the fact that we don't know when we will do an experiment. But now imagine a slightly crooked scientist out to prove a pet theory. She gets a research grant that suffices for a sequence of a dozen experiments. She is not so crooked that she will fake experimental data or believe contrary to the evidence, but she resolves that she will stop experimenting as soon as she has enough experiments to confirm her theory—or at the end of the dozen experiments, if worst comes to worst. This is intuitively a failure of scientific integrity—she seems to be biasing her research plan to favor the pet theory. One might think that the slightly crooked scientist would be irrational to set her current credence according to her expected value of her credence at her chosen stopping point. But according to GERP, that's exactly what she should do. Indeed, according to GERP, the expected value of her credence at the end of a series of experiments does not depend on how she chooses when to stop the experiments. Nonetheless, she is being crooked, as I hope to explain in a future post.
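Here is a simulation sketch of the slightly crooked scientist; the prior, signal accuracy, confirmation threshold, and number of experiments are all made-up illustrative values. The expected final credence is the same under either stopping rule, but stopping early makes reaching the threshold more likely.

```python
import random

random.seed(1)
prior, acc, threshold, max_exp = 0.5, 0.65, 0.9, 12
trials = 20000

def run(stop_early):
    """One research program: up to max_exp noisy experiments on hypothesis H."""
    h = random.random() < prior          # whether H is in fact true
    cred = prior
    for _ in range(max_exp):
        if stop_early and cred >= threshold:
            break                        # the crooked stopping rule
        sig = (random.random() < acc) == h
        lt = acc if sig else 1 - acc
        lf = 1 - acc if sig else acc
        cred = cred * lt / (cred * lt + (1 - cred) * lf)
    return cred

crooked = [run(True) for _ in range(trials)]
honest = [run(False) for _ in range(trials)]

# GERP: either stopping rule has the same expected final credence (the prior)...
print(sum(crooked) / trials, sum(honest) / trials)   # both ~0.5
# ...but stopping early raises the chance of ending at or above the threshold.
print(sum(c >= threshold for c in crooked) / trials)
print(sum(c >= threshold for c in honest) / trials)
```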

Thursday, January 15, 2015

Aristotle (in the Rhetoric) understands fear as an attitude towards a possible but uncertain future bad. I wonder if fear of the unknown fits into this schema. At first sight, I think it's easy to fit it in: if something is unknown, it's unknown whether it's going to be good or bad, so there is a possible but uncertain future bad when we meet up with the unknown. But it seems to me that this doesn't quite capture the phenomenology of fear of the unknown. It conflates fear of the unknown with the fear that an unknown bad will happen, and those seem to me to be separate things. And it seems to me that one can have fear of the unknown even where there is no bad at all being feared, say as one contemplates the Cantor hierarchy or the night sky.

Maybe, though, in fear of the unknown the potential bad isn't so much the potential that the unknown will be bad, but the potential for facing something—good or bad—that one is unprepared to face. There is a kind of bad in facing what one is unprepared for (say, a potential for inadequacy).

If this is the story, then it also gives us a way to understand the Biblical idea of fear of the Lord. It's not so much that one is afraid that the Lord will smite us for our sins—though that definitely can be a component—but that one is afraid that as one meets the utterly Other, the ultimate Unknown, one will be inadequate, nothing. And so the fear of the Lord is the beginning of wisdom: it is a recognition that one is nothing, that one may be, and indeed probably or certainly is, utterly inadequate (here the "fear" verges into what, on at least one translation, Aristotle calls "dread": an expectation of a certain bad). But this fear is not the completion of wisdom, for that involves the Unknown reaching out to us, entering into a relationship of love with us, even living among us.

Note, though, that the kind of inadequacy here is not just sinfulness. It is the essential inadequacy of all creatures. Thus, strictly speaking, it's a lack but not a bad. A privation of something due is a bad, but a mere lack is not, and our being creatures is a mere lack of divine infinity, not a bad. If that's right, then a central component of this fear of the Lord is something that isn't quite fear (or even dread) in the Aristotelian sense—it isn't an expectation of a bad, but of something similar to a bad, an innate shortcoming of sorts. We do not quite have a name for this.

Wednesday, January 14, 2015

I think sometimes people think of the doctrine of divine simplicity as an odd artifact of a particular metaphysical view—say, Aquinas'. But that's the wrong way to think about it. Rather, as Maimonides observed, divine simplicity is an expression of uncompromising monotheism.

For if God had parts, these parts would be in important ways divine. The first and most obvious reason, which I've discussed in at least one earlier post, is that at least some of God's parts would be uncreated. But only God is uncreated. Granted, the Platonist restricts this to the claim that only God is an uncreated concrete entity. I think this restriction does compromise on monotheism, but even this restriction won't help here, since presumably God's parts, if he has any, are as concrete as God.

Second, a central theme in monotheism is that God not only is greater than everything else—some polytheists may think this to be true of their chief god—but that God exceeds everything else by, as one might say, "infinitely many orders of magnitude." But can a being that is composed of parts exceed the collection of his parts by infinitely many orders of magnitude? The whole can be greater than the parts taken together. But can it be so much greater than the parts that God is God while the parts taken together do not threaten monotheism? If one responds that the sum of God's parts just is God (as on classical mereology), and so God doesn't have to exceed the sum, then I have a different argument. Consider any one part x1 of God, and consider the collection X* of God's other parts. Then if God is the sum of his parts, he cannot exceed both x1 and X* by infinitely many orders of magnitude, since the sum of two things does not exceed both of them by infinitely many orders of magnitude (compare the arithmetical fact that a+b is no greater than twice the greater of a and b). And so at least one of x1 or X* threatens uncompromising monotheism.
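The arithmetical fact can be made explicit. For nonnegative magnitudes $a$ and $b$,

$$a + b \;\le\; \max(a,b) + \max(a,b) \;=\; 2\max(a,b),$$

so the sum exceeds the larger of the two by at most a factor of two, and a fortiori does not exceed both of them by infinitely many orders of magnitude.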

Third, a being that is made of parts has some powers because of the parts. So if God were made of parts, he would have some powers because of something other than himself. But that certainly threatens monotheism.

Fourth, if God were not simple, then sometimes when we worship God, we would be worshiping him on account of some component of God. For instance, we would be worshiping God on account of his mercy, or on account of his justice, or on account of his beauty.

Now, we learned in Plato's Lysis that if we love a for the sake of b, then in an important sense what we really love is b. I propose a weaker analogue to this principle:

1. If we worship x on account of y, then we are thereby worshiping y.

(I am not saying that we don't really worship x). Thus:

2. If God is not simple, our worship of God on account of his mercy (say) is worship of a component of God that is not God.

But to worship something other than God is idolatrous on uncompromising monotheism. Thus:

3. Worship of anything other than God is wrong if uncompromising monotheism is true.

However:

4. It is not wrong to worship God on account of his mercy.

Putting (1)-(4) together, we conclude that God is simple.
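One way to regiment the argument, with symbols of my own choosing (a sketch, not anything in the original): write $G$ for God, $M$ for the mercy-component God has if he is not simple, $W(x)$ for "we worship $x$," $W_a(x,y)$ for "we worship $x$ on account of $y$," and take it as given that $W_a(G,M)$.

$$\begin{aligned}
&(1)\quad W_a(x,y) \rightarrow W(y)\\
&(2)\quad \neg\mathrm{Simple}(G) \rightarrow \bigl(W(M) \wedge M \neq G\bigr)\\
&(3)\quad \forall x\,\bigl(W(x) \wedge x \neq G \rightarrow \mathrm{Wrong}\bigr)\\
&(4)\quad \neg\mathrm{Wrong}
\end{aligned}$$

Here (2) follows from (1) given $W_a(G,M)$; (2) and (3) together yield $\neg\mathrm{Simple}(G) \rightarrow \mathrm{Wrong}$; and (4) then gives $\mathrm{Simple}(G)$ by modus tollens.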

I think the last argument is the religiously deepest reason why uncompromising monotheism requires divine simplicity. Divine simplicity ensures that our worship of God has only God as the object of worship.

Friday, January 9, 2015

Let's try another exercise in philosophical imagination. Suppose Platonism and dualism are true. Then consider a theory on which our souls actually inhabit a purely mathematical universe. All the things we ever observe—dust, brains, bodies, stars and the like—are just mathematical entities. As our souls go through life, they become "attached" to different bits and pieces of the mathematical universe. This may happen according to a deterministic schedule, but it could also happen in an indeterministic way: today you're attached to part of a mathematical object A1, and tomorrow you might be attached to B2 or C2, instead. You might even have free will. One model for this is the traveling minds story, but with mathematical reality in the place of physical reality.

This is a realist idealism. The physical reality around us on this story is really real. It's just not intrinsically different from other bits of Platonic mathematical reality. The only difference between our universe and some imaginary 17-dimensional toroidal universe is that the mathematical entities constituting our universe are connected with souls, while those constituting that one are not.

One might wonder if this is really a form of idealism. After all, it really does posit physical reality. But physical reality ends up being nothing but Platonic reality.

Given Platonism and dualism, this story is an attractive consequence of Ockham's Razor. Why have two kinds of things—the physical universe and the mathematical entities that represent the physical universe? Why not suppose they are the same thing? And, look, how neatly we solve the problem of how we have mathematical knowledge—we are acquainted with mathematical objects much as we are with tables and chairs.

"But can't we just see that chairs and tables aren't made of mathematical entities?" you may ask. This, I think, confuses not seeing that chairs and tables are made of mathematical entities with seeing that they are not made of them. Likewise, we do not see that chairs and tables are made of fundamental particles, but neither do we see that they are not made of them. The fundamental structure of much of physical reality is hidden from our senses.

So what do we learn from this exercise? The view is, surely, absurd. Yet given Platonism and dualism, Ockham's razor strongly pulls to it. Does this give us reason to reject Platonism or dualism? Quite possibly.

Wednesday, January 7, 2015

As the length increases, the possibilities for good novels initially increase. It may not be possible to write a superb novel significantly shorter than One Day in the Life of Ivan Denisovich. But eventually the possibilities for good novels start to decrease, because the length itself becomes an aesthetic liability. While one could easily have a series of novels that total ten million words, a single novel of ten million words just wouldn't be such a good novel. Indeed, it seems plausible that there is no possible novel of ten million words (in a language like human languages) that's better than War and Peace or One Day or The Lord of the Rings.

If this is right, then there are possible English-language novels with the property that they could not be improved on. For there are only finitely many possible English-language novels of length below ten million, and any novel above that length will be outranked qua novel by some novel of modest length, say War and Peace or One Day.[note 1]
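The finiteness claim rests on nothing more than the finiteness of the alphabet: the number of texts of length less than $N$ over a character set $A$ is

$$\sum_{k=0}^{N-1} |A|^k \;=\; \frac{|A|^N - 1}{|A| - 1},$$

which is finite (though astronomically large for $N = 10^7$), and only finitely many of those texts are novels at all.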

So, there are possible unimprovable English-language novels. Are there possible unimprovable worlds? Or is it the case that we can always improve any possible world, say by adding one more happy angelic mathematician? In the case of novels, we were stipulating a particular kind of artistic production: a novel. Within that artistic production, past a certain point length becomes a defect. But is an analogue true with worlds?

One aspect of the question is this: Is it the case that past a certain point the number of entities, say, becomes a defect? Maybe. Let's think a bit about why super-long novels aren't likely to be that great. They either contain lots of different kinds of material or they are repetitive. In the latter case, they're not that great artistically. But if they contain lots of different kinds of material, then they lose the artistic unity that's important to a novel.

Could the same thing be true of worlds? Just adding more and more happy angels past a certain point will make a world repetitive, and hence not better. (Maybe not worse either.) But adding whole new kinds of beings might damage the artistic unity of the world.

Monday, January 5, 2015

There has been recent interest in subtraction arguments for the thesis that possibly there is nothing concrete. These arguments tend to be based on the thesis that there cannot be a concrete object such that subtracting it necessitates adding something to the world. Here is a much weaker subtraction principle:

1. There is no concrete contingent object o such that there could be a concrete object o* with the property that necessarily o* exists if and only if o does not exist.

This is weaker in several ways than the standard subtraction principle. It only extends to actual objects o. It is restricted to contingent objects. And it rules out only the possibility that there is a single possible concrete o* that, necessarily, exists if and only if o does not.
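Assuming possibilist quantifiers (so that the variable x can range over merely possible objects), with $C$ for concreteness and $E$ for existence, principle (1) might be regimented as follows—this rendering, and its scope choices, are my own and debatable:

$$\neg\exists o\,\exists x\;\bigl[C(o) \wedge \Diamond\neg E(o) \wedge \Diamond C(x) \wedge \Box\bigl(E(x) \leftrightarrow \neg E(o)\bigr)\bigr],$$

where $o$ ranges over actually existing objects and $x$ over possibilia.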

Now, suppose that divine believings are objects distinct from God. Believings seem to be concrete objects. Let o be God's believing that there are horses and o* be God's believing that there are no horses. Without divine simplicity, o and o* will be distinct from God, and presumably necessarily o* will exist if and only if o does not (since God is necessarily existent and essentially omniscient). But that would violate (1).

So it seems that we shouldn't suppose divine believings to be objects distinct from God. Thus, either divine believings aren't objects, or they are identical with God. In either case, we have divine simplicity with respect to divine believings.

Saturday, January 3, 2015

At least as traditionally philosophically understood, the Catholic understanding of transubstantiation insists on the persistence of at least some of the accidents of bread and wine after the bread and wine have ceased to exist. But how can accidents exist without their substance?

Well, imagine a very long rattlesnake—say, a billion kilometers long—all stretched out in space. Suppose that the snake rattles its rattle at noon for a second, and one second after the end of the rattling a prearranged array of blasters simultaneously annihilates the whole snake.

Let R be the accident of the snake's rattling. A simple relativistic calculation shows that there is an inertial reference frame in which the rattling occurs after the vast majority of the snake—including all of the snake's vital organs (which I assume are placed much as in a normal snake)—has been annihilated. But an animal is dead, and hence non-existent (barring afterlife for animals; let's stipulate there is none), after all its vital organs have been annihilated. Thus, there is a reference frame in which the accident R exists after the substance S of the snake has been annihilated.
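The "simple relativistic calculation" can be made explicit. Put the rattle at x = 0 in the snake's rest frame, rattling from t = 0 to t = 1 s, with the vital organs a billion kilometers away at x = 10^12 m, and the whole snake annihilated simultaneously at t = 2 s. A sketch (the particular boost of one percent of light speed is my illustrative choice):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def t_prime(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at speed v
    along the snake's length (standard Lorentz transformation)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C ** 2)

v = 0.01 * C  # an illustrative boost

rattle_start = t_prime(0.0, 0.0, v)        # rattle begins at the origin
rattle_end = t_prime(1.0, 0.0, v)          # rattle ends a second later
organs_annihilated = t_prime(2.0, 1.0e12, v)  # distant organs blasted

print(rattle_start, rattle_end, organs_annihilated)
# In the boosted frame the vital organs are annihilated roughly half a
# minute before the rattling even begins.
```

The term v·x/c² in the transformation is what does the work: for large enough x it swamps the two-second head start the annihilation has in the rest frame, so the temporal order of the spacelike-separated events is frame-relative.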

So special relativity gives us good reason to think that accidents can survive the destruction of the substance, at least in some inertial reference frames. But all inertial reference frames are supposed to be on a par.

I suppose an opponent of transubstantiation could insist that while an accident can survive the destruction of a substance in some reference frames, it cannot survive the destruction of the substance in all reference frames (as it would have to in the case of the Eucharist). But that requirement sounds a little ad hoc.

So, relativity theory gives us good reason to reject one of the most famous objections to transubstantiation.

Friday, January 2, 2015

Mereological universalism says that every set A of objects has a fusion: an object wholly composed of parts in A. One motivation for mereological universalism is to make sure that ordinary language terms like "this chair", "that bump" and "those waves" have reference, while avoiding the ad hoc anthropocentrism of positing all but only all the objects that ordinary language describes. We don't want to be ad hoc, so we suppose a vast multitude of objects.

In order to make sense of commonsense objects that have exactly the same parts now—such as the statue and the lump of bronze—the above strategy is normally extended to be four-dimensional: any set of four-dimensional objects has a fusion. The lump of bronze includes temporal parts that pre-existed the statue. On the other hand, if a piece of the statue will tomorrow break off and be repaired with brass, that piece of brass is a part of the statue but not of the lump of bronze.

But we also have to make sense of commonsense objects that have exactly the same parts for all time. For instance, it could just so happen that the lump of bronze is produced simultaneously with the statue (e.g., by solidifying within a mould) and destroyed simultaneously with it (e.g., by instantaneous vaporization). To distinguish such objects, mereological universalism is generalized to allow for something like arbitrary fusions across possible worlds. Even if the lump coincides with the statue throughout its career, it doesn't coincide with it in other worlds, say ones where the statue is destroyed in a way that preserves the lump, for instance by squashing. I call this five-dimensional mereological universalism—the fifth "dimension" being that of worlds. We could formulate this by saying that there is an object corresponding to every modal profile, where a modal profile picks out a four-dimensional (assuming the world is one with time and 3D space; this may need further generalization) object in each world.
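One way to make the modal-profile talk precise (the formulation is mine): a modal profile is a partial function from worlds to four-dimensional objects,

$$P : W \rightharpoonup O, \qquad w \mapsto P(w),$$

undefined at the worlds where the object in question does not exist. Five-dimensional universalism then says that every such $P$ (satisfying whatever coherence constraints one imposes) is the profile of an object.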

The above is a fairly familiar story. But I now want to complicate it some more. If we really are out to preserve commonsense language, we not only have to get an object for every commonsense object, which we can do by saying that every modal profile has an object, but we also have to get the definiteness profile for that object right. Consider, for instance, a statue made of copper. The statue fairly quickly acquires a copper oxide patina. That patina is definitely a part of the statue. But the patina is not a part of the lump of copper, since copper oxide is not copper. However, it is vague when an atom of copper comes to be an atom of copper oxide—just how tightly bound to an oxygen atom does it need to be? So there will be a temporal part of a copper atom which will only vaguely be a part of the lump of copper, but will definitely be a part of the statue.

To avoid anthropocentric arbitrariness, if we take such vagueness seriously, then the same reasoning that led us to mereological universalism in the first place requires yet another generalization. Not only should every set of objects have a fusion, but it should have, for every "coherent (five-dimensional!) definiteness profile", a fusion whose parts' vagueness matches that profile.

This yields a new variety of bloated ontologies. Given three fundamental particles, ordinary mereological universalism gives us seven objects (x, y, z, x+y, x+z, y+z, x+y+z). Five-dimensional universalism gives us an infinity out of these (since there will be infinitely many modal profiles that yield one of the seven objects above). But the vagueness bloat will give a new multitude of objects. For simplicity, I will consider only the world-bound objects. Thus, there will be an object that definitely has x, y and z as its parts. We might call this the definite fusion of x, y and z. But there will also be an object that definitely has x and y as parts, and vaguely has z (and two other similar objects). And an object that definitely has x as a part and vaguely y and z (and two other similar objects). And there will be stranger objects yet, such as an object that definitely has x as a part, and definitely has exactly one of y and z as a part, but it's vague which of the two it is, and the object that definitely has two of the three particles as parts, but it's vague which two it is. And there will be infinite multiplication of such possibilities through higher level vagueness.
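The counting can be sketched in a few lines. I make one simplifying assumption of my own here: each particle is independently definitely a part, vaguely a part, or definitely not a part. Even this restricted model, which does not capture the stranger "definitely one of y and z, but vague which" objects just described, already inflates the seven classical fusions considerably.

```python
from itertools import combinations, product

particles = ["x", "y", "z"]

# Classical mereological universalism: one fusion per nonempty subset.
classical = [frozenset(c) for r in range(1, len(particles) + 1)
             for c in combinations(particles, r)]

# First-level vagueness bloat: each particle gets one of three statuses,
# and we drop the empty profile on which nothing is even vaguely a part.
profiles = [p for p in product(("definite", "vague", "out"), repeat=3)
            if any(status != "out" for status in p)]

print(len(classical), len(profiles))  # 7 versus 26
```

So even before the "vague which one" objects and higher-level vagueness are counted, the three-way statuses take us from 2³−1 = 7 objects to 3³−1 = 26.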

It's hard to say how much bloat is too much bloat. But I find it very plausible that this much is too much.

If one rejects this bloat, I think one has three non-exclusive options:

1. Do not insist on there being objects corresponding to so much of ordinary language.

I do want to say something about (3). One fairly standard way of doing (3) is to say that identity is vague, but parthood, diachronic identity and transworld identity are not. Thus, for any object shaped roughly like me it's definite whether some particle is a part of it. But it's vague which object shaped roughly like me I am. Allowing for an infinite multitude of objects where it's vague which of them is me goes against commonsense in a very serious way, though. The commonsense picture of me is that it's not vague who I am, but vague which exact particles (say, on the peripheries, or in food that's in the process of being digested) are a part of me.

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.