Wednesday, September 30, 2009

Necessarily, if our minds supervene on physical brains like ours and these brains are not potentially interfered with, it is possible for us to sin, because of non-deterministic processes in physical brains like ours.

Therefore, in heaven, our minds either do not supervene on physical brains like ours or else these brains are potentially interfered with.

So the Christian materialist has to say that something changes in heaven. Either we get different kinds of brains from the ones we now have (either through matter being moved about or through the laws of the functioning of that matter being changed—which I think also counts as a change of the kind of brains), or else materialism ceases to be true, or else our indestructible righteousness in heaven depends on potential interference with our brains' functioning. This isn't a knock-down argument that materialism can't be true in heaven, but it should give the materialist pause.

Tuesday, September 29, 2009

Sometimes I am struck with how "strange" the Christian faith is—it just seems a bit incredible. But this reflection, I think, helps: we have very good reason to think that the correct physics and cosmology is going to be very strange, too. (Even if, and maybe especially if, it turns out to be quite simple and elegant.) What is prior in the order of knowledge is posterior in the order of being, the Aristotelians tell us, and so we would expect the ultimate explanations of reality to be removed from ordinary experience. I always find amusing the story of how St John Chrysostom had to preach against Arian heretics who used arguments like "If God is a Trinity, then God's essence is incomprehensible; but God's essence is comprehensible; hence God is not a Trinity." St John was preaching against the second premise.

Monday, September 28, 2009

It is normal to talk of "inductive logic", as if non-deductive reasoning formed a branch of logic, with discoverable rules. But what if it is not so? What if the rules of inductive reasoning, unlike the rules of deductive logic, are merely "subjectively necessary", to use Kant's phrase? It is perhaps simply the case that our minds are hard-wired to think in certain ways inductively. This hard-wiring is truth-conducive, not for any deep logical reason (as in the case of deductive logic, where the validity of modus ponens, and the truth of excluded middle, etc. are all necessary truths), but simply because God created us with minds hard-wired to reason inductively in ways that match the arrangement of large segments of the world that he has created.

One can say some of this with natural selection in place of God, but natural selection will only yield the result that our minds' functioning matches the structure of the world in those respects that are relevant to the fitness of our evolutionary forebears—it will give us little or no reason to think that things will work out when we do cosmology or quantum mechanics.

If this is right, then we should not be surprised if one particular formalization of inductive logic—say, the Bayes-Kolmogorov probabilistic account—yields doxastic rules that some of our doxastic practices break, and are right to break. (See the previous several days' posts.) For the theistic story gives us reason to think that our inductive reasoning will get us to the truth, but does not give us much reason to think that our inductive reasoning can be formalized. If this is right, then working scientists may very well do better than ideal Bayesian epistemic agents, say, and be unable to explain their successes.

Probably, the epistemology that would go along with a view like that would have to be some sort of proper-function epistemology. But I am happy to leave that to the epistemologists—I am just a probability theorist.

Sunday, September 27, 2009

I suspect that sometimes we can just see that our priors were wrong, and we can see it in a way that outstrips the speed of Bayesian conditionalization. We can just see where the Bayesian conditionalizations are heading—and jump there.

For instance, suppose I know there is a hidden regular polygon on a page. I have a special device that I can point at a place on the page; it makes a black dot on the page if the place is within the polygon and a yellow dot otherwise. I have no information on the number of sides of the polygon, so I assign some priors, like maybe 0.0000001 for a triangle, 0.0000001 for a square, and then eventually dropping off. But suppose that in fact it's a square. I put down a lot of random points. It might well happen that I can just see what the shape is, long before my priors converge.

If one were worried that the number of points is insufficient (it would be stupid to think it's a triangle after seeing three points!), one can compare P(what one sees | n-gon) versus P(what one sees | square) to ensure one has enough points for confidence. But in all of this, one can—and perhaps should—side-step the priors.

Saturday, September 26, 2009

There is a physical constant T. You know nothing about it except that it is a positive real number. However, you can do an experiment in the lab. Each run of the experiment generates a number t which is uniformly distributed between 0 and T, and the values of t from different runs are stochastically independent.

Suppose the experiment is run once, and you find that t=0.7. How should you estimate T? More exactly, what subjective probability distribution should you assign to T? This is difficult to solve by standard Bayesian methods because obviously either your priors on T should be a uniform distribution on the positive reals, or your priors on the logarithm of T should be a uniform distribution on all the reals. (I actually think the second is more natural, but I did the calculations below only for the first case. Sorry.) The problem is that there is no uniform probability measure on the positive reals or on the reals. (Well, we can have finitely additive measures, but those measures will be non-unique, and anyway won't help with this problem.)

So perhaps the conclusion we should draw from this is that you don't learn anything about T when you find out that t=0.7 other than the deductive fact that T is at least 0.7. But this is not quite right. For suppose you keep on repeating the experiment. If you draw a point at each measured value of t, you will eventually get a dotted line between 0 on the left and some number on the right, and the right hand bound of that interval will be a pretty good estimate for the value of T, and not just a lower bound for T. But if the first piece of data only gives a lower bound, then, by similar reasoning, further pieces of data either will be irrelevant (if they give a t that's less than 0.7) or will only give better (i.e., higher) lower bounds for T, and we'll never get an estimate for T, just a lower bound.

So, the first piece of data should give something. (The reasoning here is inspired by a sentence I overheard someone—I can't remember who—say to John Norton, perhaps in the case of Doomsday.)

Now here is something a little bit fun. We might try to calculate the distribution for T after n experiments in the following way. First assume that T is uniformly distributed between 0 and L (where L is large enough that all the t measurements fit between 0 and L), then calculate a conditional distribution for T given the t measurements, and finally take the limit as L tends to plus infinity. Interestingly, this procedure fails if n=1, i.e., if we have only one measurement of a t value—the resulting limiting distribution is zero everywhere. However, if n>1, then the procedure converges to a well-defined distribution of T. Or so my very rough sketches show.
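The limiting behavior is easy to check numerically. Here is a minimal sketch (the function name and test values are mine, and it assumes the uniform-on-(0, L] prior from the first case above; the likelihood of n measurements with maximum m is proportional to T^(-n) for T at least m, and zero otherwise):

```python
import math

def posterior_density(T, ts, L):
    """Posterior density of the constant at the point T, given measurements ts
    (each uniform on (0, true value)) and a uniform prior on (0, L].
    The likelihood of the data is T**(-n) for T >= max(ts), zero otherwise."""
    m, n = max(ts), len(ts)
    if not (m <= T <= L):
        return 0.0
    # Normalizing constant: the integral of s**(-n) ds over [m, L].
    if n == 1:
        Z = math.log(L / m)                          # diverges as L -> infinity
    else:
        Z = (m ** (1 - n) - L ** (1 - n)) / (n - 1)  # converges as L -> infinity
    return T ** (-n) / Z

# One measurement: the density at any fixed T shrinks to zero as L grows.
for L in (1e2, 1e6, 1e12):
    print(posterior_density(1.0, [0.7], L))

# Two measurements: the density converges to (n-1) * m**(n-1) / T**n.
for L in (1e2, 1e6, 1e12):
    print(posterior_density(1.0, [0.7, 0.5], L))
```

With n=1 the normalizer grows like log L, so the limiting density is zero everywhere; with n=2 and m=0.7 the density at T=1 converges to 0.7, matching the rough sketches.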

So there is a radical difference here between what we get with one measurement—no distribution—and what we get with two or more measurements—a well-defined distribution. I have doubts whether standard Bayesian confirmation can make sense of this.

Friday, September 25, 2009

This puzzle is inspired by a reflection on (a) a talk [PDF] by John Norton, and (b) the problem of finding probability measures on multiverses. It is very, very similar—quite likely equivalent—to an example [PDF] discussed by John Norton. Suppose you are one of infinitely many blindfolded people. Suppose that the natural numbers are written on the hats of the people, a different number for each person, with every natural number being on some person's hat. How likely is it that the number on your hat is divisible by three?

The obvious answer is: 1/3. But Norton's discussion of neutral evidence suggests that this obvious answer is mistaken. And here is one way to motivate the idea that the answer is mistaken. Suppose I further tell you this. Each person also has a number on her scarf, a different number for each person, with every natural number being on some person's scarf. Moreover, the following is true: the number on x's scarf is divisible by three if and only if the number on x's hat is not divisible by three. (Thus, you can have 3 on your scarf and 17 on your hat, but not 16 on your scarf and 22 on your hat.) This can be done, since the cardinality of numbers divisible by three equals the cardinality of numbers not divisible by three.

If you apply the earlier hat reasoning to the scarf numbers, it seems you conclude that the likelihood that the number on your scarf is divisible by three is 1/3. But this is incompatible with the conclusion from the hat reasoning, since if the likelihood that the scarf number is divisible by three is 1/3, the likelihood that the hat number is divisible by three must be 2/3.
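The anti-correlated labeling can be made concrete. Here is a minimal sketch (the particular bijection is just one of many that would do): it swaps the k-th multiple of 3 with the k-th non-multiple of 3, and then counts divisibility among the first N hat and scarf numbers:

```python
def scarf_number(hat):
    """One concrete bijection on the naturals such that the scarf number is
    divisible by 3 exactly when the hat number is not: it swaps the k-th
    multiple of 3 with the k-th non-multiple of 3 (0-indexed)."""
    if hat % 3 == 0:
        k = hat // 3              # hat is the k-th multiple of 3
        return k + k // 2 + 1     # the k-th non-multiple of 3
    k = hat - hat // 3 - 1        # hat is the k-th non-multiple of 3
    return 3 * k                  # the k-th multiple of 3

N = 300_000
hat_frac = sum(h % 3 == 0 for h in range(N)) / N
scarf_frac = sum(scarf_number(h) % 3 == 0 for h in range(N)) / N
print(hat_frac)    # about 1/3
print(scarf_frac)  # about 2/3
```

The same uniform-seeming assignment of hat numbers thus yields a 1/3 frequency of divisibility on the hats and a 2/3 frequency on the scarves, which is the tension in the text.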

If there are numbers on hats and scarves as above, symmetry, it seems, dictates that the probability of your hat number being divisible by three is the same as the probability of your scarf number being divisible by three, and hence is equal to 1/2. But this conclusion seems wrong. For the numbers on scarves, even if anti-correlated with those on the hats, should not affect the probability of the hat number being divisible by three. Nor should it matter in what order the hat and scarf numbers were written—hats first, and then scarves done so as to ensure the right anti-correlation between divisibilities, or scarves first, and then hats. But if the hat numbers are written first, then surely the probability of divisibility by three is 1/3, and this should not change from the mere fact that scarf numbers are then written.

One of several conclusions might be drawn:

1. Actual infinities are impossible.

2. Uniform priors on infinite discrete sets make no sense.

3. Probabilities on infinite sets are very subtle, and do not follow the standard probability calculus, but there is a very intricate account of dependence such that whether the hat numbers are assigned first or the scarf numbers are assigned first actually affects the probabilities. I don't know if this can be done—but when I think about it, it seems to me that it might be possible. I seem to be seeing glimpses of this, though the fact that as of writing this (a couple of hours after my return from Oxford) I've been up for 21 hours may be affecting the reliability of my intuitions.

Thursday, September 24, 2009

I am back from the Philosophy of Cosmology conference at Oxford. There are probably going to be several posts inspired in various ways by the conference.

For now, here is a cheap remark but one that I think worth making: while being verified obviously increases the likelihood of a theory (if "verified" means "conclusively verified", then obviously it increases it to close to one), being verifiable does nothing by itself to contribute to the theory's likelihood of being true.

Saturday, September 19, 2009

One of my basic driving intuitions is that the things we should most care about are the ones that are most fundamental ontologically. This intuition drives me away from micro-reductionism (facts about people, animals and the like reducing to properties of microscopic stuff) and towards macro-reductionism (facts about particles, at least sometimes, reduce to facts about the macroscopic things they are "part" of).

Thursday, September 17, 2009

Modal accounts—and I take counterfactual ones to be a special case—typically do not get at the heart of what is going on. Consider for example the account of free will in terms of the Principle of Alternate Possibility (PAP), or the account of causation in terms of counterfactuals. Both fail, and in both cases there either are counterexamples or there are cases that are so close to being counterexamples that they significantly lower our confidence in the claim that there are no counterexamples. Yet PAP and the counterfactual account of causation do get something right. I think what is going on in both cases, and maybe in cases of other modal accounts, is that the account confuses explanans with explanandum (or, more fluently, cause with effect). It is because I am free that I typically have alternate possibilities, and it is because A caused B that were B not to have occurred, A would not occur.

Typically, explanatory and causal relations can be blocked—the explanans can be had without the explanandum. So one can have A causing B without the counterfactual, and one can have freedom without alternate possibilities. But these are not going to be standard cases. Now if causal determinism of the standard variety were generally true, then surely we would not be the possessors of a faculty innately capable, in the right external circumstances, of producing events with alternate possibilities. And so we would not be free. So an argument from PAP to indeterminism can still be made, despite counterexamples to PAP.

Modal or counterfactual stories like PAP may show that a view—say, compatibilism—is false, but they typically fail to get at the essence of why the view is wrong. (When an argument against a view is given that fails to get at the essence of what is wrong with the view, it can trigger a large literature of attempts to tweak the view, nitpick about problems with the argument, etc.) Here's another example: the knowability paradox argument against anti-realism. From the claim that everything can be known by beings like us, we can prove the absurdity that everything is known by beings like us (just apply the claim that everything can be known by beings like us to the proposition that p is an unknown truth). This is a perfectly good argument, but it fails to get at the essence of what is wrong with anti-realism, and that is a part of why instead of being simply taken as a perfectly good argument as it should be, it is taken to be a paradox.

Wednesday, September 16, 2009

1. If presentism is true, then necessarily: if t is present, then x exists at t if and only if x exists simpliciter, x exists definitely at t if and only if x exists simpliciter definitely, and x exists vaguely at t if and only if x exists simpliciter vaguely.

2. There can be vagueness about existence at a time.

3. There cannot be vagueness about existence simpliciter.

4. Any time can be present.

By 2, suppose that t is such that at t there is vagueness about existence at t. By 4, t might be present. By 1, it follows that if presentism is true, there can be vagueness about existence simpliciter. By 3 and modus tollens, presentism is false.

I think 3 is very plausible. The idea that, say, I might vaguely exist seems absurd. However, 2 is also very plausible. Think of the vagueness in the moment of a brute animal's death. (Human death is not the cessation of existence, so human death is beside the point.)

The weakness in the argument is that one might take 2 to be strong evidence for the denial of 3. Suppose there is vagueness in the moment of conception, so that at the actual world w0, it is vague whether Bucephalus exists yet at t0. Then take a world w1 which is a duplicate of everything in w0 up to and including t0, but with everything getting annihilated after t0. Plausibly, at w1 it will be vague whether Bucephalus exists. Here, I would bite the bullet if I accepted 2. I would say that it's vague whether a world w1 that contains Bucephalus is a duplicate of w0 up to and including t0, or whether the duplicate is a world w2 that doesn't contain Bucephalus. This is a hard bullet to bite, but it is better than allowing for vague existence.

Tuesday, September 15, 2009

The argument from beauty, it seems to me, can come in four varieties, each asking a different "why" question, and each claiming that the best answer entails the existence of a being like God.

1. Why is there such a property as beauty?

This argument is the aesthetic parallel to the standard argument from morality. For it to work, a distinctively theistic answer to (1) must be offered. Parallel to a divine command metaethics, one could offer a divine appreciation meta-aesthetics. I think this gets the direction of explanation wrong—God appreciates beautiful things because they are beautiful. Moreover, if what God appreciates does not modally supervene on how non-divine things are, then divine simplicity will be violated. A better answer is that beautiful things are all things that reflect God in some particular respect, a respect that perhaps cannot be specified better than as that respect in which beautiful things reflect him (I think this is not a vicious circularity).

2. Why are there so many beautiful things?

The laws of physics, biology, etc. do not mention beauty. As far as these laws are concerned, beauty, if there is such a thing, is epiphenomenal. So, it does not seem that a scientific explanation of the existence of beautiful things can be given. But, perhaps, a philosophical account could be given of how, of metaphysical necessity, such-and-such physical states are always beautiful, and maybe then we can explain these entailing states physically. Or maybe one can show philosophically that, necessarily, most random configurations of matter include significant amounts of beauty, and then a statistical explanation can be given. But all that is pie in the sky, while a theistic explanation is right at hand.

3. How do we know that there is beauty?

This is parallel to my favorite argument from morality—the argument from moral epistemology. As far as naturalistic theories go, beauty (like morality) is causally inefficacious. As such, it is difficult to see how we could have knowledge-conferring faculties that are responsive to beauty. The best story is probably going to be something like this. There is some complex of physical properties which correlates with being beautiful, and for some evolutionary reason, we have a faculty responsive to that complex of physical properties, and hence to beauty. However, the "and hence to beauty" is to be questioned. Evolutionary teleology is tied to fitness. The connection to beauty is fitness-irrelevant, because beauty is naturalistically causally inefficacious. It is at most that complex of physical properties that we are responsive to. But then it is not beauty as such that we are responsive to, that we perceive. Maybe, though, what we perceive is the most natural (in Lewis's sense) property among those that we could reasonably be said to have states covarying with. But the physical correlates are, presumably, also quite natural since having states covarying with them is of evolutionary benefit. Moreover, I deny that the evolutionary teleology should snap to the most natural states, if the most natural ones are evolutionarily irrelevant. All in all, I do not think the prospects for a naturalistic account of our knowledge of beauty are good. But a theistic account is easy.

4. Why do we have aesthetic sensations?

This is an interesting question, but it strikes me as yielding an argument that is distinctly weaker than (3), unless it is just a different way of formulating an aspect of (3) (namely, the aspect of asking how our aesthetic beliefs get their intentionality). Still, the question is puzzling. We see such a very wide variety of things as beautiful: some people, most sunsets, many clouds, some plants, most jellyfish, most tigers, most galaxies, some proofs, some musical compositions, some ideas, etc. It is odd that there would be an evolutionary benefit from being responsive to these things. The more likely naturalistic story is that this is some sort of a spandrel, maybe a spandrel of our recognition of good mate choices. Note that this story undercuts the attempt to evolutionarily ground our knowledge of beauty—it makes for us having aesthetic sensations but not aesthetic knowledge. That's a problem. But I am also not sure that the wide variety of things we sense as beautiful has enough in common for there to be a plausible story. However, that only yields a God of the gaps argument (not that there is anything intrinsically wrong with that).

Monday, September 14, 2009

We see that things which lack intelligence, such as natural bodies, act for an end, and this is evident from their acting always, or nearly always, in the same way, so as to obtain the best result. Hence it is plain that not fortuitously, but designedly, do they achieve their end. Now whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence; as the arrow is shot to its mark by the archer. Therefore some intelligent being exists by whom all natural things are directed to their end; and this being we call God.

A standard question about design arguments is whether they aren't undercut by the availability of evolutionary explanations. Paley's argument is often thought to be. But Aquinas' argument resists this. The reason is that Aquinas' argument sets itself the task of explaining a phenomenon which evolutionary theory does not attempt to, and indeed which modern science cannot attempt to, explain. In this way, Aquinas' argument differs from Intelligent Design arguments which offer as their explananda features of nature (such as bacterial flagella) which are in principle within the purview of science.

Aquinas' explanandum is: that non-intelligent beings uniformly act so as to achieve the best result. There are three parts to this explanandum: (a) uniformity (whether of the exceptionless or for-the-most-part variety), (b) purpose ("so as to achieve"), and (c) value ("the best result"). All of these go beyond the competency of science.

The question of why nature is uniform—why things obey regular laws—is clearly one beyond science. (Science posits laws which imply regularity. However, to answer the question of why there is regularity at all, one would need to explain the nature of the laws, a task for philosophy of science, not for science.)

Post-Aristotelian science does not consider purpose and value. In particular, it cannot explain either purpose or value. Evolutionary theory can explain how our ancestors developed eyes, and can explain this in terms of the contribution to fitness from the availability of visual information inputs. But in so doing, it does not explain why eyes are for seeing—that question of purpose goes beyond the science, though biologists in practice incautiously do talk of evolutionary "purposes". But these "purposes" are not purposes, as the failure of evolutionary reductions of teleological concepts shows (and anyway the reductions themselves are not science, but philosophy of science). And even more clearly, evolutionary science may explain why we have detailed visual information inputs, but it does not explain why we have valuable visual information inputs.

Saturday, September 12, 2009

The question of the meaning of life obviously differs from the also interesting question of the meaning of "life". The latter asks for the meaning of the word "life", while the former asks for the meaning of the thing which is signified by that word. Suppose we take seriously, however, the idea that in asking for the meaning of "life" and in asking for the meaning of life, we are using "meaning" univocally. Then the question presupposes that life, just like "life", is a communicative unit. For it is only communicative units that have meaning.

But if life is a communicative unit, then who is communicating to whom? Is it the living person who is communicating, with her life? If so, to whom? Herself? But then living is like talking with oneself, which does not seem right, though I can see that it could be defended. So, maybe, with another. But which other? Presumably either God or fellow human beings (or both). No other options seem available. If only fellow human beings, then if someone is on a desert island and does not expect to meet anybody, her living is just like her talking to the wall—pointless. And that's not right. So, if it is the living person herself who is communicating with her life, she is communicating with God.

Suppose, then, that it is someone else who communicates by means of our lives. There are two options. One is God. The other is society. In the latter case, the life of the person on the desert island, largely formed by desert island experiences, is of questionable meaning.

So if life is a communicative unit, the communication is either by God or with God (or both, a gnostic might add). If so, then the meaning of life does depend on the actual or at least presupposed existence of God.

Friday, September 11, 2009

Sentence tokens come in many types, such as stupid sentence tokens, true sentence tokens, sentence tokens written in green ink, tokens of "Snow is white", tokens of "Snow is white" written in a serif font and in a 4pt typeface or smaller, etc. Most of these types of sentence tokens do not qualify as "sentence types". In fact, of the types listed above, the only sentence type is tokens of "Snow is white". Types of sentence tokens are abstractions from sentence tokens. But there are different kinds and levels of abstraction, and so not all types of sentence tokens count as "sentence types".

I will argue that the notion of a sentence type is to a large extent merely pragmatic. We consider the following to each be a token of the same sentence type:

Snow is white.

**Snow is white.**

*Snow is white.*

The difference between roman, bold and italic, as well as differences in size, are differences that do not make a difference between sentence types. Similarly, "Snow is white" as said by me with my weird Polish-Canadian accent and as said by the Queen are tokens of the same sentence type. On the other hand,

Snow, it is white.

is a token of a different sentence type.

Say that a difference between the appearances (visual or auditory) of tokens that does not make for a difference in sentence type is a "merely notational difference". So, the way logicians think of language is roughly something like this. First, we abstract away merely notational differences. The result of this abstraction is sentence types. Then we can do logic with sentence types, and doing logic with sentence types helps us to do other abstractions. Thus, Lewis abstracts from differences that do not affect which worlds verify the sentence, and the result is his unstructured propositions (which he, in turn, identifies with sets of worlds). Or we might abstract from differences that do not affect meaning, and get propositions. (This simplifies by assuming there are no indexicals.)

But one could do things differently. For instance, we could say that differences in typeface are not merely notational differences, but in fact make for a different sentence type. Our logic would then need to be modified. In addition to rules like conjunction-introduction and universal-elimination, we would need rules like italic-introduction and bold-elimination. Moreover, these rules do not contribute in an "interesting way" to the mathematical structures involved. (John Bell once read to me a referee's report on a paper of his. As I remember it, it was something like this: "The results are correct and interesting. Publish." There are two criteria for good work in mathematics: it must, of course, be correct but it must also be interesting.) Moreover, there will be a lot of these rules, and they're going to be fairly complicated, because we'll need a specification of what is a difference between two symbol types (say, "b" and "d") and what is a difference between the same symbol type in different fonts. Depending on how finely we individuate typefaces (two printouts of the same document on the same printer never look exactly alike), this task may involve specifying a text recognition algorithm. This is tough stuff. So there is good pragmatic reason to sweep all this stuff under the logician's carpet as merely notational differences.

Or one could go in a different direction. One could, for instance, identify the differences between sentences (or, more generally, wffs) that are tautologically equivalent as merely notational differences. Then, "P or Q" and "Q or P or Q" will be the same sentence type. Why not do that? One might respond: "Well, it's possible to believe that P or Q without believing that Q or P or Q. So we better not think of the differences as merely notational." However, imagine Pierre. He has heard me say that London is pretty and the Queen saying that London is ugly. But he has failed to recognize behind the difference in accents that my token of "London" and the Queen's token of it both name the same city. If we were to express Pierre's beliefs, it would be natural to say "Pierre believes that [switch to Pruss's accent] London [switch back] is pretty and that [switch to Her Majesty's accent] London [switch back] is ugly." So the belief argument against identifying "P or Q" with "Q or P or Q" pushes one in the direction of the previous road—that of differentiating very finely.
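The proposal to treat tautological equivalence as merely notational can be made vivid by computing Lewis-style unstructured propositions directly. A minimal sketch (worlds modeled as truth assignments to finitely many atoms; the function names are mine):

```python
from itertools import product

def worlds(atoms):
    """All truth assignments to the given atomic sentences."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))]

def proposition(sentence, atoms):
    """Lewis-style unstructured proposition: the set of worlds verifying the
    sentence, with each world encoded as the frozenset of its true atoms."""
    return frozenset(
        frozenset(a for a in atoms if w[a])
        for w in worlds(atoms)
        if eval(sentence, {}, w)   # Python's own 'or'/'and' serve as connectives
    )

atoms = ["P", "Q"]
print(proposition("P or Q", atoms) == proposition("Q or P or Q", atoms))
print(proposition("P or Q", atoms) == proposition("P and Q", atoms))
```

On this identification, "P or Q" and "Q or P or Q" come out as the very same proposition (the first comparison is true, the second false), which is just what treating tautological equivalence as a merely notational difference demands.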

On this approach, propositional logic becomes really easy. You just need conjunction-introduction and disjunction-introduction.

Or one could do the following: Consider tokens of "now" to be of different word types (the comments on the arbitrariness of sentence types apply to word types) when they are uttered at different times. Then, tokens of "now" are no longer indexicals. Doing it this way, we remove all indexicality from our language. Which is nice!

Or one can consider "minor variations". For instance, logic textbooks often do not give parenthesis introduction and elimination rules, as well as rules on handling spaces in sentences. As a result, a good deal of the handling of parentheses and spaces is left for merely notational equivalence to take care of. It's easy to vary how one handles a language in these ways.

There does not seem to be any objective answer, for any language, to the question of where exactly merely notational differences leave off. There seem to be some non-pragmatic lines one can draw. We do not want sentence types to be so broad that two tokens of the same non-paradoxical and non-indexical type can have different truth values. Nor, perhaps, do we want to identify sentence tokens as being of the same type just because they are broadly logically equivalent when the equivalence cannot be proved algorithmically. (Problem: Can the equivalences between tokens in different fonts and accents be proved algorithmically? Can one even in principle have a perfect text scanning and speech recognition algorithm?) But even if we put in these constraints, this still leaves a lot of flexibility. We could identify all tautologously equivalent sentences as of the same type. We could even identify all first order equivalent sentences as of the same type.

Here is a different way of seeing the issue, developed from an idea emailed to me by Heath White. A standard way of making a computer language compiler is to split the task up into two stages. The first stage is a "lexer" or "lexical analyzer" (often generated automatically by a tool like flex from a set of rules). This takes the input, and breaks it up into "tokens" (not in the sense in which I use the word)—minimal significant units, such as variable names, reserved keywords, numeric literals, etc. The lexical analyzer is not in general one-to-one. Thus, "f( x^12 + y)" will get mapped to the same sequence of tokens as "f(x^12+y )"—differences of spacing don't matter. The sequence of tokens may be something one can represent as FUNCTIONNAME("f") OPENPAREN VARIABLENAME("x") CARET NUMERICLITERAL(12) PLUS VARIABLENAME("y") CLOSEPAREN. After the lexical analyzer is done, the data is handed over to the parser (often generated automatically by a tool like yacc or bison from a grammar file).
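A toy version of such a lexer is easy to sketch (the token rules here are hypothetical and far cruder than anything flex generates); on it, the spacing difference in the example above vanishes at the token level:

```python
import re

# A toy lexer in the spirit of flex: whitespace differences are
# "merely notational" and disappear in the token sequence.
TOKEN_RE = re.compile(r"""
      (?P<NAME>[A-Za-z_]\w*)   # variable and function names
    | (?P<NUMBER>\d+)          # numeric literals
    | (?P<OP>[()^+])           # the few operators the example needs
    | (?P<WS>\s+)              # whitespace, to be discarded
""", re.VERBOSE)

def lex(source):
    tokens = []
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "WS":          # drop whitespace tokens
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(lex("f( x^12 + y)") == lex("f(x^12+y )"))
```

The comparison prints True: both spellings map to the same sequence NAME OPEN NAME CARET NUMBER PLUS NAME CLOSE, so the parser never sees the difference.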

Now, in practice, the hand-off between the lexer and the parser is somewhat arbitrary. If one really wanted to and was masochistic enough, one could write the whole compiler in the lexer. Or one could write a trivial lexer, one that spits out each character (or even each bit!) as a separate token, and then the parser would work really hard.
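The lexing stage described above can be sketched in a few lines. This is an illustrative hand-rolled lexer, not flex output; the token names are simplified from the post's, and function and variable names are collapsed into a single NAME token, leaving that distinction to the parser, which itself illustrates how arbitrary the hand-off is:

```python
# A toy lexer: spacing differences disappear at the lexing stage,
# so differently spaced inputs lex to the same token sequence.
import re

TOKEN_SPEC = [
    ("NUMERICLITERAL", r"\d+"),
    ("NAME",           r"[A-Za-z_]\w*"),  # function or variable names
    ("OPENPAREN",      r"\("),
    ("CLOSEPAREN",     r"\)"),
    ("CARET",          r"\^"),
    ("PLUS",           r"\+"),
    ("SKIP",           r"\s+"),           # whitespace yields no token
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def lex(text):
    """Break the input into (token-kind, text) pairs, dropping spaces."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(text)
            if m.lastgroup != "SKIP"]

print(lex("f( x^12 + y)") == lex("f(x^12+y )"))  # True
```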

Nonetheless, as Heath pointed out to me, there may be an objective answer to where notational difference leaves off. For it may be that our cognitive structure includes a well-defined lexer that takes auditory (speech), visual (writing or sign language) or tactile (Braille or sign language for the deaf-blind) observations and processes them into some kind of tokenized structure. If so, then two tokens are of the same sentence type provided that this lexer would normally process them into the same structure. And then sentence type will in principle be a speaker-relative concept, since different people's lexers might work differently. To be honest, I doubt that it works this way in me. For instance, I strongly doubt that an inscription of "Snow is white" and an utterance of "Snow is white" give rise to any single mental structure in me. Maybe if one defines the structure in broad enough functional terms, there will be a single structure. But then we have arbitrariness as to what we consider to be functionally relevant to what.

The lesson is not that all is up for grabs. Rather, the lesson is that the distinctions between tokens and types should not be taken to be unproblematic. Moreover, the lesson supports my view—which I think is conclusively proved by paradoxical cases—that truth is a function of sentence token rather than sentence type.

Thursday, September 10, 2009

The Knowability Paradox is the surprisingly easy argument that if there are unknown truths, there are unknowable truths. Use "Kp" to mean that p is known (say, to humans). Then, suppose p is an unknown truth and let q be the proposition (p and ~Kp). Then q is true, but cannot be known, since if q were known, its first conjunct, namely p, would also be known, and both of its conjuncts would be true (since knowledge entails truth), so that we would have both Kp and ~Kp, which is absurd. For reasons I don't quite understand, some people think this is a paradox rather than just a perfectly good, and highly intuitive ("if p is an unknown truth, then that p is an unknown truth is an unknowable truth"—isn't that obvious?) argument that the existence of unknown truths entails that of unknowable ones.
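The core step can even be checked formally. Here is a sketch in Lean 4 (my own formalization, assuming only that knowledge is factive and distributes over conjunction): if q is the proposition (p and ~Kp), then Kq entails a contradiction.

```lean
-- Sketch: K (p ∧ ¬K p) is contradictory, given that knowledge is
-- factive (K r → r) and distributes over conjunction.
example (K : Prop → Prop) (p : Prop)
    (factive : ∀ r, K r → r)
    (dist : ∀ r s, K (r ∧ s) → K r ∧ K s)
    (h : K (p ∧ ¬ K p)) : False :=
  -- factive gives ¬K p from the second conjunct; dist gives K p.
  (factive _ h).2 ((dist p (¬ K p) h).1)
```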

Here's something less trivial—a reductio of an interesting quintuple of premises.

1. (Premise) Human epistemic states concerning natural states of the world, including knowledge of natural states of the world, supervene on natural states of the world.

2. (Premise) Every true proposition reporting only natural states of the world can be known (by humans).

3. (Premise) Some true proposition reporting only natural states of the world is not known.

4. (Premise) There is a true proposition p reporting solely natural states of the world such that any world in which p holds is an exact duplicate of our world in respect of natural states.

5. (Premise) The conjunction of propositions reporting solely natural states of the world reports solely natural states of the world.

6. Let q be a true proposition reporting only natural states of the world that is not known. (3)

7. Let r be the conjunction of p and q. (4 and 6)

8. r is true. (4, 6 and 7)

9. r reports only natural states of the world. (4, 5, 6, 7)

10. r is knowable. (2, 8 and 9)

11. Let w be a world at which r is known. (10)

12. At w, r is true. (11: knowledge entails truth)

13. At w, p is true. (7 and 12)

14. w is an exact duplicate of the actual world in respect of natural states. (4 and 13)

15. At w, Kr. (11)

16. Actually, Kr. (1, 14, 15)

17. Therefore, Kq. (7 and 16: if a conjunction is known, so are its conjuncts)

18. Therefore, Kq and ~Kq. (6 and 17)

If we take 2-5 to be very plausible, this gives us a nice argument against 1. I think there are some technical difficulties with 4. To make 4 work, a description of natural states of the world has to be able to use not only first order naturalistic vocabulary ("electrons", "equids", etc.) but also "natural", so it can, after giving a complete catalog of natural states, say: "And there are no other natural states." This reading of "reporting natural states" is not the standard, I think. This broader reading of "reporting natural states" only makes 1 and 3 more plausible, though it makes 2 a bit less plausible.

The other way to get 4 working with only first order naturalistic vocabulary is to have an infinite proposition that inter alia denies the existence of all possible natural kinds other than the ones that are exemplified. This has some problems, but can be defended (how well?).

Tuesday, September 8, 2009

These days, it is common to develop logic by positing a logical language (e.g., First Order Logic) and then giving various rules. But there is another approach, one that I've been told was that of Russell and Whitehead. On this approach, what we are studying is the structure of the space of propositions, which we understand realistically, as good Platonists.

If I were enough of a Platonist, I would want to do this myself. Here is how I would do it. First step, ontology. I take as the basic kind of entity an n-ary relation, where n is any non-negative integer. You might wonder what 0-ary relations and unary relations are: they are more familiarly known as propositions and properties, respectively. Nonetheless, they seem pretty clearly a part of the same range of entities as binary, ternary, quaternary, ... relations.

Now I distinguish certain functions. I shall consider a (partial—but I shall suppress that word) function f to be an n-ary relation where n>1 such that if f(x1,...,xn−1,xn) and f(x1,...,xn−1,xn*), then xn=xn*. I shall write xn=f(x1,...,xn−1) for short. The first three functions I distinguish are Conj, Disj and Neg. (Or maybe just a Nand.) These satisfy the formal relations:

If Conj(p,q)=Conj(p*,q*), then p=p* and q=q*, or p=q* and q=p*. If Disj(p,q)=Disj(p*,q*), then p=p* and q=q*, or p=q* and q=p*. If Neg(p)=Neg(p*), then p=p*.

(I.e., Conj and Disj are one-to-one except perhaps for order, and Neg is one-to-one.) There are some other special relations. For any function z from {1,...,n} to {1,...,m}, there is a function Pz from the n-ary relations to the m-ary relations. These have the formal property that

Pz(Pw(p))=Pz∘w(p), where z∘w is the composition of z and w (first apply w, then z), and Pz(p)=p if z is the identity function.

For any n>0, there are one-to-one functions Un and En from the m-ary to the (m−1)-ary relations for any m greater than or equal to n.

Finally, for every n, there is an (n+1)-ary relation Sn, which relates n objects with one n-ary relation. This relation has formal properties like these:

Sn(x1,...,xn,Conj(p,q)) if and only if Sn(x1,...,xn,p) and Sn(x1,...,xn,q).

Sn(x1,...,xn,Disj(p,q)) if and only if Sn(x1,...,xn,p) or Sn(x1,...,xn,q).

Sn(x1,...,xn,Neg(p)) if and only if not Sn(x1,...,xn,p).

Sm−1(x1,...,xm−1,Un(p)) if and only if Sm(x1,...,xn−1,x,xn,...,xm−1,p) for all x, where the extra x occupies the nth position.

Sm−1(x1,...,xm−1,En(p)) if and only if Sm(x1,...,xn−1,x,xn,...,xm−1,p) for some x, where the extra x occupies the nth position.

If z is a function from {1,...,n} to {1,...,m} and p is an n-ary relation, then Sm(x1,...,xm,Pz(p)) if and only if Sn(xz(1),...,xz(n),p).

Next, we have a crucial non-formal condition:

Sn(x1,...,xn,p) if and only if x1,...,xn stand in the relation p. (If n=0, we have: S0(p) if and only if p is true.)
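A finite toy model may make the formal properties concrete. The sketch below is my own illustrative encoding, not part of the construction itself: an n-ary relation over a small domain is a Python function of n arguments returning a bool, propositions are the 0-ary case, and S is just application, per the non-formal condition above.

```python
# A finite toy model of the relations algebra sketched above.
DOMAIN = [0, 1, 2]

def Conj(p, q): return lambda *xs: p(*xs) and q(*xs)
def Disj(p, q): return lambda *xs: p(*xs) or q(*xs)
def Neg(p):     return lambda *xs: not p(*xs)

def E(n, p):
    # existential quantification into the nth slot: m-ary -> (m-1)-ary
    return lambda *xs: any(p(*xs[:n-1], x, *xs[n-1:]) for x in DOMAIN)

def U(n, p):
    # universal quantification into the nth slot
    return lambda *xs: all(p(*xs[:n-1], x, *xs[n-1:]) for x in DOMAIN)

def P(z, p):
    # P_z for z given as a tuple: (P_z p)(x1,...,xm) = p(x_{z(1)},...,x_{z(n)})
    return lambda *xs: p(*(xs[i - 1] for i in z))

def S(p, *xs):
    # the crucial non-formal condition: S_n(x1,...,xn,p) iff p holds of them
    return p(*xs)

lt = lambda x, y: x < y          # a sample binary relation
ne = lambda x, y: x != y         # another one

# The Conj clause: S2(x,y,Conj(p,q)) iff S2(x,y,p) and S2(x,y,q)
assert all(S(Conj(lt, ne), x, y) == (S(lt, x, y) and S(ne, x, y))
           for x in DOMAIN for y in DOMAIN)

# The P_z clause with z = (2,1), i.e. the converse of lt:
assert all(S(P((2, 1), lt), x, y) == S(lt, y, x)
           for x in DOMAIN for y in DOMAIN)

# E2(lt) is the unary property "something in DOMAIN exceeds x":
print(S(E(2, lt), 0), S(E(2, lt), 2))  # True False
```

Of course, a finite extensional model flattens exactly what the Platonist cares about (distinct relations with the same extension), so this is only a check of the formal clauses, not of the ontology.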

Finally, if we so wish, we can add some relations with language, such as that when sentences "s" and "t" express p and q respectively, then "(s) and (t)" expresses Conj(p,q). Etc. But this is just an afterthought, and should not be taken as a definition of Conj, since there presumably are propositions that are not expressed by sentences, or at least sentences of any human language.

Open wffs correspond to n-ary relations with n>0. Sentences correspond to 0-ary relations, or propositions.

Here is one interesting corollary of this way of seeing logic. Because propositions are just 0-ary relations, it would be weird to have a metaphysics with sparse relations and properties but abundant propositions (and propositions surely are abundant!). If we have abundant propositions that correspond to sentences, surely we want something abundant that corresponds to wffs.

Another advantage of doing things this way is that we can uniformly handle infinite propositions.

I've been feeling that there is some kind of an analogy between Anselm's version of the Ontological Argument (OA) and semantic paradoxes like the Liar or Curry's. Here is one analogy. I've argued in an earlier post that when the consequent in material-conditional Curry sentences is true, the Curry sentence is true, and when the consequent is false, the Curry sentence is nonsense. (The Curry sentence with consequent p is: "If this sentence is true, then p." There is a cool argument from the meaningfulness of the sentence to p.) If this is right, then we have a valid way of arguing from meaning to truth: We have sentences that are true if and only if they are meaningful (for when the consequent is true, the whole sentence is true). Now, I've always thought that Anselm's argument went through as soon as it were granted that one had a concept of that than which nothing greater can be conceived. However, as St. Anselm himself notes but does not make enough of, to have a concept is more than just to have a sequence of words in one's head. Thus, it may well be that we have the sequence of words without them expressing a concept.

Just as the Curry sentence is true iff it expresses a proposition, so too the Anselmian predicate has a satisfier (i.e., God) iff it expresses a property. At the same time, this suggests a caution. It would be mistaken to try to figure out by introspection whether a Curry sentence with empirical consequent expresses a proposition, and likewise it may not be appropriate to figure out by introspection whether the Anselmian predicate expresses a property.

Friday, September 4, 2009

Suppose x thinks that it is 95% likely that fetuses lack moral standing. I will argue that x ought not to have or perform an abortion except perhaps for extremely serious reasons, assuming x reasons correctly probabilistically. The argument is simple:

1. (Premise) One ought not do something that one takes to be 5% likely to result in the death of an innocent with moral standing except perhaps for extremely serious reasons.

2. (Premise) x thinks that it is 95% likely that fetuses lack moral standing.

3. (Premise) x reasons correctly probabilistically.

4. x thinks that it is 5% likely that fetuses have moral standing. (By 2 and 3)

5. (Premise) Everybody thinks that if fetuses have moral standing, they are innocent.

6. x thinks that an abortion is 5% likely to result in the death of an innocent with moral standing. (3-5)

7. x ought not to have or perform an abortion except perhaps for extremely serious reasons. (1 and 6)

Observation 1: I think it would be unreasonable to think that the likelihood that fetuses lack moral standing is more than about 95%. The pro-choice arguments for the claim just aren't that strong.

Observation 2: One can argue for (1) as follows. It would be prudentially irrational to induce a 1/20 risk to one's life except for extremely serious reason. But likewise one should then not induce a 1/20 risk to the life of another innocent with moral standing except for extremely serious reason. (This, I guess, uses some version of the Golden Rule or of Kantian universalizability.) How to argue that a 1/20 risk to one's own life is prudentially irrational except for extremely serious reason? It might be useful, once again, to switch from the first person to the third person perspective. Suppose everybody in the U.S. individually took an independent 1/20 instantaneous risk to their lives. Then 15 million people would instantly die. We would think this is a tragedy, unless perhaps the remaining 285 million people got an extremely big individual benefit from individually taking the risk.

Observation 3: A public policy that on the costs side had an expected value equal to 25,000 deaths per annum of innocents with moral standing would need extremely strong justification on the benefits side. But if, say, there would be 500,000 fewer abortions per annum were abortion illegal, then keeping abortion legal has an expected value equal to 25,000 annual deaths on the costs side, if the probability that fetuses have moral standing is 5%. So it would need an extremely strong justification on the benefits side.
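As a sanity check on the arithmetic in Observations 2 and 3, using the post's own illustrative numbers (the round 300 million population figure is my assumption, implied by the "remaining 285 million"):

```python
# Back-of-envelope check of the expected-value figures above.
p_standing = 0.05     # the post's assumed probability that fetuses have moral standing
risk = 1 / 20

# Observation 2: everyone in a ~300 million population takes a 1/20 risk.
us_population = 300_000_000
expected_instant_deaths = us_population * risk
print(int(expected_instant_deaths))   # 15000000, leaving 285 million

# Observation 3: 500,000 fewer abortions per annum were abortion illegal.
annual_abortions_prevented = 500_000
expected_annual_deaths = annual_abortions_prevented * p_standing
print(int(expected_annual_deaths))    # 25000
```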

Final remark: Of course, I think that all things considered, it's clear that fetuses have moral standing. But it's interesting that even if it were quite probable that they don't, nonetheless there is good reason to worry morally about at least some, maybe many, cases of abortion.

Thursday, September 3, 2009

According to some defenders of abortion, what makes it wrong to kill an adult but not wrong to kill a fetus is that the adult has future-directed desires while the fetus does not. But now imagine an innocent adult who has only one future-directed desire: to die. Nonetheless, it is uncontroversially wrong to kill this adult without her consent (it's wrong to kill her with her consent, too, but that's controversial). Why? Well, on the theory in question, it's wrong because she has future-directed desires, or, more precisely, a desire. But the desire is a desire not to be alive. It is absurd that the presence of that desire is what makes it wrong to kill her.

So what makes it wrong to kill her? I see two answers: The first is that she is being deprived of future life. And that future life is valuable even if she does not recognize it as such. The second is that the killing is a destruction of a human body.

Wednesday, September 2, 2009

Suppose that the truth is what is (the optimistic version) or would be (the counterfactual version) known (by beings like us) in the ideal limit. In both cases, it seems there are things I know that aren't true, which is absurd. For instance I know that on the table beside my laptop there is right now a water bottle arranged thus-and-so relative to a used wet wipe.

Or was. For I just threw out the wet wipe, and moved the bottle. In a week or a year, most likely I'll forget how these were arranged. It's already starting to fade a little. I don't expect to tell anybody. So, here is something I knew: The water bottle and the wipe were arranged thus and so at 8:17 am on September 2, 2009. Is this something that would be known in the ideal limit? I doubt it. While the powers of science will grow in the progress to the ideal limit, the facts about the water bottle and the wipe will recede into the past, and their traces will be covered up. For a while, the fact could be pulled out from my brain by careful investigation of the memory structures. But presumably eventually the brain will rot (unless the Second Coming comes first). It may be true that the rotted matter will be slightly differently arranged for this memory. But the difference will be harder and harder to discover as time goes on.

Suppose this argument succeeds. And suppose that I accept the ideal limit theory of truth. Then I should say: "I know p but p is not true." And a theory of truth that implies that is absurd.

There are two ways out of this predicament for the ideal limiter. The first is to deny that the ideal limit for all true propositions is reached at the same time, at the culmination of science. The ideal limit for some propositions, such as that the items on the table were arranged thus-and-so, is reached much earlier, say, now. One way to try to do this is to say that the ideal limit has been reached for p at t provided that p is believed by someone at t, and in the course of progress towards an ideal science, an undefeated defeater for p will never be found. An obvious problem, however, is that I might form a false belief about some really trivial matter, then forget the belief, and the course of ideal science would never be able to recover the situation from the rotting matter of my brain to provide a defeater. A further problem is with conjunctions. For let p be the proposition that the bottle and wipe were arranged thus-and-so, and let q be some proposition that is known only when science is completed. Then p and q are both true, and hence their conjunction is true. But their conjunction is never known.

The second, which Jon Kvanvig offered me, is to say that the ideal limit involves time travel. We even have an argument for the possibility of time travel: Truth is ideal-limit knowledge; it is true that the bottle and wipe were arranged thus-and-so; therefore that they were arranged thus-and-so is known in the ideal limit; the only way this could be known in the ideal limit is by time travel; hence, time travel is possible. Now there seems to be something very fishy about a theory of truth that, when conjoined with trivial observations, implies the scientific claim that time travel is possible. Moreover, it is surely true that nobody is time traveling to my home at 8:37 am on September 2, 2009. But if in the ideal limit there were such time travel, as the hypothesis suggests, then we have a truth that isn't true: namely, that nobody is time traveling to my home at 8:37 am on September 2, 2009.

Tuesday, September 1, 2009

It would be a pity to have to drop the T-schema. But if I had to do that, I'd justify myself as follows. Sometimes sentences of the form "It is true that p" are just an emphatic way of affirming p. (Observe: "It is true that banks lend money" is a statement about banks, not about a proposition or a linguistic item. Yet if there were a real predication of truth, the sentence would be about a proposition or a linguistic item.) In those cases, the T-schema obviously holds. However, these cases are not really cases of talking about truth—they are just a stylistic device, akin to the way that an atheist might say "God knows that p" instead of "p". Unless one is prepared to affirm with deflationists that all uses of "is true" are like that, one cannot generalize from these uses to the more substantial uses, since the two are different uses.

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.