Monday, August 31, 2015

I have the good of friendship with Trent. Suppose Trent were my only friend. Then I would be getting two good things out of my friendship:

1. having Trent as a friend

2. having a friend.

These are separate non-instrumental goods. When I came to be friends with Trent, I already had good (2) as I had other friends, so I "only" gained good (1) (which is a great good). But if I had had no friends previously, coming to be friends with Trent would have provided me with both good things.

So the family of friendship-with-X goods has the property that not only are particular members of the family non-instrumentally valuable, but it's also of non-instrumental value to possess some member or other of that family, which gives one a good over and beyond that particular member. Not all families of goods are like this. Consider the family F consisting of the two goods (a) friendship with Trent and (b) reading Anna Karenina. There is no good of possessing some member of F that goes over and beyond the two particular goods in F. It's good to be friends with Trent and it's good to read Anna Karenina, but there is no third disjunctive good here. Or at least there is no third non-instrumental disjunctive good (we can imagine cases where the disjunction is, as such, instrumentally valuable, say when a prize is given to anyone who is friends with Trent or is reading Anna Karenina).

Here's another example. Consider the subfamily of the friendship-with-X goods given by friendships with blue-eyed people. While every member of this subfamily is valuable (I'm supposing for simplicity that all cases of friendship are valuable), there does not seem to be a further value to being friends with a blue-eyed person. Someone all of whose friends are brown- or green-eyed is missing out on the good of friendship with the particular people whose eyes are blue, but isn't losing out on some further good. On the other hand, someone who has no female (or no male or no American or no Iranian) friends seems to be losing out on something valuable over and beyond the value of the particular female friends that he or she does not have, though it is unclear whether the lost value here is instrumental (say by providing a different outlook on the world) or not.

Wednesday, August 26, 2015

It's natural to model deontic constraints in decision theory by assigning infinite disutility to forbidden actions. This temptation should be resisted. There are too many deontic theories with non-zero probability, and since an infinite disutility multiplied by a non-zero number is still infinite, we would have to take all these deontic theories extremely seriously. And that would lead to constant weighing of infinities against each other and/or an unduly restricted life that must obey prohibitions from fairly crazy (but not so crazy as to have zero probability) theories.
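To see the arithmetic of the worry concretely, here is a small illustrative sketch (my own, with made-up credences and utilities, not from the post): once a forbidden option carries infinite disutility, any non-zero credence in the prohibiting theory swamps the expected-utility calculation, and distinct options each touched by some prohibition come out unrankable.

```python
# Illustrative sketch (hypothetical numbers): infinite disutility for
# forbidden actions under moral uncertainty.

FORBIDDEN = float("-inf")  # disutility assigned to a prohibited action

def expected_utility(outcomes):
    """Expected utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# A mundane action that some fringe deontic theory, held with credence
# 0.001, declares forbidden: the tiny credence still yields -infinity.
mundane = expected_utility([(0.999, 10.0), (0.001, FORBIDDEN)])
print(mundane)  # -inf

# Two very different actions, each forbidden on some non-zero-credence
# theory, come out exactly tied at -infinity and cannot be ranked.
a = expected_utility([(0.99, 100.0), (0.01, FORBIDDEN)])
b = expected_utility([(0.50, 1.0), (0.50, FORBIDDEN)])
print(a == b)  # True
```

The infinities erase exactly the differences in credence and stakes that a sensible decision theory should track, which is the point of the paragraph above.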

Tuesday, August 25, 2015

There are two teleological concepts: that of a telos and that of proper function. Each of them helps us make certain teleological judgments. Do we need them both? Could we, for instance, define the telos of a system as what is achieved when the system is properly functioning? Or define proper function as the achievement of the telos in a system and all its (relevant?) subsystems?

I don't know for sure that we need them both, but neither of the two specific proposals is correct. Consider the case of an excellent mathematician who is striving to solve an extremely difficult mathematical problem: her mathematical faculties are all working properly, but she fails. It is not a necessary condition for the proper function of mathematical faculties that they be able to solve every mathematical problem there is. Both of the attempted reductions neglect the phenomenon that our teleologies can push us above our proper functioning. An excellent mathematician or athlete may already be functioning above the proper level of functioning, but nonetheless there will be telê that she has not fulfilled. (Though, perhaps, we may want to say that all humans are mathematically and athletically defective, due to the Fall, in which case we could maintain that the mathematician is not functioning properly if there are any soluble problems she can't solve. But even if this view of humans is true, we could imagine that mathematicians of some other species are functioning properly and yet failing to solve problems.)

It seems that we need the concept of proper function to tell us what is good enough, what is normal. But we need the concept of a telos to supply us with comparisons between instances of proper function. The mathematician who solves and who fails to solve are both functioning properly, are both functioning sufficiently well, but the one who solves is functioning better--precisely because she achieves her telos in respect of that process. Or, alternately, we may say that both are functioning properly, but one is functioning merely normally and the other supernormally--again, a distinction that mere proper function does not seem capable of making.

Monday, August 24, 2015

There is a fashion in certain quarters these days to criticize the lavishness of student recreational facilities, and their effect on the cost of higher education. I confess to being a beneficiary of that lavishness (here at Baylor, the recreation facilities open to students are open to faculty and families at the same cost--indeed, typically, at no cost), as I live a five-minute walk from the gym.

I think the criticism forgets something important: American colleges (and colleges more generally, probably going back to the middle ages) have traditionally been known for students engaging in entertainment that is unwholesome both physically and morally. This harms the moral education of students, and (at least derivatively) the intellectual education. Given this background, providing wholesome fun to the students seems to me to be at least instrumentally important to the educational mission of a university.

Friday, August 21, 2015

1. If I have on-balance stronger reasons to do A than to do B, and I am choosing between A and B, then it is better that I do A than that I do B.

But notice that the following is false:

2. If in decision X, I choose A over C, and in decision Y, I choose B over D, and I had on-balance stronger reasons to do A than I did to do B, then decision X was better.

To see that (2) is false, suppose that in decision X, you are choosing between your friend's life and your convenience, while in decision Y, you are choosing between your friend's life and my own life. Your reasons to choose your friend's life over your convenience are much stronger (indeed, they typically give rise to a duty) than your reasons to choose your friend's life over your own life. Nonetheless, to save your friend's life at the cost of your own life is a better thing than to save your friend's life at the cost of your own convenience.

There is a whiff of paradoxicality here. But it's just a whiff. If you chose your convenience over your friend's life you'd be a terrible person. So in a case like that described in (2), choosing B (e.g., your friend's life over your own life) is a better thing than choosing A (e.g., your friend's life over your convenience), and choosing C (e.g., your convenience) is worse than choosing D (e.g., your own life).

In other words, when you choose A over B, the on-balance strength of reasons for A doesn't correlate--even typically--with the value of your deciding for A. Rather, the on-balance strength of reasons for A correlates (at least roughly and typically) with the value of your deciding for A minus the value of your deciding for B. This is quite clear.

This helps to resolve the paradox of why it is that doing the supererogatory is better than doing the obligatory, even though in a case where an option is obligatory the reasons are stronger than the reasons for supererogation. For omitting the supererogatory is much less bad than omitting the obligatory.

We may even be able to use some of the above to make some progress on the Kantian paradox that a good action by a person with a neutral character is better than a good action by a person with a good character, once we observe that it is worse for a good person to do something bad than for a neutral person to do the same thing, since the good person does two bad things: she does the bad thing in itself and she fights her good personality. Thus, even though the good person has more on-balance reason to do the good thing, because the strength of reasons doesn't correlate with the value of the action but with the value of the action minus the value of the alternative, this does not guarantee that her action has greater value than the good action of the neutral person.

One sometimes hears the idea that justice trumps other considerations. There is a sense in which this is true, but it's not a very interesting sense: ultima facie duties of justice trump other considerations--but they do that not because they are duties of justice, but simply because they are ultima facie duties. Other ultima facie duties--say, ultima facie duties of beneficence or of chastity--also trump other considerations, as that's what we mean by saying that they are ultima facie duties. (One might have some worries here about real dilemmas. But one had better not say that in real dilemmas the ultima facie duties of justice trump other kinds of ultima facie duties, since that would contradict these other duties being ultima facie.) I suppose one could think that only justice gives rise to ultima facie duties. That might be true but only if one has an expansive unity-of-the-virtues kind of view of justice. And that expansive view also trivializes the trumping thesis simply by taking all the other kinds of considerations under the umbrella of justice.

And it's just false that all considerations of justice trump all other considerations. Considerations of justice range over a full spectrum of strength. For instance, there are very weak considerations of justice: whenever I see someone having done the right thing, I have a reason of justice to praise, and whenever I see someone having done the wrong thing, I have a reason of justice to criticize. But these reasons are typically extremely weak, being easily overcome by reasons of social propriety. ("Good for you, you didn't cheat on the test" isn't very appropriate, nor should I criticize every driving infraction I observe a friend making.)

Wednesday, August 19, 2015

Here's a surprising thing: It is possible to cause something to be uncaused. Let's suppose Socrates' parents chose to have only one child and that was Socrates. Let U be any uncaused being whose existence is independent of the decisions of Socrates' parents. Perhaps U is God or the number 2 (or some quantum event if these are uncaused--I think they're not). Consider the disjunctive event E of U existing or Socrates having a sibling. This event has no cause. But if Socrates' parents had had another child, then E would have had a cause, namely Socrates' parents who would have been the cause of Socrates' having a sibling and hence of the disjunctive event. So, by choosing to have only one child, Socrates' parents caused E to have no cause.

I've found the argument compelling for most of my life. I continue to be confident of (3). But I am now not sure about the inference from (3) to (4). Here's why. Essential to Augustine's account is an ontology sufficiently sparse not to include lacks. After all, if holes, lacks and privations exist then Augustine's account is in trouble. But if the true ontology is sparse enough not to include lacks, then there will likely be other "things" that don't exist. And these other "things" will escape Augustine's argument.

For instance, why couldn't some evils instead of being privations be mismatches? The mismatch between Jones' belief that Americans never landed on the moon and the fact of the moon landing could be an evil. An ontology could, for instance, include both Jones' belief and the moon landing, but not include as a further item the mismatch between the two. One might try to argue that a mismatch is a privation of a due match. But the correct ontology might not include matches either. Or consider the example, discussed in the secondary literature, of the man with two noses. It's an evil to have two noses, but at least prima facie (sorry!) the extra nose isn't a privation. But if we do not suppose that evil has to be a privation, then we can say that the problem is the mismatch between the face and the human form.

This approach would allow one to retain the central anti-Manichean insight that Augustine has, namely (3), while at the same time escaping some counterexamples. I am not sure it escapes the biggest counterexample, namely pain. Though if we take Mark Murphy's theory that what is bad is not pain itself, but the disharmony between reality and desire that tends to correlate with pain, then the above approach helps, since a disharmony is a kind of mismatch.

I am not claiming every mismatch is an evil. The argument doesn't establish that every "thing" that doesn't exist is an evil (remember the remark that matches might not be in the correct ontology).

Final note: An alternative to the above would be to weaken (1) to the claim that everything that fundamentally exists is sustained by God, and hence everything that fundamentally exists is good insofar as it does so.

Saturday, August 15, 2015

What should an ordinary household thermometer show at temperatures close to absolute zero? There is no answer to this question. The question asks about what should happen in circumstances too far beyond the normal operating conditions of the instrument.

I wonder if something analogous doesn't happen in ethics. We have our normal operating conditions. These are very broad, because we are very adaptable beings, but nonetheless they are limited. Are there always going to be answers to questions about what we should do beyond these conditions?

One way of going beyond the conditions is to consider metaphysically impossible situations. If you promised to bring three oranges to the party and you are in the impossible world where four is less than three, do you fulfill your promise by bringing four? Would you be obligated to honor yourself as your parent if you were your own cause? These questions seem to make little sense, and even we philosophers rarely think about them. Analytic ethicists do, however, sometimes ask questions about nomically impossible situations, and we certainly ask questions about situations far beyond ordinary life.

I think we should, however, take seriously the possibility that as we depart far enough from the normal operating conditions of human beings, some of the questions (a) have no answer or at least (b) have no answer available to us. This possibility undercuts some arguments.

For instance, one can argue that utilitarianism gives deeply implausible answers (e.g., that every action is equally permissible) in cases where there are infinitely many people. But suppose that there aren't in fact infinitely many people, and the situation of there being infinitely many people is far beyond humans' normal operating conditions. Then the fact that utilitarianism predicts something that seems implausible to us beyond those conditions is not a problem for the utilitarian--as long as she is willing to modestly limit the scope of ethics to humans in or near their normal operating conditions (if she's not, the argument is fair game).

Or consider this argument against deontology: It seems permissible to kill one innocent person to save a billion. But circumstances where we choose between one life and a billion lives might well be so far beyond our normal operating conditions that they fall beyond the scope of ethics.

The last case is interesting. For it raises this question: Might we not actually find ourselves in circumstances so far beyond our normal operating conditions that ethics doesn't apply, much as someone could actually throw a household thermometer into liquid helium? After all, it is sadly all too easy to imagine how someone might end up choosing whether to kill one innocent to save a billion: it seems physically possible for someone to end up in that position. It seems deeply troubling to suppose that some people end up in circumstances that go beyond the presuppositions in the moral law.

I think Christians have reason based on revelation to think this doesn't actually ever happen. The moral law is also embodied in revelation, and revelation presents itself as a guide to us in all the vicissitudes of life. But note that even if nobody ends up in circumstances that go beyond the presuppositions of the moral law, going beyond these presuppositions could be physically possible but for God's providential protection. A case of choosing whether to kill one innocent to save a billion may be like that: God makes sure we're not tried beyond the edge of ethics.

But what could one say without revelation?

Of course, the above line of thought fits best either with (a) divine command metaethics or (b) natural law metaethics on which what grounds ethical truths is our nature and an Aristotelian metaphysics of human beings on which it makes sense to ask what our normal operating conditions are. All this won't be an issue given a utilitarian or perhaps even Kantian metaethics. So that limits the applicability of the line of thought. But if we do find plausible the Aristotelian metaphysics and a natural law metaethics, then I think we should take seriously the worry that sometimes an analytic philosopher's ethics examples will go too far beyond our normal operating conditions. An argument for the above line of thought, and hence indirectly for either (a) or (b), is given by the apparently insuperable difficulties in ethics when one supposes that one's actions affect an infinite number of people.

Thursday, August 13, 2015

I'll just baldly give the theory without much argument. Axiology is necessary. It's a necessary truth that friendship and knowledge are good, that false beliefs are bad, etc. But the values have many aspects and exhibit much incommensurability. It's also a necessary truth that the good is to be pursued and the bad avoided. This gives some practical guidance, but mainly in cases where the reasons in favor of an action dominate those against. And that's rare. In typical cases agents face competing incommensurable reasons.

There may also be a necessary truth that some goods are fundamental and never to be acted against. The nature of a particular kind of agent then specifies how incommensurability is to be resolved: when the agent should be merciful rather than strictly just, when strictly just, and when the agent is morally free to go either way. The nature of an agent also gives the agent inclinations to act accordingly, inclinations that can be introspected. So we can know how we should resolve cases of incommensurability when they come up for us. We have reliable moral intuitions about our own case.

But these moral intuitions are about humans. Intelligent sharks would have a nature that resolves incommensurables differently, and our moral intuitions wouldn't directly tell us much about how intelligent sharks should act (except in cases of domination and maybe the deontic constraint not to act directly against the most fundamental goods). So we have a reasonable scepticism about our insight into how a morally upright intelligent shark would act. But this scepticism of course in no way detracts from our knowledge of how we should resolve incommensurables.

For exactly the same reason, we have a reasonable scepticism about how God would act, about what resolutions between incommensurables are necessitated by his nature and which are left to choice. But this scepticism in no way detracts from our moral knowledge.

I don't think the scepticism is total. We can engage in limited analogical speculation. But this needs modesty if the theory is right.

Let me end with a little argument. When we think of particularly outlandish ethics cases, such as actions that affect an infinite number of people, we get stuck or even misled. No surprise on the above theory. We aren't made for such decisions. Those are decisions for more godlike beings than us. Perhaps our nature simply fails to specify the resolutions for these cases, as they aren't relevant to us in our niche. Imagine asking an intelligent amoeba about sexual ethics!

Wednesday, August 12, 2015

If to be human is to be a member of a particular biological taxon, then being human is extrinsic. (Biological taxa are defined by gene interchange in a population and are thus extrinsic characterizations of individuals.)

So, to be human is not the same as to be a member of a biological taxon.

Our best alternative to the biological taxonomic account of what it is to be human is the Aristotelian account that it is to have a human form, so the Aristotelian account is probably true.

1. If many of the worst kinds of pains are good, then probably pain isn't intrinsically bad.

2. Many of the worst kinds of pains are good.

3. So, probably, pain isn't intrinsically bad.

The argument for (2) is based on two different thoughts. The first is a remark that Paul Draper made, to the effect that the worst pains we suffer are psychological pains. The second is my thought that many of the worst of the psychological pains are good. For instance, it is good to be pained at the loss of those we love and it is good to feel the pain of guilt for wrongs done (in both cases, it would be terrible not to be pained!).

Tuesday, August 11, 2015

Some films are evil. Birth of a Nation or Triumph of the Will, say. My interest in this post is not in such morally harmful films. Rather, I am interested in bad movies in the sense of trashy or kitschy movies, but which are nonetheless not evil. Star Wars Episodes I-III are examples.

One might say that the films that I am interested in are ones that are bad artistically, unlike Birth of a Nation and Triumph of the Will, which promote evil ideas by means of what is, in a narrow sense of the term, "good art".

Now if I were to spend the rest of my life on a desert island with a solar powered DVD player, I'd rather have Star Wars I-III (or any one of them) than no movies. (On the other hand, I would rather not have Birth of a Nation or Triumph of the Will, because I'd be afraid that out of boredom I would watch them enough times that the propaganda might eventually start sinking in.) And I think that not only would I be less bored, but I would actually be better off qua film viewer, better off qua being with artistic sensibilities, for watching these films on the island. Despite the fact that I would call these films bad, nonetheless I think they are better than nothing, even qua films. That probably wouldn't be true of all films.

So I think that some of the things we call bad movies are nonetheless better than nothing. They are really, thus, on balance good things, and even on balance good qua films. They are simply bad compared to the better ones, and hence bad compared to our expectations. The point generalizes. Much of what we call bad literature, bad music, bad painting and so on is, nonetheless, on balance good. Qua consumers of the art, we are better off with it than with nothing. Much but probably not all. I don't want to deny that there is literature, music and paintings that it would be better not to witness, simply on aesthetic grounds, but I suspect we greatly exaggerate the quantity of it. In the interests of not being whiny, of appropriate gratitude and optimism, I suggest the more accurate word "mediocre" in place of "bad" when we're not dealing with stuff that's worse artistically than nothing.

1. Humans in heaven will eventually have a personal love for all persons.

2. It is not possible for a human to have a personal love for infinitely many persons.

3. Therefore, there are only finitely many persons.

Let me clarify the premises. I mean the "all" in (1) and the "there are" in (3) to extend to all persons who ever exist--my quantifiers are eternalist ones. I mean "personal love" to contrast with the kind of "impersonal love" that even now the saints among us have towards all humans (and maybe even all persons) in general. A personal love, however, is a deeper relationship that requires an attitude directed at one specific person. An argument for (1) might go as follows: humans in heaven will be morally perfect and hence they will have a love for all persons. Now moral perfection doesn't itself require anything more than a general love--for it doesn't require anything more of us than we can have in this life, and in this life we cannot personally know billions of people. But while moral perfection doesn't require that the humans in heaven have more than a general love for all persons, when we have the right kind of general love for a person, we want to know the person specifically, to know the specific good features of that person, and in heaven such desires will be satisfied. So (1) is true. The argument for (2) involves either the finitude of our minds or something like my causal finitist thesis.

Presentists (and maybe some others) might want to replace (1) by the weaker claim that humans in heaven will eventually have a personal love for all persons who then exist. If so, then if we add the additional premise that all persons live forever, we get the weaker conclusion that at every present and future time there are only finitely many persons.

Some may worry about hell here. Do the people in heaven have a personal love for all the damned? Do they really know and delight in what good can be found in them? I would like to say "Yes". I can imagine someone, however, saying that (a) such a personal love for the damned would lead to mourning and (b) that mourning has no place in heaven. I would deny (a), I guess. But I can see that some people would find this line of thought implausible. Very well. Then I can revise premise (1) to say that humans in heaven will eventually have a personal love for all persons in heaven. And then the conclusion is that there are only finitely many persons in heaven. I can still, however, get the conclusion that there are only finitely many persons if I add the premise that if there are infinitely many persons not in heaven, then there are infinitely many in heaven as well. (It would be too tragic if infinitely many went to hell--or, worse, were annihilated--but only finitely many went to heaven.) Since there are finitely many in heaven, there are finitely many outside of heaven.

If one adds to the original argument the premise that one can only have a personal love at t for someone who exists at t, then (1) on the eternalist interpretation also yields the important thesis that all persons live forever.

Monday, August 10, 2015

Suppose time is made of points, and suppose it's continuous. Suppose Sam suffers pain from noon to 1 pm and Sally suffers an equally intense pain from noon to 2 pm. Then Sam and Sally suffer the same number of equally painful points of time. So, it seems, we cannot say that Sally is worse off than Sam. But of course she is worse off than Sam. Hence we should either reject the continuity of time or reject the pointiness of time.
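The point-counting observation can be stated in standard set-theoretic terms (a textbook fact, added here for illustration): Sam's hour and Sally's two hours contain exactly as many instants as each other, even though the intervals differ in length.

```latex
% Equinumerosity vs. measure: the two pain intervals, in hours after noon.
% Cardinality cannot distinguish them:
\[
  \left|\,[0,1]\,\right| \;=\; \left|\,[0,2]\,\right| \;=\; 2^{\aleph_0}
  \quad\text{(e.g., via the bijection } x \mapsto 2x\text{)},
\]
% but Lebesgue measure can:
\[
  \lambda\bigl([0,1]\bigr) = 1 \;\neq\; 2 = \lambda\bigl([0,2]\bigr).
\]
```

This is why a point-by-point comparison of suffering cannot distinguish Sam from Sally: what separates them is a length-like quantity defined on intervals, not a count of points.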

I think there is one gap in this argument. Even if time is a continuum made of points, it doesn't follow that our temporal experience is made of the same points. It could be that the basic perceptual units of time and pain are short intervals, let's say approximately ten milliseconds long (they may vary, too: sometimes time seems to be going faster, after all). And then Sally will have twice as many painful intervals, and the problem disappears. Maybe this works, but I think it's somewhat paradoxical. For this story to solve the problem, it seems that pain has to be suffered not at points of time, but at these short intervals of time. We cannot say Sam is suffering a pain exactly at 1:30 pm, it seems. In other words, we have something like Zeno's paradox of the arrow: at no time are Sam and Sally suffering, yet they are suffering.
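On the perceptual-interval picture, the comparison becomes a simple count of discrete units. Here is a toy calculation using the post's assumed ~10 ms unit (the exact figure is, as the post says, only approximate):

```python
# Toy calculation (assumed ~10 ms perceptual unit from the post):
# count the painful perceptual units for Sam and Sally.
UNIT_MS = 10                      # assumed length of a basic perceptual unit
MS_PER_HOUR = 60 * 60 * 1000      # milliseconds in an hour

sam_units = 1 * MS_PER_HOUR // UNIT_MS    # pain from noon to 1 pm
sally_units = 2 * MS_PER_HOUR // UNIT_MS  # pain from noon to 2 pm

print(sam_units, sally_units)         # 360000 720000
print(sally_units == 2 * sam_units)   # True: Sally suffers twice as much
```

Unlike the continuum of points, these finite counts straightforwardly make Sally twice as badly off as Sam, which is the resolution the paragraph above describes.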

But maybe one can respond in the same way that people have responded to the arrow. When we say that the arrow is moving at 1:30 pm, the truth of that statement is not grounded in what is happening exactly at 1:30 pm, but rather in the differences of arrow position between 1:30 pm and slightly earlier. Perhaps, then, we can say that Sam suffers at 1:30 pm, but his suffering at 1:30 pm is grounded not just in what happens at 1:30 pm, but in what happens over the basic perceptual interval that contains 1:30 pm?

Perhaps we can. But it's not quite so simple. For it is deeply implausible that Sam's being in pain at 1:30 pm is grounded in part in what happens after 1:30 pm. Yet a typical time t will be within one of the basic perceptual intervals of time, and hence some of that interval will come after t. So perhaps we should say that Sam's being in pain at 1:30 pm is grounded by the painfulness of the then-past part of the basic perceptual interval. Maybe 6 ms of the interval have passed, and those painful 6 ms are what make Sam hurt. But then the basic perceptual interval of time isn't 10 ms, because it seems that a mere 6 ms of pain suffices (and if 6 ms, then by the same token 3 ms, and so on). So this is problematic.

A different move would be to say that although time is continuous, pain perception consists of discrete instants of pain. There are infinitely many instants of time between noon and 1 pm, but Sam only suffers at finitely many of them, and Sally has approximately twice as many instants to suffer at. My argument doesn't rule out this possibility, and it does indeed solve the problem. But it does it at the cost of positing a deceptive phenomenology. For a pain can feel temporally unbroken, and yet on this theory it occurs only at an infinitesimal fraction of the instants of time during an interval.

All in all, I think my basic argument and our experience of pain do provide some evidence against pointy continuous time. How much depends on how much we can rely on our phenomenology.

Thursday, August 6, 2015

If I resent your doing A and you didn't do A, then my resentment was perhaps justified (if I was justified in thinking you did A) but it was nonetheless misplaced. On the other hand, if I am crossing the road and I notice a car speeding towards me, and I fear it will run me over, but then the driver brakes and stops just barely in time, my fear was entirely appropriate and not at all misplaced.

The proper object of resentment, thus, is an event (or action) taken as actual (and wrongful), and when that action doesn't occur, the resentment is misplaced. But the proper object of fear is an event merely taken to be a serious chance. What kind of chance? An objective chance or a merely epistemic probability?

I will argue that it's an epistemic probability. Suppose that I fear that my investments will fail. I get into a time machine, travel to the future, and notice that my investments won't in fact fail. I go back in time and it would be appropriate for my fear to go away. Nonetheless, there is an objective chance of the investments failing: the chancy processes that make investments go up and down continue to run despite my knowledge. But there is no longer a serious epistemic probability. So it looks like epistemic probability is what is relevant. Moreover, I think it can be appropriate to have fears about things that are in fact necessarily false. For instance, suppose I am answering a multiple-choice exam in calculus, and the question asks whether the definite integral of some function over some range is 2, 3, 5, or π/2. I think it's probably 5, but there is a step in my calculation that I am not confident of, and I realize that if I got that step wrong the answer is π/2. My fear that the definite integral might be equal to π/2 is in fact appropriate, even though it is necessarily true that the answer is 5.

This makes fear very different from resentment: fear is made appropriate by epistemic probabilities (either the actual ones or the ones my evidence justifies--which one?), while resentment is made appropriate by what people have actually done.

I wonder if this focus on the epistemic dimension isn't partly responsible for the notorious way that fears resist rational thought. No matter how much I reflect on the very good injury statistics for indoor wall climbing (the chance of injury during a session is about the same as that while driving 26 miles) and what I know about the stringency of Baylor's training of my belayer, when I look down from 50 feet up, I feel fear. This fear is misplaced: my epistemic probability for a fall is tiny (and justifiedly so given the evidence). Why? Because it looks dangerous. Now, in the absence of defeaters, appearances yield epistemic probabilities. Moreover, even when a defeater for an appearance of impending harm is sufficient to defeat belief, a significant epistemic probability will often remain (after all, we may be wrong about the defeater), and it can take quite a bit of time to evaluate whether the defeater is complete or only partial. Given that physical danger may require a quick response, and the examination of defeaters takes time, it makes sense for us to be wired in such a way that appearances have a strong tendency to directly drive fear. So in cases like my climbing case, while the fear is misplaced, inappropriate and unjustified, it is nonetheless understandable (unlike my pathological fear of dogs!).

(Well, when I reflect on the fact that an indoor climbing session carries the same injury probability as a 26 mile drive, this actually makes me feel a bit afraid. For I do think driving (or being driven) by an average driver is genuinely dangerous. And so perhaps my fear is justified, just as I would be justified in being afraid of a 26 mile drive (even if in fact I don't always feel afraid). If so, then change the example, say to standing on a five inch thick glass floor above a precipice.)

Tuesday, August 4, 2015

One can't really separate software from hardware in a principled way (for boundary cases, think of FPGAs, microcode, etc.), so instead of thinking about computers and their programs, we really should simply think of programmed machines. We can think of a programmed machine as something that embodies a function from possible inputs to possible outputs. When a vector of inputs x is given to a programmed machine and the machine functions correctly, it computes the output y=f(x). One version of computational functionalism holds that mental states are grounded in computations: what grounds my being in mental state M is that I have been given such-and-such a vector of inputs x and have computed f(x), where f is the function I embody.
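As a toy illustration of this picture (the machine and function here are my own hypothetical examples, not anything from the post), a programmed machine on this view is nothing over and above a mapping from input vectors to outputs:

```python
# Toy model of a "programmed machine": it simply embodies a function
# from input vectors to outputs. This one embodies
# f(x1, ..., xn) = x1 + ... + xn.
def adder_machine(inputs):
    return sum(inputs)

# Giving the machine an input vector x and letting it function
# correctly yields the output y = f(x).
y = adder_machine((2, 3, 5))
print(y)  # 10
```

On the functionalist proposal in the text, my being in mental state M would be grounded in my having received some input vector and computed the corresponding output of the function I embody, in just this input-output sense.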

We go wrong mentally. We malfunction. Our brains function in ways contrary to our design plan due to glitches of all sorts, some a bit more on the software side (these are treated by psychologists and psychiatrists) and some a bit more on the hardware side (these are treated by psychiatrists and neurologists), though of course the software/hardware division is vague. Now some malfunctions do knock us out. But many do not. We remain conscious. And we don't just remain conscious in the respects in which we are functioning correctly. We remain conscious in the respects in which we are functioning incorrectly. Of course, the mental states that we exhibit in those cases can be weird. On one end of the spectrum they may involve arithmetical errors and minor inferential failures, and on the other they involve psychosis. What will the computational functionalist say about such cases?

Well, presumably they will have to say that we still compute values of functions that we embody, but we embody abnormal functions. This, however, is a seriously problematic proposal. For what defines me as computing f(x)? It isn't, of course, just the fact that I get y as the answer (where in fact y=f(x)). For there are infinitely many functions that yield the value y given the input x. Rather, I see two initially plausible answers. The first answer is that what makes me compute f(x) is a pattern of non-normative counterfactual facts of the form:

Were I given input a, I would produce output f(a)

for a large set of values of a. But that can't be right. Any such set of facts could be finked. (Imagine that a neurosurgeon implanted a device such that were I to be given any input other than the x that I am actually given, I would explode.) The second answer is that what makes me compute f(x) is a pattern of normative facts of the form:

Were I given input a, I should produce output f(a)

for a large set of values of a. But the problem is that when I embody an abnormal function, we don't have a pattern of facts like this, because the function that I embody--if that's the right way to think about this--is one whose outputs I should not produce!
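The underdetermination point above--that getting output y on input x cannot by itself make it the case that I computed f(x), since infinitely many functions agree at that one point--can be made concrete. The functions below are my own hypothetical examples:

```python
# Suppose a machine, given input x0 = 4, outputs 16. Which function did
# it compute? Squaring is one candidate:
def f(x):
    return x * x

# But so is any member of this infinite family, one function per
# constant c, each of which agrees with squaring exactly at x0 = 4:
def g(c):
    def g_c(x):
        return x * x + c * (x - 4)  # the extra term vanishes at x == 4
    return g_c

x0 = 4
print(f(x0))      # 16
print(g(7)(x0))   # 16 as well: g_7 matches f at the actual input
print(g(7)(5))    # 32, whereas f(5) == 25: g_7 diverges elsewhere
```

The single actual input-output pair thus leaves it open which function was computed; that is why the two answers canvassed above appeal to patterns of counterfactual (would) or normative (should) facts about the other inputs.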

If this argument is right, then both non-normative and normative (Aristotelian) computational functionalism have a serious problem with abnormal mental states.

The normative computational functionalist has more resources, though. Could she perhaps say that given that I embody an abnormal function f, I should compute f(a)? Maybe, but the basic question here is what grounds the fact that I embody the particular function that I embody. It's not the would-facts, but it's also not the should-facts, it seems, so what is it?

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.