Friday, November 21, 2014

A moral argument

I've never found the moral argument for theism—except in its epistemic variety—particularly compelling. But now I find myself pulled toward finding premises (1) and (2) of the following pretty standard argument plausible:

1. Only things that are infinitely more important than me can ultimately ground absolutely overriding rules on me.

2. Rules without ultimate grounding are impossible or not absolutely overriding.

3. I am a finite person.

4. The only things that could be infinitely more important than a finite person are or have among them (a) infinitely many finite persons or (b) an infinite person.

5. Moral rules that apply to me are absolutely overriding.

6. Moral rules that apply to me are not grounded in a plurality including infinitely many finite persons.

7. So, moral rules that apply to me are grounded at least in part in an infinite person.

8. So, there is an infinite person.
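For readers who like to see the inference pattern laid bare, the argument's *validity* (though of course not the truth of its premises) can be checked mechanically. Here is a rough propositional sketch in Lean 4; the propositional letters and premise names are my own labels, not the author's, and premise (3) is folded into (4):

```lean
-- O : the moral rules that apply to me are absolutely overriding
-- G : those rules have an ultimate ground
-- I : that ground is infinitely more important than me
-- M : the ground includes infinitely many finite persons
-- P : the ground includes an infinite person
-- E : there is an infinite person
section MoralArgument
variable (O G I M P E : Prop)

-- (5) Moral rules that apply to me are absolutely overriding.
variable (p5 : O)
-- (2) Absolutely overriding rules have an ultimate ground.
variable (p2 : O → G)
-- (1) Only something infinitely more important than me can be that ground.
variable (p1 : G → I)
-- (3)+(4) Whatever is infinitely more important than a finite person
--         includes infinitely many finite persons or an infinite person.
variable (p4 : I → M ∨ P)
-- (6) The ground does not include infinitely many finite persons.
variable (p6 : ¬M)
-- If the ground includes an infinite person, an infinite person exists.
variable (pE : P → E)

-- Conclusion (8): there is an infinite person.
theorem conclusion : E :=
  pE ((p4 (p1 (p2 p5))).resolve_left p6)

end MoralArgument
```

That Lean accepts the proof shows only that (8) follows from the premises; the philosophical work, as the rest of the post makes clear, lies in assessing the premises themselves.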

The vague thought behind (1) is that rules grounded in something merely finitely more important than me would not be absolutely overriding. After all, it is logically possible that over my life I rise in importance by some large finite amount and thereby come to exceed the importance of the ground of the moral rules, if that ground is of merely finite importance. The vague thought behind (2) is that a regress of grounding in effect leaves things ungrounded, and ungrounded facts can't be that important to me, because it is beings that are important. Premise (3) is plausible.

I find (4) quite plausible. It's based on the personalist intuition that persons are the pinnacle of importance in reality. Merely Platonic entities, should there be any, while perhaps beautifully structured and infinite in their own way, are not important, not unless they are persons as well.

Next, (5) is obvious to me. And (6) seems very plausible. The only plurality of finite persons who could plausibly provide a ground for the moral rules that apply to me is a human community, and there are only finitely many humans. Even if we live in an infinite universe with infinitely many people, the infinitely many aliens surely are not needed to ground the absolute wrongness of degrading a fellow human being.

All that said, I am dubious about (1). I think there are no reasons other than moral reasons, and so the fact that moral reasons take priority over other reasons is a triviality.

But even within this controversial framework, I am now realizing there is room to ask why some reasons are absolutely conclusive—why they should close deliberation no matter what else has been brought to bear. "But A requires intentionally degrading my neighbor" should close deliberation about A: it doesn't matter what reasons there are for A once it becomes clear that A requires intentionally degrading my neighbor.

And that makes something like (1) still plausible. For nothing but a person can be the ultimate ground for a rule whose deliberative importance is so absolutely conclusive—nothing but a person matters enough for this task. Could this person just be my neighbor? Yes—but only if my neighbor is infinitely important, and important in a personal kind of way. This infinite importance can be had in two ways: either my neighbor is an infinite person, or else the infinite importance of my neighbor is derivative from other persons (if it is derivative from, say, Platonic entities, it is not the right kind of importance, for only considerations about persons can bestow the kind of importance that trumps all conflicting considerations about persons). In the latter case we get a regress that is vicious unless there is an infinite person or an infinite number of finite persons grounding the rule. The latter is implausible, so there is an infinite person.

This argument requires deontology, of course.

Let me end by saying that none of this means I am being pulled to Divine Command Metaethics (DCM). DCM is just one among many ways of grounding morality in an infinite person, and it seems to me to be less plausible than other ways of doing so.

7 comments:

Sure, premise one is where the game is given away. When you see the word 'absolute' in an argument, get real suspicious.

In the context of morality, what could it mean in practical terms? Suppose I believe that a certain human right constitutes my duty to act in a given situation. It is overriding. Really overriding. I can't see how any other duty or value should slow down my commitment to act on behalf of this human right. But is this respect for the human right a display of a respect for an 'absolutely overriding rule'? I doubt it. I don't know what that means. I can't see the difference between prioritizing a human right over all else that I can think of, and prioritizing it absolutely, in a real situation calling for action.

Suppose I'm told, "Unless you prioritize a rule absolutely, you will at some future time fail to make sure that some other rule won't seem to be even more important to you."

OK, but so what? Perhaps we will learn how another human right 'B' might, in some circumstances, have to take priority over human right 'A'. Expanding the scope and meaning and applicability of human rights has always been part of the ethical growth of humanity.

But then I will be told, "Oh no! If you can imagine the possibility that you wouldn't truly respect human right 'A' under some possible future circumstances, then you are no better than a Nazi, and we can't trust your moral judgment now."

This is of course an utter fallacy. I can conceive having to put human right 'A' in second place, not to some mere desire or transient value, but only to an even more important human right. And that more important human right 'B' is only granted superiority in a provisional manner as well, to allow for further future ethical growth.

The absolutist doesn't even want to think about having to think about the discovery and re-prioritization of moral duties in light of future human experience and growth. Instead, the absolutist wants ethical thinking to HALT at some arbitrary point, where no further deliberation about ethics is permitted by humans. If a halt to creative ethical thinking were granted, then the rest of this "moral argument" would follow. But I don't think that such a course is wise.

I guess I gave two arguments. The first is the official main argument, and the second is my briefly sketched deontology version.

I don't think your objection applies against my official main argument. For there what is absolutely overriding are the true rules of morality: the fact that an action is morally wrong should override all non-moral considerations. That we can grow in our understanding of what the true rules of morality are does not affect the fact that moral wrongness should override other considerations. The true rules of morality may even all be ceteris paribus, and yet when they in fact apply, they are absolutely overriding, in that no non-moral consideration could override a moral consideration.

Your argument may have more force against my deontological version. But there I say that what takes priority is intentionally acting against some basic human goods. Our best list of these goods can extend as we gain moral knowledge. But that just increases the number of duties that we know about. It does not create a conflict, since there is never a conflict between two duties of the form "Never intentionally act against G". For any duty of that sort can always be satisfied by refraining from acting.

What does "an infinite person" mean when contrasted with "an infinite number of persons"? I'm wondering if there might be some kind of equivocation going on since "an infinite person" seems to be a contradiction in terms if "infinite" is being used in the same sense as when you say "an infinite number of persons."

About Me

I am a philosopher at Baylor University. This blog, however, does not purport to express in any way the opinions of Baylor University. Amateur science and technology work should not be taken to be approved by Baylor University. Use all information at your own risk.