Paying People Not to Do Good: A Puzzle about Supererogation

First, thanks to Doug and the other editors for the invitation! I'm happy to be here.

For my first post, I wanted to share an issue that has been puzzling me. Ultimately, this relates back to issues in voting ethics, but I want to pose the problem in the abstract. I'm not sure what to say about it. I'm curious what others think (and whether there's relevant published work on it).

Suppose Alf plans to do X. X is supererogatory: it's nice of him to do, but he isn't morally required to do it. Betty doesn't want Alf to do X, so she offers to pay him not to do so, because she doesn't want him to do something supererogatory. (Suppose also that it's legal for him to accept money not to X.) My question: Is Betty doing something morally wrong?

Note: I'm not asking about what making the offer reveals about Betty's character. That is, I'm not asking whether Betty's offering to pay Alf not to do something supererogatory shows that she has vicious character.

Note also that there are cases where it's morally right to pay someone to do Y, even though this foreseeably might prevent them from doing some supererogatory action X. So, for instance, it's not wrong for our various universities to pay us our salaries, even though our working as philosophers prevents us from devoting our lives to charity work in poverty-stricken regions. [EDIT: Our employers are paying us to teach, not paying us not to do supererogatory volunteer work.]

Sometimes, Betty's actions seem wrong to me. But other times, it seems like she's doing something permissible, though she has bad character. After all, since X is supererogatory, it's morally permissible for Alf to refrain from X-ing, even for morally frivolous or vicious reasons. (His reasons for refraining bear on his character, but generally not on the deontic status of the act.) If so, why wouldn't it be morally permissible for Betty (for morally frivolous or vicious reasons) to pay Alf not to X? [EDIT: Let's assume for the sake of argument that Alf is permitted to accept money not to X.]


29 Replies to “Paying People Not to Do Good: A Puzzle about Supererogation”

Jason, I’m not convinced by the prima facie case you make at the end in favor of permitting Betty’s action. There seem to be a number of situations in which it would be permissible for someone to do X or not, but impermissible for someone else to pay that person to do X. Arguably: to donate a kidney, to break up with a partner/spouse, to award a prize to some nominee, to have sex, to admit someone to a university, to vote for some candidate (maybe this is why you raise the example?), and so forth. Now, the reason why paying for each of those actions is impermissible (if it is) surely varies from case to case. Some of them — the honorific nature of a prize or the nature of intimate, personal relationships — don’t apply to supererogatory acts in general (though they will apply to specific instances of supererogation, which means we need to know more about what Alf plans to do). If there’s a general case to be made here, I suspect it might parallel the arguments sometimes given for the examples above. For example, we might say that an agent’s moral character lies within some relevant sphere of autonomy, and so another agent ought not interfere unnecessarily. That might mean it would be permissible in many cases for Betty to pay Alf not to do something supererogatory, but that it would be impermissible for Betty to [pay Alf not to do something supererogatory because it was supererogatory].

Hi Jason – Welcome to the site. That’s an interesting question. I guess I might like you to clarify the difference between Betty’s actions and the actions of the various universities that employ us as philosophers. Presumably, it might be the case that some philosopher some day goes to the dean and says, “Hey Dean, I’m going to leave the department to go pursue helping the poor,” and the dean convinces the philosopher to stay by offering a pay raise. Wouldn’t that be the same thing? You might say that the difference is Betty’s motive – as I understand the post, Betty is motivated to prevent Alf from ever performing any supererogatory acts. A) I find that motive fairly difficult to understand. B) I guess my own inclination is to say that the motive doesn’t make much difference in this case. (My own view is that motive never makes any difference, but I’ll leave that aside for now.)

Hi Jason – I just re-read your post, and realized that I might have misinterpreted something you wanted to say. Did you want to say that Betty didn’t want Alf to do X, where X just happens also to be supererogatory, but that Betty was unaware of or uninterested in X’s status as supererogatory? Or did you mean (as I interpreted you) that Betty was explicitly motivated to stop Alf from acting in a way that was supererogatory, given its status as supererogatory?

Sorry if the original post was unclear. Let’s assume for the sake of argument that it’s morally permissible for Alf to accept money not to X. However, even so, it might still be wrong to pay him not to X. Also, the point about university salaries was supposed to be a kind of double effect: My employer doesn’t pay me with the goal of stopping me from doing a supererogatory action. Rather, I get paid to teach and write, but the foreseeable effect of this is that I won’t perform certain supererogatory actions. Finally (in response to Dale), let’s divide it into two cases: Case 1: Betty simply prefers that Alf not X. (Not because she’d rather he do Y instead, but simply because she doesn’t want him to X.) She recognizes that X is supererogatory, but doesn’t care. Case 2: Betty prefers Alf not do X because X is supererogatory. Betty is a bad person who doesn’t want people to do praiseworthy things. (In this case, Betty has bad character, but her action might be permissible even though it displays her bad character.)

Concerning “Case 1” especially (but maybe also “Case 2”), is it supposed to be clear that Betty is doing something wrong if she stops Alf from doing something he is morally required to do, rather than something that is only supererogatory? For instance, suppose Alf promises his wife he’ll be home for dinner, and suppose Betty operates a ferry service. Is it wrong for Betty to knock off work early to go bowling if she knows that Alf has made this promise, and also that if she goes bowling Alf will have to walk a mile out of his way to get to a bridge, which will make him late for dinner?

Mark, What if the reason Betty takes off from work is not because she wants to go bowling, but because she wants to make it so that Alf will break his promise? Does that change the deontic status of her actions, or just the character ascriptions she deserves?

The case is a bit too abstract. Let me fill in some possible details. Let’s suppose, for instance, that Alf plans to give one of his kidneys to a complete stranger who is in renal failure and needs Alf’s kidney to survive. In this case, I think that, given the cost to Alf, Alf’s saving this stranger’s life is supererogatory. In certain circumstances, the cost of saving someone’s life is high enough that it makes saving that person’s life supererogatory as opposed to obligatory. But now by paying Alf not to give the stranger a kidney, Betty is preventing a life from being saved and she is doing so at significant financial cost to herself. This seems to be impermissible. It’s one thing to act in a way that ensures that someone is going to die when the cost of acting otherwise is quite high (Alf’s act). But it’s another thing altogether to act in a way that ensures that someone is going to die when it would actually be less costly to act in a way that ensures that that person lives (Betty’s act). I think that many cases of supererogation involve acts that would be obligatory but for the cost to the agent. If this is right, then there will be many instances in which it will be wrong to pay (or incur other costs) so as to ensure that someone else doesn’t do a supererogatory act.

Doug’s right that the abstraction is going to get in the way of useful analysis. Also, it seems like one’s theory of supererogation is going to make a difference. So, I’ll dispense with the abstraction and talk about the real case I’m interested in. However, doing so requires a lot of “assume for the sake of arguments”. So, here goes:

Assume for the sake of argument that, in general, people don’t have a duty to vote. Instead, they have imperfect duties of reciprocity and fair play toward their societies. These duties can be discharged any number of ways besides voting. Assume also that it’s possible to do one’s share, such that doing more counts as supererogatory. Now, assume also that it’s not illegal to buy or sell votes. (I think it should be, but I’m interested in whether it’s inherently wrong apart from being a violation of the law.) Also, assume that I’ve shown that there are particular ways you are obligated to vote, if you do vote. Assume, finally, that I’ve shown that so long as buying and selling votes doesn’t lead to violations of these obligations, then it’s permissible to accept money to cast a vote or to pay money to get someone to cast a vote. (I.e., paying someone to vote for Y is wrong only if it’s wrong for that person to vote for Y for free.)

Okay, lots of assumptions there, some of which people will find implausible without a background argument. So, here’s the case I’m puzzled about. Suppose Alf has, without voting, discharged all of his duties of reciprocity and fair play toward his society. He plans to vote (and to vote the way he should), but he isn’t obligated to do so. It’s permissible for him to abstain, though it’s praiseworthy for him to vote well rather than abstain. Betty doesn’t want Alf to vote, simply because, let’s say, she doesn’t want Alf to vote. So, she offers to pay him not to vote. She knows that this won’t make any difference to the outcome of the election, nor does it represent any significant harm.
(Even if Alf was going to vote well, the expected utility of his vote in a large election is vanishingly small, thousands or millions of orders of magnitude less than a penny.) So, Betty has perverse preferences, and is willing to pay a significant sum to prevent Alf from participating in a collectively beneficial activity, where his individual contribution is of negligible consequence. I think it’s easy to say that Betty has bad or at least strange character, but harder to show she’s doing something wrong. You might try a generalization argument: “What if everybody felt free to pay people not to vote? Then, perhaps very few people would vote, and that would be bad. So, we need a moral, and perhaps a legal, prohibition against paying people not to vote.” However, I worry that this kind of argument is too strong. Suppose I’m rich and have some odd preferences. I just happen to prefer that Farmer Bill not farm. So, I offer to pay Bill not to farm. (Our contract states that he may do anything else except farm.) This doesn’t seem wrong, but you could run the “What if everyone felt free to pay farmers not to farm?” argument just as easily.

Jason, This is completely off topic, but can’t the expected utility of voting in a large close election be quite high? In a close election with 100,000,000 voters, a candidate might be able to go from being an almost sure loser to an almost sure winner by buying 10,000,000 votes. If the expected utility of that candidate winning rather than the other candidate is a trillion dollars (which doesn’t seem unreasonable), then the expected utility of a single vote comes to 100,000 dollars. We can quibble about the exact numbers, but how do you get that the expected utility of a vote is a thousandth or a millionth of a penny? (I assume that that is what you meant to claim, not that the expected utility of a vote is thousands or millions of orders of magnitude less than a penny, which would be mind-bogglingly small!)

Concerning the generalization argument, the same concerns (obviously) arise for Kantians thinking about the Formula of Universal Law. It’s unsustainable for everyone to act on a maxim of being a banker, but surely it’s not wrong to decide to be a banker. The standard Kantian answer (and this seems independently plausible to me) is to say that it is wrong to act on a maxim which says to become a banker, no matter what. But it’s not wrong to act on a maxim proposing, roughly, to become a banker, so long as doing so doesn’t violate any other duty. In practice, this will amount to an intention to be a banker, so long as society doesn’t urgently need you for some other purpose. In your case, we could similarly say that if Betty only intends to pay Alf not to vote, so long as the consequences of his not voting are trivial, that’s permissible (though perhaps, as you say, indicative of a bad character). If, however, she intends to pay Alf not to vote, even if his not voting would have serious consequences, then her act is impermissible (even if his not voting in fact has only trivial disvalue). That seems plausible to me.

My last post may have been unclear. If one candidate is an almost sure loser then buying 10,000,000 votes may of course have much greater than 10,000,000 times the expected utility of buying a single vote. But if for all anyone knows either candidate could easily win then there may be around a 1 in 10,000,000 chance that only one vote separates the candidates. Again, sorry to post something so off topic, but I want to encourage people to vote in close elections!

Hi Mark, Good question. After spending lots of time looking at different accounts of the expected utility of votes, I’ve come to believe that the formulae given by Geoff Brennan and Loren Lomasky in their 1993 Democracy and Decision are the best we’ve got. Their formulae are complicated, but in a simplified form, they go like this: Ui = p[Vi(A)-Vi(B)], where Ui is the expected utility of your vote (in terms of its effects on the outcome of the election), p is the probability your vote will be decisive, and Vi(A)-Vi(B) represents the difference in value between the candidate you vote for (A) and the candidate you vote against (B). In turn, p = f(N, M). That is, the probability your vote is decisive is a function both of the number of voters who vote and of the “anticipated proportional majority” enjoyed by one of the candidates. (The idea is that if one candidate is expected to have a 60-40% lead, you treat a random voter as if she has a 60% chance of voting for the leading candidate.) f is a really complicated function, so I won’t post it here. (There have been some good critiques of B&L’s formulae, but most critiques, if correct, don’t change the expected utilities much.) Now, Brennan and Lomasky show that the probability that your vote will be decisive decreases slowly as N (the number of voters) increases. Even in an election with 100,000,000 voters, if (according to your prior probabilities) they are split 50-50, you have a good chance of being decisive. However, the probability of your vote being decisive drops dramatically as M (the anticipated proportional majority) increases. In really large elections, on the order of 100,000,000 voters, even a tiny proportional majority drops the expected value of individual votes down to nothing.
So, in an election of 100,000,000 voters, where one candidate has a 50.5% to 49.5% lead, even if the better candidate is worth a trillion dollars more than the inferior candidate, the expected utility of my vote is thousands of orders of magnitude less than a penny. (In my paper “Polluting the Polls”, forthcoming in AJP, I give a precise calculation of the expected utility of my vote in a hypothetical election, where if we suppose A) that I’ll vote for the better candidate, B) the better candidate is worth $33 billion more than the worse candidate, C) the number of voters will be the same as in the 2004 presidential election, and D) the better candidate has a 50.5% to 49.5% lead, then the expected value of my vote works out to be $4.77 × 10^-2650.) For what it’s worth, ideologically this stuff is a wash for me. I want to argue that citizens don’t have a duty specifically to vote. It’s easier for me to argue that if votes don’t count for much. But I also want to argue that they have rather stringent duties to vote in certain ways if they do vote. But it’s hard to argue that when even a really stupid vote has vanishingly small expected harm. I also don’t think that this stuff makes it irrational to vote, because I don’t think rationality requires that you always perform the action with the highest expected value. Now, the expected value of 10,000,000 votes as a bloc in a close election is quite high. (It’s not the sum of the expected value of each of the 10,000,000 votes. After all, the expected value of the majority of votes just is the difference in value between the majority and minority candidates, but individual votes have vanishingly small utility.) So, buying 10,000,000 votes is a big deal. (In my view, if you pay people to vote the way they should, it’s fine, but if you pay them to vote badly, it’s wrong…) Note, finally, that there are other aspects of the utility of votes that the formulae above ignore.
For instance, perhaps each vote has some chance of saving democracy from collapsing, or perhaps increased voting tends to improve the character of government, or perhaps people enjoy voting and seeing others vote.
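To give a concrete sense of the magnitudes being discussed, here is a rough numerical sketch. This is not Brennan and Lomasky's actual formula (which models the anticipated proportional majority in a more sophisticated way); it is the simpler textbook binomial model, on which each of N other voters votes for the leading candidate independently with probability q, and your vote is decisive just in case the others split exactly evenly. The inputs (100,000,000 voters, a 50.5%-49.5% anticipated split, a trillion-dollar difference between candidates) are the ones from the discussion above; the exact exponent will differ from the figure in the paper because the model differs, but the order of magnitude is similarly minuscule. The probability is far too small to represent as an ordinary floating-point number, so the code works with logarithms throughout.

```python
import math

def log10_decisiveness(n_voters: int, q: float) -> float:
    """Log (base 10) of the probability that n_voters other voters split
    exactly 50-50, where each votes for the leading candidate
    independently with probability q.  Computed via lgamma because the
    raw probability underflows any float."""
    half = n_voters // 2
    # ln C(n, n/2) + (n/2) ln q + (n/2) ln(1-q)
    ln_p = (math.lgamma(n_voters + 1)
            - 2 * math.lgamma(half + 1)
            + half * math.log(q)
            + half * math.log(1 - q))
    return ln_p / math.log(10)

# 100,000,000 voters, 50.5% anticipated support for the leader,
# better candidate worth $10^12 (a trillion dollars) more.
log_p = log10_decisiveness(100_000_000, 0.505)
log_eu = log_p + 12  # log10 of (probability x $10^12)

print(f"log10 P(decisive)         ~ {log_p:.1f}")
print(f"log10 expected utility ($) ~ {log_eu:.1f}")
```

On this model the log10 of the decisiveness probability comes out in the neighborhood of -2176, so even a trillion-dollar gap between candidates leaves the expected dollar value of a single vote thousands of orders of magnitude below a penny, which is the qualitative point at issue. With a 50-50 anticipated split (q = 0.5), by contrast, the same function gives a probability of roughly 1 in 12,500, illustrating how sharply the anticipated majority term dominates.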

I’m inclined to think that, at least in most cases, paying someone to not perform a supererogatory act will be wrong. Most cases will, I suspect, look something like Doug’s case – that is, the payment will prevent a significant positive consequence from obtaining while achieving nothing more than the satisfaction of a strange preference (perhaps at a not insignificant financial cost) for the payer. In these cases it seems to me that paying for non-performance is clearly wrong. However, the case of Betty paying Alf to not vote does not have this structure. In this case no significant good is prevented from obtaining, and so it’s less clear what the objection to the act might be. Much will depend, as Jason suggested, on one’s account of supererogation. Jason’s assumption that we have imperfect duties of fair play and reciprocity toward our society, and that once we do our fair share doing more is supererogatory, seems to imply that how much good could be achieved by a further act (beyond one’s fair share), as well as how much performing such an act would cost an agent, is not relevant to whether the act is supererogatory or required. This suggests that there could be supererogatory acts that would, if performed, achieve much good and cost the agent very little. In such cases, it again seems to me that paying for non-performance in order to satisfy a strange preference would be wrong, and that the reason that it would be wrong is that it would prevent much good from obtaining without achieving any comparable alternative good. But this reason is not present in the case of Betty paying Alf to not vote, so if her doing so is wrong we need another explanation of its wrongness. Given all of Jason’s assumptions, I’m not sure that a successful alternative explanation can be given. The satisfaction of even a strange preference may be enough to justify preventing the minimal good that would result from Alf’s voting. 
All of this suggests that whether it is permissible to pay to prevent someone from performing a supererogatory act may depend on how much good would be achieved by the performance of the act, as well as how much the payer would gain from the non-performance of the act. If the potential gain to the payer from non-performance is close enough to the gain to others of performance, then perhaps paying for non-performance is permissible.

Welcome to the blog, Jason! This is an interesting issue. I think Brian’s broadly consequentialist point is right. Betty’s strange satisfaction doesn’t outweigh the life that Alf’s kidney transplant would save, but it might outweigh the good from voting. If that’s right, it doesn’t seem to matter very much whether Alf is a person who intends to perform a supererogatory action or a mechanical process of some kind. All that matters is the outcome that Alf would produce, and that Betty will avert. In Doug’s case where Alf is thinking about donating a kidney, it would be wrong to pay him not to, just as it would be wrong to interrupt a mechanical process that was about to crank out a replacement kidney and save someone’s life. But in your voting case where the consequences of Alf’s actions are negligible, it wouldn’t be wrong to interrupt a mechanical process that would generate a vote. (This is a bit of a weird case to consider, because mechanical processes themselves don’t have a right to vote. But suppose some political community gave a bonus vote to a computer that would make calculations based on economic data, if it happened to be turned on at the time. In a case where the likelihood of the computer deciding the election was sufficiently minuscule, shutting the computer off wouldn’t be wrong.) Are there cases in which it’s going to matter that Alf is a person and not some kind of mechanical process?

I’m happy with Doug’s analysis of the version of the case that he describes, but I wonder how far away we have to get from the idea that Betty is acting out of pure whimsy for the answer to change. Suppose that Betty is Alf’s loving and rich aunt. She genuinely worries about his undergoing the risks of surgery and then life with one kidney. She knows that he has always wanted to visit Iceland, so she offers to pay him to forgo making the donation. Here Betty’s motives are ones that we can endorse, even if we think that they really ought to be outweighed in this case by other motives, and since she genuinely would be sad if anything happened to Alf she does have something at stake in this herself. Do these variations change the way we look at the case? I’m torn but leaning slightly toward saying yes.

The significance of Alf’s wanting to visit Iceland may not have been entirely clear in that last post. 🙂 I was going to have Betty offering to pay for him to make a trip instead of offering a crass monetary payment, but then I decided that this was probably a red herring… I just forgot to delete one reference to the trip.

Dale, I think your case helps show that it matters how we ought to describe the action of the person making the payment. Suppose tomorrow is an election. I’m a newspaper editor, and I have a strong desire to get a movie critic column from you by tomorrow night. I pay you to watch films all day, even though this has the predictable consequence of keeping you from the polls. In this case, I’m not paying you not to vote under that description. I’m paying you to watch films and write some columns, though this has the foreseeable consequence that you won’t vote. Same with our universities paying us to teach. Teaching reduces our volunteer work, perhaps, but they aren’t paying us not to volunteer. They’re paying us to teach. So, we get a principle of double effect complicating the analysis. In one of the cases above, Betty is paying Alf not to vote de dicto. She doesn’t care what he does, so long as he doesn’t vote. In the aunt case, I’m not sure what to say. What’s the best way to describe the action she’s doing? Is she paying him not to donate under that description? Is she paying him to go to Iceland with the foreseeable and intended consequence that this will keep him from donating a kidney? What if she were sending him to Iceland because she runs a travel magazine and needs a column on Iceland? Let’s suppose she knows Alf is planning to donate a kidney, but doesn’t care. She just wants the column. So, in this case, she’s paying him to go to Iceland, and this has the predictable but unintended consequence that someone will fail to be saved. Many people above seem to think that this would be wrong, but the newspaper editor case I just listed is permissible, because the consequences are different.

Also, it’s much easier to evaluate people’s character in these cases than the rightness or wrongness of their actions. Alf’s aunt isn’t morally perfect, but it’s understandable she’d behave that way. We wouldn’t say that her actions show she has a vicious character, though perhaps she has some flaws. She’s acting out of love and worry. However, it might well be wrong for her to make that offer, since it will prevent a life from being saved. In contrast, when Betty pays Alf not to vote, simply because she doesn’t want him to perform this supererogatory act, Betty shows that she has vicious character. However, her action might be permissible.

Jason, with respect to your voting case, I think there are two different sorts of moral evaluations we can make of the action, in addition to the evaluation of Betty’s character. Above (several posts above, where you first outlined the voting case) you said that Alf’s voting was a praiseworthy act. I take it that this means that his voting is a good thing, morally speaking, even if it does not affect the outcome of the election and so on. Then, insofar as Betty’s end is simply to prevent a good thing from occurring, her action looks like a bad one. It isn’t that she is trying to weigh conflicting moral concerns or anything like this; she just takes it into her head that she doesn’t want Alf voting, which would be a morally good thing, and so I think it follows pretty simply that her action is bad (and we can justify some inference about her character on this basis). But the second sort of evaluation we can make of her action is not just whether it is a moral wrong, but whether it is a wrong of the sort that could be justifiably, coercively, prevented. I’m not at all sure that it is this kind of wrong. (I’m relying here on Kant’s distinction between juridical and ethical duties – an intentional act of theft violates both, but I think Betty probably violates only the latter sort.) So, it might be that the conflict you have over what to say about this could be diagnosed by pointing to the two different frameworks within moral theory, right and ethics, and noting that the action violates duties in one but not the other.

Since no one is harmed by Alf’s failing to vote, I’m wondering if Betty’s action (paying him not to vote) might display bad character without strictly speaking being wrong. Consider Richard Sylvan’s Last Man thought experiment. Sylvan asks you to imagine that you’re the last sentient being alive on Earth, and somehow you know that no other sentient beings will ever live on Earth. Next to you is a magnificent redwood tree that has stood for thousands of years. A thought occurs to you – you could destroy it. (You don’t need to do so for shelter or fuel, etc. – its destruction would be frivolous.) Would it be wrong to do so? Some people have the (often uneasy) intuition that it would be wrong to destroy the tree, and they come up with different explanations why. However, some people have suggested that it’s not strictly speaking wrong to destroy the tree, but that destroying the tree displays bad character. Even though there’s no moral duty not to destroy the redwood, the kind of person who would be willing to destroy it for frivolous reasons is morally bad. I’m wondering if that’s the way to go with the Alf and Betty case.

Jason, are your concerns different from the usual sorts of disagreements between consequentialists and Kantians? A Kantian can have the resources to say that we can have duties not to do things for frivolous reasons, regardless of the potential for harm, while a consequentialist can say that one’s motivation is irrelevant to the moral evaluation of a particular action (though a consequentialist story can be told about the value of a sense of duty and so on).

Pete, If you run a kind of Kantian universalization, I’m pretty sure that paying people not to perform supererogatory acts fails the contradiction-in-the-will test. However, what I’d like to do is offer an explanation of whether the act is wrong or not that is not itself too dependent on any particular moral theory. Rather, I’d like the explanation to be a plausible, freestanding principle which in turn can be grounded in and implied by any number of background moral theories. The problem is that this case looks more theory-dependent than some others. What you will say about it depends on whether you’re a Kantian, consequentialist, etc.

I think that considerations of doing vs. allowing are important to making sense of the sort of case you have in mind. Suppose that Jones is contemplating donating a kidney to Smith. Smith will die without it. To say that donating the kidney would be “supererogatory” is, I take it, to say that Jones would do no wrong (would not have wronged Smith) if he refrained from donating and allowed Smith to die. But suppose that Sally, who doesn’t like Smith, prevents Jones from carrying through his generous plans by e.g. threatening him with violence or giving him wrong directions to the hospital or by offering him an enormous sum of money. In such cases we may well think that Sally has done something wrong (has wronged Smith) for the uncomplicated reason that she has intentionally caused Smith’s death. This answer assumes that we can coherently describe X as causing some outcome by causing Y to allow that outcome. There are accounts of the doing/allowing distinction (e.g. those that hold that “omissions can’t be causes” or that causation requires a “flow of energy” between agent and outcome) that don’t allow us to say that. But so much the worse for those accounts. For non-consequentialists, then, it will come as no surprise that it may be permissible to allow other people to come to harm even though it would not be permissible to cause them to come to those harms. Indeed it is hard to envisage any viable non-consequentialism that did not assume this.

Hi Tomkow, Thanks for your suggestion. I find your analysis very appealing. However, I’m not completely convinced. It’s morally optional for Jones to save Smith. Sally offers Jones money not to save Smith, because Sally is a bad person. Jones doesn’t save Smith. Smith dies. Arguably, Sally caused Smith’s death, as you say. So, Sally did something wrong – she caused a death. However, let’s take a parallel case. Let’s suppose that Peter Singer and Peter Unger are wrong about how much you owe others. (Given your blog, you’ll grant me this assumption, I’m sure.) Jones, having just finished reading *Living High and Letting Die*, comes to think that he is obligated to give his kidney to Smith. Sally is a moral theorist who, let’s say, has a knock-down proof that donating a kidney is not obligatory, but rather optional. She realizes that if she shows this proof to Jones, he won’t donate the kidney, and Smith will die. Still, out of motive X (see below), she decides to show him the proof. Jones doesn’t donate the kidney. Smith dies. Did Sally cause Smith to die? Did she do something wrong? For motives X, try inserting any of the following: X1: She believes she has a duty to tell people the truth about morality, regardless of the consequences. X2: She worries about Jones undertaking such pain and risk upon himself on the basis of a mistake. She wants him to make the decision to donate on the basis of a clear understanding of his obligations and options. X3: She hates Smith and wants him to die, and recognizes that telling Jones the truth about morality will have the predictable consequence that Smith will die. What we insert for X changes our evaluation of her character. But unless it changes the description of the act, it doesn’t change the deontic status of the act. I’m not convinced that Sally does something wrong, especially when her motive is X2. (This might be because it changes the description of the act.
But still, it looks like she causes Jones to allow the outcome of Smith’s death.) Thanks!-J

Jason, It looks like the view that you’re suggesting in your last comment is that it’s permissible to prevent someone from performing a supererogatory act, whatever the motive is for doing so, as long as there are in fact sufficient negative consequences that will be avoided if the act is not performed (in your case above, the pain and hardship that would result from donating the kidney). This seems like a plausible view, but I think there are cases in which our intuitive responses might differ from the one that you rely on in the kidney donation case.

Assume, as you did above, that Singer and Unger are wrong about the demandingness of morality. Jones, however, having read *Living High and Letting Die*, is about to donate 99% of his wealth to OXFAM, leaving himself very much worse off than he currently is. Sally decides to show Jones her proof that such an act is supererogatory, out of appropriately reformulated versions of motives X1-X3 above. I certainly agree that our assessment of Sally’s character should depend on which motive she acts from. But I’m also inclined to think that preventing Jones from donating, and thereby causing the deaths of a number of children (or at least reducing the amount of help that OXFAM can provide to those threatened with early death), is wrong. We might attempt to explain this by claiming that, unlike in the case in which Jones is about to donate a kidney, he doesn’t have enough to lose by donating 99% of his wealth, relative to what others have to gain, to justify preventing the act. But besides being not entirely plausible, this claim, if true, would significantly undermine the thought that donating is in fact supererogatory, since an act’s status as supererogatory seems to depend, at least in part, on how much performing the act would cost the agent relative to how much others would gain from its performance.

There may be a way to defend the position that preventing supererogation in the kidney donation case is permissible, while doing so is not permissible in my case, without having to give up the thought that Jones’ donating really would be supererogatory, but I’m not sure what it would look like. But perhaps others don’t share my intuition that Sally’s preventing his donation would be wrong?

My intuitions about these cases are not firm at all. I’m quite sure that it’s wrong to pay someone to do something wrong, when one knows or should know it is wrong, unless there are some countervailing factors. I’m quite sure that it displays bad character to pay someone not to do something good because it is good. But with many of these other cases, I don’t find it quite as obvious.

In fact I was interested in your puzzle precisely because I have the worry you noticed: if my arguments against Singer lead some people not to give to charity, would I then be causing the harms that the charity would have prevented?

I think I have to answer yes. And it seems clear too that we should say that, in the case you describe, Sally causes Smith’s death whatever her motives may have been. Let me say a few things in my and Sally’s defense.

First of all, I was not saying that it is always impermissible to cause harm by preventing a supererogatory act, only that when it is wrong, it is wrong for the unremarkable reason that it causes a harm.

But it is not always wrong to cause a harm, even a death, though for a non-consequentialist saying why this is so is a complicated business. As David Lewis observed, all of us have probably caused many deaths, but we are not murderers. I’ve argued that Lewis’s explanation of why this is so is unconvincing, but I don’t know that anyone has a better answer.

Lewis also argued that it makes some sort of difference whether or not you cause a death through the agency of another. When you hire a hitman to kill, you wrongfully cause a death but, Lewis insisted, you are still not the killer. And I have argued that the difference between preventing harm and causing someone else to prevent a harm is relevant to our verdicts on cases like the Trolley Problem.

Your example of causing-a-harm-by-causing-an-allowing seems to me to raise lots of interesting questions in the same vein, but I don’t think that the central issues here stem from supererogation per se. Thus suppose, in your stories, Jones had (unknown to Sally) promised his kidney to Smith. In that case Sally would be preventing Jones from doing something he was obliged to do. But it does not seem to me that this makes a difference to the morality of Sally’s action or, so long as she was unaware of the promise, to the quality of her character.

Very interesting post. I want to pick up on a point made by Andrew: are we obligated to pay someone to do a supererogatory action if that is what it will take to get that person to perform the action? Consider the following example: John needs a kidney. James is a good match but is unwilling to give up one of his kidneys to John. Assuming that giving a kidney is supererogatory, James not giving up a kidney is normatively permissible. However, Mary, John’s wife, wants John to live. James says to Mary, “I will give John a kidney for 20k” (which Mary has). Is Mary now obligated to pay James for his kidney to save John? It seems to me that she is.

It seems that the general argument against paying is that it is not permissible to stop someone from knowingly and freely choosing to do a good action, regardless of whether the action is supererogatory or obligatory. The underlying presumption is that we are interested in assessing the normative status of being able to blame someone for not acting, and it is generally agreed that it is not permissible to blame someone for not performing a supererogatory action. If James wanted to help John by giving him a kidney, no one should stop James from being able to do so. It is not permissible for Mary to pay James not to help John, because James knowingly and freely wants to help John. However, it does seem reasonable to assert that Mary should do what is in her power to convince James to help John if James has yet to decide or can be persuaded to help John. In Mary’s case we are asking about her obligations, and it seems reasonable to maintain that her obligation to John is to work for the best outcome for John based on what John desires. We can utilize Singer’s notion of comparable worth to help us make this assessment: as long as the cost is not greater than the benefit, we are obligated to do the action.

Assuming that John desires to live and that Mary can affect that outcome without the cost being greater than the benefit, it seems that she is obligated to pay James for his kidney even though James is not obligated to give up his kidney. And this does seem to be the crux of the issue. If James were obligated to give up his kidney, then Mary would not be obligated to pay for it. It is the fact that James is not obligated to give up his kidney that makes it obligatory for Mary to pay.

Sorry to arrive late to the party, but isn’t this all a debate about the permissibility of leveraging? That is, when is it permissible for B to G so as to leverage A to F? If so, then the following looks to hold:

1) Assuming (for A) F-ing (or ~F-ing) to be morally permissible, if (for B) G-ing itself is morally permissible, then it is morally permissible for B to G so as to leverage A to F (or ~F).

2) Assuming (for A) F-ing to be morally obligatory (and ~F-ing to be morally impermissible), then if (for B) G-ing itself is morally permissible, it is at least morally permissible for B to G so as to leverage A to F (and morally impermissible for B to G so as to leverage A to ~F).

This goes both ways: that is, the same applies to being leveraged as to leveraging. For example, suppose it is supererogatory for anyone to give money to UNICEF. Alf decides to make a one-time donation of 5 dollars. Betty tells Alf that if he doesn’t donate any money to UNICEF, then she’ll donate 10 dollars to UNICEF. If it is the case that donating or not donating money to UNICEF is morally permissible, then it cannot be the case either that a) it is morally impermissible for B to donate money so as to leverage A not to donate money, or that b) it is morally obligatory for A not to donate money in virtue of being leveraged by B’s donating money. So, even were we to suppose that Betty tells Alf that if he doesn’t donate money to UNICEF, then she’ll donate 10 billion dollars to UNICEF, unless some additional threshold argument is made, Alf no more has an obligation not to donate his 5 dollars than he did prior to Betty’s leveraging. You cannot generate an obligation or prohibition merely in virtue of morally permissible leveraging. So, it looks like if A’s ~F-ing is morally permissible and B’s G-ing is morally permissible, then B’s G-ing so as to leverage A to ~F is likewise morally permissible.

Given that the supererogatory is a subspecies of the morally permissible, whether or not F-ing is supererogatory for A doesn’t matter at all for the permissibility of B’s leveraging A to ~F. Intuitions to the contrary, I think, result from the supererogation of A’s F-ing functioning in Jason’s (and others’) examples as an illicit intuition pump.