Saturday, February 06, 2016

Opposite Day: "Charity begins at home" edition

It's been almost a decade since my evil twin Ricardo last posted on this blog. I invite him back today to share a horribly misguided speech that he recently gave as part of a debate in St Andrews on the topic 'Charity begins at home'. (They needed someone to defend that awful claim, and I wasn't entirely comfortable about it myself, so I sent along my evil twin to do the job. Here's what he came up with...)

Charity begins at home… but that’s not to say it ends there!

So let me begin by clarifying what is and is not at stake in this debate. It’s common ground that we should do more to help others (and the global poor in particular).

Our core thesis is just that we should not focus exclusively on the global poor, neglecting significant needs closer to home.

To establish this thesis, consider the following scenario: Upon learning that the Against Malaria Foundation can save a life for around £2000, you go to the bank and withdraw £4000, intending to save two lives with it. But on the way to the post office, you come across a young child drowning in a shallow pond. You are the only person around, and her only chance at survival is if you jump in immediately to rescue her -- ruining the money in your pocket. What should you do?

Obviously, you should save the child. This remains true even though the alternative -- were you willing to watch her drown in order to keep your money safe -- would have led to your later saving two lives instead. The rule of rescue bars us from making such tradeoffs. When we see great needs, right before our eyes, we morally must act. We can’t just sit back and coldly calculate for the greater good. If you share this moral judgment, then you, too, agree that charity begins at home. We cannot neglect the needs around us for the sake of some distant greater good.

What is the explanation for this? One way to get at this is to ask where one goes wrong in acting otherwise. Suppose your neighbour would watch the child drown, for the greater good. What would you think of this, and why? Well, most naturally, I think you would worry that your neighbour is a disturbingly callous person. What sort of person can just sit by and watch as a young child drowns right before his eyes? He would seem a kind of moral monster. If his reason is that he wants to save two lives instead, then that adds a complication. He isn’t obviously ill-meaning; he wants to do what’s best. But it’s still monstrous in the sense of being inhuman -- perhaps robotic, in this case, is the better description.

We think that part of what it takes to be a morally decent person is to be sensitive to the needs and interests of those around you. To be willing to watch a child drown displays a special kind of moral insensitivity, even if it’s done for putatively moral reasons. Such an agent, we feel, fails to be sensitive to human suffering in the right way -- in the emotionally engaged kind of way that we think a properly sympathetic, decent human being ought to be. Of course there remains a sense in which your robotic neighbour wants to minimize suffering, and that’s certainly a good aim as far as it goes. But one can’t help but feel that your robotic neighbour is more moved by math than by sympathy; he has an over-intellectualized concern for humanity in the abstract, but seems troublingly lacking in empathy for the concrete, individual child in front of him.

So -- need it be said -- this concretely misanthropic lover of abstract humanity should not be our moral ideal. To deny, in this way, that charity begins at home -- to insist that we must watch a child drown before our eyes if the greater good calls for it -- is a morality for robots, not for human beings.

Another way to get at this idea is to ask yourself: Don’t you think there’s moral value to empathy? To spontaneous expressions of love and concern for one’s fellow human beings? Would you wish to live in a world where such feelings had to be suppressed in favour of cold calculation? That you mustn’t reach out to the person in need beside you, if you could instead have served a greater good far away?

Again, to deny that charity begins at home is implicitly to advocate for a soulless -- if chillingly efficient -- society, where spontaneous caring and affection are frowned upon as a distraction from the greater good. Where your role, as a moral agent, is nothing more than to be a well-oiled cog in a utility-maximizing machine. To be the sort of inhuman machine part -- or moral robot -- who could watch a child drown before your eyes if the math called for it.

We argue that such a chillingly impersonal and inhuman approach to morality must be rejected. Morality stems from empathy, and empathy engages us -- in the first instance -- with our immediate surroundings. Charity begins at home. It’s common sense. Look after your friends; look after your family; support those in need in your local community. Rescue that poor drowning child. You’ve got to start there. You can’t ignore this stuff. But don’t stop there, either. By all means, expand your circle of concern to also help those in the most desperately poor regions of the world. We’re not opposing that at all.

All we’re opposing is the sort of moral extremism that tells you that philanthropy must be 100% efficient, and that your home -- your friends, your family -- are nothing special; that the needs all around you, that most easily engage your natural human sympathy, are nothing special; that the child drowning before your eyes is nothing special.

We argue, by contrast, that morality must contain at least some room for these special relationships to matter. It makes a difference if it’s your friends and family on the line. It makes a difference if you would have to suppress your empathetic impulses -- the very basis for moral behaviour in the first place! -- in order to serve the greater good. And so it makes a difference if the drowning child is right before your eyes. Don’t neglect those that are far away -- of course! But use your common sense. Charity begins at home.


30 comments:

It seems to me your scenario might be difficult to internalize - i.e., our intuitions might be disregarding parts of the scenario that seem particularly unrealistic. For example, if you jump - as you should - you're not thereby certain to fail to save two other lives. In fact, it seems to me that you can rescue the child while the probability that you will save those other lives goes down only very slightly, given that:

1. You could just take the money out of your pocket before jumping. That will take no more than a second, and one second makes a minimal difference in a drowning case (i.e., the chance of a successful rescue is almost identical). Granted, you might stipulate that you're in a place full of thieves, so the money would likely be stolen, but that just makes the scenario less realistic.

2. Ruined money can be replaced in the US (if you live elsewhere, I can find other links). Granted, you could stipulate you're in Syria or North Korea or something like that, but I think that would be a problem for the whole scenario.

3. If everything fails, you can still start a petition on some social networks explaining what happened and asking for money for the Against Malaria Foundation, saying that the foundation didn't get the money only because you jumped in to rescue the other kid.

In addition to that, it's not guaranteed that two people will die of malaria - who would otherwise not die of malaria - if you don't give the money to the foundation. Even untreated malaria will often not kill, and it's hard to tell whether the money will save two people. On the other hand, the child is guaranteed to die (well, intuitively not really, despite what the scenario says, but almost certainly she will die) if you fail to rescue her. Granted, you stipulate that the foundation will save two lives for certain, but as I mentioned, I think stipulations like that might be difficult to internalize - i.e., one may still be making intuitive probabilistic assessments that conflict with what the scenario states.

Yeah, I guess there's always a risk with such thought experiments that we'll have trouble co-operating with the relevant stipulations. Though I think even if one modified it to make better sense of the stipulations (say the money is in a buttoned pocket, and I'm jittery and slow at opening buttons; or a fire imp is guarding the pond and will only let me through if I give him £4000 to burn, etc.), the intuition seems to remain. I think we (most people) really are moved by the salience and vivacity of the nearby child's need, and have difficulty being as moved by mere abstract thoughts of a greater number of people in need elsewhere. (See my coauthored 'Virtue and Salience' paper for related discussion.)

So a more promising response, I think, needs to tackle the intuition head-on. I see two main options here: (1) Argue that the pond intuition doesn't generalize: it tells us to prioritize visible/salient needs, but that's no reason to prefer an out-of-sight local charity to an out-of-sight global one (cf. dionissis' comment below), and/or (2) Argue that the intuition is morally corrupted: Yes the visible child's needs are more salient to us, but that doesn't make them any more important than those who are out of our sight, so surely we should try to overcome our psychological (including attentional) limitations when possible, and act as we would if we had full information (and could see the distant needy just as vividly, etc.)...

Personally, I think the "jittery and slow" scenario still leaves the option of having the money replaced, which is likely to be successful. And I'm not sure the fire imp isn't weird enough to mess with intuitions. For example, if I saw someone who looks like a fire imp, I wouldn't even recognize it as such (since I hadn't heard of fire imps before); but assuming he's showing his power, etc., I don't know whether I would question the whole scenario (i.e., am I hallucinating somehow?) or consider the fire imp a massive threat (i.e., we live in a world with fire imps! and one is threatening me!). I would not be inclined to say that someone who fails to rescue the child in such a situation behaves immorally - I would have to consider that person's state of mind, etc. Granted, one can try alternatives to fire imps, like robots, etc., but then the threat becomes at least as salient to me as the child, so I'm not sure this works.

In reality, I would blame the person who fails to rescue the child, but in reality, there is very little chance that rescuing the child will interfere with a person's ability to give the money to the charity in question.

But that aside, and assuming the objections I've raised fail, it doesn't seem to me that the visible/non-visible distinction matters, given the following scenarios:

a. Let's say that Alice is planning to give $10000 to an out-of-sight local charity, and they can save 10 lives with it. But she's on vacation, and she sees another charity (i.e., the building is right there, clearly recognizable, they explain what they do, etc.). She can give the $10000 to that charity instead, and they can save one life with it. Does she have a moral obligation to pick the second charity? I don't have any intuition that she does. In fact, though I don't have an intuition that she has a moral obligation to do charity at all - that would depend on the case - my assessment is that she doesn't have a moral obligation to help the second charity. Even if they asked her directly, she may permissibly reply that she respects what they do and wishes them the best, but she's planning to give the money to the other charity. There's nothing wrong with that reply, even if she made no promise to the other charity.

b. The child is drowning in the pond, but a local charity's building is also visible. The intuition seems to remain (I'm assuming scenarios like those presented by Ricardo escape the objections I raised before, so I may use similar scenarios). Moreover, even if the people at the local charity have already sent an email to Alice asking for money, the intuition seems to remain.

As for the salient/non-salient distinction, I'm not sure how you construe that (is it what's salient to our moral intuitions?), but regarding point (2), a counter-reply is that even if one is fully informed, the intuition remains. How would one tell whether a moral intuition is corrupt or not? After all, in the end one can only make moral assessments by means of one moral intuition or another, so what is the stronger intuition that tells us not to rescue the child?

Ah, careful, my suggestion was about the visibility of the needs, not the visibility of the charity that would serve some (out of sight) needs.

re:2, I'm not sure that the intuition remains if you're fully informed in the sense that you don't just know abstractly about the distant needy but can also see them just as vividly (say you have a "God's eye view" of the whole world). I think that'd be a lot more like the case where there's another pond just beyond the first, with ten children drowning in it, in which case we think that one can (indeed should) let the one child drown in order to save the other ten.

Regarding the visibility of the need, do you mean that the people in need are visible? (I'm not sure how else a need could be visible)

re2: I think I misunderstood "fully informed". I thought it was after philosophical consideration, or at most information that some people are in danger, but not actually looking at them. Even so, it seems to me that if I have a camera that shows me those other children in need at a distance (i.e., in need of a malaria drug in the long run, not immediately), the intuition remains, but then again, I can't really get out of my head that there is no way (intuitively) that failing to rescue the child in the pond will have a negative impact on them. Even if I get into the pond with the money, chances are most of it (or probably all) will be okay after it dries, and the rest can be changed (yes, I know, fire imp).

"Look after your friends; look after your family; support those in need in your local community."

My question is this: why should the defender of the "charity begins at home" meme leap to the idea that our local community (over and above family and friends) takes precedence over a distant community? It might make psychological sense to say that people who don't prioritize friends and (deservedly loved) parents might be so emotionally compromised that they won't succeed in becoming utility maximizers anyway. But why should a person not known to me take precedence over two other unknown persons merely on the grounds that the former is a compatriot, or a member of the local community? On the face of it, nothing of emotional value seems to hinge on such a predilection -- actually, something of potential disvalue seems to come out: nationalism. In other words, my question is: can the defender of the "charity begins at home" thesis ditch, as unnecessary to her theory, any calls for priority of locals over distant persons, and merely stick to the idea that friends and deservedly loved parents take priority over unknown persons, at least when it comes to a choice of protecting the former or the latter from grave comparable concurrent harms, no matter the number of unknown persons who will suffer? And can she ground all this on the utilitarian claim that diachronic and sustainable utility maximization requires a global society constituted of persons who are objectively utility maximizers, and that emotionally compromised persons cannot be as efficient in utility maximization as their friend-prioritizing counterparts? Can my evil sister be utilitarian (and cosmopolitan or even post-nationalist, instead of patriotic), while caring more for her children and friends?

P.S. Prof Chappell, I am reading your brilliant "Willpower Satisficing" (really slowly, because I am a non-philosopher and my digestion rhythm is way too slow compared to PhDs in analytic philosophy, not to mention professional philosophers), so I will have more to ask about its relevance to your present post once I am done.

P.S.2: I submit that the intuition that not rescuing the drowning child is the omission of a horrible person could be alleviated if we posited that there are two other children, coming from another community, who are drowning in a pond a few blocks away, and have our hero save those two. The initial choice not to save the first child so as not to ruin the money that would be donated to the charity might seem to come from a callous person because there is plenty of time to save the other lives through the charity -- the agent could save both the drowning child and the distant persons. But, it seems to me, if we tweak the thought experiment as I suggest, the choice to save the two non-local children (who are drowning locally) at the expense of the local child is an ethical no-brainer.

P.S.3: I spoke of deservedly loved parents to exclude the case of a person who loves her abusive father from being a case of a well-functioning personality. I would try to somehow anchor this notion of desert in the idea of the emotional responses of a well-functioning human personality, and try to anchor this well-functioning in turn to utility maximization -- something along the lines that the universe is such that utility is maximized if people's personalities are thus-and-so -- not that I know how to do it :)

Hi dionissis, yes, your main point here strikes me as exactly right: there's no obvious basis here for prioritizing unseen, anonymous strangers who happen to be part of one's "local community". At most, there's a case for prioritizing needs that one literally comes face-to-face with, but most -- even "local" -- needs are not like this. (And yes, I think one can make an "indirect utilitarian" case for encouraging loving relationships, despite the favouritism that inevitably goes along with this.)

I also agree with the point in your second postscript that it's clearly the locality of their drowning, rather than the locality of their national origin, that intuitively matters here. Not so sure if it's necessarily anything to do with "time", though, so much as factors to do with salience and vivacity as discussed in my reply to Angra, above.

On the basis of your scenario, you claim: "When we see great needs, right before our eyes, we morally must act. We can’t just sit back and coldly calculate for the greater good. If you share this moral judgment, then you, too, agree that charity begins at home. We cannot neglect the needs around us for the sake of some distant greater good."

I have some concerns about the scenario, as I explained before. But if we assume that the sort of scenario you're using is a good guide to moral knowledge, how about the following scenario?

Bob sees the child, drowning in the shallow pond (in the US). Bob is in good shape and the pond is shallow, so Bob can easily rescue the child, but he properly reckons it will take him at least 3 minutes, and maybe more. Alas, there is a switch 500 meters from Bob. If Bob fails to flip the switch within 3 minutes, then a 50-megaton nuke will explode in, say, Berlin. Does Bob have a moral obligation to rescue the child?

If you share the moral judgment that he does not, then it seems to me it's not the case that we shouldn't neglect the needs around us for the sake of some distant greater good (though "neglect" is a negatively loaded word).

Moreover, even if the switch is required to save the lives of two children 1000 kilometers away, tied up by a psychopath and about to be blown up by a bomb he set, it seems to me that Bob has no moral obligation to rescue the child in the pond.

Just consulted with Ricardo -- he's of the opinion that having part of the causal mechanism (the bomb switch) in your vicinity is enough to make you part of the "immediate situation" involving the distant children, despite your geographic distance. You are causally proximate to them, in a way that you aren't with most of the global poor.

(I can't say I'm personally convinced that this is a particularly principled notion that he's drawing on here, though...)

Okay, let's go with Ricardo's theory. Then, it seems the relevant factor is not the physical proximity of the people in need, or whether we can see them with our eyes, etc., but causal proximity. Now let's consider an out-of-sight local charity vs. an out-of-sight charity 15000 kilometers away. You can donate $4000 to either of them. The former can save 1 life, the latter can save 2 lives. In both cases, you can donate the $4000 via an online transaction that takes less than 5 minutes and is equally easy for you to make. Then, it seems to me that there is no greater causal proximity in the case of the local charity. Generally speaking, the internet and online banking seem to eliminate differences in causal proximity in the case of many if not most charities, and that seems to undermine the part of Ricardo's case about the local community - though dionissis already made a strong case against Ricardo's argument for the local preference.

As for the other part of Ricardo's case - namely, about family and friends - I actually think that it succeeds, but I think examples that focus on such cases are better suited to defend the claim. For example, in the family case, let's say that there are two shallow ponds, one 70 meters East, the other 70 meters West. Three 6-year-old children are drowning in the former, and one is drowning in the latter. Alice is in good shape, and can run to rescue the children without taking any significant risks. The expected number of rescued children is clearly highest if she goes East first, and only if she manages to rescue all three (or if the rest have already died) then goes West. However, the three children in the pond to the East are strangers. The one in the other pond is Alice's daughter. What should Alice do? Or are both courses of action morally permissible? It seems intuitively clear to me that Alice should try to rescue her daughter first.

Yes, that all sounds right to me. (Except Ricardo would insist that causal proximity isn't the sole relevant factor, but just one of several possible sufficient conditions for morally-relevant "proximity", such that physical proximity of the people in need might be another. But I don't think this makes any significant difference to your response, since out-of-sight "locals" don't, intuitively, seem to be especially "proximate" in any morally relevant sense. It seems you really need to be in the immediate vicinity for pond-style intuitions to kick in.)

Hi again prof Chappell, hi Angra. A thought that occurred to me is that if the drowning child were drowning because of me (say, I pushed her inadvertently into the pond), then the intuition that rescuing her takes priority over rescuing the two distant children through the charity is even stronger -- I can hear in my head a Ricardian-inspired superego deriding me with a "clean up your own mess before you go out there to play moral superhero!". Still, the hypothetical utilitarian injunction to save the two distant kids, if predicated only on the number of lives lost, seems to be missing any distinction between the two cases (i.e. my being responsible for the child's drowning vs my merely being in a position to rescue her). To preserve the essentials of my utilitarian theory intact, I might want to argue that, unless the agent's disposition to correct her own mistakes is so great as to reach the level of akrasia, she won't be as efficient in her everyday utility maximizing, and therefore the disposition should be cultivated and the community should acquiesce to the psychologically inevitable (but rarely called for) inefficient action aiming to make up for one's mistakes.

P.S. Prof Chappell, I haven't yet read your coauthored "Virtue and Salience" (I just finished "Willpower Satisficing", and I have questions waiting for an appropriate moment), but it seems to me that what counts as salient in a given situation should definitely make room for the agent's prior responsibility in the mess she is asked to ignore. (By the way, could we say, based on your criterion of permissibility in "Willpower Satisficing", that it could be permissible for an agent to save the kid she inadvertently threw into the pond because, in such an eventuality, the mental effort she would have to exert in order to bypass the urge to save the drowning kid would exceed the maximum effort required of her on your satisficing theory in the aforementioned paper? After all, she is motivated by a psychological mechanism that is distinctively associated with adequate moral concern -- correcting the bad results she caused with prior actions. If so, I guess that we would need to make some additional stipulations about the agent's mindset at the time in order for the thought experiment to work as sanctioning this permissibility.)

You might defend a utilitarian theory like that, but it seems to me - if I'm reading this right - that arguably someone who only in such an extremely odd case would save the two kids would not have her own everyday disposition to correct her own mistakes significantly affected (it's a one-time event), and she may actually have cultivated a strong utility-maximizing mentality, which usually includes correcting one's mistakes.

That would seem to make saving the two other kids permissible, at the cost of letting the one in the pond die.

Hi Angra. I had in mind an empirical claim (which I expressed rather obscurely!) to the effect that someone who is very much psychologically disposed to maximize utility must have already (as a matter of psychological necessity) internalized a very high degree of the disposition to first attend to making up for her own mistakes. It seems to me that possessing this latter disposition (correcting one's mistakes) to a high degree is a necessary first stage to acquiring the former (maximizing utility). Of course my empirical conjecture might be mistaken. I didn't have in mind that in case the agent ignored the kid she accidentally pushed into the pond, she would then somehow jeopardize the continuing existence in her of an already existing strong tendency to correct her own mistakes. I was rather using her hypothetical omission to save the kid she pushed as evidence of an absence in her of such an own-mistake-correcting disposition (and, hence, as evidence of an absence of a proper utility-maximizing disposition -- if my conjecture about the particular necessary interconnection of the two dispositions is true). Thanks for helping me to both clear my thoughts on the issue and express them more accurately -- not that I am now fully clear about either of these, but now it's better :)

Thanks for clarifying your point. Maybe I haven't been clear enough myself, but I think the crux of my reply still applies, because regardless of what empirical evidence is available to other people (we may even stipulate that no one else is around, so if she sacrifices the kid, probably no one will ever know), there is a question as to whether sacrificing the kid in the pond is morally permissible. From the utilitarian perspective you seem to be suggesting, I don't see why it would be impermissible to sacrifice the kid.

Angra, I agree with you that the presence of other people (who would then know what happened) is not required for my theory. The reason I am inclined to say that the omission to save the kid that I accidentally pushed into the pond would be wrong is that I speculate that if an agent is emotionally capable of ignoring her impulse to urgently rescue the kid in front of her that she herself caused to be endangered, then she is either Goddess (the one and only of monotheism!) or (more likely) a person who is objectively incapable of being a reliable utility maximizer because she is emotionally disturbed. What I had in mind was that asking people in general to make such a sacrifice would be tantamount to asking them to be the sort of persons who are not reliable utility maximizers. If this particular agent is indeed disturbed and coldly walks away and goes on to donate to the charity, then this particular action might have been utility maximizing, but a world consisting of such persons wouldn't be (I speculate) a world with optimal utility levels -- we could get much better universes in terms of utility if people were brought up with such emotional sensitivities as would make us incapable of walking away from the kid we pushed. I would argue that for this particular agent, if indeed she is disturbed and can overcome the impulse to rescue under the particular circumstances (or, more likely, if she doesn't feel the impulse at all), then it is permissible for her to ignore the drowning child. But I wouldn't say that such an action is morally required of people in general; in fact, I would be more tempted to consider the event an example of a morally failed agent who happened to perform a utility-maximizing act.

dionissis, Goddess would save them both! (well, actually, I think she wouldn't create a world in which kids are drowning or strapped to bombs, etc.)

But seriously, I think an objection to your theory would be that it's not permissible even for the disturbed individual - an assessment we (or some of us, at least) make on intuitive grounds.

A similar objection would be as follows: let's say that the disturbed individual pushed the kid deliberately, because he was angry that the kid made fun of him. In that case, would it be morally permissible for him to let the kid die in order to go donate to the charity, assuming that he is psychologically capable of it? As before, my answer would be "no". But on the utilitarian version in question, I'm not sure why it wouldn't be permissible.

Angra, concerning whether it is permissible for the disturbed agent to ignore the kid she accidentally pushed so as to donate to the charity, I merely granted the agent-relative permissibility for this particular disturbed agent, but considered both the agent and her action as exemplars of moral failure, in utilitarian terms. Like you said for yourself, I myself base my present judgements on intuition. My intuitions here concerning the actions of the lady agent are hit by her moral unworthiness. Your example about someone deliberately pushing a kid into the pond, and then ignoring him so that he can go donate to the charity, seems to me the case of someone who performed a wrong action (the deliberate pushing) and who later on (quite inexplicably) went on to do a utility-maximizing act. Yes, it could be said that on utilitarian grounds the donation was more utility-producing than the rescuing, but the first thing that comes to my utilitarian mind to say about the whole thing is that this person is a morally failed agent, and that global utility is better procured if people don't become like him. I don’t see any problem for utilitarianism if it has to allow that some choices of actions may accidentally and temporarily maximize utility, but that they are choices that should be generally avoided, as should be the development of the personality traits that led to those actions.

I protest your implicit denial of the existence of Goddess! The problem of evil, to which I understand you alluded, has a perfectly simple explanation: She is a dominatrix at heart :)

Okay, I will grant that she might be a dominatrix. :D

With regard to the two examples, in the first case, I don't think I understand how the action was a moral failure in utilitarian terms, if it was morally permissible. In other words, in which sense was it a "moral failure"? Regardless, as long as the theory holds that it's not the case that the agent behaves immorally when she fails to rescue the kid, that seems to count against that theory in my assessment. Regarding the second example, I see it as a problem for any theory that holds that, after the initial morally wrong action (namely, pushing the kid), the choice to abandon the kid in order to save the two others is not morally wrong, even if it holds that such choices (or similar choices) should be "generally" avoided. On the other hand, if the theory holds that it is immoral to abandon the kid in order to rescue the other two, that looks like an exception to the utilitarian principle you suggested to me.

Another objection might be a case in which the kid in the pond didn't fall because of anything the agent did, but is her daughter, whereas the two other kids are strangers. What does the utilitarian principle in question say in that case?

Hi Angra. I don't have an undisputed guiding principle! I hope I will learn from analytic philosophy how to better frame one in terms of utility, but my everyday moral thought is very much attuned both to what sorts of actions have better consequences and, especially, to what sorts of persons are more likely to eliminate global pain and promote welfare. My general question is something like: "Would the world be a better place (in utilitarian terms) if people were emotionally disposed to act like this or like that?"

In light of the omission of the agent who didn't rescue the child she pushed, I can't see a possible existing psychological constitution of the agent in question that doesn't spell future moral trouble (in terms of utility maximization). Someone so callous as to allow the kid she endangered to die doesn't seem to me a case whose dispositions can reliably be trusted to guide her to act in a way that regularly maximizes utility. There isn't much utility to be expected from personalities like this particular agent's; that's the gist of my everyday moral assessment concerning her (in light of her omission). I would say that she is a moral failure in the sense that the personality she revealed is an inefficient welfare producer. The quality of her will seems ill (or psychologically sick), the sort of quality of will that tends to make people around her cry in pain or in sadness, instead of being ecstatic, smiling, content, etc. I grant that her action might have to be seen as permissible for her (or even morally required of her, as long as I cannot come up with a more refined principle than my current "maximize utility" motto), but this is not what would draw my attention if I witnessed her action. I would be drawn to saying how what she did shows a callous person who is likely to sow disutility in the future. I would find salient her tendency to produce disutility in the future (on account of her callousness), not her utility-maximizing act.

Perhaps it is because I am theoretically unrefined, but right now I believe that utilitarianism gives me enough resources to criticize her personality (and thus guide myself for future action), even if she happened to do a permissible (or even required) utility-maximizing act. The act was the act of an immoral person, even though it so happened that it maximized utility; that's how I see the counterintuitiveness you suggest.

TWO PART COMMENT -- PART 1

Let's imagine Liz the zombie. She is emotionally totally flat; she feels nothing at all, and she doesn't care in the slightest about anyone. Let us assume the whole population of the Earth knows her and her condition, and that they all consider her a bad role model (so that there is no chance that whatever she might do will function as a moral exemplar). Let us also stipulate that in the following situation no one is watching Liz as she finds herself wondering whether she should save the kid she accidentally pushed into the pond, or the two distant kids she can save by donating to the charity. She is acting out of, say, an attenuated form of boredom; she doesn't really care about what she will finally decide, it just stuck in her mind to make this choice (Camus' L'etranger comes to mind). She instinctively tosses a coin to settle her choice, and it's heads, which means that she has to save the two distant children, which she does. Now, why should this action of hers be considered impermissible? Saving the two kids instead of the one gives us a net gain in lives saved. There is no danger that she will impair the moral development of anyone else through her action (because we stipulated that no one takes her seriously as a role model and that no one is watching her at the time of the choice anyway). We can also assume that Liz's emotional emptiness (and lack of concern about anything) are beyond any possibility of repair, that there is nothing anyone can do that could even infinitesimally improve her emotional sensitivities or her concern for others, and that her moral development has come to an unfortunate end.

Part 2

Let me then suggest the following: the action of rescuing 2 distant children instead of rescuing the 1 visible child we inadvertently pushed is morally required only of people like Liz the zombie (because we still want utility maximization). But we believe that such an action probably presupposes a psychological constitution that has already turned the agent into a very inefficient utility maximizer (inefficient, because the degree of callousness and lack of concern for others that seem to be a necessary condition for being able to ignore the kid we inadvertently pushed are very probably personality traits whose combination will generally lead any agent possessing them to inefficient choices in terms of utility). So, guided by our utility-maximization mindset, we also don't want other people or ourselves to develop this combination of traits. We prefer to cultivate in ourselves empathy and concern for others. But then it would be way too demanding to expect people to be able to overcome their cultivated reflexive concern for the (visible) person they victimized. Some sensitive people might be psychologically ruined for life if they forced themselves to forgo rescuing the child they inadvertently pushed (having, for the rest of their lives, nightmares of the child locking pleading eyes with them as she drowns). Others might not be ruined, but would find it almost impossible to ignore the 1 child.

I suggest we could say, taking a page from Prof. Chappell's "Willpower Satisficing", that given that for all agents unlike Liz the zombie the action of ignoring requires an incredible amount of mental effort (an amount that exceeds a pre-established effort threshold), it is permissible for those agents not to engage in the utility-maximizing act of rescuing the 2 distant children, because the Willpower Satisficing Consequentialism (WSC) I have just endorsed allows (I think!) for the permissibility of suboptimal acts that would require a mental effort exceeding the pre-established effort threshold. It is still morally required of all agents to ignore the 1 visible victimized child if they can do so without having to exceed the established effort threshold (though we do know that only people like Liz the zombie would be able to do it within such effort-exerting ceiling limitations, and we caution people against cultivating the utility-diminishing personality traits that are characteristic of Liz; still, the action is morally required of Liz, who doesn't need to exert any mental effort to do it). But for the rest of us it is morally permissible (though suboptimal) to save the child we pushed at the expense of the two distant children -- permissible, because it would be psychologically too demanding to make saving the 2 children morally required, given the assumption that we have cultivated in ourselves the aforementioned desirable personality traits that generally lead to overall utility maximization. What would be counterintuitive in such a result?

P.S. WSC seems to me to offer a great solution for the utilitarian moral permissibility (for certain persons) of ignoring Pedro's offer. The mental effort that a decent pacifist would have to exert makes it too demanding to ask her to do the killing (and we can still praise her pacifist mindset, despite the occasional suboptimality, on the grounds that overall it maximizes utility). But a murderous ex-sniper who doesn't mind killing but is defying Pedro's offer out of a mere alpha-male antagonistic whim ("Pedro, buddy, I didn't like the way you asked; kill them all, I couldn't care less") is failing to perform an act that is morally required of him.

First, if Liz doesn't care about anyone as described, she seems to be a psychopath. But in that case, she wouldn't be inclined to make that choice; it's easier for her to just go home and not save anyone.

Second, that aside, you ask why that action of hers would be considered impermissible. I reckon it is intuitively impermissible, but I don't claim to know why. Trying to answer the "why" would be speculation on my part and would be on much less firm ground than the assessment itself (i.e., that her actions are immoral). But this is not a problem for the Liz scenario in particular. If you asked me why Bob or Alice has a moral obligation to save the kid in the pond, I would answer in the same manner.

Third, given that you say it's morally required of Liz to save the two other kids, wouldn't it be impermissible to toss the coin and decide based on the result?

Fourth, let's say that there is just the kid in the pond. And let's say that Jack is a psychopath who threw the kid into the pond, just for fun. He doesn't care about anyone, but enjoys placing people in mortal danger and letting them fight for their lives. He particularly enjoys it when they fail and die. There are no witnesses, so Jack is safe. We can also say that he's not a role model to anyone, he's beyond repair, etc. Would it be permissible for Jack to toss the coin and, depending on the result, save the two other kids? Would it be morally obligatory for Jack to save the two other kids, regardless of any coin tosses? My intuitive assessment is that Jack has a moral obligation to save the kid in the pond (i.e., his behavior otherwise is immoral). But your reply in the case of Liz seems to indicate otherwise. Would your theory hold that Jack has no moral obligation to save the kid he pushed, but rather the two other kids?

Concerning your first point, i.e. that the motivational structure I impute to Liz the zombie is inconsistent with her considering the choice of whom to save in the first place: my excuse is that I tried my best to make things realistic by appealing to a famous fictional character of Albert Camus who was emotionally flat like Liz the zombie -- the irony of it all: my trying to dispel the air of absurdity in my thought experiment by referring to a work of an author known, among other things, for his philosophy of the absurd! I guess I could try to make the illogical disappear by stipulating that although Liz the zombie would never have considered such a choice of whom to rescue, an evil neuroscientist had planted in her a post-hypnotic suggestion to the effect that Liz would feel an urge to choose if she experienced the appropriate stimuli. I hope this removes any hint of irrealism :) :) :) But if it doesn't, let us not despair; characters (shady or respectable) in philosophy's thought experiments have been known to be capable of much worse in terms of consistency of motivation -- how about avaricious fire imps? :) :) :) Whatever the case, let us take for granted Liz's absurd whim to entertain such a choice.

Concerning your second point, I have nothing of interest to say.

Concerning your third point (i.e. that according to my tentative theory Liz's tossing of the coin to decide whom to save should be deemed morally impermissible, because my theory casts Liz's rescuing of the 2 children as a morally required action), my response is that you are absolutely correct: Liz was morally required to save the two children at the expense of the 1 child before tossing the coin, and therefore her tossing of the coin was morally wrong.

PART 2

Concerning your fourth point (i.e. that my tentative theory seems to indicate, contra your intuitions, that your Jack the Reaper, who pushed the kid deliberately into the pond, is morally required to save the 2 kids at the expense of the 1 kid he pushed): you are again absolutely correct. My theory does indeed dictate that Jack do the utility-maximizing action, given the important background assumptions that you stipulated -- assumptions identical with the ones that I posited for Liz, such as the impossibility of the action's becoming a moral exemplar, or the impossibility of even a minor advancement (ever) in the agent's moral development, etc. Of course, according to the theory Jack should have refrained from pushing the 1 child into the pond in the first place; his first action was morally wrong. But even after the pushing was completed, his moral obligation to save the 2 children outweighed his moral obligation to save the 1 child he pushed. I speculate that the moral requirement to save the 2 that the theory imposes on him strikes us as more counterintuitive than the similar requirement it imposes on Liz because we have introduced the element of evil intention, and evil intentions usually draw our attention (theoretical or practical) more than any other negative trait of the agent. It seems to me that a person who feels at ease with intentionally inflicting harm triggers our fears; Jack's tracking of evil scares us (or, if we are safe, triggers our disgust). But no matter what it is exactly about our psychology that makes evil intentions the epitome of what we find disagreeable, our intuitions cannot easily be extricated from this (and thank Goddess for that: this predilection to spot evil intentions helps us become better persons, aka more efficient utility maximizers, by making us sensitive to a feature of the world that is a reliable enemy of utility maximization).

I guess your thought experiment could cast the theory I proposed in a worse light if we imagined Jack killing the first child with the intention of subsequently using her organs to save the other 2 children. If all the background assumptions are in place (theoretically crafted so that the only consequences for global utility come down to a net gain in terms of lives), then the theory as it stands delivers a moral requirement to kill the 1 to save the 2. It delivers a judgement to the effect that Jack (who will self-destruct in this thought experiment immediately after the killing & transplanting, so that the action will be a one-off) should kill in order to transplant. But it applies this counterintuitive moral requirement only to Jack-the-hopelessly-unethical-monster (maybe even to Liz, if she can do it within the effort threshold of our WSC, but I doubt she would bother). For the rest of us humans (within our WSC framework), the moral requirement first of all to develop the personality traits that maximize utility in the long run guarantees that we are not morally required to act in ways that would sooner or later turn us into moral monsters like Jack.

Thanks for the nice conversation!

Needless to tell you how unimpressed Liz is with Jack’s sadistic performance. She finds him so…boring :) :) :)

Thanks for the feedback too. I don't have further scenarios to raise, but I would like to ask you: how would you go about testing your theory?

Generally, the way in which I test moral theories is by presenting hypothetical scenarios in which we have clear intuitions, and seeing whether our intuitive verdicts match the predictions. On that basis, your theory seems to me false. But if we leave aside such scenarios, how would you test it? From a different perspective, why do you think the theory is true? For example, others have other theories, like one version of deontology or another, or virtue theory. Why do you think utilitarianism is true, instead of one of those? (I'm not asking for a full explanation, but briefly: how would you go about making the assessment that utilitarianism is true; i.e., what method or methods would you use to establish that?)

Angra, it gets even worse for my theory if we imagine a mother who is about to die, and who is very callous and inconsiderate, but who wouldn't be beyond any possibility of improvement of her character if she were not about to die (i.e., she is not as monstrous psychologically as Jack). Her desire to keep her 2 sick children alive, coupled with her lack of concern for others, makes her willing to kill the child next door to use her organs for her 2 children, and she is capable of doing this only with a mental effort that exceeds the threshold (she needs the extra effort because she still has some qualms about harming innocents; she can't do the killing & transplanting as effortlessly as Jack). My theory as stated delivers the counterintuitive result that her act was supererogatory.

I guess I could try to attribute the counterintuitiveness to the artificiality of the conditions that the thought experiments impose. I'll think about it.

Sketchily speaking, maybe the problem is this: in all the counterintuitive examples (let us not include Liz's case among them) we encounter characters who exhibit an intention to harm innocents. We tend to predict that people who are so bent on getting things their own way as to be willing to harm innocents are going to be the sort of people who will (statistically speaking) bring about pain in the future. Yet we craft our thought experiments so that any chance of production of net pain in the long run is ex hypothesi out of the question (the agent dies, no one will know, etc.). But our initial intuitions have already been triggered by our prediction that the personality traits exhibited by the agents are utility-diminishing (pain-producing) in the long run -- that the Jacks of this world are dangerous. So we are left with the feeling that the agent is dangerous and liable to produce net pain (hence we find it counterintuitive to cast their actions in a good light by saying the acts were morally required of them, or supererogatory), but we are also left with stipulations in our thought experiments that artificially (and completely contrary to how we have seen reality unfold in the past under normal circumstances) assure us that no net pain is going to be produced under the specific circumstances. Hence the tension, I speculate.

I ease my utilitarian mind by thinking that we can still blame the individuals for the personality traits that were exhibited in the particular actions (as utility-diminishing personality traits in the long run), and remind both ourselves and others that these particular cases of acts that atypically happened to produce a net gain in utility were artificial constructs, and that a utilitarian framework is still a good candidate for explaining everyday rightness and wrongness. Or so I think now!
