
I came to this question while pondering utilitarianism, which I support in theory. But in practice, it would surely be used to justify wholly immoral acts. It seems like it needs a meta-ethical system to cradle its implementation.

Although I don't have a particularly interesting answer to your question, I will point you to some literature on a related issue. Rawls believed that ethical theories should operate under a publicity condition, i.e. that the moral reasoning should be available to the agents who are expected to follow it. To quote the SEP article on Rawls:

Rawls also emphasizes publicity as an aspect of fairness. In what he calls a well-ordered society the principles that order the basic structure are publicly known to do so, and the justifications for these principles are knowable by and acceptable to all reasonable citizens. The idea behind publicity is that since the principles for the basic structure will be coercively enforced, they should stand up to public scrutiny. The publicity condition requires that a society's operative principles of justice be neither esoteric nor ideological screens for deeper power relations: that in “public political life, nothing need be hidden.”

There are also two more related issues that are commonly said to face consequentialism, namely that some theories are self-effacing or self-defeating. To quote the SEP again:

An ethical theory is self-effacing if, roughly, whatever it claims justifies a particular action, or makes it right, had better not be the agent's motive for doing it.

Or, alternatively, a theory is self-effacing if it works best when the agent isn't actively using it as motivation.

On the other hand, a theory is self-defeating if its truth would imply its falsity, e.g. in our case, if consequentialism itself produced terrible consequences. That seems at least similar to your original question.

Most philosophers, especially those following Parfit since Reasons and Persons, regard self-defeating theories as bad or unadoptable. The debate on self-effacing theories, however, isn't as clear-cut.

What OP is describing reminds me more of Parfit's indirectly self-defeating theories. He says of some theory, T:

T is (collectively) indirectly self-defeating if, when a group of people tries to achieve their T-given aims, those aims end up being worse achieved than if not every member of the group tried to achieve their T-given aims.

Relevant to OP's concern would be his section on how consequentialism is indirectly self-defeating. But I don't think that Parfit thinks a theory's being indirectly self-defeating presents a danger to that theory.

You might want to read up on rule consequentialism, which is partly a reaction to the difficulties of applying act consequentialism (of which utilitarianism is an example). The basic idea is that the obvious practical problems of act consequentialism can be avoided if we simply live by rules derived from considering the typical consequences of actions. That said, we typically would not say something is true because we find it useful, but then there are the pragmatists...

I've always seen rule utilitarianism as an acknowledgement that we're never going to be (even remotely) fully informed about the consequences of our actions. Given these epistemic problems, it's consequentially safer to undertake the course of action we think would in most cases increase general utility.

It's kind of suspicious, of course, that these rules align more comfortably with our general intuitions about morality than what a strict act utilitarianism might prescribe.

If it provides value to those who hold it. If it provides value to no one, no matter what, then I'm inclined to say that it can't be an ethical position (unless it is the ethical position that there is no value). The correct ethical theory, to me, is the one that provides the maximum value: if ethical theories compete to provide value, and one provides more value than another, then I don't see how we can say they are truly equal ethical positions.

I think I understand what you're getting at. Utilitarianism concerns the greater good, but implicitly accepts the immoral treatment of certain individuals in order to achieve its ends. Considering this dimension, one must also ask: is such a practice morally unsound because it justifies the immoral treatment of a few so that the best interests of all can ultimately be served?

I've never given much thought to that idea. Indeed, from a theoretical standpoint, utilitarianism makes sense ethically. But what of the few who get the proverbial short end of the stick upon its implementation? Well, we're then led to consider a second ethical dimension which works to shore up, or supplement, the unethical byproducts of utilitarianism. A tag-team of ethical codes. I'm not sure what that second code would be or what it would look like, but it seems to me that implementing a multi-layered ethical system would create a single code that is altogether distinguishable from utilitarianism.

But even upon implementation of this supplemental system of ethics, a different set of immoral considerations would invariably blossom. Then we would need to put in a third support system, and so on and so on. Not to mention that once multiple systems are in place, they are almost certain to contradict one another. In such a situation, which code are we to defer to? Which supersedes all the others? Who makes such decisions, and what are the criteria?

At the end of it all, we'd be right back at the beginning, and start asking the same questions which would lead us back to the implementation of a solitary code--say utilitarianism--beginning the entire debate anew.

There is no perfect combination of ethics that achieves the most morally sound outcome for everyone all the time.
As you can see, your inquiry raises more questions than answers, and may get to the point where the entire system collapses in on itself.

What is your basis for thinking an ethical theory is valid? Does that basis conflict with its being good for some people not to know the ethical theory?

I think it's pretty obvious that the truth of certain ethical claims is independent of whether they are (as explained elsewhere in the thread) self-defeating in certain ways. There's nothing in principle contradictory about saying "I'm a utilitarian, but I think we should lie about moral truth for the greater good."

What kind of system would it be if it were unable to spread at all? And what do you mean by "unethical to teach"? Please explain.
If an ethics is something created by and existing only in oneself, then of course it can be valid (though not necessarily), why not?

There are many utilitarians who have worried about the status of teaching utilitarianism in terms of a utilitarian calculation. I'm pretty sure Sidgwick is a good source on this. There is, after all, no reason why teaching utilitarianism couldn't itself be analyzed in the utilitarian framework, and it's got to be an open empirical question whether the teaching itself turns out to be licensed by the framework.

You can imagine reasons why it wouldn't be. It is entirely plausible that, in trying to teach everyone such a delicate philosophical view, you end up confusing people and they perform the wrong calculations. So it might be better to go with what Mill calls "rules of thumb" - general moral guidelines that, although not right in all circumstances, might cut down considerably on mistaken applications.

Can you elaborate more? I'm trying to conceive how and why it could be unethical. I think you're getting at the idea that some people might not be smart enough, or personally ethical enough, to use the ethical system for good? Can you give an example?

I believe that cultural and political contexts can be so divergent that no system could ever address every situation people might find themselves in. I think general ethical goals may be held, but even they can be distorted beyond recognition. For example, if an ancient soldier wished nothing more than to die gloriously in battle, and marched off with thousands in a hopeless war, what system could you impose on him? In some sense, perhaps trying to instill a love for peace would actually take away the only "good" he ever saw in life. How do you codify "the good" across all times and places?

You sum up why ethics is so frustrating. Absolutism doesn't work: you can't say, for instance, that stealing is wrong unconditionally and under every circumstance unless you want to condemn the starving orphan who steals an apple at the market, or some other extreme case. But it's also unsatisfying, and offensive to our intuitive feelings about right and wrong, to say that morality is entirely relative. Surely we should be able to formulate some kind of universal moral code, right?

I think this is why we still come back to Aristotle's ethics and virtue theory with his notion of the golden mean. When I read his definition of virtue:

“Virtue is (a) a state that decides, (b) consisting in a mean, (c) the mean relative to us, (d) which is defined by reference to reason, (e) i.e., to the reason by reference to which the intelligent person would define it. It is a mean between two vices, one of excess and one of deficiency.”

what jumped out at me was "relative to us." I don't want a system that makes morality relative, damn it! But consistent with his doctrine, even the golden mean is a golden mean between absolutism and relativity; virtues are both absolute and relative. They are absolute in that they are always, in every case, the mean between vices of deficiency and vices of excess. They are also relative in that they relate to our personal circumstances, and they are personal in that we must employ our reason and proper emotions in any ethical circumstance.

This system justifies your example of the ancient soldier and likewise condemns a modern-day equivalent as reckless, without contradiction. Between the vices of recklessness and cowardice, an ancient soldier marching into probable demise to defend fellow citizens under siege exercises proper courage. Someone today going out in a "blaze of glory," if utterly unnecessary, would be deemed a reckless, dangerous fool.

I'm not sure if it's perfect but it gives us room to consider individual circumstances while still having an underlying moral principle. What do you think?

I agree that Aristotle's theory is about as close to a system as we can get - thanks for laying that out for me.

My example with the soldier, however, was an attempt at an example that defies any notion of virtue as we would define it. An army marching recklessly into oblivion, with no goal in mind other than glorious oblivion itself, values that recklessness itself. You and I can value a moral principle of moderation, but what good is our subtle system if they value the exact opposite of what we do?

Say a nurse poisons a patient at the hospital because she sees that the patient is utterly miserable and doesn't want to live. In turn, she severs the patient's chance to recover and live a better life, and she inadvertently traumatizes the patient's loved ones.

Another example would be an employee appealing to utilitarianism to justify murdering his unethical boss.

The nurse's poisoning was either the correct utilitarian decision or it was the wrong utilitarian decision.

If it was the correct one, then there is no concern here. Everything turned out well.

If it was the incorrect decision, then the nurse either justified her killing well or she justified it poorly.

In the case that she justified her killing well and her conclusion merely turned out to be false, it was not unethical to teach her utilitarianism. For example, let us suppose that a child is drowning in a river. Joe sees the child and remembers that his dad taught him to rescue any drowning child. Joe rescues the child from the river, but it turns out that the child was actually an alien no one knew about, one that dies as soon as its body leaves contact with water. In this case, Joe's justification was good, but his conclusion was ultimately false. False things can be justified well. In this scenario, I doubt that anyone would say Joe's father was unethical in teaching him to save a drowning child. The same goes for whoever taught the nurse utilitarianism.

But what if her justification was poor? Joe saw a child that was swimming leisurely and dragged him out of the water and cracked his ribs to perform CPR. He did not have good justification to follow his father's principle. Again, in this scenario, it does not seem as though the father was wrong in teaching Joe to save the lives of drowning children.

This is something that gets overlooked far too often. That certain beliefs of ours are justifiable, like saving the child who turns out to be an alien, doesn't necessarily mean that they're correct. They're only as correct as the knowledge available to us when we make the decision.

Why would the truth make the justification itself any different if it was based on the same information?

I think you're misreading what I wrote. That he did not have good justification is not a consequence of his ultimate actions, but merely something I'm building into my example. It's a premise, not a conclusion.

Ultimately you could argue that societal structure is more important than individual hedonism. Mill, for example, argued that liberty can be supported on a utilitarian view, and we only need to extend that principle to other situations that allow society to function more fluidly. It's better, for instance, to promote certain rules and principles that yield a net positive to society as a whole, and respecting personal autonomy falls within that scope.

For instance, if people thought that doctors could decide whether you live or die without personal consent, then people would stop going to doctors, worried that they could be killed. And the same applies to murdering one's immoral boss. Lending individuals the legitimacy to kill unethical people would result in too much distrust among the populace, which yields an overall net negative for everyone.

Yeah, but the problem is that any ethical theory can be misused, misapplied, and exploited given certain situations. Should we teach Kantian ethics, on which we can't lie even to people who would use the truth for unethical purposes? Or should we throw our hands up and say it's all relative? Should we espouse ethical egoism, which can, at least on its face, seem to justify a kind of Social Darwinism? You could even take the golden rule and show that it's too flawed to deal with every circumstance we could encounter. Because there are no easy answers, I'd say it's much more important to teach many different opposing theories rather than omitting certain ones that have the potential to be harmful - otherwise we'd be left teaching nothing.

These topics undoubtedly fall under the umbrella of the utility monster.

Despite the repeated opposition of reddit to Sam Harris's version of scientific utilitarianism, I believe it has merit.

The consequence of a group of people wanting to murder their evil boss, and finding a receptive culture for it, is ultimately war between the powerful folks who fear being perceived as such a person and everyone else - and those people have power, so the consequences will be worse than those of more reasonable approaches.

I personally believe that western culture, despite its many flaws, is a reflection of reason and utilitarianism. The utility of any action must include all of its possible consequences.

That being said, those with the best predictive abilities will be able to use that to their advantage and gain more power.

I'm still a little confused about what you're asking. Is the nurse a person who was taught a utilitarian philosophy and used it improperly?

I'll go with your second example and say there are versions of utilitarian thought that are much more nuanced than a "majority rules, killing person x is okay because it makes everyone else so happy" view. Jeremy Bentham's felicific calculus, for instance, was a system for weighing how any action could maximize pleasure while best minimizing the pain to other parties, taking other factors into account as well.
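To make that weighing concrete, here is a minimal toy sketch of such a calculation. This is only an illustration, not Bentham's actual calculus: the fields loosely echo a few of his variables (intensity, certainty, extent), and all the names, weights, and numbers are invented for the example.

```python
# Toy sketch of a Bentham-style weighing of an action's consequences.
# Everything here (field names, scoring rule, numbers) is invented for
# illustration; it is not a faithful rendering of the felicific calculus.

from dataclasses import dataclass

@dataclass
class Consequence:
    pleasure: float       # intensity of pleasure produced (arbitrary units)
    pain: float           # intensity of pain produced
    probability: float    # how certain the consequence is (0.0 to 1.0)
    people_affected: int  # extent: how many people it touches

def hedonic_score(consequences):
    """Sum expected pleasure minus expected pain across everyone affected."""
    return sum(
        c.probability * c.people_affected * (c.pleasure - c.pain)
        for c in consequences
    )

# Compare two courses of action by their aggregate score.
tell_truth = [Consequence(pleasure=2, pain=5, probability=0.9, people_affected=1)]
tell_lie = [
    Consequence(pleasure=4, pain=1, probability=0.6, people_affected=1),
    Consequence(pleasure=0, pain=6, probability=0.3, people_affected=3),
]

print(hedonic_score(tell_truth), hedonic_score(tell_lie))
```

The point is just that the calculus treats the decision as an explicit aggregate weighing of expected pleasure against expected pain across everyone affected, which is exactly where the worry about who does the weighing, and how well, comes back in.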

If you're worried about the negative effects of teaching utilitarianism to everyone, maybe what you actually have a problem with is utilitarianism itself though.

I am not saying this to disparage or discourage you, but what you said is complete nonsense. Please define the word "theory" and think about what you're saying. Also, a system of ethics is not valid or invalid based on whether you can teach it. Furthermore, ethics is not something that is taught like math - it is a social system.

I can see your frustration, but maybe he doesn't have the same grasp of the definitions and core concepts of philosophy as you. I still think it's possible to ascertain what he's getting at. In such a situation, one can make a few assumptions and then have a discussion considering those alternative definitions and ideas.

It's probably apparent, but I don't have any formal philosophical training. But his inquiry doesn't seem like nonsense at all. We have to start somewhere, right? Why not here?

I'm not agreeing with anyone here, but I just want to point out that his analogy of ethics to math seems very black and white. He isn't saying that ethics courses don't exist; he's saying ethics cannot be taught in one fixed way like math. In math, 2 + 2 = 4. In ethics it is nowhere near that clear; there are more 'answers', if you will, to a given situation.