How to Derive “Ought” from “Is”

by Bill Meacham on October 30th, 2010

Sorry to be a bit technical this time, but I want to dispel a pernicious misconception that has haunted western philosophy for nearly three hundred years, the idea that you cannot derive “ought” statements from “is” statements. In fact you can, quite easily.

In Book III of his Treatise of Human Nature, published in 1740, David Hume asserts that normative statements (saying that you ought or ought not to do something) cannot be derived from descriptive statements (saying that something is or is not the case).(1) For instance, from the mere fact that you had to have parents in order to exist, you cannot logically deduce that you ought to honor your father and mother. This has been known ever since as the “is-ought” problem. But actually it is easy to derive “ought” from “is”. The general form is what Kant calls a hypothetical imperative. Here is an example:

If you want to get along with people, then you ought to be honest and friendly.

We can spell this out logically as follows:

Premise: People who are honest and friendly get along with other people.
Premise: You want to get along with other people.
Conclusion: You ought to be honest and friendly.(2)

That’s not hard, right? We have just derived “ought” from “is” in a very easy and straightforward way. So why has this become such a bugbear over the last three centuries of moral philosophy?

It is because people – Hume included – confuse two meanings of “ought”: what is good and what is right. And then they get all hung up trying to figure out what is right when they would be much better off figuring out what is good.(3)

People who think in terms of what’s good think about the effects of what they do, focusing on what is beneficial and what is harmful. People who think in terms of what’s right think about duty and what conforms to moral rules. The two sometimes overlap, but they are really different concepts, and it is not helpful to mix them. And it is certainly not helpful to focus excessively on what is right.

The pure form of the rightness paradigm says we should not consider the consequences of what we do at all, but only whether our actions conform to moral law. Kant, the most famous proponent of this view, said we have a moral duty always to tell the truth. Someone objected, saying that if a murderer were pursuing our friend and our friend hid in a house, we should lie and tell the murderer that our friend went somewhere else. Kant said no, we should tell the truth even if it means our friend’s death!(4)

We instinctively think that’s wrong. And that very instinct reveals something important about our notion of right and wrong: its source in our moral intuitions.

Most of our moral decisions are not reasoned judgments. Instead, they are gut reactions, made automatically by hunch, habit or intuition. You did not have to ponder the matter to know that Kant is being really weird when he says we should tell the truth to the murderer. You just knew that it does not feel right. That feeling of discomfort – a moral intuition – is the source of our notion of rightness.

We are hard-wired to have moral intuitions, for good evolutionary reasons. Biologically, humans are ultra-social animals who live and thrive in groups and who cannot survive in isolation. No wonder we are highly attuned to social concerns: who is playing fair and who is cheating; who needs our help and who can help us; who outranks us and whom we outrank in the social hierarchy. We make snap judgments in those areas all the time because our brains have evolved to do so. Those who survived better in groups had more offspring than those who didn’t, and we are their progeny.

These judgments take place below the level of consciousness. We don’t pay attention to how we make them; they just pop up. Phenomenologically, people’s behavior just appears appealing or repulsive, praiseworthy or blameworthy. And we take our judgments to be universal, applicable without exception. (Why? Because those who stopped to question the group norms did not get along as well as those who didn’t. Hence they had fewer offspring, just like those who stopped to think about whether they should flee a menacing animal.)

So the sense of right and wrong, including the sense that moral rules are universal, is a result of the evolution of humans in groups.

This explanation is descriptive, not prescriptive. It tells us where the moral sense comes from, but not what to do in any given situation nor what kind of person to try to become. And obviously it does not automatically make the moral rules of our culture the right ones. (This is the proper application of the is-ought dichotomy, by the way. That something is the case does not make it right. But it can easily make it good or bad.)

We certainly do have moral intuitions, but we still have to figure out whether or not it makes sense to act on them. Sometimes we actually get to think about what to do, particularly when the instinctive rules conflict. What if it is not a murderer who is pursuing our friend but someone who has a legitimate complaint – say, our friend owes him money? Shall we lie to protect our friend or tell the truth to allow a debt to be settled?

In making that decision we need to look at more than moral rules. We need to look at the consequences of our proposed actions and whether we expect them to have a good effect. To accept blindly our sense of morality without examining it is unworthy of an excellent human being.

If increasing your personal wealth were the only consequence of robbing the bank — let’s suppose the bank is a piggy bank washed up by a flood so you have no way to find its owner — then sure, go for it. But generally there are other consequences as well, such as harm to others, which (I would argue) actually causes harm to yourself; the chance of getting caught and punished; the harm to your own psyche (apropos my post last time — http://www.bmeacham.com/blog/?p=74); and probably others that I am not thinking of right now. The “ought” here is advice, not a moral commandment. In most cases the advice to rob a bank would be very bad advice indeed.

That is obviously wrong – you’re hiding an ought behind everything, that we ought to pursue utilitarianism. There is no reason whatsoever that we OUGHT to do that.

The whole argument is absurd. You present conditional oughts as though they somehow refute the idea of obtaining a moral ought. They don’t. Just because we want to hump and enjoy power doesn’t mean we OUGHT to enslave hot babes and kill our enemies. Sure, maybe conditionally as one means, but obviously that isn’t what Hume is talking about.

Why even mention morality at the same time as your examples? They have nothing to do with morality at all, they have to do with rote efficacy in achieving a goal, moral or otherwise.

Thanks for your comment. What you seem to mean by “morality” is what I call the Rightness paradigm. I have more to say on that subject in my essay “Ways To Say ‘Should’”. Please take a look at that one and get back to me. And I am not advocating utilitarianism, by the way, which is just another form of rules-based, Rightness-paradigm morality. My claim is that it makes more sense to think in terms of Goodness than Rightness.

Thanks for your clarifications, but I don’t understand the purpose of your post when you begin by saying it’s easy to refute Hume but then fail to derive a moral ought of the kind Hume speaks of.

>>Moral rules that promote well-being are worth following; moral rules that don’t, aren’t. The best duty is the commitment to find ways to live that promote the well-being of yourself, your community and your environment.

Hume isn’t wrong in how he sees morality just because there is another understanding. Whether your consequentialist understanding of morality is utilitarian (as it certainly appears from the quote above) or a teleological understanding of what is good for humans (good luck defending whatever definition of “good” you have there!), you can’t claim to have answered Hume simply by swapping the definition of “moral” for some undefined notion of “good” instead.

Premise: People who are honest and friendly get along with other people.
Premise: You want to get along with other people.
Conclusion: You ought to be honest and friendly.

Here’s the form:

Premise: A implies B
Premise: B is true
Conclusion: Therefore A

Not modus ponens but affirming the consequent – a logical fallacy.
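The invalidity of this form can be checked mechanically by enumerating the truth table. Here is a minimal Python sketch (the `valid` helper is my own, purely for illustration):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no assignment of truth values
    makes every premise true while the conclusion is false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False  # counterexample found
    return True

# Affirming the consequent: A implies B; B; therefore A
print(valid([lambda a, b: (not a) or b,  # A implies B
             lambda a, b: b],            # B is true
            lambda a, b: a))             # therefore A -> False
```

The counterexample row is A false, B true: both premises hold, yet the conclusion fails.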

Credit Annine on this. She’s taking a course in logic, and after learning about the Aristotelian syllogism she identified “get along with other people” as an undistributed middle term in the argument above.

The problem is that there’s more than one way to get along with other people. You don’t have to be honest and friendly – you might only appear to be honest and friendly. Or maybe you ‘get along’ through charisma, or some form of authority, or some other form of power that’s not based on being ‘nice’…

My first comment was addressed narrowly to a point of logical form. Affirming the consequent is a better fit for the form of your argument than the form you cited, modus ponens.

While affirming the consequent isn’t deductively ironclad, that doesn’t make it useless. It’s used all the time in science – if this theory is true, then we should expect such-and-such experimental results – and lo, that’s what we see, so we have reason to believe the theory. But we must be careful, because there might be other theories that give the same predictions.
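The scientific use described here can be given a rough numerical gloss with Bayes’ rule: a confirmed prediction raises our credence in a theory without proving it, precisely because rival theories may predict the same evidence. All the numbers below are hypothetical, chosen only for illustration:

```python
# Hypothetical credences, for illustration only.
prior_T = 0.3          # initial credence in theory T
p_E_given_T = 0.9      # T strongly predicts evidence E
p_E_given_not_T = 0.4  # rival theories might also yield E

# Bayes' rule: P(T|E) = P(E|T) * P(T) / P(E)
p_E = p_E_given_T * prior_T + p_E_given_not_T * (1 - prior_T)
posterior_T = p_E_given_T * prior_T / p_E
print(round(posterior_T, 3))  # 0.491
```

Credence rises from 0.3 to about 0.49 – support, not proof.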

As you point out, your argument does not fit any classical form precisely. To get to something more like syllogistic logic, we need another premise to tie things together…

> If I want B to be true, then I should do what I can to make A true.

OK, try this example of the proposed principle…

If I want to be wealthy, should I rob a bank? Let’s suppose, for the sake of argument, that I could actually get away with it.

It’s not so easy to get from “is” to “ought”. We have certain moral intuitions about what means are admissible for the pursuit of our goals. How do we distinguish the right way from the wrong way?

If you think in terms of logical forms, you can see that this is analogous to choosing between multiple scientific theories that yield the same predictions about experimental results. A situation like that makes it hard to justify any one theory based purely on the experimental results.

I addressed this in a reply to an earlier comment. In the realm of ethical discourse that I am advocating, words like “should” and “ought” do not indicate moral commandments, but rather advice. If there were no other consequences of your robbing a bank than your getting rich, then you would be well advised to do it. But there are other consequences. Even if you get away with it, there are negative consequences for the harmony of your psyche, your sense of connection with other people, your sense of inner peace, the kind of character you are building for yourself, and so forth. So the best advice is, don’t do it.

Readily agreed. But if this is a question of “best advice”, may I presume that my original point, about logical form, is accepted? We’re not dealing with modus ponens or any other form of reasoning having deductive certainty. This is a judgement call, not a mathematical theorem. We should not pretend to more certainty than we have.

> If there were no other consequences of your robbing a bank than your getting rich, then you would be well advised to do it.

I’m worried that the list of possible harmful consequences you say should be considered doesn’t include the harm to the people robbed – at least if the robber doesn’t happen to care a fig about other people, and thus suffers no mental anguish from bankrupting them.

I think that how much somebody happens to care about other people has very little relevance to the question of whether it would be wrong for that person to commit a crime. What’s far more morally relevant is the harm to others. Do you share this intuition?

I agree; the logic of advice is not as rigorous as the logic of propositions. It does not yield deductive certainty. The example about robbing the bank takes into account one consequence, but not many others. To make an informed choice you’d have to take into account as many foreseen consequences as you reasonably can. You’d have to do a risk-benefit analysis.
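The risk-benefit analysis I have in mind can be sketched in miniature. The outcomes, probabilities and values below are entirely hypothetical – the point is only the form of the calculation:

```python
# Hypothetical consequences of robbing the bank: (description, probability, value).
# All figures are invented for illustration.
consequences = [
    ("gain the money",                0.9, +100),
    ("get caught and punished",       0.1, -1000),
    ("harm to others and own psyche", 1.0, -50),
]

# Probability-weighted sum of the foreseen consequences
expected_value = sum(p * v for _, p, v in consequences)
print(expected_value)  # -60.0
```

On these numbers the advice is clear: don’t do it. Different numbers would give different advice, which is exactly why the “ought” here is counsel rather than commandment.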

I do share the intuition that harm to others is morally relevant, and repugnant. But that’s just an intuition. What I’m after is a way for reasonable people to make ethical judgments without relying on intuitions about moral laws that turn out to be fiendishly difficult — my guess is, impossible — to validate. (And I recognize that in most cases intuitions work just fine, and relieve us of the need for excessive cogitation. But when intuitions collide we need a way to decide which to follow.)

Even if the robber does not care about the harm to others — and I would guess that most robbers are in this category — I think harming others does harm the robber. My task in this case is to convince people of that, so that they will refrain from harming others because they recognize their own self interest.

> I do share the intuition that harm to others is morally relevant,
> and repugnant. But that’s just an intuition.

Or as Adam Smith, a contemporary of David Hume, might say, a “moral sentiment”. By the way, I admire both thinkers greatly.

> What I’m after is a way for reasonable people to make ethical
> judgments without relying on intuitions about moral laws
> that turn out to be fiendishly difficult
> — my guess is, impossible — to validate.

I think a more urgent problem is to figure out the most reasonable and justifiable way to deal with people who, unfortunately, are not reliably amenable to rational persuasion. This leads us quickly into issues like appropriate standards for jurisprudence.

> Even if the robber does not care about the harm to others —
> and I would guess that most robbers are in this category —
> I think harming others does harm the robber.
> My task in this case is to convince people of that, so that they
> will refrain from harming others because they recognize their
> own self interest.

I find the project of persuading robbers that they are hurting themselves … quixotic.

> I find the project of persuading robbers that they are hurting themselves … quixotic.

Well, yes. That’s why we have laws and police and courts and such. I would certainly not rely on persuasion as my only tactic to keep society safe from robbers. But maybe my ideas will be useful for people who aren’t so antisocial, but who want to figure out a good way to think about ethical issues.

Despite finding Ayn Rand more than worth the time, I think that enlightened rational self-interest is just not adequate as a comprehensive foundation for ethics.

Incidentally, Rand herself sees it as irrational to ignore the effects of one’s actions on other human beings – precisely because they are similar to ourselves. I think that seeing other humans as ethically relevant is a sentiment rather than a conclusion demanded by Reason, but I’m comfortable with that because I don’t reject sentiment as unjustified simply because I can’t derive it from something else.

Sentiments can and should be questioned, but I don’t think they can simply be ignored. That would not take pre-rational sentiments seriously enough – and these sentiments are the very data from which we later generalize the concept of “ought”. A theory of ethics that conflicts too strongly with pre-existing sentiments can fairly be criticized as being a theory of something else entirely, merely labeled “ethics”. Rand’s Virtue of Selfishness flirts with this form of missing the point, and you seem to be advocating going farther still.

Seeing others as ethically relevant is indeed a “moral sentiment” in Adam Smith’s sense. I do not reject it, nor do I ignore it. Sorry if I gave that impression. If we harm somebody else, most of us feel uncomfortable, if not out-and-out guilty, precisely because of the sentiment/judgment that lets us know that the other person is feeling pain. So, just to avoid that discomfort (and there are many other reasons as well) we should refrain from harming others.

Of course, if you don’t see the other person as human — I’m thinking of Texas Rangers vs. Comanches, but there are many other examples — then you don’t feel any compunction about harming them. But I assert that if you did see them as human, and were clever enough to avoid getting into a conflict, then you would be better off.

None of this is moral absolutism. It’s all contextual. Sometimes you might have to inflict damage on someone in self-defense. I’m just saying that by and large we’d all be better off if we could figure out ways to avoid having to do that.

The Texas Rangers who didn’t see the Comanches as human – were they wrong? I think you share my *sentiment* that they were. But if the Rangers reply that they don’t subscribe to our point of view, then an appeal to their interest in their own peace of mind doesn’t give us any leverage on them. Something is missing. Ethical principles are missing.

On what basis would we be justified in defending the victim? If the proposed basis is “because it makes us feel better”, then I think that yes, this misses the point. Our sympathy for the victim is part of the data for building a theory of ethics, but our unanalyzed sentiments are not by themselves a theory of ethics or a justification for any choice. Why do we sympathize with one side and not the other? From our point of view, both sides are human, after all.

If I give you a large pile of financial and commodities market data, I haven’t given you a theory of economics. Theories start from data, but they need principles, generalizations that allow the particulars to be inferred, or at least generalizations that help make the particulars more coherent and comprehensible.

I submit that enlightened self interest is not, by itself, a sufficient principle to illuminate all the data supplied by our moral sentiments. If I told some Martian anthropologists about enlightened self interest, they would have a very poor understanding of contemporary human ethical thinking. Too much is missing. Sympathy cannot be left as unexplained data – it’s part of what the theory needs to cover.

Not only did the Texas Rangers not view the Comanches as human, the sentiment was entirely mutual. Comanches were ruthless marauders on the order of Genghis Khan, but the white people who broke treaty after treaty and invaded the Comanches’ hunting grounds were equally ruthless. (I am reading Empire of the Summer Moon, by S. C. Gwynne, which I recommend highly.) “Both coveted the same land, both wanted the other side to stop contesting it, and neither was willing to give anything meaningful in exchange.” (p. 157)

> I submit that enlightened self interest is not, by itself, a sufficient principle to illuminate all the data supplied by our moral sentiments. … Sympathy cannot be left as unexplained data – it’s part of what the theory needs to cover.

There is plenty of plausible evolutionary psychological explanation of why we have certain moral sentiments, including sympathy for others, and plenty of experimental evidence for how those sentiments operate in our everyday experience. But I suspect that is not what you are after. I suspect you want to know why we should treat others with respect, even if we don’t feel like it.

You can answer that question by appealing to moral rules, and assert that it is right to do so and we ought to do what is right. Or you can answer it by appealing to observable evidence, that it is good (beneficial, helpful) for us to do so, that by and large it helps us function better, whether or not we feel better in any given instance. I think there are numerous difficulties with the Rightness paradigm, not least the difficulty of figuring out with any certainty what the moral rules are that define what is right. There are fewer difficulties with the Goodness paradigm.

If we take as a premise that we are connected with our environment, physical and social, and that our environment nourishes us, then it makes sense to care for that environment because in so doing we care for ourselves. I mean that in a general, rule-of-thumb sort of way, not as a moral absolute. If someone is threatening your family with a knife and you have an opportunity to overpower them, it’s probably best for all concerned that you do so, even if it thwarts the assailant’s intention and denies them their dignity as a human being. You can always think of exceptions, but as a general rule, it is better to cooperate, be helpful and in general act in accord with generally-recognized ethical principles.

> … if the Rangers reply that they don’t subscribe to our point of view, then appeal to their interest in their own peace of mind doesn’t give us any leverage on them. Something is missing. Ethical principles are missing.

And what leads you to believe that appeal to ethical principles would have any greater effect?

> And what leads you to believe that appeal to ethical principles would have any greater effect?

It probably wouldn’t – but at least it would not miss the point. Asking whether appeal to principles would be pragmatic is trying to evaluate the “Rightness paradigm” by the standards of the “Goodness paradigm”. Naturally, Rightness is found wanting when measured this way.

I think the question of what is right is at the heart of what is generally meant by ethics. Pragmatism is a poor approximation. Even when pragmatic arguments get the right answers, they get them for the wrong reasons. You are now free to accuse me of measuring the “Goodness paradigm” by the standards of the “Rightness paradigm”.

Hi, interesting google result. I think I agree with others that the counterexample is flawed, but it’s interesting to think about, thanks.

Mainly, I take “descriptive” statements as being empirical, scientific statements – things that are verifiable in the “outside” world. I think it’s defensible to say that “some people get along with some other people” is an “is” statement, for some clear definition of “get along”.

But I don’t believe that “I want to get along with other people.” is an “is” statement. I think treating it as such relaxes the definition of an “is” statement. I think as soon as wants and desires are brought into play, you are talking about internal motivations and philosophies which are not empirically verifiable.

It may be possible to rephrase “I want to get along with other people” as an identically-meaning “ought” statement, resting on some other “ought” premise.

Interesting thought, though – still on my google hunt for an example deriving an ought from an is. 🙂

Conclusion [from the post http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3068523/]
Today, many philosophers still aim at establishing a normative system built on an unimpeachable foundation; or they demand such a foundation from others. At the same time, they refer to the naturalistic fallacy as a legitimate criticism against instantiations of scientific ethics, mostly evolutionary ethics. We have shown that both arguments when used together contradict each other; we argued that no fallacy is committed in the work they criticize. We also countered the assumption that ethics needs to be foundational and that science is not relevant for normative ethics. Though agreeing that science loses some of its relevance for foundational ethics, we claim that science is highly relevant in nonfoundational ethics. Crucially, we reasoned that scientific ethics is best conceived of as an instance of such a nonfoundational normative ethics. We believe the debate between proponents and opponents of scientific ethics would benefit from recognizing scientific ethics as nonfoundational. Much of the discussed disagreement occurred because nonfoundationalist proponents were debated within a foundationalist framework; therefore the discussion should be focused on this difference.
In the last sections, we discussed and argued for the nonfoundationalist view of ethics. Defenders of scientific ethics refer to naturalism to support their view. Naturalists take the implausibility of a foundational ethics at face value and endorse another approach. We proposed a slightly modified naturalistic reasoning in support of scientific ethics. Our approach does not aim at building a grand philosophical theory but suggests that a hands-on method for normative inquiry can give more direct success. Normative inquiry can be aimed at local and concrete problem solving, wherein a moral problem is never absolutely solved. It is thereby a challenging approach that demands regular reassessment of a moral problem while science proceeds and offers new information. As naturalists, analytic truth is not our aim and the search for first foundations is rejected in favour of conditional moral judgments that can be tested on their practical success.

You used this as an example:
If [P] you want to get along with people, then [Q] you ought to be honest and friendly.

We can spell this out logically as follows:

Premise: People who are honest and friendly get along with other people.
Premise: You want to get along with other people.
Conclusion: You ought to be honest and friendly.

Would Modus Tollens be this?:
Premise: People who are honest and friendly get along with other people
Premise: Mary is not honest and friendly with others
Conclusion: Mary doesn’t get along with others

In any valid deductive syllogism, IF the premises are true, THEN the conclusion must be true.

Would this syllogism be true?:
Premise: People who are honest and friendly get along with other people
Premise: Mary doesn’t get along with others
Conclusion: Mary is not honest or friendly with others.

The conclusion doesn’t necessarily follow from the premises? It’s possible that the reason Mary doesn’t get along with others is because she is a bit of a pig, or perhaps she’s shy?

It seems that in the is-ought example we don’t have a firm refutation of Kant’s Moral Law. Couldn’t we just as easily say this:

If [P] you want to get along with people, then [Q] you ought to follow the categorical imperative (Kant’s Moral Law).

With Kant, our motives are the key to our morality. According to Kant, the formula of Universal Law is: “Act only on that maxim whereby you can at the same time will that it should become a universal law.”

Universalize the maxim on which you’re about to act. If everybody made promises they don’t keep, then nobody would believe such a promise. There would be no such thing as a promise. So there would be a contradiction: the maxim, universalized, would undermine itself. That’s the test. That’s how we know that the false promise is wrong. Is this consequentialist morality? Kant says no. What he’s saying is that this is the TEST, but it isn’t exactly the REASON. The REASON you should universalize to test the maxim is to see whether you are privileging your particular needs and desires over everyone else’s. It’s a way of pointing to the demand of the categorical imperative that your reasons, or their justification, shouldn’t depend on your needs, your interests, or your circumstances being more important than anybody else’s. That is the moral intuition lying behind the universalization.

Kant distinguishes between persons on one hand, and things on the other. Rational beings are persons. They don’t just have a relative value for us; if anything they have an absolute value, an intrinsic value. That is, rational beings have dignity. They’re worthy of reverence and respect. This line of reasoning leads Kant to the second formulation of the categorical imperative: “Act in such a way, that you always treat humanity, whether in your own person, or in the person of any other, never simply as a means, but always at the same time as an end.”
That is the formula of Humanity as an END. The idea that human beings as rational beings are Ends in themselves, not open to use merely as means to my ends. When I make a false promise to you, I’m using you as a means to my ends. So I fail to respect YOUR dignity. I’m manipulating you.

I find no moral worth in that. I noticed another post here regarding Ayn Rand’s Virtue of Selfishness. I read that book a long time ago. I thought it was foolish.

Larry Brown asks
> Would Modus Tollens be this?:
> Premise: People who are honest and friendly get along with other people
> Premise: Mary is not honest and friendly with others
> Conclusion: Mary doesn’t get along with others

No. That is a fallacy called Denying the Antecedent. Maybe Mary gets along with others even though she is not honest and friendly. Maybe she is rich, and people hang out with her even though they don’t like her personality.

> Would this syllogism be true?:
> Premise: People who are honest and friendly get along with other people
> Premise: Mary doesn’t get along with others
> Conclusion: Mary is not honest or friendly with others.

That is a valid argument. In fact, it is Modus Tollens. It is valid because it is not possible for the premises to be true and the conclusion to be false.
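Both forms can be checked by brute force over the truth table. A small Python sketch (the `valid` helper is my own, for illustration):

```python
from itertools import product

def valid(premises, conclusion):
    """Valid iff no row of the truth table makes every premise
    true while the conclusion is false."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

implies = lambda p, q: (not p) or q

# Modus tollens: A implies B; not B; therefore not A
print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
            lambda a, b: not a))  # True (valid)

# Denying the antecedent: A implies B; not A; therefore not B
print(valid([lambda a, b: implies(a, b), lambda a, b: not a],
            lambda a, b: not b))  # False (invalid)
```

The rich-but-unfriendly Mary is exactly the counterexample row for the invalid form: A false, B true.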

Thanks for the reply. I am aware that Modus Tollens reveals the conclusion as false. That was actually the point I was trying to make.

The example you use was Modus Ponens.
You used this as an example:
If [P] you want to get along with people, then [Q] you ought to be honest and friendly.

We can spell this out logically as follows:

Premise: People who are honest and friendly get along with other people.
Premise: You want to get along with other people.
Conclusion: You ought to be honest and friendly.

What I was trying to show was that through Modus Tollens, the example doesn’t hold up as you point out.

> Would this syllogism be true?:
> Premise: People who are honest and friendly get along with other people
> Premise: Mary doesn’t get along with others
> Conclusion: Mary is not honest or friendly with others.

“No. That is a fallacy called Denying the Antecedent. Maybe Mary gets along with others even though she is not honest and friendly. Maybe she is rich, and people hang out with her even though they don’t like her personality.”

I agree with you that when we apply modus tollens the syllogism reveals a fallacy. Mary may be the victim of prejudice. Maybe she’s obese or homely, or, aside from being honest and friendly, she’s a bit of a slob. Being honest and friendly doesn’t guarantee that she gets along with others. So I guess my question is, how does the use of modus ponens show that ought is derived from is, when modus tollens shows that as a fallacy?

I think the point is that in any valid deductive syllogism, IF the premises are true, THEN the conclusion MUST follow. In my example the conclusion doesn’t follow because one or more of the premises hasn’t been demonstrated to be true.

Your first example syllogism is invalid because it is denying the antecedent. Your second is a valid example of Modus Tollens. I think you got them mixed up in your reply to my reply.

Syllogisms are neither true nor false. Truth and falsity apply to the premises and conclusion, not to the syllogism as a whole. The syllogism as a whole is either valid or invalid. So the question “Would this syllogism be true?” is not meaningful.

My ethical inference is not Modus Ponens. It is based on Modus Ponens, but it is not the same. Please see my newer post on the subject here: http://www.bmeacham.com/blog/?p=966

In any case, no matter how much of a slob Mary is, or how homely, she will have a better chance of getting along with others if she is honest and friendly than if she is a liar and hostile.

“If you want to get along with people, then you ought to be honest and friendly.

We can spell this out logically as follows:

Premise: People who are honest and friendly get along with other people.
Premise: You want to get along with other people.
Conclusion: You ought to be honest and friendly.(2)”

The syllogism fails because the initial premise is not objectively true in the sense of being a universal truth. That is because it leaves out of the equation the question of individual personality. What if someone is over-eager and too persistent in being friendly? What if someone simply has a personality that irritates most people?