I'll be working on the assumption that the general scientific consensus is correct: the evidence does, indeed, point inescapably to the conclusion that we are primates with a well-developed frontal lobe and cerebrum for thinking abstractly and critically, and that special creation is, for all intents and purposes, inaccurate at best.

I believe that objectively good and objectively bad moral "oughts" can be discerned through science. Hume's is/ought distinction (Hume's Law) would seem to indicate that science can only tell us what is, but if we define morality as "that which is best for the highest degree of human well-being," then I do believe that we can determine what is best for human well-being and, subsequently, that which we "ought" to do.

At any rate, if this were the case, would this be defined as an objective moral standard? Matt Dillahunty described it rather well, so allow me to paraphrase: Morality, like chess, has objectively good and objectively bad moves if we define the objective as winning the game. We "ought" to do that which is objectively good and "ought not" do that which is objectively bad, according to the current rules, as that is what gets us closer to our objective.

Now, if the rules of chess had been anything other than what they are (for example, if the objective were to lose your Queen as quickly as possible), there would still be objectively good and objectively bad moves, but they would be different from what they are today. Would this still be a form of objective morality?
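To make the analogy concrete, here is a toy sketch (all names and values are hypothetical; this is not a real chess engine) showing that the very same move scores as "objectively good" or "objectively bad" depending only on which objective function we adopt:

```python
def material_change(move):
    """Net material gain for the mover, in pawn units (toy metric)."""
    return move["captured_value"] - move["lost_value"]

def score_standard(move):
    # Standard objective: win the game, so gaining material is good.
    return material_change(move)

def score_lose_queen(move):
    # Inverted objective: lose your Queen as fast as possible,
    # so losing material is good — the same evaluation, negated.
    return -material_change(move)

# A move that hangs the Queen: loses 9 points of material, gains nothing.
hang_queen = {"captured_value": 0, "lost_value": 9}

print(score_standard(hang_queen))    # -9: objectively bad under normal rules
print(score_lose_queen(hang_queen))  #  9: objectively good under the inverted goal
```

The point of the sketch is that the move evaluations are perfectly objective *given* an objective; what changes between the two functions is only the goal, which the evaluation itself cannot supply.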

My goal here is not to argue whether or not I'm correct in my views on morality. Thanks.

Interesting thoughts... but I'm not quite sure what the question is. I would tend to agree with your general assessment that we could still have objective definitions for good and bad if we believed there were objectives. This is called the function argument; Aristotle came up with it.
– virmaior Nov 18 '14 at 3:59

I'll go ahead and specify the question better: "Would this be a form of objective morality?" I think that a TRULY objective morality would exist regardless of how we evolved, or even if we removed all conscious minds from the Universe.
– Goodies Nov 18 '14 at 4:07

I think the technical term for what you want to say is universal rather than objective, i.e., "would that be a form of universal morality?"
– virmaior Nov 18 '14 at 5:00

P.S., you can edit your original question rather than replying in the comment fields.
– virmaior Nov 18 '14 at 5:00

To explain universal vs. objective using your chess analogy, an objectively bad move is one that analysis shows is meritless. A universally bad move is one that always loses. So hanging your queen is objectively bad, but getting checkmated is universally bad.
– virmaior Nov 18 '14 at 5:03

5 Answers

This is the Sam Harris route to ignoring the difficulties with defining an objective morality (I assign it to him as he was, as far as I can tell, the most vocal and prominent early advocate of this position).

It's really easy to define an objective morality, actually. It's just really difficult to justify it. Here's an objective morality: that which takes humans further away from the center of the earth is good. That which takes them closer is bad. (So, obviously, we all ought to live as high as we can on mountains, and treat scuba diving as a grievous sin.)

There's no doubt that science is a wonderful tool for providing us with information about many things. That it would have a lot to say about well being of humans is unsurprising.

The problems come when you start asking why: why well-being instead of happiness? Why just humans? How can you quantify it in a way that is correct, not just easy / measurable? How do you combine scores from different humans?

This also ignores the point. Yes, there are easy cases, and they're already easy without this supposed framework for morality. Almost nobody seriously advocates for letting malaria run rampant, much less for spreading it. But there are other common problems, like increasing wealth disparities, the conflict between economic growth and environmental degradation, or whether it is noble or evil to publicize the plight of starving children in Africa, where you simply must answer many of these "why" questions.

So, science is an awesome tool, and we can apply it to help us answer questions of morality, but it doesn't tell us that the metric should be "human well-being" any more than it tells us it should be "distance from the earth's core".

It does however, tell us some things about morality that we tend to ignore. For example:

If morality is to be about humans at all, and if existence is better than non-existence, your morality had better not ever recommend a course of action that leads to extinction of humans.

As evolved creatures, many of our strongest drives are there because they are (or were) necessary or helpful in an evolutionary context: love of family, desire for sex, dislike of being enslaved, etc.. Elevating one of these to exalted status while ignoring the others is even more likely to be emotionally unworkable than something more comprehensive because they're all there for a reason. (To be determined: if keeping the underlying reason in mind is usually enough, or if you must always keep all the special cases in mind.)

We're big enough boys and girls now, technologically, that we can royally mess up our sandbox. Treating morality as self-centered interactions between individuals without considering our wider impact misses what is now a very important impact of humans on other humans. (To be determined: should that also fall under morality, or should there be a second set of rules? If a second set, how do you adjudicate when morality and conservation give different answers?)

Because of this, the closest thing to a scientifically objective morality is something like this: things are good to the extent that they maximize the chances for indefinite survival of human life (or, if that is not possible, of other life in proportion to how closely related it is); things are bad to the extent that they jeopardize it.

And it's not fully objective, either; that's just the behavior that the universe rewards with continued existence. Nor is it clear that it's enough to build a comprehensive morality. But, from what I've seen, it's about as far as one can get when leaning on science alone.

I agree with this. The phrasing that Harris consistently uses is that once you accept the premise that maximizing the well-being of conscious creatures is a goal worth pursuing, "there are objective truths to be known" about how to go about doing it. I am not aware of him espousing the idea of an objective morality that exists outside of that context. I happen to agree that there is no objective way to verify that this is the best possible goal to pursue. However, in the absence of a strong alternative, it certainly seems to be the most noble...
– eclipz905 Oct 30 '15 at 14:51

Additionally, few would argue that any moral precepts apply in the absence of conscious beings, so it is easy to imagine that morality is in some way directly tied to their experience.
– eclipz905 Oct 30 '15 at 14:51

@eclipz905 - Well, did you not notice my point about it being hard to know how to compute it if there is more than one conscious being? It might not feel as "noble," but the evolutionarily-inspired version I stated above does not suffer from that problem. (Personally, I find a hard limit to conscious beings, in a universe where lineages of conscious beings may arise from non-conscious ones over sufficiently long timescales, to be more than a bit dubious.)
– Rex Kerr Oct 30 '15 at 15:55

@RexKerr I use 'conscious' liberally, to encompass all beings even minutely capable of experiencing joy and suffering. This brand of utilitarianism affords moral consideration proportional to each being's range of experience. The most consideration is given to the beings with the broadest range, scaling down to zero consideration for entities believed to have no range of experience. Your post raises the point that this phrasing may need refinement. I'd not claim single-celled organisms experience joy and suffering, but I'd say we have a greater ethical obligation to them than we do to rocks.
– eclipz905 Oct 30 '15 at 19:27

The 'How on earth would you define "best"' angle is obvious, and already taken. And some form of that is the right answer.

But even if you defined best, in the best possible way, and we all agreed, there are still two huge obstacles left.

1) Kuhn: Science is a succession of models, and those models will continue to change. Things that seemed to be best, given the same standard, under one model may turn out to be ill-conceived when the model changes. And you cannot predict how the model will change.

Feyerabend: Beyond that, you cannot know what parts of science are stable by looking at the current state or the historical trend. So you cannot really guess what science is safe to use. You need some meta-decision about how to measure the risks and rewards of relying upon science without genuine probabilities available. "Act on what you believe at the moment" is just a religious decision -- a faith that the universe is somehow honest with you.

2) Nietzsche: What is good for the mass is seldom good for the species. If human morality works like evolution, then it relies for progress on some individual making new models out of what is currently a disadvantage. Favoring the whole, or even favoring each individual, will limit our uniqueness by removing disadvantages, and restricts the menu of disadvantages that might be the right basis for forward thinking. "Choose your delusion" requires some really unhappy schizophrenics from time to time, to point out just how stupid sanity occasionally becomes.

So what is good for the present humans may be bad for future humans, and you need another meta-decision making process to balance the forward movement of the process against its present stability.

There are, then, three layers of fuzziness you cannot clear up (which I will attribute as follows to Isaac Asimov):

The Robot question -- How to judge 'the good of humanity'

The Waldos question -- How to decide what science to rely on to what degree, and

I don't understand point (2) -- I'm inferring that it has something to do with the conflict that can arise between each agent acting in their interests vs. what is in the interest of the group as a whole.
– Dave Nov 18 '14 at 16:01

@Dave I think the idea is that what is good to the masses is not necessarily what is ultimately good for the masses. This is why the value of democracy is not that it produces the best possible governments/decisions (it obviously doesn't); the value is that it makes a pattern of the worst decisions less likely. So Nietzsche is sort of saying democracy is very unlikely to save humanity from itself. The same thing is implied by any objective morality that doesn't define "good" as the will of the majority (which would be a dubious form of objective morality).
– selfConceivedAsEvil Nov 18 '14 at 19:05

@goldilocks I think that is true, but not where I was going. (I think that is all rolled up in the first question, which I skipped.) I am looking more directly at the Genealogy of Morals. For example: would we decide to keep slums if it were proven that great men from humble means played some very productive archetypal role in human advancement? It depends on whether we value the real or the potential. So, not something objective.
– jobermark Nov 18 '14 at 23:17

Morality, like chess, has objectively good and objectively bad moves if we define the objective as winning the game. We "ought" to do that which is objectively good and "ought not" do that which is objectively bad, according to the current rules, as that is what gets us closer to our objective.

A common theistic-objective response to this (CS Lewis is the first to come to mind) goes something like: someone created the rules of chess, so who created these rules of morality? Either there are no rules, or you are saying that we as a society made the rules, in which case what you've proposed is not an objective morality but a subjective morality. It is subjective because different cultures have different norms for how to behave/achieve the greatest good/avoid causing embarrassment/etc.

Hayek wrote about something similar to what you mention (he referred to these organically defined rules as laws, and the rules created by design by the government he called legislation). But he was also very explicit that these rules could (indeed, should!) change over time, and thus are subjective.

There's the example from Herodotus about the one culture whose "current rules" got them "closer to [their] objective" by burning their dead, the other culture who did likewise by eating their dead, and who found each other's ideas of morality regarding treatment of the dead abhorrent. Quite clearly, this could not be an objective morality.

This is not objective at all. Would whatever makes the sadist, the child molester, and the necrophiliac "well" be worth doing? It is based on the subjective and arbitrary standard of wellness, but, as I said, wellness and morality do not automatically equate to each other.

You may think that you have a good grasp on what wellness a person should have, but there are many instances of truly evil people doing bad things while thinking they are doing good. This will not lead to a greater tomorrow; it will just enable evil to decide on any arbitrary view of well-being to justify any evil act.

And this is even if we avoid the many problems and pitfalls you may encounter with the views of the first paragraph.

It could be objective if he's willing to assert some good or set of goods as best for all humans. You're reducing that to subjective interest in your answer -- which does lead to the contradictions you suppose.
– virmaior Nov 18 '14 at 12:53

By "highest degree of human well-being," I am referring to what is essentially the "greatest amount of well-being" for all humans. If the majority of humans, or perhaps even a sizable percentage, were into BDSM, pedophilia, or necrophilia, your objection would be both valid and relevant. If the vast majority of people were masochists (i.e., enjoyed pain), I believe morality would have evolved differently, but wouldn't there still be objectively good decisions that would cause the greatest amount of general happiness?
– Goodies Nov 18 '14 at 14:57

"there are many instances of truly evil people doing bad things while thinking they are doing good" -> Seems oxymoronic. Truly evil people do bad things thinking they are doing evil. Why would an evil person do something thinking they were doing good? That would be counter to their ends.
– selfConceivedAsEvil Nov 18 '14 at 19:09

The is/ought distinction tells you that you cannot define a normative term without recursively referring to another normative term. Here you do not escape the problem: you trade an "ought" for a "best," but what is best? Similarly, in chess, you have to specify the goal and the rules.

The point of naturalizing morality is not that, once the norms are given, the means can be objective (everyone agrees with that). A morality is objective only if the goals and norms are themselves specified objectively.

The is/ought distinction only applies to modern senses of morality. See G.E.M. Anscombe's "Modern Moral Philosophy." It may be possible to specify oughts based on is-es using genetic data. E.g., that you ought to feed a cat meat follows not from any prior normative claim but from what it is to be a cat.
– virmaior Nov 18 '14 at 12:54

It seems to me that the sentence can be paraphrased "you ought to feed a cat meat if you want to feed a cat" (which is an implicit norm).
– Quentin Ruyant Nov 18 '14 at 15:24

Regarding why I don't post answers. First, I think the question is not yet formulated clearly enough vis-a-vis confusion about universal and objective. Second, I don't really have the time lying around to do so. Third, I'm more than willing to leave questions others can answer to others.
– virmaior Nov 18 '14 at 15:40

Regarding the implicit norm claim, the claim that cats need meat doesn't seem to be norm-dependent. Yes, there's a question of whether I should feed cats meat (I actually don't like cats personally). But the "norming" seems to happen insofar as cats and meat form a pair, not merely insofar as I want to own a cat. If the world has objective features like that, they can at least on occasion become moral features in ways that are not contingent on my whims.
– virmaior Nov 18 '14 at 15:42

Clearly all morality hinges on when it matters to us, because morality is generally about questions of what we should do. But the objectivity of morality hinges on whether these things fit with other features of reality. The is/ought problem primarily occurs for Kantians and is primarily trumpeted by consequentialists.
– virmaior Nov 18 '14 at 15:43